Within cloud security, one of the most prevalent tools is dynamic application security testing, or DAST. DAST is a critical component of a robust application security framework: it identifies vulnerabilities in your cloud applications, either pre- or post-deployment, so they can be remediated for a stronger security posture.
But what if the very tools you use to identify vulnerabilities in your own applications could be used by attackers to find those same vulnerabilities? Sadly, that’s the case with DAST. The very same brute-force DAST techniques that alert security teams to vulnerabilities can be used by malicious actors to find them first.
There is good news, however. A new research paper written by Rapid7’s Pojan Shahrivar and Dr. Stuart Millar and published by the Institute of Electrical and Electronics Engineers (IEEE) shows how artificial intelligence (AI) and machine learning (ML) can be used to thwart unwanted brute-force DAST attacks before they even begin. The paper, Detecting Web Application DAST Attacks with Machine Learning, was presented yesterday at the specialist AI/ML in Cybersecurity workshop at the 6th annual IEEE Dependable and Secure Computing conference, hosted this year at the University of South Florida (USF) in Tampa.
The team designed and evaluated AI and ML techniques to detect brute-force DAST attacks during the reconnaissance phase, effectively preventing 94% of DAST attacks and eliminating the entire kill-chain at the source. This presents security professionals with an automated way to stop DAST brute-force attacks before they even start. Essentially, AI and ML are being used to keep attackers from even casing the joint in advance of an attack.
This novel work is the first application of AI in cloud security to automatically detect brute-force DAST reconnaissance with a view to an attack. It shows the potential this technology has in preventing attacks from getting off the ground, plus it enables significant time savings for security administrators and lets them complete other high-value investigative work.
Here’s how it is done: using a real-world dataset of millions of events from enterprise-grade apps, a random forest model is trained using tumbling windows of time to generate aggregated event features from source IPs. In this way, the model learns the characteristics of a DAST attack, such as the number of unique URLs visited per IP or the number of payloads per session. This avoids the conventional threshold approach, which is brittle and causes excessive false positives.
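The paper’s exact feature set isn’t reproduced here, but the tumbling-window aggregation it describes can be sketched in Python. The event fields, window size, and feature names below are illustrative assumptions, not Rapid7’s implementation:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative tumbling-window size


def aggregate_features(events):
    """Group events into fixed (tumbling) time windows per source IP and
    compute aggregate features such as unique URLs visited per IP.
    Each event is a dict: {"ts": epoch_seconds, "src_ip": str, "url": str}."""
    buckets = defaultdict(lambda: {"urls": set(), "count": 0})
    for e in events:
        window = int(e["ts"] // WINDOW_SECONDS)  # non-overlapping windows
        key = (e["src_ip"], window)
        buckets[key]["urls"].add(e["url"])
        buckets[key]["count"] += 1
    # One feature vector per (source IP, window): these become the inputs
    # to a classifier such as a random forest.
    return {
        key: {"unique_urls": len(v["urls"]), "requests": v["count"]}
        for key, v in buckets.items()
    }


events = [
    {"ts": 10, "src_ip": "203.0.113.5", "url": "/a"},
    {"ts": 20, "src_ip": "203.0.113.5", "url": "/b"},
    {"ts": 25, "src_ip": "203.0.113.5", "url": "/a"},
    {"ts": 70, "src_ip": "203.0.113.5", "url": "/c"},
]
features = aggregate_features(events)
```

A brute-forcing scanner IP would show far more unique URLs per window than a human user, which is the kind of signal the model can learn without a hand-tuned threshold.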
This is not the first time Millar and team have made major advances in the use of AI and ML to improve the effectiveness of cloud application security. Late last year, Millar published new research at AISec in Los Angeles, the leading venue for AI/ML cybersecurity innovations, into the use of AI/ML to triage vulnerability remediation, reducing false positives by 96%. The team was also delighted to win AISec’s highly coveted Best Paper Award, ahead of the likes of Apple and Microsoft.
A complimentary pre-print version of the paper Detecting Web Application DAST Attacks with Machine Learning is available on the Rapid7 website.
Product security teams play a critical role to help ensure that new services, products, and features are built and shipped securely to customers. However, since security teams are in the product launch path, they can form a bottleneck if organizations struggle to scale their security teams to support their growing product development teams. In this post, we will share how Amazon Web Services (AWS) developed a mechanism to scale security processes and expertise by distributing security ownership between security teams and development teams. This mechanism has many names in the industry — Security Champions, Security Advocates, and others — and it’s often part of a shift-left approach to security. At AWS, we call this mechanism Security Guardians.
In many organizations, there are fewer security professionals than product developers. Our experience is that it takes much more time to hire a security professional than other technical job roles, and research conducted by (ISC)2 shows that the cybersecurity industry is short 3.4 million workers. When product development teams continue to grow at a faster rate than security teams, the disparity between security professionals and product developers continues to increase as well. Although most businesses understand the importance of security, frustration and tensions can arise when it becomes a bottleneck for the business and its ability to serve customers.
At AWS, we require the teams that build products to undergo an independent security review with an AWS application security engineer before launching. This is a mechanism to verify that new services, features, solutions, vendor applications, and hardware meet our high security bar. This intensive process impacts how quickly product teams can ship to customers. As shown in Figure 1, we found that as the product teams scaled, so did the problem: there were more products being built than the security teams could review and approve for launch. Because security reviews are required and non-negotiable, this could potentially lead to delays in the shipping of products and features.
Figure 1: More products are being developed than can be reviewed and shipped
How AWS builds a culture of security
Because of its size and scale, many customers look to AWS to understand how we scale our own security teams. To tell our story and provide insight, let’s take a look at the culture of security at AWS.
Security is a business priority
At AWS, security is a business priority. Business leaders prioritize building products and services that are designed to be secure, and they consider security to be an enabler of the business rather than an obstacle.
Leaders also strive to create a safe environment by encouraging employees to identify and escalate potential security issues. Escalation is the process of making sure that the right people know about the problem at the right time. Escalation encompasses “Dive Deep”, which is one of our corporate values at Amazon, because it requires owners and leaders to dive into the details of the issue. If you don’t know the details, you can’t make good decisions about what’s going on and how to run your business effectively.
This aspect of the culture goes beyond intention — it’s embedded in our organizational structure:
CISOs and IT leaders play a key role in demystifying what security and compliance represent for the business. At AWS, we made an intentional choice for the security team to report directly to the CEO. The goal was to build security into the structural fabric of how AWS makes decisions, and every week our security team spends time with AWS leadership to ensure we’re making the right choices on tactical and strategic security issues.
Because our leadership supports security, it’s understood within AWS that security is everyone’s job. Security teams and product development teams work together to help ensure that products are built and shipped securely. Despite this collaboration, the product teams own the security of their product. They are responsible for making sure that security controls are built into the product and that customers have the tools they need to use the product securely.
On the other hand, central security teams are responsible for helping developers to build securely and verifying that security requirements are met before launch. They provide guidance to help developers understand what security controls to build, provide tools to make it simpler for developers to implement and test controls, provide support in threat modeling activities, use mechanisms to help ensure that customers’ security expectations are met before launch, and so on.
This responsibility model highlights how security ownership is distributed between the security and product development teams. At AWS, we learned that without this distribution, security doesn’t scale. Regardless of the number of security experts we hire, product teams always grow faster. Although the culture around security and the need to distribute ownership is now well understood, without the right mechanisms in place, this model would have collapsed.
Mechanisms compared to good intentions
Mechanisms are the final pillar of AWS culture that has allowed us to successfully distribute security across our organization. A mechanism is a complete process, or virtuous cycle, that reinforces and improves itself as it operates. As shown in Figure 2, a mechanism takes controllable inputs and transforms them into ongoing outputs to address a recurring business challenge. At AWS, the business challenge that we’re facing is that security teams create bottlenecks for the business. The culture of security at AWS provides support to help address this challenge, but we needed a mechanism to actually do it.
Figure 2: AWS sees mechanisms as a complete process, or virtuous cycle
“Often, when we find a recurring problem, something that happens over and over again, we pull the team together, ask them to try harder, do better – essentially, we ask for good intentions. This rarely works… When you are asking for good intentions, you are not asking for a change… because people already had good intentions. But if good intentions don’t work, what does? Mechanisms work.”
At AWS, we’ve learned that we can help solve the challenge of scaling security by distributing security ownership with a mechanism we call the Security Guardians program. Like other mechanisms, it has inputs and outputs, and transforms over time.
AWS distributes security ownership with the Security Guardians program
At AWS, the Security Guardians program trains, develops, and empowers developers to be security ambassadors, or Guardians, within the product teams. At a high level, Guardians make sure that security considerations for a product are made earlier and more often, helping their peers build and ship their product faster. They also work closely with the central security team to help ensure that the security bar at AWS is rising and the Security Guardians program is improving over time. As shown in Figure 3, embedding security expertise within the product teams helps products with Guardian involvement move through security review faster.
Figure 3: Security expertise is embedded in the product teams by Guardians
Guardians are informed, security-minded product builders who volunteer to be consistent champions of security on their teams and are deeply familiar with the security processes and tools. They provide security guidance throughout the development lifecycle and are stakeholders in the security of the products being shipped, helping their teams make informed decisions that lead to more secure, on-time launches. Guardians are the security points-of-contact for their product teams.
In this distributed security ownership model, accountability for product security sits with the product development teams. However, the Guardians are responsible for performing the first evaluation of a development team’s security review submission. They confirm the quality and completeness of the new service’s resources, design documents, threat model, automated findings, and penetration test readiness. The development teams, supported by the Guardian, submit their security review to AWS Application Security (AppSec) engineers for the final pre-launch review.
In practice, as part of this development journey, Guardians help ensure that security considerations are made early, when teams are assessing customer requests and the feature or product design. This can be done by starting the threat modeling processes. Next, they work to make sure that mitigations identified during threat modeling are developed. Guardians also play an active role in software testing, including security scans such as static application security testing (SAST) and dynamic application security testing (DAST). To close out the security review, security engineers work with Guardians to make sure that findings are resolved and the product is ready to ship.
Figure 4: Expedited security review process supported by Guardians
Guardians are, after all, Amazonians. Therefore, Guardians exemplify a number of the Amazon Leadership Principles and often have the following characteristics:
They are exemplary practitioners for security ownership and empower their teams to own the security of their service.
They hold a high security bar and exercise strong security judgement, don’t accept quick or easy answers, and drive continuous improvement.
They advocate for security needs in internal discussions with the product team.
They are thoughtful yet assertive to make customer security a top priority on their team.
They maintain and showcase their security knowledge to their peers, continuously building knowledge from many different sources to gain perspective and to stay up to date on the constantly evolving threat landscape.
They aren’t afraid to have their work independently validated by the central security team.
AWS has benefited greatly from the Security Guardians program. We’ve had 22.5 percent fewer medium and high severity security findings generated during the security review process and have taken about 26.9 percent less time to review a new service or feature. This data demonstrates that with Guardians involved we’re identifying fewer issues late in the process, reducing remediation work, and as a result securely shipping services faster for our customers. To help both builders and Guardians improve over time, our security review tool captures feedback from security engineers on their inputs. This helps ensure that our security ownership mechanism reinforces and improves itself over time.
AWS and other organizations have benefited from this mechanism because it generates specialized security resources and distributes security knowledge that scales without needing to hire additional staff.
A program such as this could help your business build and ship faster, as it has for AWS, while maintaining an appropriately high security bar that rises over time. By training builders to be security practitioners and advocates within your development cycle, you can increase the chances of identifying risks and security findings earlier. These findings, earlier in the development lifecycle, can reduce the likelihood of having to patch security bugs or even start over after the product has already been built. We also believe that a consistent security experience for your product teams is an important aspect of successfully distributing your security ownership. An experience with less confusion and friction will help build trust between the product and security teams.
There are many different ways to use InsightAppSec to authenticate to web apps, but sometimes you need to go deeper into the advanced settings to fully automate your logins, especially with API scanning. Today, we’ll cover one of those advanced settings: Token Replacement.
InsightAppSec Token Replacement can be used to capture and replay Bearer Authentication tokens, JWT Authentication tokens, or any other type of session token.
The token replacement values are under your scan configs in the following location: Custom Options > Advanced > AuthConfig > TokenReplacementList
When you press Add, the following values can be set.

ExtractionTokenLocation: where the token you want to extract is located. The options are Request Header, Request Body, Request URL, Response Headers, and Response Body.

ExtractionTokenRegex: the regex used to extract the token. Anything placed in parentheses (a capture group) can be returned in the InjectionTokenRegex using @token@. Any regex can be used, such as "token": ?"([^"]*) or "access_token": ?"([-a-f0-9]+) or [?]sessionId=([^&]*).

InjectionTokenLocation: where the captured token should be injected. The options are Request URL, Request Headers, and Request Body.

InjectionTokenRegex: the format in which the token should be sent to the web app. @token@ is replaced with the value captured by the ExtractionTokenRegex. Any string can be used, such as Authorization: Bearer @token@ or Authorization: Token @token@ or &sessionId=@token@.
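To make the interplay of these settings concrete, here is a small Python sketch of what token replacement does conceptually. The response body, regex, and header template are hypothetical examples; InsightAppSec performs this internally:

```python
import re

# Hypothetical ExtractionTokenRegex applied to a Response Body
EXTRACTION_REGEX = r'"token": ?"([^"]*)"'
# Hypothetical InjectionTokenRegex for Request Headers
INJECTION_TEMPLATE = "Authorization: Token @token@"


def replace_token(response_body):
    """Extract the session token via the capture group, then substitute it
    for @token@ in the injection template."""
    match = re.search(EXTRACTION_REGEX, response_body)
    if match is None:
        raise ValueError("token not found in response body")
    return INJECTION_TEMPLATE.replace("@token@", match.group(1))


body = '{"user": "demo", "token": "eyJhbGciOiJIUzI1NiJ9.abc.def"}'
header = replace_token(body)
```

In a real scan, the extraction regex runs against the location chosen in ExtractionTokenLocation, and the rendered string is injected at the InjectionTokenLocation on each subsequent request.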
Why Token Replacement?
Under Custom Options > HTTP Headers > Extra Header, you can manually pass an authentication token to your web app. While this is the easiest way to set up this form of authentication, unless you generate a token that will not expire, you will have to replace this token every scan. Automating this process using token replacement will save you time and effort in the long run, especially if you have multiple apps you need to generate tokens for.
For this example, we will be using the Rapid7 Hackazon web app. If you want to configure your own Hackazon instance, details around installation and setup can be found here.
Alternatively, there are free public test sites you can use instead, such as this one.
The main difference you’ll encounter when using the Hackazon web app is that the API authentication does not have a UI; therefore, we must record and pass a traffic file for InsightAppSec to authenticate.
We will use Postman to send the API request to the web app and Burp Suite to record the traffic. Alternatively, you could use the Rapid7 InsightAppSec Toolkit to record the traffic. Here is a video running through setup using the InsightAppSec Toolkit.
The first step is to set up your proxy settings. In Postman, you can go to Settings by clicking the gear icon in the upper right, and then clicking into the proxy settings. We’re going to set the proxy server to “localhost” and change the port to “5000”.
After setting the proxy in Postman, you must set it up in Burp Suite. In Burp, go to the Proxy tab, then click on Proxy Settings. Next, add a proxy listener, specifying port 5000 to match the setting in Postman. Then, set the interface to Loopback Only.
Go back to Postman, add your basic authentication, and then send the traffic. In Burp, click on the HTTP History tab, right-click on the captured traffic, then click “Save Item”. Make sure you save the traffic as an XML file.
You can also record the traffic using the Rapid7 Insight AppSec Plugin, or from within the Chrome browser. Instructions for how to do this are located under Traffic Authentication or can be found here.
When recording using the Rapid7 AppSec Plugin, make sure that the recording includes the Bearer Auth or Token in the recorded details.
After recording the login, upload the traffic file to Site Authentication. Also adjust the Logged-In Regex to make sure the scan doesn’t fail.
After authenticating to your web app and grabbing the token, the next step is to configure a regex to ensure the token can be extracted. There are a wide variety of ways to test the regex, but we will be using https://regex101.com/ for this example.
We will then grab the web app response containing the token info, paste it into the website, and configure a regular expression to ensure only the token is selected. In this use case, the expression "token": ?"([^"]*) was successful in highlighting only the info we want to extract. We can confirm that only the token is selected in capture Group 1, as that is what will be returned when we specify @token@ under the InjectionTokenRegex.
Next, we want to configure the TokenReplacementList:

ExtractionTokenLocation: Response Body, because the token appeared in the body after authenticating.

ExtractionTokenRegex: "token": ?"([^"]*), which successfully isolated the auth token.

InjectionTokenLocation: Request Headers, where the web app is expecting the token.

InjectionTokenRegex: Authorization: Token @token@, the header format the web app is expecting.
Make sure you upload the Swagger API file. You can either upload the file or point InsightAppSec to the specific URL. You can optionally restrict the scan to just the Swagger file for more targeted scanning.
To ensure we were successful, click Download Additional Logs from the Scan Logs page after the scan is complete and open the Operation log file. You are looking for the log entry “[good]: Added imported token from response body”. Once you see this, you know the token was imported into the scan properly and we were able to use it to log in to the API.
For further testing, you can look in the vulnerability traffic requests to ensure the Authorization: Token header has been passed successfully.
To detect if the token has expired, you can modify the sessionLossRegex and sessionLossHeaderRegex under Authentication > Additional Settings, or by using a CanaryPage if that has been set up. When configured correctly, the token replacement will grab the token again, ensuring we stay logged in to your API.
Further information on configuring Scan Authentication can be found here. When in doubt, please reach out to your web app developers and/or Rapid7 support for assistance.
The OWASP Top 10 API Security Risks is a list of the highest-priority API-based threats in 2023. Let’s dig a little deeper into each item on the list to outline the type of threats you may encounter and appropriate responses to curtail each threat.
1. Broken object level authorization
Object level authorization is a control method that restricts access to objects to minimize system exposures. All API endpoints that handle objects should perform authorization checks utilizing user group policies.
We recommend using this authorization mechanism in every function that receives client input to access objects from a data store. As an additional means for hardening, it is recommended to use cryptographically secure random GUID values for object reference IDs.
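As a hedged illustration of both recommendations (the data store, function names, and fields here are invented for this sketch), an object-level ownership check combined with random GUID object references might look like:

```python
import uuid

# Toy in-memory store keyed by cryptographically random GUIDs rather than
# guessable sequential integers (layout is illustrative).
orders = {}


def create_order(owner_id, item):
    order_id = str(uuid.uuid4())  # non-guessable object reference ID
    orders[order_id] = {"owner": owner_id, "item": item}
    return order_id


def get_order(requester_id, order_id):
    """Object-level authorization: verify the requester owns the object
    before returning it, instead of trusting the ID alone."""
    order = orders.get(order_id)
    if order is None or order["owner"] != requester_id:
        raise PermissionError("not authorized for this object")
    return order


oid = create_order("alice", "laptop")
```

The key point is that the check happens in every function that resolves a client-supplied object ID, not just at login.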
2. Broken authentication
Authentication relates to all endpoints and data flows that handle the identity of users or entities accessing an API. This includes credentials, keys, tokens, and even password reset functionality. Broken authentication can lead to many issues such as credential stuffing, brute force attacks, weak unsigned keys, and expired tokens.
Authentication covers a wide range of functionality and requires strict scrutiny and strong practices. Detailed threat modeling should be performed against all authentication functionality to understand data flows, entities, and risks involved in an API. Multi-factor authentication should be enforced where possible to mitigate the risk of compromised credentials.
To prevent brute force and other automated password attacks, rate limiting should be implemented with a reasonable threshold. Weak and expired credentials should not be accepted; this includes JWTs, passwords, and keys. Integrity checks should be performed against all tokens as well, ensuring signature algorithms and values are valid to prevent tampering attacks.
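A minimal sketch of such rate limiting, assuming a per-client sliding window (the class name and thresholds are illustrative, not a production implementation):

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Sliding-window limiter: allow at most `limit` attempts per `window`
    seconds per client."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[client_id]
        while q and now - q[0] > self.window:
            q.popleft()  # forget attempts older than the window
        if len(q) >= self.limit:
            return False  # over the threshold, e.g. respond with HTTP 429
        q.append(now)
        return True


limiter = RateLimiter(limit=3, window=60.0)
results = [limiter.allow("198.51.100.7", now=t) for t in (0, 1, 2, 3)]
```

The fourth attempt within the window is rejected; in practice this logic usually lives at the API gateway rather than in application code.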
3. Broken object property level authorization
Related to object level authorization, object property level authorization is another control method to restrict access to specific properties or fields of an object. This category combines aspects of the 2019 OWASP API Security list’s “excessive data exposure” and “mass assignment”. If an API endpoint exposes sensitive object properties that should not be read or modified by an unauthorized user, it is considered vulnerable.
The overall mitigation strategy for this is to validate user permissions in all API endpoints that handle object properties. Access to properties and fields should be kept to a bare minimum, granted on an as-needed basis and scoped to the functionality of a given endpoint.
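One common way to implement property-level restriction is a per-role allow list applied at serialization time. The roles and field names in this sketch are invented for illustration:

```python
# Per-role allow lists of readable object properties (illustrative).
VISIBLE_FIELDS = {
    "user":  {"id", "username", "avatar_url"},
    "admin": {"id", "username", "avatar_url", "email", "is_admin"},
}


def serialize(obj, role):
    """Return only the properties the caller's role may read, instead of
    dumping the whole internal object into the API response."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in obj.items() if k in allowed}


account = {"id": 7, "username": "demo", "email": "d@example.com",
           "password_hash": "x", "is_admin": False, "avatar_url": "/a.png"}
public_view = serialize(account, "user")
```

The same allow-list idea applies on write paths to block mass assignment: only explicitly permitted fields are copied from the request into the object.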
4. Unrestricted resource consumption
API resource consumption pertains to CPU, memory, storage, network, and service provider usage for an API. Denial-of-service attacks result from overconsumption of these resources, leading to downtime and racked-up service charges.
Setting minimum and maximum limits relative to business functional needs is the overall strategy for mitigating resource consumption risks. API endpoints should limit the rate and maximum number of calls on a per-client basis. For API infrastructure, using containers and serverless code with defined resource limits will mitigate the risk of server resource consumption.
Coding practices that limit resource consumption need to be in place, as well. Limit the number of records returned in API responses with careful use of paging, as appropriate. File uploads should also have size limits enforced to prevent overuse of storage. Additionally, regular expressions and other data-processing means must be carefully evaluated for performance in order to avoid high CPU and memory consumption.
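For instance, a paging limit can be enforced by clamping the client-supplied page size on the server side. The constants and function name here are illustrative:

```python
MAX_PAGE_SIZE = 100      # illustrative server-side ceiling
DEFAULT_PAGE_SIZE = 25   # fallback for missing or invalid input


def clamp_page_size(requested):
    """Never let a client-supplied page size exceed the server maximum,
    and fall back to a sane default on garbage input."""
    try:
        size = int(requested)
    except (TypeError, ValueError):
        return DEFAULT_PAGE_SIZE
    if size < 1:
        return DEFAULT_PAGE_SIZE
    return min(size, MAX_PAGE_SIZE)
```

A client asking for a million records per page silently gets the ceiling instead, which bounds memory use and response size regardless of what the request says.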
5. Broken function level authorization
Lack of authorization checks in controllers or functions behind API endpoints are covered under broken function level authorization. This vulnerability class allows attackers to access unauthorized functionality; whether they are changing an HTTP method from a `GET` to a `PUT` to modify data that is not expected to be modified, or changing a URL string from `user` to `admin`. Proper authorization checks can be difficult due to controller complexities and the numbers of user groups and roles.
Comprehensive threat modeling against an API architecture and design is paramount in preventing these vulnerabilities. Ensure that API functionality is carefully structured and corresponding controllers are performing authorization checks. For example, all functionality under an `/api/v1/admin` endpoint should be handled by an admin controller class that performs strict authorization checks. When in doubt, access should be denied by default and grants should be given on an as-needed basis.
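A deny-by-default route check along these lines might be sketched as follows (the route prefixes and roles are invented for illustration):

```python
# Map of route prefixes to the roles allowed to call them (illustrative).
ROUTE_ROLES = {
    "/api/v1/admin": {"admin"},
    "/api/v1/user":  {"user", "admin"},
}


def authorize(path, role):
    """Deny by default: a request is allowed only if its path prefix has
    an explicit grant for the caller's role."""
    for prefix, roles in ROUTE_ROLES.items():
        if path.startswith(prefix):
            return role in roles
    return False  # unknown routes are denied outright
```

Because unknown paths return False, adding a new endpoint without an explicit grant fails closed rather than open.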
6. Unrestricted access to sensitive business flows
Automated threats are becoming increasingly difficult to combat and must be addressed on a case-by-case basis. An API is vulnerable if sensitive functionality is exposed in such a way that harm could occur from excessive automated use. There may not be a specific implementation bug, but rather an exposure of a business flow that can be abused in an automated fashion.
Threat modeling exercises are important as an overall mitigation strategy. Business functionality and all dataflows must be carefully considered, and the excessive automated use threat scenario must be discussed. From an implementation perspective, device fingerprinting, human detection, irregular API flow and sequencing pattern detection, and IP blocking can be implemented on a case-by-case basis.
7. Server side request forgery
Server side request forgery (SSRF) vulnerabilities happen when a client provides a URL or other remote resource as data to an API. The result is a crafted outbound request to that URL on behalf of the API. These are common in redirect URL parameters, webhooks, file fetching functionality, and URL previews.
SSRF can be leveraged by attackers in many ways. Modern usage of cloud providers and containers exposes instance metadata URLs and internal management consoles that can be targeted to leak credentials and abuse privileged functionality. Internal network calls such as backend service-to-service requests, even when protected by service meshes and mTLS, can be exploited for unexpected results. Internal repositories, build tools, and other internal resources can all be targeted with SSRF attacks.
We recommend validating and sanitizing all client provided data to mitigate SSRF vulnerabilities. Strict allow-listing must be enforced when implementing resource-fetching functionality. Allow lists should be granular, restricting all but specified services, URLs, schemes, ports, and media types. If possible, isolate this functionality within a controlled network environment with careful monitoring to prevent probing of internal resources.
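A hedged sketch of such allow-listing in Python, using invented hosts; a real deployment would also resolve DNS and block redirects, which this snippet does not do:

```python
from urllib.parse import urlsplit

# Granular allow list of (scheme, host, port); the entries are illustrative.
ALLOWED = {
    ("https", "images.example.com", 443),
    ("https", "cdn.example.com", 443),
}


def is_allowed_url(url):
    """Validate a client-supplied URL against a strict allow list before
    fetching it server-side; reject everything else by default."""
    try:
        parts = urlsplit(url)
        port = parts.port  # accessing .port can raise ValueError
    except ValueError:
        return False
    if port is None:
        port = 443 if parts.scheme == "https" else 80
    host = (parts.hostname or "").lower()
    return (parts.scheme, host, port) in ALLOWED
```

Note that a cloud metadata endpoint such as 169.254.169.254 is rejected simply because it is not on the list, which is the point of allow-listing over deny-listing.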
8. Security misconfiguration
Misconfigurations in any part of the API stack can result in weakened security. This can be the result of incomplete or inconsistent patching, enabling unnecessary features, or improperly configuring permissions. Attackers will enumerate the entire surface area of an API to discover these misconfigurations, which could be exploited to leak data, abuse extra functionality, or find additional vulnerabilities in out-of-date components.
Having a robust, fast, and repeatable hardening process is paramount to mitigating the risk of misconfiguration issues. Security updates must be regularly applied and tracked with a patch management process. Configurations across the entire API stack should be regularly reviewed. Asset Management and Vulnerability Management solutions should be considered to automate this hardening process.
9. Improper inventory management
Complex services with multiple interconnected APIs present a difficult inventory management problem and introduce more exposure to risk. Having multiple versions of APIs across various environments further increases the challenge. Improper inventory management can lead to running unpatched systems and exposing data to attackers. With modern microservices making it easier than ever to deploy many applications, it is important to have strong inventory management practices.
Documentation for all assets, including hosts, applications, environments, and users, should be carefully collected and managed in an asset management solution. All third-party integrations need to be vetted and documented as well, to provide visibility into any risk exposure. API documentation should be standardized and available to those authorized to use the API. Careful controls over access to environments, changes within them, and what is shared externally versus internally, along with data protection measures, must be in place to ensure that production data does not fall into other environments.
10. Unsafe consumption of APIs
Data consumed from other APIs must be handled with caution to prevent unexpected behavior. Third-party APIs could be compromised and leveraged to attack other API services. Attacks such as SQL injection, XML External Entity injection, deserialization attacks, and more, should be considered when handling data from other APIs.
Careful development practices must be in place to ensure all data is validated and properly sanitized. Evaluate third-party integrations and service providers’ security posture. Ensure all API communications occur over a secure channel such as TLS. Mutual authentication should also be enforced when connections between services are established.
The OWASP Top 10 API Security Risks template is now ready and available for use within InsightAppSec, mapping each of Rapid7’s API attack modules to their corresponding OWASP categories for ease of reference and enhanced API threat coverage.
Make sure to utilize the new template to ensure best-in-class coverage against API security threats today! And of course, as is always the case, ensure you are following Rapid7’s best practices for securing your APIs.
In November 2022, AWS introduced support for granular geographic (geo) match conditions in AWS WAF. This blog post demonstrates how you can use this new feature to customize your AWS WAF implementation and improve the security posture of your protected application.
AWS WAF provides inline inspection of inbound traffic at the application layer. You can use AWS WAF to detect and filter common web exploits and bots that could affect application availability or security, or consume excessive resources. Inbound traffic is inspected against web access control list (web ACL) rules. A web ACL rule consists of rule statements that instruct AWS WAF on how to inspect a web request.
The AWS WAF geographic match rule statement functionality allows you to restrict application access based on the location of your viewers. This feature is crucial for use cases like licensing and legal regulations that limit the delivery of your applications outside of specific geographic areas.
AWS recently released a new feature that you can use to build precise geographic rules based on International Organization for Standardization (ISO) 3166 country and area codes. With this release, you can now manage access at the ISO 3166 region level. This capability is available across AWS Regions where AWS WAF is offered and for all AWS WAF supported services. In this post, you will learn how to use this new feature with Amazon CloudFront and Elastic Load Balancing (ELB) origin types.
Summary of concepts
Before we discuss use cases and setup instructions, make sure that you are familiar with the following AWS services and concepts:
Amazon CloudFront: CloudFront is a web service that gives businesses and web application developers a cost-effective way to distribute content with low latency and high data transfer speeds.
Amazon Simple Storage Service (Amazon S3): Amazon S3 is an object storage service built to store and retrieve large amounts of data from anywhere.
AWS WAF labels: Labels contain metadata that can be added to web requests when a rule is matched. Labels can alter the behavior or default action of managed rules.
ISO (International Organization for Standardization) 3166 codes: ISO codes are internationally recognized codes that assign a two- or three-letter combination to every country and most dependent areas. Each code consists of two parts, separated by a hyphen. For example, in the code AU-QLD, AU is the ISO 3166 alpha-2 code for Australia, and QLD is the subdivision code of the state or territory, in this case Queensland.
How granular geo labels work
Previously, geo match statements in AWS WAF were used to allow or block access to applications based on country of origin of web requests. With updated geographic match rule statements, you can control access at the region level.
In a web ACL rule with a geo match statement, AWS WAF determines the country and region of a request based on its IP address. After inspection, AWS WAF adds labels to each request to indicate the ISO 3166 country and region codes. You can use labels generated in the geo match statement to create a label match rule statement to control access.
AWS WAF generates two types of labels based on origin IP or a forwarded IP configuration that is defined in the AWS WAF geo match rule. These labels are the country and region labels.
By default, AWS WAF uses the IP address of the web request’s origin. You can instruct AWS WAF to use an IP address from an alternate request header, like X-Forwarded-For, by enabling forwarded IP configuration in the rule statement settings. For example, the country label for the United States with origin IP and forwarded IP configuration are awswaf:clientip:geo:country:US and awswaf:forwardedip:geo:country:US, respectively. Similarly, the region labels for a request originating in Oregon (US) with origin and forwarded IP configuration are awswaf:clientip:geo:region:US-OR and awswaf:forwardedip:geo:region:US-OR, respectively.
To demonstrate this AWS WAF feature, we will outline two distinct use cases.
Use case 1: Restrict content for copyright compliance using AWS WAF and CloudFront
Licensing agreements might prevent you from distributing content in some geographical locations, regions, states, or entire countries. You can deploy the following setup to geo-block content in specific regions to help meet these requirements.
In this example, we will use an AWS WAF web ACL that is applied to a CloudFront distribution with an S3 bucket origin. The web ACL contains a geo match rule to tag requests from Australia with labels, followed by a label match rule to block requests from the Queensland region. All other requests with source IP originating from Australia are allowed.
To configure the AWS WAF web ACL rule for granular geo restriction
In the navigation pane, choose Web ACLs, select Global (CloudFront) from the dropdown list, and then choose Create web ACL.
For Name, enter a name to identify this web ACL.
For Resource type, choose the CloudFront distribution that you created in step 1, and then choose Add.
Choose Add rules, and then choose Add my own rules and rule groups.
For Name, enter a name to identify this rule.
For Rule type, choose Regular rule.
Configure a rule statement for a request that matches the statement Originates from a Country and select the Australia (AU) country code from the dropdown list.
Set the IP inspection configuration parameter to Source IP address.
Under Action, choose Count, and then choose Add Rule.
Create a new rule by following the same actions as in step 7 and enter a name to identify the rule.
For Rule type, choose Regular rule.
Configure a rule statement for a request that matches the statement Has a Label and enter awswaf:clientip:geo:region:AU-QLD for the match key.
Set the action to Block and choose Add rule.
For Actions, keep the default action of Allow.
For Amazon CloudWatch metrics, select the AWS WAF rules that you created in steps 8 and 14.
For Request sampling options, choose Enable sampled requests, and then choose Next.
Review and create the web ACL rule.
After the web ACL is created, you should see the web ACL configuration, as shown in the following figures. Figure 1 shows the geo match rule configuration.
Figure 1: Web ACL rule configuration
Figure 2 shows the Queensland regional geo restriction.
Figure 2: Queensland regional geo restriction – web ACL configuration
The setup is now complete—you have a web ACL with two regular rules. The first rule matches requests that originate from Australia and adds geographic labels automatically. The label match rule statement inspects requests with Queensland granular geo labels and blocks them. To understand where requests are originating from, you can configure logging on the AWS WAF web ACL.
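Expressed as AWS WAF rule JSON, the two rules described above look roughly like the following sketch. The rule names and priorities are illustrative, and the required VisibilityConfig blocks are omitted for brevity:

```json
[
  {
    "Name": "geo-match-australia",
    "Priority": 0,
    "Statement": {
      "GeoMatchStatement": { "CountryCodes": ["AU"] }
    },
    "Action": { "Count": {} }
  },
  {
    "Name": "block-queensland",
    "Priority": 1,
    "Statement": {
      "LabelMatchStatement": {
        "Scope": "LABEL",
        "Key": "awswaf:clientip:geo:region:AU-QLD"
      }
    },
    "Action": { "Block": {} }
  }
]
```

Because the geo match rule uses a Count action, it only adds labels; the label match rule at the next priority then performs the actual block.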
You can test this setup by making requests from Queensland, Australia, to the DNS name of the CloudFront distribution to invoke a block. CloudFront will return a 403 error, similar to the following example.
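A blocked request typically produces a response along these lines (the exact error body served by CloudFront may differ):

```
HTTP/2 403
server: CloudFront

ERROR
The request could not be satisfied.
Request blocked.
```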
As shown in these test results, requests originating from Queensland, Australia, are blocked.
Use case 2: Allow incoming traffic from specific regions with AWS WAF and Application Load Balancer
We recently had a customer ask how to allow traffic from only one region and deny traffic from the other regions within a country. You might have similar requirements, and the following section explains how to achieve this. In this example, we will show you how to allow only visitors from Washington state while blocking traffic from the rest of the US.
This example uses an AWS WAF web ACL applied to an application load balancer in the US East (N. Virginia) Region with an Amazon EC2 instance as the target. The web ACL contains a geo match rule to tag requests from the US with labels. After we enable forwarded IP configuration, we will inspect the X-Forwarded-For header to determine the origin IP of web requests. Next, we will add a label match rule to allow requests from the Washington region. All other requests from the United States are blocked.
To configure the AWS WAF web ACL rule for granular geo restriction
The setup is now complete—you have a web ACL with two regular rules. The first rule matches requests that originate from the US after inspecting the origin IP in the X-Forwarded-For header, and adds geographic labels. The label match rule statement inspects requests with the Washington region granular geo labels and allows these requests.
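In rule JSON, this setup can be sketched as follows. The names and priorities are illustrative, VisibilityConfig blocks are omitted, and the third rule makes the “block all other US requests” behavior explicit:

```json
[
  {
    "Name": "geo-match-us-forwarded-ip",
    "Priority": 0,
    "Statement": {
      "GeoMatchStatement": {
        "CountryCodes": ["US"],
        "ForwardedIPConfig": {
          "HeaderName": "X-Forwarded-For",
          "FallbackBehavior": "MATCH"
        }
      }
    },
    "Action": { "Count": {} }
  },
  {
    "Name": "allow-washington",
    "Priority": 1,
    "Statement": {
      "LabelMatchStatement": {
        "Scope": "LABEL",
        "Key": "awswaf:forwardedip:geo:region:US-WA"
      }
    },
    "Action": { "Allow": {} }
  },
  {
    "Name": "block-other-us",
    "Priority": 2,
    "Statement": {
      "LabelMatchStatement": {
        "Scope": "LABEL",
        "Key": "awswaf:forwardedip:geo:country:US"
      }
    },
    "Action": { "Block": {} }
  }
]
```

Because the Washington allow rule has a higher priority (lower number) than the country-level block rule, requests labeled US-WA are allowed before the broader US block is evaluated.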
If a user makes a web request from outside of the Washington region, the request will be blocked and an HTTP 403 error response will be returned, similar to the following.
AWS WAF now supports the ability to restrict traffic based on granular geographic labels. This gives you further control based on geographic location within a country.
In this post, we demonstrated two different use cases that show how this feature can be applied with CloudFront distributions and application load balancers. Note that, apart from CloudFront and application load balancers, this feature is supported by other origin types that are supported by AWS WAF, such as Amazon API Gateway and Amazon Cognito.
For complete visibility into the vulnerabilities in your environment, proper authentication to web apps in InsightAppSec is essential. In this article, we’ll look at issues you might encounter with macro, traffic, and selenium authentication and how to troubleshoot them. Additionally, you’ll get practical and actionable tips on using InsightAppSec to its full potential.
The first step to troubleshooting InsightAppSec authentication is to look over the scan logs, which can be located under the scan in the upper left-hand corner. The logs give you useful information, such as whether authentication failed, whether the website was unavailable, or whether any other problems arose during the scan.
Event log will give you information about the scan itself.
Platform event log will give you information about the scan engine and if it encountered any issues during the scan.
Download additional logs: If you want to dive even deeper into what happened during the scan, you can download the full scan logs.
Let’s look at some of the specific issues you might encounter with the different types of authentication noted above.
When a macro fails, the logs will give you the specific step where the macro had trouble. For example, in the image below, we can see that the macro failed on step 4, where it could not click on the specific object on the page. This could be caused by the page not loading quickly enough, the name or ID of the element path changing, or the web app UI being different.
If you determine that the macro failed because the page isn’t loading fast enough, there are two ways you can slow the macro down.
The first way is to manually add a delay between the steps that are running too quickly. You can copy any of the delays that are currently in the macro, paste them into the spot that you want to slow down, and then change the step numbers. This way you can also set the specific duration for any delays you add into your macro.
The second way is to add additional delays throughout the macro, and change the Min Duration so the delays last longer. This is controlled via the export settings menu on the right. The default minimum duration is set to 3,000 milliseconds (3 seconds). Increasing the duration or adding delays will cause the macro to take longer to authenticate, but when running a scan overnight an extra few minutes to ensure the login works is a good tradeoff.
One other potential problem when recording a macro is having a password manager autofill the username and password. Anything that is automatically filled in will not be recorded by the macro. It is recommended either to turn off any password managers when recording a macro, or to record in Incognito/private browsing mode with all other plugins disabled, to ensure nothing can modify or interfere with the recording.
Lastly, if your web app has any events that do not happen every time, such as a prompt to join a mailing list, you can mark those macro events as optional. If an event is not marked optional, the macro will fail when it cannot find the element on the page. Simply change the optional flag in the macro recording from 0 to 1 and you’re all set.
While traffic authentication is usually successful when the login is working, there could still be some problems with playback. When traffic authentication fails, the scan logs don’t give you specific information like with macro authentication. Instead, the traffic authentication fails with the LoggedInRegex did not detect logged in state error. If you can’t get the traffic authentication working in the Rapid7 Appsec Plugin, you can always record the authentication within your browser.
Click on the hamburger menu in the upper right.
Go to More Tools → Developer Tools
Click on Network in the top tab
Make sure the dot in the upper left is red to signify you are recording.
Log in to your web app and when complete, right click on the recorded traffic and click Save all as HAR with content.
This will download the same .HAR file that the Appsec Plugin records, allowing you to use it for scanning.
Depending on how your web app responds, you might need to change the User agent setting for how InsightAppSec interacts with your app.
Under your scan configuration, if you go to Advanced Options → HTTP Headers → User Agent, you can change what user agent is used to reach out to your web app. The latest version of Chrome should be fine for most modern web apps, but if you’re scanning a mobile app or an app that hasn’t been updated in a few years, it might benefit from being changed.
The third primary type of authentication is Selenium. Like macro authentication, Selenium records all the actions used to log in to your web app. Like traffic authentication, it usually reports the LoggedInRegex did not detect logged in state error in the scan logs rather than specific information about the failure.
If the Selenium script could not find specific elements on the web page, you could also receive the Could not execute Selenium script error. This means there’s a problem with the script itself, the page didn’t load fast enough, or it couldn’t find the specific element on the web page. If this happens, try re-recording the script or adding a delay.
Using the plugin to record selenium scripts:
Click on the selenium plugin and Record a new test in a new project.
Give the project a name and enter in the base URL where you want recording to start.
In the new window that appears, log in to your web app. Once complete, close out of the window.
Before saving, you can click on the play icon to replay and test your selenium script.
Review the recording and then click on the save button in the upper right. You can then upload the .side file into InsightAppSec.
Just like macro authentication, if your website takes a while to load and the Selenium script is running too fast, you can add additional delays to slow it down. There are implicit waits built into the IDE commands, but if those don’t work for you, you can add wait for element commands to your Selenium script after recording the authentication.
Right click on the selenium recording and click insert new command
Set the command to wait for element visible
Set the target to the element you want to wait for. In this case, we’re waiting for id=email
By default the value is set to wait for 30,000 milliseconds (30 seconds)
Alternatively, you can use the pause command and set the value to how long you want the script to pause for. However, it is recommended to use the wait for element visible command if the web app responds at different times.
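In a .side file, the wait command described above appears as a JSON entry similar to the following sketch. The id value is generated by the IDE, and id=email is just the example target used in this article:

```json
{
  "id": "a1b2c3d4-0000-0000-0000-000000000000",
  "command": "waitForElementVisible",
  "target": "id=email",
  "value": "30000"
}
```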
After ensuring the macro, traffic, and selenium files are working correctly, the next step in the authentication process is the logged-in regex. After the login is complete, InsightAppSec will look at the web page to find a logout button or look at the browser header for a session cookie. This can be modified by clicking into the scan configuration, navigating to the Authentication tab, and clicking on Additional Settings on the left.
By default, the logged-in regex looks for sign out, sign off, log out and log off, with and without spaces between the words, on the web page.
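As a sketch, that default behavior corresponds to a regular expression along these lines. This is an illustration, not InsightAppSec’s exact internal pattern:

```python
import re

# Matches "sign out", "signout", "log off", "logoff", etc., case-insensitively,
# with or without a space between the two words.
LOGGED_IN_REGEX = re.compile(r"(sign|log)\s?(out|off)", re.IGNORECASE)

def looks_logged_in(page_html: str) -> bool:
    """Return True if the page appears to contain a logout link or button."""
    return LOGGED_IN_REGEX.search(page_html) is not None
```

If the search succeeds, the login is treated as successful; if not, authentication is reported as failed.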
One common problem is logged-in regex not seeing the logout button on the page before ending the authentication recording. If the logout button is on another page, or sometimes under a dropdown menu, the logged-in regex won’t detect it on the page, causing the authentication to fail.
Another common issue is when the logout button is an image or otherwise can’t be detected on the page. Because the field accepts a regular expression, you can use other words on the page to determine that the login was successful. Ensure the word appears on the page only after logging in (the username, for example); otherwise the regex might match even when the login was not actually successful.
Logged-in Header Regex
Click on the three dots in the upper right corner
Then go to More Tools → Developer Tools.
Click on the application tab at the top, then cookies on the left, and finally the web app cookie.
From there you want to find the session information cookie that only appears after logging in to the web app. Grab the name of the cookie and place that in the logged-in header regex.
The logged-in regex and logged-in header regex use AND logic, so if you put information in both fields, both will need to succeed in order for the login to work. Alternatively, if you remove the regex from both fields, no post-authentication checks will run and the login will be assumed successful. This is recommended only as a last resort, since you won’t be alerted if the login starts failing or if there are any other problems.
Other common issues and tricks
One issue you might encounter is where you start the authentication recording, for example after a page redirect. If your web app redirects to another page or SSO, and you start the authentication recording after the redirect, InsightAppSec won’t have the session information to properly redirect back to the target web app when the recording is replayed during the scan. It is recommended to always start your recording on the root web app directory wherever possible.
You can also choose specific directories for scanning versus the entire web app. You want to remove the URL from the app Target URLs, and add it in specifically under the scan config. You can then set the target directory in the crawl and attack configs as literal, and then add a /* wildcard to hit any subdirectories.
Lastly, there is a way to restrict certain elements on a web page from being scanned. Under advanced options → CrawlConfig and AttackerConfig, there’s an option called ScopeConstraintList. This is where you can explicitly include or exclude specific pages from being scanned. You can take it a step further by adding a httpParameterList to explicitly exclude certain elements on the page from being scanned. For example, if you have a contact us page and you don’t want the scanner to hit the submit button, you can add it to the httpParameterList so it won’t be touched.
Below is an example of what the fields look like in the web page source code, and how it can be configured in IAS.
Email field source code: input type="email" name="contact_email"
An application proxying traffic through Cloudflare benefits from a wide range of easy-to-use security features, including WAF, Bot Management, and DDoS mitigation. To understand whether traffic has been blocked by Cloudflare, we built a powerful Security Events dashboard that allows you to examine any mitigation events. Application owners often wonder, though, what happened to the rest of their traffic: was all traffic that was detected as malicious actually blocked?
Today, along with our announcement of the WAF Attack Score, we are also launching our new Security Analytics.
Security Analytics gives you a security lens across all of your HTTP traffic, not only mitigated requests, allowing you to focus on what matters most: traffic deemed malicious but potentially not mitigated.
Detect then mitigate
Imagine you just onboarded your application to Cloudflare and without any additional effort, each HTTP request is analyzed by the Cloudflare network. Analytics are therefore enriched with attack analysis, bot analysis and any other security signal provided by Cloudflare.
Right away, without any risk of causing false positives, you can view the entirety of your traffic to explore what is happening, when and where.
This allows you to dive straight into analyzing the results of these signals, shortening the time taken to deploy active blocking mitigations and boosting your confidence in making decisions.
We are calling this approach “detect then mitigate” and we have already received very positive feedback from early access customers.
In fact, Cloudflare’s Bot Management has been using this model for the past two years. We constantly hear feedback from our customers that with greater visibility, they have a high confidence in our bot scoring solution. To further support this new way of securing your web applications and bringing together all our intelligent signals, we have designed and developed the new Security Analytics which starts bringing signals from the WAF and other security products to follow this model.
New Security Analytics
Built on top of the success of our analytics experiences, the new Security Analytics employs existing components such as top statistics and in-context quick filters, with a new page layout allowing for rapid exploration and validation. The following sections break down this new page layout, which forms a high-level workflow.
The key difference between Security Analytics and Security Events is that the former is based on HTTP requests, giving visibility into your entire site’s traffic, while Security Events uses a different dataset containing only requests that matched an active security rule.
Define a focus
The new Security Analytics visualizes the dataset of sampled HTTP requests across your entire application, the same as bot analytics. When validating the “detect then mitigate” model with selected customers, a common behavior we observed was using the top N statistics to quickly narrow down to either obvious anomalies or certain parts of the application. Based on this insight, the page starts with selected top N statistics covering both request sources and request destinations, and you can expand to view all the statistics available. Questions like “How well is my application admin’s area protected?” can be answered with one or two quick filter clicks in this area.
Spot anomalies in trends
After a preliminary focus is defined, the core of the interface is dedicated to plotting trends over time. The time series chart has proven to be a powerful tool to help spot traffic anomalies, also allowing plotting based on different criteria. Whenever there is a spike, it is likely an attack or attack attempt has happened.
As mentioned above, unlike Security Events, the dataset used on this page is HTTP requests, which includes both mitigated and not mitigated requests. By mitigated requests here, we mean “any HTTP request that had a ‘terminating’ action applied by the Cloudflare platform”. The remaining requests that have not been mitigated are either served from Cloudflare’s cache or reach the origin. If you see a spike in not mitigated requests but a flat line in mitigated requests, a reasonable assumption is that an attack occurred that did not match any active WAF rule. In this example, you can filter on not mitigated requests with one click right in the chart, which updates all the data visualized on this page to support further investigation.
In addition to the default plotting of not mitigated and mitigated requests, you can also choose to plot trends of either attack analysis or bot analysis allowing you to spot anomalies for attack or bot behaviors.
Zoom in with analysis signals
One of the most loved and trusted analysis signals by our customers is the bot score. With the latest addition of WAF Attack Score and content scanning, we are bringing them together into one analytics page, helping you further zoom into your traffic based on some of these signals. The combination of these signals enables you to find answers to scenarios not possible until now:
Attack requests made by (definite) automated sources
Likely attack requests made by humans
Content uploaded with/without malicious content made by bots
Once a scenario is filtered on, the data visualization of the entire page including the top N statistics, HTTP requests trend and sampled log will be updated, allowing you to spot any anomalies among either one of the top N statistics or the time based HTTP requests trend.
Review sampled logs
After zooming into a specific part of your traffic that may be an anomaly, sampled logs provide a detailed view to verify your finding per HTTP request. This is a crucial step in a security study workflow, as confirmed by the high engagement rate with such logs in Security Events. As we add more data to each log entry, however, the expanded log view becomes less readable. We have therefore redesigned the expanded view, starting with how Cloudflare responded to a request, followed by our analysis signals, and lastly the key components of the raw request itself. By reviewing these details, you can validate your hypothesis of an anomaly and determine whether any mitigation action is required.
Handy insights to get started
When testing the prototype of this analytics dashboard internally, we learned that this flexibility comes with a steeper learning curve. To help you get started, we designed a handy insights panel. These insights are crafted to highlight specific perspectives on your total traffic. With a simple click on any one of the insights, a preset of filters is applied, zooming directly onto the portion of your traffic that you are interested in. From here, you can review the sampled logs or further fine-tune any of the applied filters. Further internal studies showed this to be a highly efficient workflow, and in many cases it will be your starting point when using this dashboard.
How can I get it?
The new Security Analytics is being gradually rolled out to all Enterprise customers who have purchased the new Application Security Core or Advanced Bundles. We plan to roll this out to all other customers in the near future. This new view will be alongside the existing Security Events dashboard.
We are still at an early stage moving towards the “detect then mitigate” model, empowering you with greater visibility and intelligence to better protect your web applications. While we are working on enabling more detection capabilities, please share your thoughts and feedback with us to help us improve the experience. If you want to get access sooner, reach out to your account team to get started!
Rapid7 was honored at the Belfast Telegraph’s annual IT Awards on Friday, taking home two awards: the coveted “Best Place to Work in IT” award in the large company category, and the “Cyber Security Project of the Year” award for groundbreaking machine learning research in application security. That research was conducted in collaboration with The Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast.
The team also took home a Highly Commended recognition for Best Use of Cloud services at the event.
The ability to work on meaningful projects that positively impact customers, being supported by a range of professional development opportunities, a culture rooted in connection and collaboration, and the invitation to explore new ways of thinking all came together in the submission to help earn them the “Best Place to Work” title.
Belfast has been regarded as one of the UK’s fastest-growing technology hubs. There are now more than 300,000 people working in the technology sector of Northern Ireland, according to the Telegraph. For Rapid7, an impressive business environment and the work being done in IT security at the local universities were significant factors in the decision to join the Belfast community in 2014. This move was in line with Rapid7’s goal of creating exceptional career experiences for their people and expanding operations to address a growing global customer base. In 2021, Rapid7 relocated to their newest office space and announced the addition of more than 200 new roles to the region.
Rapid7’s win for Cybersecurity Project of the Year focused on the cutting-edge area of machine learning in application security. Their research sought to reduce the high level of false positives generated by vulnerability scanners — a pain point that has become all too common in today’s digital environment. Rapid7’s multi-disciplinary Machine Learning (ML) team in Belfast was able to create a way to automatically prioritize real vulnerabilities and reduce false positive friction for customers. Their work has been peer-reviewed by industry experts, published in academic journals, and accepted for presentation at AISEC’s November 2022 event — where it was recognized with their “Best Paper Award.” AISEC is the leading venue for ML cybersecurity innovations.
Rounding out the evening was a Highly Commended recognition from the Telegraph for “Best Use of Cloud Services.” The scale and speed of cloud adoption over the last number of years has caused an exponential growth in complex security challenges. Rapid7 showcased how their team in Belfast partnered with global colleagues to create an innovative and multi-faceted solution to manage Cloud Identity Risk across three major Cloud Service Providers (CSPs) — AWS, Azure and GCP. Their work has created a positive impact on Rapid7 customers by enabling secure cloud adoption faster than ever before.
Rapid7 is a company that is firmly rooted in their company values. Employees are encouraged to challenge conventional ways of thinking, work together to create impact, be advocates for customers, bring their authentic selves and experiences to the table, and embrace the spirit of continuous learning and growth. The work represented in these awards is a testament to the incredible opportunities and experiences that are possible when these values are clearly modeled, celebrated and practiced in pursuit of a shared mission — creating a safer digital future for all.
GraphQL is an open-source data query and manipulation language that can be used to build application program interfaces (APIs). Since its initial inception by Facebook in 2012 and subsequent release in 2015, GraphQL has grown steadily in popularity. Some estimate that by 2025, more than 50% of enterprises will use GraphQL in production, up from less than 10% in 2021.
Unlike REST APIs, which return all the information from a called endpoint and require the user to extract the applicable pieces, GraphQL allows the user to query specific data from a GraphQL schema and return precise results.
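For example, where a REST call to an endpoint such as /users/123 would typically return the entire user object, a GraphQL query names exactly the fields it needs. The schema and field names below are illustrative:

```graphql
query {
  user(id: "123") {
    name
    email
  }
}
```

The response contains only the requested name and email fields, nothing more.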
Although GraphQL is relatively new and allows you to query exactly what you require, it is still prone to the same common vulnerabilities as other APIs. There are weaknesses that attackers can exploit to gain access to sensitive data, making securing GraphQL extremely important. The ability to scan a GraphQL schema will help to remediate those weaknesses and provide additional API security coverage.
Why GraphQL security is important
While there are numerous benefits to adopting GraphQL, the security implications are less well understood. Can functionality be abused? What problems come with querying flexibility? Which vulnerabilities can be exploited? These are all points of concern for its user base.
GraphQL is also no different from other APIs in terms of potential attack vectors. Indeed, it has its own unique security vulnerabilities on top of those you would encounter through a REST API.
As we discussed in our recent post on API security best practices, APIs are a lucrative target that can allow hackers to gain access to an otherwise secure system and exploit vulnerabilities. Not only do APIs often suffer from the same vulnerabilities as web applications — like broken access controls, injections, security misconfigurations, and vulnerabilities inherited from other dependent code libraries — but they are also more susceptible to resource consumption and rate limiting issues due to the automated nature of their users.
Best practices for securing GraphQL
The first step in securing your GraphQL endpoint is to familiarize yourself with some of the most common vulnerabilities and best practices to protect against potential exposure. The most common are injection vulnerabilities – such as SQL injection, OS command injection, and server-side request forgery – where the data provided in the arguments of a GraphQL request is injected into commands, queries, and other executable entities by the application code. Other common vulnerabilities include a lack of resource management that can enable a Denial of Service (DoS) attack, due to general graph/query complexity and the potential for large batch requests. Finally, broken access control vulnerabilities exist in GraphQL APIs in much the same way as in other applications and services, but they can be exacerbated by the segmented nature of GraphQL query resolvers.
There are several best practice recommendations which can be utilized to counter such attacks.
Only allow valid values to be passed – Values should be controlled via allow lists, custom validators and correct definitions.
Depth limiting – Restricting the depth of a query only to predetermined levels will allow control over the expense of a query and avoid tying up your back end unnecessarily.
Amount limiting – Restricting the amount of a particular object in a query will reduce the expense of the query by not allowing more than x objects to be called.
Query cost analysis – Checking how expensive a query may be before you allow it to run is a useful additional step to block expensive or malicious queries.
Control input rejections – Ensure you don’t overly expose information about the API during input rejections.
Introspection turned off – Introspection is enabled by default in GraphQL, but simply disabling it will restrict what information the consumer can access and not allow them to learn everything about your API.
OWASP has also produced a really neat cheat sheet series, which provides an introduction to GraphQL, as well as a detailed rundown of best practices and common GraphQL attacks, to help teams with upskilling and securing GraphQL.
How to secure GraphQL
The second step in securing your GraphQL endpoint is right here with Rapid7! While almost every modern DAST solution can properly parse and understand requests to and responses from web applications and, in most cases, APIs, that doesn’t mean all those tools will specifically understand GraphQL. That’s why InsightAppSec has specifically added support for parsing GraphQL requests, responses, and schemas, so that it can properly scan GraphQL-based APIs. This new feature provides customers with the ability to scan GraphQL endpoints to identify and then remediate any vulnerabilities encountered.
Initial support will be provided to identify the following vulnerabilities:
Blind SQL injection
Server-side request forgery
Local file inclusion/remote file inclusion
To find out how to execute a GraphQL scan, check out our doc on the feature in InsightAppSec for additional information, support, and guidance.
On November 11th 2022, Rapid7 will for the first time publish and present state-of-the-art machine learning (ML) research at AISec, the leading venue for AI/ML cybersecurity innovations. Led by Dr. Stuart Millar, Senior Data Scientist, Rapid7’s multi-disciplinary ML group has designed a novel deep learning model to automatically prioritize application security vulnerabilities and reduce false positive friction. Partnering with The Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast, this is the first deep learning system to optimize DAST vulnerability triage in application security. CSIT is the UK’s Innovation and Knowledge Centre for cybersecurity, recognised by GCHQ and EPSRC as a Centre of Excellence for cybersecurity research.
Security teams struggle tremendously with prioritizing risk and managing a high level of false positive alerts, while the rise of the cloud post-Covid means web application security is more crucial than ever. Web attacks continue to be the most common type of compromise; however, high levels of false positives generated by vulnerability scanners have become an industry-wide challenge. To combat this, Rapid7’s innovative ML architecture optimizes vulnerability triage by utilizing the structure of traffic exchanges between a DAST scanner and a given web application. Leveraging convolutional neural networks and natural language processing, we designed a deep learning system that encapsulates internal representations of request and response HTTP traffic before fusing them together to make a prediction of a verified vulnerability or a false positive. This system learns from historical triage carried out by our industry-leading SMEs in Rapid7’s Managed Services division.
Given the skillset, time, and cognitive effort required to review high volumes of DAST results by hand, the addition of this deep learning capability to a scanner creates a hybrid system that enables application security analysts to rank scan results, deprioritize false positives, and concentrate on likely real vulnerabilities. With the system able to make hundreds of predictions per second, productivity is improved and remediation time reduced, resulting in stronger customer security postures. A rigorous evaluation of this machine learning architecture across multiple customers shows that 96% of false positives on average can automatically be detected and filtered out.
Rapid7’s deep learning model uses convolutional neural networks and natural language processing to represent the structure of client-server web traffic. Neither the model nor the scanner require source code access — with this hybrid approach first finding potential vulnerabilities using a scan engine, followed by the model predicting those findings as real vulnerabilities or false positives. The resultant solution enables the augmentation of triage decisions by deprioritizing false positives. These time savings are essential to reduce exposure and harden security postures — considering the average time to detect a web breach can be several months, the sooner a vulnerability can be discovered, verified and remediated, the smaller the window of opportunity for an attacker.
Now recognized as state-of-the-art research after expert peer review, Rapid7 will introduce the work at AISec on Nov 11th 2022 at the Omni Los Angeles Hotel at California Plaza. Watch this space for further developments, and download a copy of the pre-print publication here.
AWS Network Firewall makes it easier for you to secure virtual networks at scale inside Amazon Web Services (AWS). Without having to worry about availability, scalability, or network performance, you can now deploy Network Firewall with the AWS Firewall Manager service. Firewall Manager allows administrators in your organization to apply network firewalls across accounts. This post will take you through different deployment models and demonstrate with step-by-step instructions how this can be achieved.
Here’s a quick overview of the services used in this blog post:
Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network. It has inbuilt network security controls and routing between VPC subnets by design. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
AWS Transit Gateway is a service that connects your VPCs to each other, to on-premises networks, to virtual private networks (VPNs), and to the internet through a central hub.
AWS Network Firewall is a service that secures network traffic at the organization and account levels. AWS Network Firewall policies govern the monitoring and protection behavior of these firewalls. The specifics of these policies are defined in rule groups. A rule group consists of rules that define reusable criteria for inspecting and processing network traffic. Network Firewall can support thousands of rules that can be based on a domain, port, protocol, IP address, or pattern matching.
When it comes to securing multiple AWS accounts, security teams categorize firewall deployment into centralized or distributed deployment models. Firewall Manager supports Network Firewall deployment in both modes. There are multiple additional deployment models available with Network Firewall. For more information about these models, see the blog post Deployment models for AWS Network Firewall.
Centralized deployment model
Network Firewall can be centrally deployed as an Amazon VPC attachment to a transit gateway that you set up with AWS Transit Gateway. Transit Gateway acts as a network hub and simplifies the connectivity between VPCs as well as on-premises networks. Transit Gateway also provides inter-Region peering capabilities to other transit gateways to establish a global network by using the AWS backbone. In a centralized transit gateway model, Firewall Manager can create one or more firewall endpoints for each Availability Zone within an inspection VPC. Network Firewall deployed in a centralized model covers the following use cases:
Filtering and inspecting traffic within a VPC or in transit between VPCs, also known as east-west traffic.
Filtering and inspecting ingress and egress traffic to and from the internet or on-premises networks, also known as north-south traffic.
Distributed deployment model
With the distributed deployment model, Firewall Manager creates endpoints into each VPC that requires protection. Each VPC is protected individually and VPC traffic isolation is retained. You can either customize the endpoint location by specifying which Availability Zones to create firewall endpoints in, or Firewall Manager can automatically create endpoints in those Availability Zones that have public subnets. Each VPC does not require connectivity to any other VPC or transit gateway. Network Firewall configured in a distributed model addresses the following use cases:
Protect traffic between a workload in a public subnet (for example, an EC2 instance) and the internet. Note that the only workloads recommended to have a network interface in a public subnet are those such as third-party firewalls and load balancers.
Protect and filter traffic between an AWS resource (for example Application Load Balancers or Network Load Balancers) in a public subnet and the internet.
Deploying Network Firewall in a centralized model with Firewall Manager
The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a centralized model, as shown in Figure 1.
Build and deploy Firewall Manager policies for Network Firewall, based on the rule groups you defined previously. Firewall Manager will now create firewalls across these accounts.
Finish deployment by updating the related VPC route tables in the member account, so that traffic gets routed through the firewall for inspection.
Figure 1: Network Firewall centralized deployment model
The following steps provide a detailed description of how to configure Network Firewall with Firewall Manager in a centralized model.
To deploy a Network Firewall policy centrally with Firewall Manager (console)
Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
In the navigation pane, under AWS Firewall Manager, choose Security policies.
On the Filter menu, select the AWS Region where your application is hosted, and choose Create policy. In this example, we choose US East (N. Virginia).
As shown in Figure 2, under Policy details, choose the following:
For AWS services, choose AWS Network Firewall.
For Deployment model, choose Centralized.
Figure 2: Network Firewall Manager policy type and Region for centralized deployment
Enter a policy name.
In the AWS Network Firewall policy configuration pane, you can choose to configure both stateless and stateful rule groups along with their logging configurations. In this example, we are not creating any rule groups and keep the default configurations, as shown in Figure 3. If you would like to add a rule group, you can create rule groups here and add them to the policy.
For Inspection VPC configuration, select the account and add the VPC ID of the inspection VPC in each of the member accounts that you previously created, as shown in Figure 4. In the centralized model, you can only select one VPC under a specific account as the inspection VPC.
Figure 4: Inspection VPC configuration
For Availability Zones, select the Availability Zones in which you want to create the Network Firewall endpoint(s), as shown in Figure 5. You can select by Availability Zone name or Availability Zone ID. Optionally, if you want to specify the CIDR for each Availability Zone, or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
Figure 5: Network Firewall endpoint Availability Zones configuration
For Policy scope, choose VPC, as shown in Figure 6.
For Resource cleanup, choose Automatically remove protections from resources that leave the policy scope. When you select this option, Firewall Manager will automatically remove Firewall Manager managed protections from your resources when a member account or a resource leaves the policy scope. Choose Next.
For Policy tags, you don’t need to add any tags. Choose Next.
Review the security policy, and then choose Create policy.
To route traffic for inspection, you manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall.
Note: In current versions of Firewall Manager, a centralized policy supports only one inspection VPC per account. If you want multiple inspection VPCs in an account, you cannot deploy them all through a Firewall Manager centralized policy; you must deploy the network firewalls in each additional inspection VPC manually.
Deploying Network Firewall in a distributed model with Firewall Manager
The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a distributed model, as shown in Figure 7.
Build and deploy Firewall Manager policy for network firewalls into tagged VPCs based on the rule groups that you defined in the previous step.
Finish deployment by updating the related VPC route tables in the member accounts to begin routing traffic through the firewall for inspection.
Figure 7: Network Firewall distributed deployment model
The following steps provide a detailed description of how to configure Network Firewall with Firewall Manager in a distributed model.
To deploy a Network Firewall policy in the distributed model with Firewall Manager (console)
Create new VPCs in member accounts and tag them. In this example, you launch VPCs in the US East (N. Virginia) Region. Create a new VPC in a member account by using the VPC wizard, as follows.
Choose VPC with a Single Public Subnet. For this example, select a subnet in the us-east-1a Availability Zone.
Add a desired tag to this VPC. For this example, use the key Network Firewall and the value yes. Make note of this tag key and value, because you will need this tag to configure the policy in the Policy scope step.
Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
In the navigation pane, under AWS Firewall Manager, choose Security policies.
On the Filter menu, select the AWS Region where you created VPCs previously and choose Create policy. In this example, you choose US East (N. Virginia).
For AWS services, choose AWS Network Firewall.
For Deployment model, choose Distributed, and then choose Next.
Figure 8: Network Firewall Manager policy type and Region for distributed deployment
Enter a policy name.
On the AWS Network Firewall policy configuration page, you can configure both stateless and stateful rule groups, along with their logging configurations. In this example you are not creating any rule groups, so you choose the default configurations, as shown in Figure 9. If you would like to add a rule group, you can create rule groups here and add them to the policy.
Figure 9: Network Firewall policy configuration
In the Configure AWS Network Firewall Endpoint section, as shown in Figure 10, you can choose Custom endpoint configuration or Automatic endpoint configuration. In this example, you choose Custom endpoint configuration and select the us-east-1a Availability Zone. Optionally, if you want to specify the CIDR for each Availability Zone or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
Figure 10: Network Firewall endpoint Availability Zones configuration
For AWS Network Firewall route configuration, choose the following options, as shown in Figure 11. This will monitor the route configuration using the administrator account, to help ensure that traffic is routed as expected through the network firewalls.
For Route management, choose Monitor.
Under Traffic type, for Internet gateway, choose Add to firewall policy.
Select the checkbox for Allow required cross-AZ traffic, and then choose Next.
Important: Be careful when defining the policy scope. Each policy creates Network Firewall endpoints in all the VPCs and their Availability Zones that are within the policy scope. If you select an inappropriate scope, it could result in the creation of a large number of network firewalls and incur significant charges for AWS Network Firewall.
For Resource cleanup, select the Automatically remove protections from resources that leave the policy scope check box, and then choose Next.
For Policy tags, you don’t need to add any tags. Choose Next.
Review the security policy, and then choose Create policy.
To route traffic for inspection, you need to manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall.
To avoid incurring future charges, delete the resources you created for this solution.
To delete Firewall Manager policy (console)
Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
In the navigation pane, choose Security policies.
Choose the option next to the policy that you want to delete.
Choose Delete all policy resources, and then choose Delete. If you do not select Delete all policy resources, then only the firewall policy on the administrator account will be deleted, not network firewalls deployed in the other accounts in AWS Organizations.
In this blog post, you learned how you can use either a centralized or a distributed deployment model for Network Firewall, so developers in your organization can build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across your entire infrastructure. As new applications are created, Firewall Manager makes it easier to bring new applications and resources into a consistent state by enforcing a common set of security rules.
“Yes, I know what applications we have publicly exposed.”
How many times have you said that with confidence? I bet not too many. With the rapid pace of development that engineering teams can work at, it is becoming increasingly difficult to know what apps you have exposed to the internet, adding potential security risks to your organization.
Using the data supplied by Project Sonar — which was started almost a decade ago and conducts internet-wide surveys across more than 70 different services and protocols — you can enter a domain within InsightAppSec and run a discovery search. You will get back a list of results that are linked to that initial domain, along with some useful metadata.
We have had this feature open as a beta for various customers and received real-world examples of how they used it. Here are two key use cases for this functionality.
After running a discovery scan, one customer noticed that a “business-critical web application was found on an open port that it shouldn’t have been on.” After getting this data, they were able to work with that application team and get it locked down.
Various customers noted that running a discovery scan helped them to get a better sense of their public-facing app inventory. From this, they were able to carry out various tasks, including “checking the list against their own list for accountability purposes” and “having relevant teams review the list before attacking.” They did this by exporting the discovery results to a CSV file and reviewing them outside of InsightAppSec.
How exactly does it work?
Running a discovery search shouldn’t be difficult, so we’ve made the process as easy as possible. Start by entering a domain that you own, and hit “Discover.” This will bring back a list of domains, along with their IP, Port, and Last Seen date (based on the last time a Sonar scan found it).
From here, you could add a domain to your allow list and then run a scan against it, using the scan config setup process.
If you see some domains that you are not sure about, you might decide that you need to know more about the domains before you run a scan. You can do this by exporting the data as a CSV and then running your own internal process on these before taking any next steps.
How do I access application discovery?
Running a discovery scan is currently available to all InsightAppSec Admins, but Admins can grant other users or sets of users access to the feature using the InsightPlatform role-based access control feature.
Rapid7’s tCell is a powerful tool that allows you to monitor risk and protect web applications and APIs in real time. Great! It’s a fundamental part of our push to make web application security as strong and comprehensive as it needs to be in an age when web application attacks account for roughly 70% of cybersecurity incidents.
But with that power comes complexity, and we know that not every customer has the same resources available, in-house or externally, to leverage tCell in all its glory right out of the box. With our newest agent addition, we’re hoping to make that experience a little bit easier.
AWS AMI Agent for tCell
We’ve introduced the AWS AMI Agent for tCell, which makes it easier to deploy tCell into your software development life cycle (SDLC) without the need to manually configure tCell. If you aren’t as familiar with deploying web apps and need help getting tCell up and running, you can now deploy tCell with ease and get runtime protection on your apps within minutes.
If you use Amazon Web Services (AWS), you can now quickly launch a tCell agent with NGINX as a reverse proxy. This is placed in front of your existing web app without having to make development or code changes. To make things even easier, the new AWS AMI Agent even comes pre-equipped with a helper utility (with the NGINX agent pre-installed) that allows you to configure your tCell agent in a single command.
Shift left seamlessly
So why is this such an important new deployment method for tCell customers? Simply put, it’s a way to better utilize and understand tCell before making a case to your team of developers. To get the most out of tCell, it’s best to get buy-in from your developers, as deployment efforts traditionally can require bringing the dev team into the fold in a significant way.
With the AWS AMI Agent, your security team can utilize tCell right away, with limited technical knowledge, and use those learnings (and security improvements) to make the case that a full deployment of the tCell agent is in your dev team’s best interest. We’ve seen this barrier with some existing customers and with the overall shift-left approach within the web application community at large.
This new deployment offering is a way for your security team to get comfortable with the benefits (and there are many) of securing your web applications with tCell. They will better understand how to secure AWS-hosted web apps and how the two products work together seamlessly.
If you’d like to give it a spin, we recommend heading over to the docs to find out more.
The AWS AMI Agent is available to all existing tCell customers right now.
Summer is in full swing, and that means soaring temperatures, backyard grill-outs, and the latest roundup of Q2 application security improvements from Rapid7. Yes, we know you’ve been waiting for this moment with more anticipation than Season 4 of Stranger Things. So let’s start running up that hill, not beat around the bush (see what we did there?), and dive right in.
OWASP Top 10 for application security
Way, way back in September of 2021 (it feels like it was yesterday), the Open Web Application Security Project (OWASP) released its top 10 list of critical web application security risks. Naturally, we were all over it, as OWASP is one of the most trusted voices in cybersecurity, and their Top 10 lists are excellent places to start understanding where and how threat actors could be coming for your applications. We released a ton of material to help our customers better understand and implement the recommendations from OWASP.
This quarter, we were able to take those protections another big step forward by providing an OWASP 2021 Attack Template and Report for InsightAppSec. With this new feature, your security team can work closely with development teams to discover and remediate vulnerabilities in ways that align with security best practices. It also helps to focus your AppSec program around the updated categories provided by OWASP (which we highly suggest you do).
The new attack template includes all the relevant attacks from the updated OWASP Top 10 list, which means you can focus on the most important vulnerabilities to remediate rather than being overwhelmed by too many vulnerabilities and missing the right ones. Once the vulns are discovered, InsightAppSec helps your development team to remediate the issues in several different ways, including a new OWASP Top 10 report and the ability to let developers confirm vulnerabilities and fixes with Attack Replay.
Scan engine and attack enhancements
Product support for OWASP 2021 wasn’t the only improvement we made to our industry-leading DAST this quarter. In fact, we’ve been quite busy adding additional attack coverage and making scan engine improvements to increase coverage and accuracy for our customers. Here are just a few.
Spring4Shell attacks and protections with InsightAppSec and tCell
We instituted a pair of improvements to InsightAppSec and tCell meant to identify and block the now-infamous Spring4Shell vulnerability. InsightAppSec now includes a default RCE attack module that tests specifically for the Spring4Shell vulnerability. That feature is available to all InsightAppSec customers right now, and we highly recommend using it to prevent this major vulnerability from impacting your applications.
Additionally, for those customers leveraging tCell to protect their apps, we’ve added new detections and the ability to block Spring4Shell attacks against your web applications. In addition, we’ve added Spring4Shell coverage for our Runtime SCA capability. Check out more on both of these new enhancements here.
New out-of-band attack module
We’ve added a new out-of-band SQL injection module, similar to our Log4Shell module, except that it leverages the DNS protocol, which is typically less restricted and commonly abused by adversaries. It’s included in the “All Attacks” attack template and can be added to any custom attack template.
Improved scanning for session detection
We have made improvements to our scan engine on InsightAppSec to better detect unwanted logouts. When configuring authentication, the step-by-step instructions will guide you through configuring this process for your web applications.
Making it easier for our customers
This wouldn’t be a quarterly feature update if we didn’t mention ways we are making InsightAppSec and tCell even easier and more efficient for our customers. In the last few months, we have moved the “Manage Columns” function into “Vulnerabilities” in InsightAppSec to make it even more customizable. You can now also hide columns, drag and drop them where you would like, and change the order in ways that meet your needs.
We’ve also released an AWS AMI of the tCell nginx agent to make it easier for current customers to deploy tCell. This is perfect for those who are familiar with AWS and want to get up and running with tCell fast. Customers who want a basic understanding of how tCell works, or who want to share tCell’s value with their dev teams, will also find that the new AWS AMI provides that insight fast.
Summer may be a time to take it easy and enjoy the sunshine, but we’re going to be just as hard at work making improvements to InsightAppSec and tCell over the next three months as we were in the last three. With a break for a hot dog and some fireworks in there somewhere. Stay tuned for more from us and have a great summer.
It’s always a good thing to take a step back every once in a while to take the lay of the land. Like you, we are always working at a breakneck pace to help secure the web applications being built today and ready ourselves to secure the innovations of the future. When Forrester put out The State of Application Security, 2022 report a few weeks ago, we thought it was a great time to share where we think AppSec is headed and several places where we agree with Forrester’s take on the state of play.
Here are a few of the highlights.
Modern apps require end-to-end SDLC coverage
When we think of the software development life cycle (SDLC), there is always a key focus on “shifting left.” This makes sense: We want to find security vulnerabilities earlier to save time, money, and risk exposure in production. However, if there’s one thing we’ve learned in the last 12 months with recent emergent threats, it’s that no matter how much you try to secure your applications pre-production, you still need to have runtime protections in place for your business-critical applications. The Forrester report notes that the idea of “shift everywhere” seems to be gaining traction, which is inclusive of shifting both left and right. According to Forrester’s report, 58% of global senior security decision-makers plan to increase their application security budget this year. We can expect the spend on tooling across the SDLC to be prioritized.
An example of this – highlighted by recent vulnerabilities such as Log4Shell and Spring4Shell – is the adoption of software composition analysis (SCA) in production. While finding and fixing vulnerable third-party packages in pre-production environments is absolutely critical, customers are also going to require production coverage for open-source libraries. Rapid7 tools have helped our customers detect vulnerable third-party packages in their runtime environments. You can read more about how we helped our customers do this in this blog.
As infrastructure continues to become code and modern development technologies such as containers are adopted, the risk associated with these technologies grows as well. This modern approach to application development means that investments in modern security practices, like container and IaC scanning, are key to a best-in-class AppSec program.
APIs are growing, as is their risk
APIs are the way in which modern applications communicate. Nearly every modern application utilizes one or multiple APIs – or even is an API. API usage continues to rise across the world – and attackers have started to take notice. Malicious API traffic almost doubled between December 2020 and January 2021, Forrester reports.
APIs are now clearly a part of organizations’ growing attack surface, and their importance will continue to grow over the next few years. That means they need to be a critical component of any security program. There are many ways to secure APIs, including proactively scanning and monitoring them for any malicious activity.
Developers’ influence is increasing
Between the threats we’ve experienced from vulnerabilities in open-source software components and the fact that open source accounts for 75% of audited code bases, as Forrester’s latest State of Application Security Report points out, we see the growing need for including developers in security decision-making. Development teams are critical stakeholders – and often, they need just as much input when it comes to what security tools and practices to implement.
As modern applications require modern development technologies, development teams are looking to partner with security teams on ways to implement compensating controls, without slowing down the speed of development. We can continue to expect an increase in the influence that development teams will have on security programs.
These are just a few highlights of the current state of application security and the trends that will shape it this year, next year, and in years to come. As always, we will keep our finger on the pulse of application security and help drive the practice forward to keep your organization safe.
API usage is skyrocketing. According to the latest State of the API Report, API requests increased by 56% last year to a total of 855 million, and Google says the growth isn’t expected to slow any time soon.
APIs – short for application programming interfaces – are a critical component of how applications are built. They control the type of requests that occur between programs, how requests are made, and the format of those requests.
The huge increase in usage stems from the important role APIs – and web applications more broadly – play in digital transformation. APIs have helped facilitate the transition from monolithic applications to microservices. They’ve enabled businesses to provide user-oriented API-based services for B2B use cases, including automation and integration. And they’re integral to modern web applications, which are no longer just HTML with links but rich user interfaces, built as single-page apps with REST API backends. Nearly every modern application utilizes – or is – an API.
Today, it’s almost impossible to do anything online without interacting with an API. That’s why cyberattacks are increasingly targeting APIs, and they’ve become a large part of the application attack surface.
Why securing APIs is important
APIs are a lucrative target that can allow hackers to gain access to an otherwise secure system and exploit vulnerabilities. Not only do APIs often suffer from the same vulnerabilities as web applications – like broken access controls, injections, security misconfigurations, and vulnerabilities inherited from other dependent code libraries – but they are also more susceptible to resource consumption and rate limiting issues due to the automated nature of their users.
Due to a lack of knowledge in the market, it's also common for legacy issues from early APIs to be carried forward. For example, not every API is fronted by an API gateway; older APIs can sit in the background with little or no protection simply because no one is aware of them. And in the absence of a standard decommissioning process, many unused APIs are left running after newer APIs replace them as a product evolves. This can leave legacy APIs vulnerable to attacks.
How to secure an API
The first step in securing your APIs is to audit your environment and applications and take an inventory of which APIs you have and which ones you're actually using. Next, understand the purpose of each individual API so you can validate that it is working as intended, and understand its expected behavior so that abnormal activity, and therefore potential threats, is easier to spot. With a firm grasp of each API's functionality and expected behavior, you can manage and test your APIs far more effectively and efficiently.
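One lightweight way to start that inventory is to parse your OpenAPI (Swagger) definitions and enumerate every operation in one place, flagging deprecated endpoints as decommissioning candidates. A minimal sketch in Python — the spec document below is a hypothetical example; in practice you would load each service's real spec file:

```python
def inventory_endpoints(spec):
    """Return (METHOD, path, deprecated?) tuples for every operation in an
    OpenAPI document."""
    rows = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            rows.append((method.upper(), path, bool(op.get("deprecated"))))
    return sorted(rows)

# Hypothetical OpenAPI 3.0 document; in practice, load the real file,
# e.g. spec = json.load(open("openapi.json")).
SPEC = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {"get": {}, "post": {}},
        "/users/{id}": {"get": {}, "delete": {}},
        "/legacy/report": {"get": {"deprecated": True}},
    },
}

for method, path, deprecated in inventory_endpoints(SPEC):
    flag = "  (deprecated - candidate for decommissioning)" if deprecated else ""
    print(f"{method:6} {path}{flag}")
```

Running a pass like this across every service's spec gives you the "which APIs do we have" half of the audit; comparing the result against gateway traffic logs then reveals undocumented or forgotten endpoints.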
API management is a key element for API security. APIs not only require the same controls as web apps but also additional controls specific to the API’s unique function. Documentation and version control of APIs is of vital importance, as one product can have multiple APIs – even hundreds or thousands.
Poor management can lead to issues with legacy and defunct APIs, as you will often find that only a small portion of APIs pass through an API gateway. Meanwhile, older APIs – which haven’t been decommissioned, or which teams simply aren’t aware of – can sit in the background with no protection. The probability of known vulnerabilities with older APIs is also significantly higher, which amplifies the risk profile.
The same legacy issues can also lead to coverage gaps: calls made outside an API gateway leave a blind spot when it comes to intra-API traffic. Publishing and clearly defining your API helps users understand it and connect in the most appropriate and effective way.

Monitoring is another key management technique. Continuous performance checks tell you whether the API is under stress from being overloaded, and traffic-volume data, backed by audit logs, helps you track usage, spot potentially malicious activity, and judge whether you need to scale up your operation. Lastly, having a response plan in place for attacks is a vital control in API security, allowing for a rapid but controlled response to potential threats.
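One simple way to turn audit logs into that kind of usage signal is to compare each time window's request count against a baseline built from the preceding windows. The sketch below flags windows that sit far above the historical mean; the threshold factor, window size, and sample counts are illustrative assumptions, not recommendations:

```python
from statistics import mean, stdev

def flag_anomalous_windows(counts, factor=3.0, min_history=5):
    """Flag request-count windows that exceed mean + factor * stdev of all
    preceding windows. Returns the indices of flagged windows."""
    flagged = []
    for i in range(min_history, len(counts)):
        history = counts[:i]
        if counts[i] > mean(history) + factor * stdev(history):
            flagged.append(i)
    return flagged

# Hourly request counts as they might be aggregated from API gateway
# audit logs (hypothetical numbers).
hourly = [120, 130, 125, 118, 122, 127, 121, 950, 124]
print(flag_anomalous_windows(hourly))  # -> [7], the 950-request spike
```

A real deployment would stream counts per client or per endpoint and feed flagged windows into the response plan described above, but the core idea — baseline, deviation, alert — is the same.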
There have been many recent examples of API-based attacks, such as those experienced by WordPress – and even on the dating scene with Bumble’s recent vulnerability issues. Some simple but effective steps you can take to secure your API and reduce the risk of such exposures include:
Authentication: Do you have a control in place to understand who’s calling your API?
Authorization: Should the person calling be able to access this data?
Encryption: Have you encrypted your network traffic?
Traffic management: Have you set rate limits or thresholds to keep a customer from pulling too much data or running scripts to tie up an API?
Audit logging: Effective logging ensures you can understand what normal traffic looks like and allows you to identify abnormal activity.
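To make two of the controls above concrete — authentication and traffic management — here is a framework-agnostic sketch of a request gate. The token store, rates, and status codes are illustrative placeholders, not a production design:

```python
import time

VALID_TOKENS = {"token-abc": "customer-1"}  # hypothetical credential store

class TokenBucket:
    """Per-client rate limiter: `rate` requests replenished per second,
    bursts capped at `capacity`."""
    def __init__(self, rate=5.0, capacity=10.0):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last call.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets = {}

def gate(token):
    """Return (status, reason) for an incoming API call."""
    client = VALID_TOKENS.get(token)
    if client is None:
        return 401, "unknown or missing credential"   # authentication
    bucket = buckets.setdefault(client, TokenBucket())
    if not bucket.allow():
        return 429, "rate limit exceeded"             # traffic management
    return 200, "ok"

print(gate("token-abc"))   # (200, 'ok')
print(gate("bad-token"))   # (401, 'unknown or missing credential')
```

In practice these checks live in an API gateway or middleware layer, with encryption (TLS) and audit logging wrapped around every call, but the ordering shown — authenticate first, then throttle, then serve — is the usual shape.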
How to test your API
API testing is still evolving to keep up with the increase in volume and complexity. Manual API security testing can be done with traditional testing tools, fully automated API security testing is partially supported by most major DAST solutions, and many open-source tools exist for guided API security testing. Used in conjunction with proper API management, API testing will increase API security.
API testing is most effective when you have a full risk profile of your business – i.e. you are fully aware of all of your APIs (including legacy or defunct ones), ensuring you have no blind spots that could be exposed or manipulated. Taking the time to identify vulnerabilities in your API frameworks, network, configuration, and policies will further enhance your API security.
Anticipating threats by understanding expected behavior and having adequate testing in place will allow for proactive coverage and enhanced protection and threat identification.
Finally, you must continuously test your endpoint to ensure protection is maintained at all times and optimum security is in place. The ability to identify and block security risks before they occur is vital in the fight to provide the best protection against threats to your API.
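Continuous testing like this is often expressed as a small suite of abuse-case checks run on every deploy. The sketch below uses a stubbed client in place of real HTTP calls; in practice the same assertions would target a staging endpoint via an HTTP library, and the paths, token, and limits are hypothetical:

```python
# Stub standing in for real HTTP calls; each branch encodes one piece of
# the API's expected behavior from its risk profile.
def fake_api(path, token=None, page_size=50):
    if token != "secret-token":
        return {"status": 401}           # unauthenticated calls are rejected
    if page_size > 100:
        return {"status": 400}           # bulk data pulls are refused
    return {"status": 200, "items": ["a", "b"]}

def run_checks(api):
    """Run abuse-case checks; each result should be True on a healthy API."""
    return {
        "rejects_missing_auth": api("/users")["status"] == 401,
        "rejects_bulk_pull": api("/users", token="secret-token",
                                 page_size=10_000)["status"] == 400,
        "serves_valid_request": api("/users", token="secret-token")["status"] == 200,
    }

checks = run_checks(fake_api)
print(checks)
assert all(checks.values()), f"endpoint regressions detected: {checks}"
```

Wiring a suite like this into CI means a regression that weakens authentication or rate limiting fails the build before the endpoint ever reaches production.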
Building applications in the cloud has been great for development speed and scalability, but it can sometimes feel more like a sustained migraine for security teams. How do you keep your cloud applications safe without resorting to a dizzying patchwork of overlapping tools and dispersed services?
Gartner® research on “Innovation Insight for Cloud-Native Application Protection Platforms” breaks down the core capabilities required to effectively reduce risk in your cloud environment, and how they might come together into a single solution or ecosystem to relieve your security headaches.
At a high level, here’s what Gartner found in its research into cloud-native application protection platforms (CNAPP):
“To support [digital] initiatives, developers have embraced cloud-native application development, typically combining microservices-based architectures built using containers, assembled in DevOps-style development pipelines, deployed into programmatic cloud infrastructure and orchestrated at runtime using Kubernetes and maintained with an immutable infrastructure mindset. This shift creates significant challenges in securing these applications.”
“The unique characteristics of cloud-native applications makes them impossible to secure without a complex set of overlapping tools spanning development and production,” including infrastructure as code (IaC) scanning, cloud workload protection platforms (CWPP), cloud infrastructure entitlement management (CIEM), cloud security posture management (CSPM), and container management.
“Understanding and addressing the real risk of cloud-native applications requires advanced analytics combining siloed views of application risk, open-source component risk, cloud infrastructure risk, and runtime workload risk.”
Gartner also has a few recommendations for how to handle this new security paradigm:
“Implement an integrated security approach that covers the entire life cycle of cloud-native applications, starting in development and extending into production.”
“Integrate security into the developer’s toolchain so that security testing is automated as code is created and moves through the development pipeline, reducing the friction of adoption.”
“[Security and risk management] leaders should evaluate emerging cloud-native application protection platforms that provide a complete life cycle approach for security.”
Basically, securing app development in the cloud effectively is going to require tools that let you consolidate core security functions, get a clear view of your environment (and the risks it may contain), and empower your developers to incorporate security into the development pipeline.
So, what’s our take?
CNAPP represents the next evolution of cloud security through the unification of previously siloed feature sets or solutions. In previous years, just having tools that did one or more of these core functions provided by separate vendors was “good enough.” But over time, as cloud security programs across enterprises continued to scale and mature, it became clear that the dispersed nature of these tools made it extremely difficult, if not impossible, to get a true understanding of risk across complex cloud environments and make meaningful progress in operationalizing cloud security.
CNAPP is essentially a mindset that can save organizations from having to deploy a new set of technologies. It’s the idea that teams need a consolidated view of the different risks in their environment at the infrastructure, workload, orchestration, or API level, as well as unified workflows and automation capabilities to effectively mitigate those risks.
The reality today, however, is that very few vendors can actually live up to the high bar that Gartner has set with CNAPP. The capabilities shown on the diagram above are extremely wide-ranging and span across multiple teams (DevSecOps and more) within an organization.
CNAPP is about more than just identifying a shopping list of capabilities that your security team needs. When considering how to build out a program to protect cloud-native applications, security teams should focus on driving toward a set of outcomes they hope to achieve. Gartner doesn’t define these outcomes in their CNAPP report, but based on our experience working with some of the most sophisticated cloud and application security teams in the world, some of those desired outcomes may include:
An up-to-date, easily maintainable inventory of all infrastructure, workloads, and apps that make up your organization’s entire cloud footprint
Centralized reporting on risk across the full application stack, including open-source and third-party components
Ongoing, real-time monitoring of suspicious or malicious activity at both the application and infrastructure levels
Integration into the development team’s CI/CD pipeline in order to prevent risks at scale before code is deployed
Automated workflows, both for notification and remediation, to detect and respond to threats as quickly as possible, with minimal human intervention
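The last outcome in the list above, automated notification and remediation, can be pictured as a small rule engine mapping finding types to response playbooks. The finding fields, playbook names, and actions below are hypothetical illustrations, not any vendor's actual schema:

```python
def notify(finding):
    """Fallback playbook: loop in a human."""
    return f"paged on-call about {finding['id']}"

def auto_remediate(finding):
    """Playbook for findings safe to fix without human intervention."""
    return f"applied block-public-access to {finding['id']}"

# Map finding types to playbooks; anything unrecognized defaults to a page.
PLAYBOOKS = {"public_s3_bucket": auto_remediate, "outdated_agent": notify}

def respond(findings):
    return [PLAYBOOKS.get(f["type"], notify)(f) for f in findings]

# Hypothetical findings as a cloud security scanner might emit them.
findings = [
    {"id": "f1", "type": "public_s3_bucket", "severity": "high"},
    {"id": "f2", "type": "outdated_agent", "severity": "low"},
]
print(respond(findings))
```

The design choice worth noting is the explicit default: automation handles the well-understood cases at machine speed, while anything novel degrades safely to a human notification rather than an unreviewed change.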
Each team’s list of outcomes will vary slightly depending on operational maturity, compliance requirements, size and complexity of the cloud environment, and what types of applications they are protecting. Keeping these five outcomes top of mind while evaluating solutions will help your team build from a solid foundation and avoid simply checking boxes off a long list of capabilities.
CNAPP may be a mindset shift first and foremost – but at the end of the day, the capabilities needed to achieve this more holistic approach to cloud and application security have to live somewhere within your technology stack. A unified platform that supports all these needs can help break down unnecessary silos and make it easier to contextualize your security data across the entire cloud infrastructure.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Gartner, Innovation Insight for Cloud-Native Application Protection Platforms, by Neil MacDonald, Charlie Winckless, 25 August 2021
Sometimes, data surprises you. When it does, it can force you to rethink your assumptions and second-guess the way you look at the world. But other times, data can reaffirm your assumptions, giving you hard proof they’re the right ones — and providing increased motivation to act decisively based on that outlook.
The 2022 edition of Verizon’s Data Breach Investigations Report (DBIR), which looks at data from cybersecurity incidents that occurred in 2021, is a perfect example of this latter scenario. This year’s DBIR rings many of the same bells that have been resounding in the ears of security pros worldwide for the past 12 to 18 months — particularly, the threat of ransomware and the increasing relevance of complex supply chain attacks.
Here are our three big takeaways from the 2022 DBIR, and why we think they should have defenders doubling down on the big cybersecurity priorities of the current moment.
1. Ransomware is on the rise

This year’s DBIR confirms that ransomware is the critical threat that security pros and laypeople alike believe it to be. Ransomware-related breaches increased by 13% in 2021, the study found — that’s a greater increase than we saw in the past 5 years combined. In fact, nearly 50% of all system intrusion incidents — i.e., those involving a series of steps by which attackers infiltrate a company’s network or other systems — involved ransomware last year.
While the threat has massively increased, the top methods of ransomware delivery remain the ones we’re all familiar with: desktop sharing software, which accounted for 40% of incidents, and email at 35%, according to Verizon’s data. The growing ransomware threat may seem overwhelming, but the most important steps organizations can take to prevent these attacks remain the fundamentals: educating end users on how to spot phishing attempts and maintain security best practices, and equipping infosec teams with the tools needed to detect and respond to suspicious activity.
2. Supply chain woes continue

Security pros have had a very specific sense of the term "supply chain" on their minds: the software supply chain. Breaches from Kaseya to SolarWinds — not to mention the Log4j vulnerability — reminded us all that vendors’ systems are just as likely a vector of attack as our own.
Unfortunately, Verizon’s Data Breach Investigations Report indicates these incidents are not isolated events — the software supply chain is, in fact, a major avenue of exploitation by attackers. In fact, 62% of cyberattacks that follow the system intrusion pattern began with the threat actors exploiting vulnerabilities in a partner’s systems, the study found.
Put another way: If you were targeted with a system intrusion attack last year, it was almost twice as likely that it began on a partner’s network than on your own.
While supply chain attacks still account for just under 10% of overall cybersecurity incidents, according to the Verizon data, the study authors point out that this vector continues to account for a considerable slice of all incidents each year. That means it’s critical for companies to keep an eye on both their own and their vendors’ security posture. This could include:
Demanding visibility into the components behind software vendors’ applications
Staying consistent with regular patching updates
Acting quickly to remediate and emergency-patch when the next major vulnerability that could affect high numbers of web applications rears its head
3. Mind the app
Between Log4Shell and Spring4Shell, the past 6 months have jolted developers and security pros alike to the realization that their web apps might contain vulnerable code. This proliferation of new avenues of exploitation is particularly concerning given just how commonly attackers target web apps.
Compromising a web application was far and away the top cyberattack vector in 2021, accounting for roughly 70% of security incidents, according to Verizon’s latest DBIR. Meanwhile, web servers themselves were the most commonly exploited asset type — they were involved in nearly 60% of documented breaches.
More than 80% of attacks targeting web apps involved the use of stolen credentials, emphasizing the importance of user awareness and strong authentication protocols at the endpoint level. That said, 30% of basic web application attacks did involve some form of exploited vulnerability — a percentage that should be cause for concern.
“While this 30% may not seem like an extremely high number, the targeting of mail servers using exploits has increased dramatically since last year, when it accounted for only 3% of the breaches,” the authors of the Verizon DBIR wrote.
That means vulnerability exploits accounted for a 10 times greater proportion of web application attacks in 2021 than they did in 2020, reinforcing the importance of being able to quickly and efficiently test your applications for the most common types of vulnerabilities that hackers take advantage of.
Stay the course
For those who’ve been tuned into the current cybersecurity landscape, the key themes of the 2022 Verizon DBIR will likely feel familiar — and with so many major breaches and vulnerabilities that claimed the industry’s attention in 2021, it would be surprising if there were any major curveballs we missed. But the key takeaways from the DBIR remain as critical as ever: Ransomware is a top-priority threat, software supply chains need greater security controls, and web applications remain a key attack vector.
If your go-forward cybersecurity plan reflects these trends, that means you’re on the right track. Now is the time to stick to that plan and ensure you have tools and tactics in place that let you focus on the alerts and vulnerabilities that matter most.
With the release of the new 2021 OWASP Top 10 late last year, OWASP made some fundamental and impactful changes to its ubiquitous reference framework. We published a high-level breakdown of the changes, followed by some deep dives into specific types of threats that made the new Top 10.
To help you put these changes into practice, we released an OWASP 2021 Attack Template and Report for InsightAppSec. This new feature helps you use the updated categories from OWASP to inform and focus your AppSec program, work closely with development teams to remediate the discovered vulnerabilities, and move toward best practices for achieving compliance.
Let’s take a closer look.
Before we can fix vulnerabilities, we need to find them, and to do that, we need to scan. We may know where to look, but we often lack the specialist knowledge of industry trends and the general threat landscape required to determine what we should be looking for.
Luckily, the OWASP organization has done the hard work for us. The new InsightAppSec OWASP 2021 attack template includes all the relevant attacks for the categories defined in the latest OWASP version.
The new attack module enables you to leverage the knowledge that went into the latest version of the OWASP Top 10 – even with little or no subject matter expertise – to generate a focused, and hopefully small, set of vulnerabilities. Where security and development resources are stretched thin and expensive, using the OWASP scan template ensures you are focusing on the right vulnerabilities.
Finding vulnerabilities is only part of the journey. If you can’t enable your development teams to remediate vulnerabilities, the entire exercise becomes academic.
That’s why InsightAppSec provides guidance in the form of detailed remediation reports, specifically formatted to provide development teams with all the information and tools required to confirm and remediate the vulnerabilities.
The remediation report includes the Attack Replay feature found in the product, which allows developers to quickly and easily validate vulnerabilities by replaying the traffic used to identify them.
Although OWASP is not a compliance standard, auditors may view the inclusion of Top 10 scanning as an indication of intent toward good practice, which in turn supports adherence to other compliance standards.
To facilitate this and make it easy for organizations to demonstrate good practice, InsightAppSec provides an OWASP report that automatically groups vulnerabilities into the relevant OWASP categories – and also shows categories where no vulnerabilities have been found.
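The grouping such a report performs can be pictured as a bucket-by-category pass that deliberately keeps empty categories visible. The category names below follow the 2021 OWASP Top 10, but the findings themselves are hypothetical examples:

```python
# A few 2021 OWASP Top 10 categories (abbreviated list for illustration).
CATEGORIES = [
    "A01: Broken Access Control",
    "A02: Cryptographic Failures",
    "A03: Injection",
]

def group_by_category(findings, categories):
    """Bucket findings per OWASP category, keeping empty categories so the
    report also shows where nothing was found."""
    report = {c: [] for c in categories}
    for f in findings:
        report.setdefault(f["category"], []).append(f["name"])
    return report

findings = [
    {"name": "SQL injection in /search", "category": "A03: Injection"},
    {"name": "IDOR on /orders/{id}", "category": "A01: Broken Access Control"},
]

for category, items in group_by_category(findings, CATEGORIES).items():
    print(f"{category}: {len(items)} finding(s)")
```

Pre-seeding every category, rather than only the ones with hits, is what lets the report double as a coverage view: a zero next to a category is a statement that scans ran and found nothing there.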
The OWASP 2021 report gives you an excellent overview of the categories you are successfully addressing and those that may require more focus and attention, giving you actionable information to move your security program forward.
By leveraging the analysis and intel of OWASP and providing workflows right in the product, InsightAppSec gives you control over your AppSec program from scan to remediation, enabling the right people, at the right time, with the right information.
Cybersecurity in financial services is a complex picture. Not only has a range of new tech hit the industry in the last 5 years, but compliance requirements introduce another layer of difficulty to the lives of infosec teams in this sector. To add to this picture, the overall cybersecurity landscape has rapidly transformed, with ransomware attacks picking up speed and high-profile vulnerabilities hitting the headlines at an alarming pace.
VMware recently released the 5th annual installment of their Modern Bank Heists report, and the results show a changing landscape for cybersecurity in banking and finance. Here’s a closer look at what CISOs and security leaders in finance said about the security challenges they’re facing — and what they’re doing to solve them.
Destructive threats and ransomware attacks on banks are increasing
The stakes for cybersecurity are higher than ever at financial institutions, as threat actors are increasingly using more vicious tactics. Banks have seen an uptick in destructive cyberattacks — those that delete data, damage hard drives, disrupt network connections, or otherwise leave a trail of digital wreckage in their wake.
63% of financial institutions surveyed in the VMware report said they’ve seen an increase in these destructive attacks targeting their organization — that’s 17% more than said the same in last year’s version of the report.
At the same time, finance hasn’t been spared from the rise in ransomware attacks, which have also become increasingly disruptive. Nearly 3 out of 4 respondents to the survey said they’d been hit by at least one ransomware attack. What’s more, 63% of those ended up paying the ransom.
Supply chain security: No fun in the sun
Like ransomware, island hopping is also on the rise — and while that might sound like something to do on a beach vacation, that’s likely the last thing the phrase brings to mind for security pros at today’s financial institutions.
IT Pro describes island hopping attacks as “the process of undermining a company’s cyber defenses by going after its vulnerable partner network, rather than launching a direct attack.” The source points to the high-profile data breach that rocked big-box retailer Target in 2013. Hackers found an entry point to the company’s data not through its own servers, but through those of Fazio Mechanical Services, a third-party vendor.
In the years since the Target breach, supply chain cybersecurity has become an even greater area of focus for security pros across industries, thanks to incidents like the SolarWinds breach and large-scale vulnerabilities like Log4Shell that reveal just how many interdependencies are out there. Now, threats in the software supply chain are becoming more apparent by the day.
VMware’s study found that 60% of security leaders in finance have seen an increase in island hopping attacks — 58% more than said the same last year. The uptick in threats originating from partners’ systems is clearly keeping security officers up at night: 87% said they’re concerned about the security posture of the service providers they rely on.
The proliferation of mobile and web applications associated with the rise of financial technology (fintech) may be exacerbating the problem. VMware notes API attacks are one of the primary methods of island hopping — and they found a whopping 94% of financial-industry security leaders have experienced an API attack through a fintech application, while 58% said they’ve seen an increase in application security incidents overall.
How financial institutions are improving cybersecurity
With attacks growing more dangerous and more frequent, security leaders in finance are doubling down on their efforts to protect their organizations. The majority of companies surveyed in VMware’s study said they planned a 20% to 30% boost to their cybersecurity budget in 2022. But what types of solutions are they investing in with that added cash?
Today’s threat landscape has grown difficult to navigate — especially when financial institutions are competing for candidates in a tight cybersecurity talent market. In the meantime, the financial industry has only grown more competitive, and the pace of innovation is at an all-time high. Having powerful, flexible tools that can streamline and automate security processes is essential to keep up with change. For banks and finance organizations to attain the level of visibility they need to innovate while keeping their systems protected, these tools are crucial.