Tag Archives: Application Security

How Cloudflare is helping domain owners with the upcoming Entrust CA distrust by Chrome and Mozilla

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/how-cloudflare-is-helping-domain-owners-with-the-upcoming-entrust-ca

Chrome and Mozilla announced that they will stop trusting Entrust’s public TLS certificates issued after November 12, 2024 and December 1, 2024, respectively. This decision stems from concerns related to Entrust’s ability to meet the CA/Browser Forum’s requirements for a publicly trusted certificate authority (CA). To prevent Entrust customers from being impacted by this change, Entrust has announced that they are partnering with SSL.com, a publicly trusted CA, and will be issuing certs from SSL.com’s roots to ensure that they can continue to provide their customers with certificates that are trusted by Chrome and Mozilla. 

We’re excited to announce that we’re going to be adding SSL.com as a certificate authority that Cloudflare customers can use. This means that Cloudflare customers that are currently relying on Entrust as a CA and uploading their certificate manually to Cloudflare will now be able to rely on Cloudflare’s certificate management pipeline for automatic issuance and renewal of SSL.com certificates. 

CA distrust: responsibilities, repercussions, and responses

With great power comes great responsibility
Every publicly trusted certificate authority (CA) is responsible for maintaining a high standard of security and compliance to ensure that the certificates they issue are trustworthy. The security of millions of websites and applications relies on a CA’s commitment to these standards, which are set by the CA/Browser Forum, the governing body that defines the baseline requirements for certificate authorities. These standards include rules regarding certificate issuance, validation, and revocation, all designed to secure the data transferred over the Internet. 

However, as with all complex software systems, it’s inevitable that bugs or issues may arise, leading to the mis-issuance of certificates. Improperly issued certificates pose a significant risk to Internet security, as they can be exploited by malicious actors to impersonate legitimate websites and intercept sensitive data. 

To mitigate such risk, publicly trusted CAs are required to communicate issues as soon as they are discovered, so that domain owners can replace the compromised certificates immediately. Once the issue is communicated, CAs must revoke the mis-issued certificates within 5 days to signal to browsers and clients that the compromised certificate should no longer be trusted. This level of transparency and urgency around the revocation process is essential for minimizing the risk posed by compromised certificates. 

Why Chrome and Mozilla are distrusting Entrust
The decision made by Chrome and Mozilla to distrust Entrust’s public TLS certificates stems from concerns regarding Entrust’s incident response and remediation process. In several instances, Entrust failed to report critical issues and did not revoke certificates in a timely manner. The pattern of delayed action has eroded the browsers’ confidence in Entrust’s ability to act quickly and transparently, which is crucial for maintaining trust as a CA. 

Google and Mozilla cited the ongoing lack of transparency and urgency in addressing mis-issuances as the primary reason for their distrust decision. Google specifically pointed out that over the past 6 years, Entrust has shown a “pattern of compliance failures” and failed to make the “tangible, measurable progress” necessary to restore trust. Mozilla echoed these concerns, emphasizing the importance of holding Entrust accountable to ensure the integrity and security of the public Internet. 

Entrust’s response to the distrust announcement 
In response to the distrust announcement from Chrome and Mozilla, Entrust has taken proactive steps to ensure continuity for their customers. To prevent service disruption, Entrust has announced that they are partnering with SSL.com, a CA that’s trusted by all major browsers, including Chrome and Mozilla, to issue certificates for their customers. By issuing certificates from SSL.com’s roots, Entrust aims to provide a seamless transition for their customers, ensuring that they can continue to obtain certificates that are recognized and trusted by the browsers their users rely on. 

In addition to their partnership with SSL.com, Entrust stated that they are working on a number of improvements, including changes to their organizational structure, revisions to their incident response process and policies, and a push towards automation to ensure compliant certificate issuances. 

How Cloudflare can help Entrust customers 

Now available: SSL.com as a certificate authority for Advanced Certificate Manager and SSL for SaaS certificates
We’re excited to announce that customers using Advanced Certificate Manager will now be able to select SSL.com as a certificate authority for Advanced certificates and Total TLS certificates. Once the certificate is issued, Cloudflare will handle all future renewals on your behalf. 

By default, Cloudflare will issue SSL.com certificates with a 90-day validity period. However, customers using Advanced Certificate Manager will have the option to set a custom validity period (14, 30, or 90 days) for their SSL.com certificates. In addition, Enterprise customers will have the option to obtain 1-year SSL.com certificates. Every SSL.com certificate order includes one RSA and one ECDSA certificate.

Note: We are gradually rolling this out and customers should see the CA become available to them through the end of September and into October. 

If you’re using Cloudflare as your DNS provider, there are no additional steps for you to take to get the certificate issued. Cloudflare will validate the ownership of the domain on your behalf to get your SSL.com certificate issued and renewed. 

If you’re using an external DNS provider and have wildcard hostnames on your certificates, DNS-based validation must be used, meaning you’ll need to add TXT DCV tokens at your DNS provider to get the certificate issued. With SSL.com, two tokens are returned for every hostname on the certificate, because SSL.com uses different tokens for the RSA and ECDSA certificates. To reduce the overhead of certificate management, we recommend setting up DCV Delegation to allow Cloudflare to place domain control validation (DCV) tokens on your behalf. Once DCV Delegation is set up, Cloudflare will automatically issue, renew, and deploy all future certificates for you. 
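As an illustration, the per-hostname TXT records could be assembled like this. This is a minimal sketch: the record name shown follows a common ACME-style convention and the token values are placeholders — always use the exact TXT names and values returned by the API for your certificate order.

```python
# Sketch: build the two TXT DCV records needed per hostname for an SSL.com
# certificate order. SSL.com returns two tokens per hostname (one for the
# RSA certificate, one for the ECDSA certificate), so each hostname needs
# two TXT records at your external DNS provider.
# Record name and token values below are hypothetical placeholders.

def build_dcv_txt_records(hostname: str, rsa_token: str, ecdsa_token: str) -> list[dict]:
    """Return the TXT records to create at your external DNS provider."""
    record_name = f"_acme-challenge.{hostname}"  # assumption: ACME-style TXT name
    return [
        {"type": "TXT", "name": record_name, "content": rsa_token},
        {"type": "TXT", "name": record_name, "content": ecdsa_token},
    ]

records = build_dcv_txt_records("www.example.com", "rsa-token-123", "ecdsa-token-456")
for record in records:
    print(record["name"], record["content"])
```

With DCV Delegation configured, none of this bookkeeping is necessary — Cloudflare places and rotates these records for you.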

Advanced Certificates: selecting SSL.com as a CA through the UI or API
Customers can select SSL.com as a CA through the UI or through the Advanced Certificate API endpoint by specifying “ssl_com” in the certificate_authority parameter. 
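For example, an order payload for the certificate pack order endpoint might look like the following. This is a hedged sketch: the hostnames and validity are illustrative, and you should verify the exact field names against the current Cloudflare API documentation.

```python
import json

# Sketch: request body for ordering an advanced certificate with SSL.com as
# the CA, sent to POST /zones/{zone_id}/ssl/certificate_packs/order.
# Hostnames and validity_days are illustrative placeholders.
order_payload = {
    "type": "advanced",
    "certificate_authority": "ssl_com",   # select SSL.com as the issuing CA
    "hosts": ["example.com", "*.example.com"],
    "validation_method": "txt",           # DNS-based validation (required for wildcards)
    "validity_days": 90,                  # 14, 30, or 90; 1-year for Enterprise
}
print(json.dumps(order_payload, indent=2))
```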

If you’d like to use SSL.com as a CA for an advanced certificate, you can select “SSL.com” as your CA when creating a new Advanced certificate order. 


If you’d like to use SSL.com as a CA for all of your certificates, we recommend setting your Total TLS CA to SSL.com. This will issue an individual certificate for each of your proxied hostnames from the CA. 

Note: Total TLS is a feature that’s only available to customers that are using Cloudflare as their DNS provider. 


SSL for SaaS: selecting SSL.com as a CA through the UI or API
Enterprise customers can select SSL.com as a CA through the custom hostname creation UI or through the Custom Hostnames API endpoint by specifying “ssl_com” in the certificate_authority parameter. 
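A custom hostname creation payload might look like the following. This is a sketch under assumptions: the hostname is a placeholder, and the "ssl" sub-fields should be confirmed against the current Custom Hostnames API documentation.

```python
import json

# Sketch: request body for creating a custom hostname whose certificate is
# issued by SSL.com, sent to POST /zones/{zone_id}/custom_hostnames.
# The hostname is a hypothetical placeholder.
custom_hostname_payload = {
    "hostname": "shop.customer-domain.com",
    "ssl": {
        "method": "txt",                     # DCV method, e.g. "http" or "txt"
        "type": "dv",                        # domain validation
        "certificate_authority": "ssl_com",  # select SSL.com as the issuing CA
    },
}
print(json.dumps(custom_hostname_payload, indent=2))
```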

All custom hostname certificates issued from SSL.com will have a 90-day validity period. If you have wildcard support enabled for custom hostnames, we recommend using DCV Delegation to ensure that all certificate issuances and renewals are automatic.  

Our recommendation if you’re using Entrust as a certificate authority 

Cloudflare customers that use Entrust as their CA are required to manually handle all certificate issuances and renewals. Since Cloudflare does not directly integrate with Entrust, customers have to get their certificates issued directly from the CA and upload them to Cloudflare as custom certificates. Once these certificates come up for renewal, customers have to repeat this manual process and upload the renewed certificates to Cloudflare before the expiration date. 

Manually managing your certificate’s lifecycle is a time-consuming and error-prone process. With certificate lifetimes decreasing from 1 year to 90 days, domain owners need to repeat this cycle more and more frequently.

As Entrust transitions to issuing certificates from SSL.com roots, this manual management process will remain unless customers switch to Cloudflare’s managed certificate pipeline. By making this switch, you can continue to receive SSL.com certificates without the hassle of manual management — Cloudflare will handle all issuances and renewals for you!

In early October, we will be reaching out to customers who have uploaded Entrust certificates to Cloudflare to recommend migrating to our managed pipeline for SSL.com certificate issuances, simplifying your certificate management process. 

If you’re ready to make the transition today, simply go to the SSL/TLS tab in your Cloudflare dashboard, click “Order Advanced Certificate”, and select “SSL.com” as your certificate authority. Once your new SSL.com certificate is issued, you can either remove your Entrust certificate or simply let it expire. Cloudflare will seamlessly transition to serving the managed SSL.com certificate before the Entrust certificate expires, ensuring zero downtime during the switch. 

New whitepaper available: Building security from the ground up with Secure by Design

Post Syndicated from Bertram Dorn original https://aws.amazon.com/blogs/security/new-whitepaper-available-building-security-from-the-ground-up-with-secure-by-design/

Developing secure products and services is imperative for organizations that are looking to strengthen operational resilience and build customer trust. However, system design often prioritizes performance, functionality, and user experience over security. This approach can lead to vulnerabilities across the supply chain.

As security threats continue to evolve, the concept of Secure by Design (SbD) is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. We’re excited to share a whitepaper we recently authored with SANS Institute called Building Security from the Ground up with Secure by Design, which addresses SbD strategy and explores the effects of SbD implementations.

The whitepaper contains context and analysis that can help you take a proactive approach to product development that facilitates foundational security. Key considerations include the following:

  • Integrating SbD into the software development lifecycle (SDLC)
  • Supporting SbD with automation
  • Reinforcing defense-in-depth
  • Applying SbD to artificial intelligence (AI)
  • Identifying threats in the design phase with threat modeling
  • Using SbD to simplify compliance with requirements and standards
  • Planning for the short and long term
  • Establishing a culture of security

While the journey to a Secure by Design approach is an iterative process that is different for every organization, the whitepaper details five key action items that can help set you on the right path. We encourage you to download the whitepaper and gain insight into how you can build secure products with a multi-layered strategy that meaningfully improves your technical and business outcomes. We look forward to your feedback and to continuing the journey together.

Download Building Security from the Ground up with Secure by Design.

 
If you have feedback about this post, submit comments in the Comments section below.

Bertram Dorn

Bertram is a Principal within the Office of the CISO at AWS, based in Munich, Germany. He helps internal and external AWS customers and partners navigate AWS security-related topics. He has over 30 years of experience in the technology industry, with a focus on security, networking, storage, and database technologies. When not helping customers, Bertram spends time working on his solo piano and multimedia performances.
Paul Vixie

Paul is a VP and Distinguished Engineer who joined AWS Security after a 29-year career as the founder and CEO of five startup companies covering the fields of DNS, anti-spam, internet exchange, internet carriage and hosting, and internet security. He earned his PhD in Computer Science from Keio University in 2011, and was inducted into the Internet Hall of Fame in 2014. Paul is also known as an author of open source software, including Cron. As a VP, Distinguished Engineer, and Deputy CISO at AWS, Paul and his team in the Office of the CISO use leadership and technical expertise to provide guidance and collaboration on the development and implementation of advanced security strategies and risk management.

Application Security report: 2024 update

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/application-security-report-2024-update


Over the last twelve months, the Internet security landscape has changed dramatically. Geopolitical uncertainty, coupled with an active 2024 voting season in many countries across the world, has led to a substantial increase in malicious traffic activity across the Internet. In this report, we take a look at Cloudflare’s perspective on Internet application security.

This report is the fourth edition of our Application Security Report and is an official update to our Q2 2023 report. New in this report is a section focused on client-side security within the context of web applications.

Throughout the report we discuss various insights. From a global standpoint, mitigated traffic across the whole network now averages 7%, and WAF and Bot mitigations are the source of over half of that. While DDoS attacks remain the number one attack vector used against web applications, targeted CVE attacks are also worth keeping an eye on, as we have seen exploits as fast as 22 minutes after a proof of concept was released.

Focusing on bots, about a third of all traffic we observe is automated, and of that, the vast majority (93%) is not generated by bots in Cloudflare’s verified list and is potentially malicious.

API traffic is also still growing, now accounting for 60% of all traffic. Perhaps more concerning, organizations have up to a quarter of their API endpoints unaccounted for.

We also touch on client side security and the proliferation of third-party integrations in web applications. On average, enterprise sites integrate 47 third-party endpoints according to Page Shield data.

It is also worth mentioning that since the last report, our network, from which we gather the data and insights, is bigger and faster: we are now processing an average of 57 million HTTP requests/second (+23.9% YoY) and 77 million at peak (+22.2% YoY). From a DNS perspective, we are handling 35 million DNS queries per second (+40% YoY). This is the sum of authoritative and resolver requests served by our infrastructure.

Perhaps even more noteworthy: focusing on HTTP requests only, Cloudflare blocked an average of 209 billion cyber threats each day in Q1 2024 (+86.6% YoY), a substantial increase in relative terms compared to the same period last year.

As usual, before we dive in, we need to define our terms.

Definitions

Throughout this report, we will refer to the following terms:

  • Mitigated traffic: any eyeball HTTP* request that had a “terminating” action applied to it by the Cloudflare platform. These include the following actions: BLOCK, CHALLENGE, JS_CHALLENGE, and MANAGED_CHALLENGE. This does not include requests that had the following actions applied: LOG, SKIP, ALLOW; these accounted for a relatively small percentage of requests. Additionally, we improved our calculation for the CHALLENGE-type actions to ensure that only unsolved challenges are counted as mitigated. A detailed description of actions can be found in our developer documentation. Otherwise, this definition has not changed from last year’s report.
  • Bot traffic/automated traffic: any HTTP* request identified by Cloudflare’s Bot Management system as being generated by a bot. This includes requests with a bot score between 1 and 29 inclusive. This has not changed from last year’s report.
  • API traffic: any HTTP* request with a response content type of XML or JSON. Where the response content type is not available, such as for mitigated requests, the equivalent Accept content type (specified by the user agent) is used instead. In this latter case, API traffic won’t be fully accounted for, but it still provides a good representation for the purposes of gaining insights. This has not changed from last year’s report.

Unless otherwise stated, the time frame evaluated in this post is the period from April 1, 2023, through March 31, 2024, inclusive.

Finally, please note that the data is calculated based only on traffic observed across the Cloudflare network and does not necessarily represent overall HTTP traffic patterns across the Internet.

*When referring to HTTP traffic we mean both HTTP and HTTPS.

Global traffic insights

Average mitigated daily traffic increases to nearly 7%

Between Q2 2023 and Q1 2024, Cloudflare mitigated a higher percentage of application-layer traffic, including layer 7 (L7) DDoS attacks, than in the prior 12-month period: the share grew from 6% to 6.8%. 

Figure 1: Percent of mitigated HTTP traffic increasing over the last 12 months

During large global attack events, we can observe spikes of mitigated traffic approaching 12% of all HTTP traffic. These are much larger spikes than we have ever observed across our entire network.

WAF and Bot mitigations accounted for 53.9% of all mitigated traffic

As the Cloudflare platform continues to expose additional signals to identify potentially malicious traffic, customers have been actively using these signals in WAF Custom Rules to improve their security posture. Example signals include our WAF Attack Score, which identifies malicious payloads, and our Bot Score, which identifies automated traffic.

After WAF and Bot mitigations, HTTP DDoS rules are the second-largest contributor to mitigated traffic. IP reputation, which uses our IP threat score to block traffic, and access rules, which are simply IP and country blocks, follow in third and fourth place. 

Figure 2: Mitigated traffic by Cloudflare product group

CVEs exploited as fast as 22 minutes after proof-of-concept published

Zero-day exploits (also called zero-day threats) are increasing, as is the speed at which disclosed CVEs are weaponized. In 2023, 97 zero-days were exploited in the wild, alongside a 15% increase in disclosed CVEs between 2022 and 2023. 

Looking at CVE exploitation attempts against customers, Cloudflare mostly observed scanning activity, followed by command injections, and some exploitation attempts of vulnerabilities that had PoCs available online, including Apache CVE-2023-50164 and CVE-2022-33891, Coldfusion CVE-2023-29298 CVE-2023-38203 and CVE-2023-26360, and MobileIron CVE-2023-35082.

This trend in CVE exploitation attempt activity indicates that attackers are going for the easiest targets first, and likely having success in some instances given the continued activity around old vulnerabilities.

As just one example, Cloudflare observed exploitation attempts of CVE-2024-27198 (JetBrains TeamCity authentication bypass) at 19:45 UTC on March 4, just 22 minutes after proof-of-concept code was published.

Figure 3: JetBrains TeamCity authentication bypass timeline

The speed of exploitation of disclosed CVEs is often quicker than the speed at which humans can create WAF rules or create and deploy patches to mitigate attacks. This also applies to our own internal security analyst team that maintains the WAF Managed Ruleset, which has led us to combine human-written signatures with an ML-based approach to achieve the best balance between low false positives and speed of response. 

CVE exploitation campaigns from specific threat actors are clearly visible when we focus on a subset of CVE categories. For example, if we filter on CVEs that result in remote code execution (RCE), we see clear attempts to exploit Apache and Adobe installations towards the end of 2023 and start of 2024 along with a notable campaign targeting Citrix in May of this year.

Figure 4: Worldwide daily number of requests for Code Execution CVEs

Similar views become clearly visible when focusing on other CVEs or specific attack categories.

DDoS attacks remain the most common attack against web applications

DDoS attacks remain the most common attack type against web applications, with DDoS comprising 37.1% of all mitigated application traffic over the time period considered.

Figure 5: Volume of HTTP DDoS attacks over time

We saw a large increase in volumetric attacks in February and March 2024. This was partly the result of improved detections deployed by our teams, in addition to increased attack activity. In the first quarter of 2024 alone, Cloudflare’s automated defenses mitigated 4.5 million unique DDoS attacks, an amount equivalent to 32% of all the DDoS attacks Cloudflare mitigated in 2023. Specifically, application layer HTTP DDoS attacks increased by 93% YoY and 51% quarter-over-quarter (QoQ).

Cloudflare correlates DDoS attack traffic and defines unique attacks by looking at event start and end times along with target destination.

Motives for launching DDoS attacks range from targeting specific organizations for financial gains (ransom), to testing the capacity of botnets, to targeting institutions and countries for political reasons. As an example, Cloudflare observed a 466% increase in DDoS attacks on Sweden after its acceptance to the NATO alliance on March 7, 2024. This mirrored the DDoS pattern observed during Finland’s NATO acceptance in 2023. The size of DDoS attacks themselves is also increasing. 

In August 2023, Cloudflare mitigated a hyper-volumetric HTTP/2 Rapid Reset DDoS attack that peaked at 201 million requests per second (rps) – three times larger than any previously observed attack. In the attack, threat actors exploited a zero-day vulnerability in the HTTP/2 protocol that had the potential to incapacitate nearly any server or application supporting HTTP/2. This underscores how menacing DDoS vulnerabilities are for unprotected organizations.

Gaming and gambling became the most targeted sector by DDoS attacks, followed by Internet technology companies and cryptomining.

Figure 6: Largest HTTP DDoS attacks as seen by Cloudflare, by year

Bot traffic insights

Cloudflare has continued to invest heavily in our bot detection systems. In early July, we declared AIndependence to help preserve a safe Internet for content creators, offering a brand new “easy button” to block all AI bots. It’s available for all customers, including those on our free tier.

Major progress has also been made in other complementary systems such as our Turnstile offering, a user-friendly, privacy-preserving alternative to CAPTCHA.

All these systems and technologies help us better identify and differentiate human traffic from automated bot traffic.

On average, bots comprise one-third of all application traffic

31.2% of all application traffic processed by Cloudflare is bot traffic. This percentage has stayed relatively consistent (hovering at about 30%) over the past three years.

The term bot traffic may carry a negative connotation, but in reality bot traffic is not necessarily good or bad; it all depends on the purpose of the bots. Some are “good” and perform a needed service, such as customer service chatbots and authorized search engine crawlers. But some bots misuse an online product or service and need to be blocked.

Different application owners may have different criteria for what they deem a “bad” bot. For example, some organizations may want to block a content scraping bot that is being deployed by a competitor to undercut on prices, whereas an organization that does not sell products or services may not be as concerned with content scraping. Known, good bots are classified by Cloudflare as “verified bots.”

93% of bots we identified were unverified bots, and potentially malicious

Unverified bots are often created for disruptive and harmful purposes, such as hoarding inventory, launching DDoS attacks, or attempting to take over an account via brute force or credential stuffing. Verified bots are those that are known to be safe, such as search engine crawlers, and Cloudflare aims to verify all major legitimate bot operators. A list of all verified bots can be found in our documentation.

Attackers leveraging bots focus most on industries that could bring them large financial gains. For example, consumer goods websites are often the target of inventory hoarding, price scraping run by competitors, or automated applications aimed at exploiting some form of arbitrage (for example, sneaker bots). This type of abuse can have a significant financial impact on the target organization. 

Figure 8: Industries with the highest median daily share of bot traffic

API traffic insights

Consumers and end users expect dynamic web and mobile experiences powered by APIs. For businesses, APIs fuel competitive advantages, greater business intelligence, faster cloud deployments, integration of new AI capabilities, and more.

However, APIs introduce new risks by giving outside parties additional attack surfaces through which to access applications and databases, which also need to be secured. As a consequence, numerous attacks we observe now target API endpoints first, rather than traditional web interfaces.

These additional security concerns are, of course, not slowing down the adoption of API-first applications.

60% of dynamic (non-cacheable) traffic is API-related

This is a two percentage point increase compared to last year’s report. Of this 60%, about 4% on average is mitigated by our security systems.

Figure 9: Share of mitigated API traffic

A substantial spike is visible around January 11-17 that accounts for almost a 10% increase in traffic share alone for that period. This was due to a specific customer zone receiving attack traffic that was mitigated by a WAF Custom Rule.

Digging into mitigation sources for API traffic, we see the WAF being the largest contributor, as standard malicious payloads are commonly applicable to both API endpoints and standard web applications.

Figure 10: API mitigated traffic broken down by product group

A quarter of APIs are “shadow APIs”

You cannot protect what you cannot see. And many organizations lack accurate API inventories, even when they believe they can correctly identify API traffic.

Using our proprietary machine learning model that scans not just known API calls, but all HTTP requests (identifying API traffic that may be going unaccounted for), we found that organizations had 33% more public-facing API endpoints than they knew about. This number was the median, and it was calculated by comparing the number of API endpoints detected through machine learning based discovery vs. customer-provided session identifiers.

This suggests that nearly a quarter of APIs are “shadow APIs” and may not be properly inventoried and secured.
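The arithmetic behind that estimate can be sketched as follows. The endpoint counts below are made up to mirror the reported 33% median; only the ratio matters.

```python
# Sketch: if ML-based discovery finds 33% more endpoints than the customer's
# own inventory, the share of "shadow" endpoints among all discovered ones is
# 33 / 133, i.e. roughly a quarter. Counts are illustrative.
known_endpoints = 100          # endpoints identified via customer-provided session IDs
discovered_endpoints = 133     # endpoints found by ML-based discovery (+33%)

shadow = discovered_endpoints - known_endpoints
shadow_share = shadow / discovered_endpoints
print(f"Shadow API share: {shadow_share:.1%}")  # → Shadow API share: 24.8%
```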

Client-side risks

Most organizations’ web apps rely on separate programs or pieces of code from third-party providers (usually coded in JavaScript). The use of third-party scripts accelerates modern web app development and allows organizations to ship features to market faster, without having to build all new app features in-house.

Using Cloudflare’s client-side security product, Page Shield, we can get a view of the popularity of third-party libraries used on the Internet and the risk they pose to organizations. This has become especially relevant recently due to the Polyfill.io incident that affected more than one hundred thousand sites. 

Enterprise applications use 47 third-party scripts on average

Cloudflare’s typical enterprise customer uses an average of 47 third-party scripts, and a median of 20 third-party scripts. The average is much higher than the median due to SaaS providers, who often have thousands of subdomains which may all use third-party scripts. Here are some of the top third-party script providers Cloudflare customers commonly use:

  • Google (Tag Manager, Analytics, Ads, Translate, reCAPTCHA, YouTube)
  • Meta (Facebook Pixel, Instagram)
  • Cloudflare (Web Analytics)
  • jsDelivr
  • New Relic
  • Appcues
  • Microsoft (Clarity, Bing, LinkedIn)
  • jQuery
  • WordPress (Web Analytics, hosted plugins)
  • Pinterest
  • UNPKG
  • TikTok
  • Hotjar

While useful, third-party software dependencies are often loaded directly by the end-user’s browser (i.e. they are loaded client-side) placing organizations and their customers at risk given that organizations have no direct control over third-party security measures. For example, in the retail sector, 18% of all data breaches originate from Magecart style attacks, according to Verizon’s 2024 Data Breach Investigations Report.

Enterprise applications connect to nearly 50 third-parties on average

Loading a third-party script into your website poses risks, even more so when that script “calls home” to submit data to perform the intended function. A typical example here is Google Analytics: whenever a user performs an action, the Google Analytics script will submit data back to the Google servers. We identify these as connections.

On average, each enterprise website connects to 50 separate third-party destinations, with a median of 15. Each of these connections poses a potential client-side security risk, as attackers often abuse them to exfiltrate additional data unnoticed. 

Here are some of the top third-party connections Cloudflare customers commonly use:

  • Google (Analytics, Ads)
  • Microsoft (Clarity, Bing, LinkedIn)
  • Meta (Facebook Pixel)
  • Hotjar
  • Kaspersky
  • Sentry
  • Criteo
  • tawk.to
  • OneTrust
  • New Relic
  • PayPal

Looking forward

This application security report is also available in PDF format with additional recommendations on how to address many of the concerns raised, along with additional insights.

We also publish many of our reports with dynamic charts on Cloudflare Radar, making it an excellent resource to keep up to date with the state of the Internet.

Automatically replacing polyfill.io links with Cloudflare’s mirror for a safer Internet

Post Syndicated from Matthew Prince original https://blog.cloudflare.com/automatically-replacing-polyfill-io-links-with-cloudflares-mirror-for-a-safer-internet


polyfill.io, a popular JavaScript library service, can no longer be trusted and should be removed from websites.

Multiple reports, corroborated with data seen by our own client-side security system, Page Shield, have shown that the polyfill service was being used, and could be used again, to inject malicious JavaScript code into users’ browsers. This is a real threat to the Internet at large given the popularity of this library.

We have, over the last 24 hours, released an automatic JavaScript URL rewriting service that will rewrite any link to polyfill.io found in a website proxied by Cloudflare to a link to our mirror under cdnjs. This will avoid breaking site functionality while mitigating the risk of a supply chain attack.

Any website on the free plan has this feature automatically activated now. Websites on any paid plan can turn on this feature with a single click.

You can find this new feature under Security ⇒ Settings on any zone using Cloudflare.

Contrary to what is stated on the polyfill.io website, Cloudflare has never recommended the polyfill.io service or authorized their use of Cloudflare’s name on their website. We have asked them to remove the false statement and they have, so far, ignored our requests. This is yet another warning sign that they cannot be trusted.

If you are not using Cloudflare today, we still highly recommend that you remove any use of polyfill.io or find an alternative solution. And while the automatic replacement function will handle most cases, the best practice, even if you are a customer, is to remove polyfill.io from your projects and replace it with a secure alternative mirror like Cloudflare’s.

You can do this by searching your code repositories for instances of polyfill.io and replacing it with cdnjs.cloudflare.com/polyfill/ (Cloudflare’s mirror). This is a non-breaking change as the two URLs will serve the same polyfill content. All website owners, regardless of the website using Cloudflare, should do this now.
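As a rough illustration, a script like the following could automate that search-and-replace across a local checkout. This is a minimal sketch, not an official Cloudflare tool; the file extensions scanned and the URL-matching regex are assumptions you should adapt to your project.

```python
import re
from pathlib import Path

# Matches absolute or protocol-relative URLs on any polyfill.io host,
# e.g. https://cdn.polyfill.io/v3/polyfill.min.js or //polyfill.io/v3/polyfill.js
POLYFILL_RE = re.compile(r"(?:https?:)?//(?:[\w-]+\.)?polyfill\.io/")
MIRROR = "https://cdnjs.cloudflare.com/polyfill/"

def rewrite_polyfill_links(text: str) -> str:
    """Replace polyfill.io URLs with Cloudflare's cdnjs mirror.

    The path and query string after the host are preserved, so the mirror
    serves the same polyfill bundle the original URL requested.
    """
    return POLYFILL_RE.sub(MIRROR, text)

def rewrite_tree(root: str) -> int:
    """Rewrite HTML/JS files under `root` in place; return the number changed."""
    touched = 0
    for path in Path(root).rglob("*"):
        if path.suffix not in {".html", ".htm", ".js"}:
            continue
        original = path.read_text(encoding="utf-8", errors="ignore")
        updated = rewrite_polyfill_links(original)
        if updated != original:
            path.write_text(updated, encoding="utf-8")
            touched += 1
    return touched
```

Because only the host portion is replaced, version paths and `features=` query strings pass through unchanged, which is what makes this a non-breaking substitution.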

How we came to this decision

Back in February, the domain polyfill.io, which hosts a popular JavaScript library, was sold to a new owner: Funnull, a relatively unknown company. At the time, we were concerned that this created a supply chain risk. This led us to spin up our own mirror of the polyfill.io code hosted under cdnjs, a JavaScript library repository sponsored by Cloudflare.

The new owner was unknown in the industry and did not have a track record of trust to administer a project such as polyfill.io. The concern, highlighted even by the original author, was that if they were to abuse polyfill.io by injecting additional code into the library, it could cause far-reaching security problems on the Internet affecting hundreds of thousands of websites. Or it could be used to perform a targeted supply chain attack against specific websites.

Unfortunately, that worry came true on June 25, 2024, when the polyfill.io service was used to inject nefarious code that, under certain circumstances, redirected users to other websites.

We have taken the exceptional step of using our ability to modify HTML on the fly to replace references to the polyfill.io CDN in our customers’ websites with links to our own, safe, mirror created back in February.

In the meantime, additional threat feed providers have also decided to flag the domain as malicious. We have not outright blocked the domain through any of the mechanisms we have, because we are concerned that doing so could cause widespread web outages given how broadly polyfill.io is used, with some estimates indicating usage on nearly 4% of all websites.

Corroborating data with Page Shield

The original report indicates that malicious code was injected that, under certain circumstances, would redirect users to betting sites. It was doing this by loading additional JavaScript that would perform the redirect, under a set of additional domains which can be considered Indicators of Compromise (IoCs):

https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/html/checkcachehw.js
https://www.googie-anaiytics.com/gtags.js
https://www.googie-anaiytics.com/keywords/vn-keyword.json
https://www.googie-anaiytics.com/webs-1.0.1.js
https://www.googie-anaiytics.com/analytics.js
https://www.googie-anaiytics.com/webs-1.0.2.js
https://www.googie-anaiytics.com/ga.js
https://www.googie-anaiytics.com/web-1.0.1.js
https://www.googie-anaiytics.com/web.js
https://www.googie-anaiytics.com/collect.js
https://kuurza.com/redirect?from=bitget

(note the intentional misspelling of Google Analytics)

Page Shield, our client side security solution, is available on all paid plans. When turned on, it collects information about JavaScript files loaded by end user browsers accessing your website.

By looking at the database of detected JavaScript files, we immediately found matches with the IoCs provided above, with a first-seen timestamp on a Page Shield-detected JavaScript file as far back as 2024-06-08 15:23:51. This was a clear indication that malicious activity was active and associated with polyfill.io.
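To illustrate the kind of cross-referencing involved, here is a simplified sketch of matching detected script URLs against the IoC hostnames listed above. Page Shield's actual detection pipeline is internal to Cloudflare; this function only demonstrates the matching idea.

```python
from urllib.parse import urlparse

# IoC hostnames taken from the URLs in the public reports
# (note the intentional misspelling of Google Analytics)
IOC_HOSTS = {"www.googie-anaiytics.com", "kuurza.com"}

def match_iocs(detected_urls):
    """Return the detected script URLs whose hostname is a known IoC."""
    hits = []
    for url in detected_urls:
        host = urlparse(url).hostname
        if host in IOC_HOSTS:
            hits.append(url)
    return hits
```

Matching on the exact hostname rather than a substring avoids false positives against the legitimate, similarly spelled Google Analytics domains.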

Replacing insecure JavaScript links to polyfill.io

To achieve performant HTML rewriting, we need to make blazing-fast HTML alterations as responses stream through Cloudflare’s network. This is made possible by ROFL (Response Overseer for FL), which powers various Cloudflare products that alter HTML as it streams, such as Cloudflare Fonts, Email Obfuscation, and Rocket Loader.

ROFL is developed entirely in Rust. Rust’s memory-safety guarantees are indispensable for protecting against memory bugs while processing a staggering volume of requests, measured in millions per second. Rust’s compiled nature also allows us to finely optimize our code for specific hardware configurations, delivering performance gains compared to interpreted languages.

The performance of ROFL allows us to rewrite HTML on-the-fly and modify the polyfill.io links quickly, safely, and efficiently. This speed helps us reduce any additional latency added by processing the HTML file.

If the feature is turned on, for any HTTP response with an HTML Content-Type, we parse the document and inspect the src attribute of every script tag. If any src links to polyfill.io, we rewrite the attribute to point to our mirror instead. We map to the correct version of the polyfill service and leave the query string untouched.

The logic will not activate if a Content Security Policy (CSP) header is found in the response. This ensures we don’t rewrite a link in a way that violates the site’s CSP and potentially breaks the website.
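The rewrite logic described above can be sketched as follows. This is an illustrative approximation in Python, not ROFL's actual streaming Rust implementation; the header checks and the regex are deliberately simplified assumptions.

```python
import re

# Matches a script tag's src attribute pointing at any polyfill.io host;
# group 1 is the prefix up to the URL, group 2 is the path + query + closing quote.
POLYFILL_SRC = re.compile(
    r'(<script\b[^>]*\bsrc=")(?:https?:)?//(?:[\w-]+\.)?polyfill\.io/([^"]*")',
    re.IGNORECASE,
)

def rewrite_response(headers: dict, body: str) -> str:
    """Rewrite polyfill.io script src attributes, mirroring the described rules:
    only act on HTML responses, and skip entirely when a CSP header is present."""
    if "text/html" not in headers.get("Content-Type", ""):
        return body  # only HTML responses are rewritten
    if "Content-Security-Policy" in headers:
        return body  # don't risk violating the site's CSP
    # Preserve the version path and query string; swap only the host.
    return POLYFILL_SRC.sub(r"\1https://cdnjs.cloudflare.com/polyfill/\2", body)
```

Keeping the path and query string intact is what maps each request onto the equivalent bundle in the cdnjs mirror without breaking site functionality.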

Default on for free customers, optional for everyone else

Cloudflare proxies millions of websites, and a large portion of these sites are on our free plan. Free plan customers tend to have simpler applications and often lack the resources to react quickly to security concerns. We therefore decided to turn the feature on by default for sites on our free plan: the likelihood of causing issues is reduced, and it helps keep safe a very large portion of the applications using polyfill.io.

Paid plan customers, on the other hand, have more complex applications and react quicker to security notices. We are confident that most paid customers using polyfill.io and Cloudflare will appreciate the ability to virtually patch the issue with a single click, while controlling when to do so.

All customers can turn off the feature at any time.

This isn’t the first time we’ve decided a security problem was so widespread and serious that we’d enable protection for all customers regardless of whether they were a paying customer or not. Back in 2014, we enabled Shellshock protection for everyone. In 2021, when the log4j vulnerability was disclosed we rolled out protection for all customers.

Do not use polyfill.io

If you are using Cloudflare, you can remove polyfill.io with a single click on the Cloudflare dashboard by heading over to your zone ⇒ Security ⇒ Settings. If you are a free customer, the rewrite is automatically active. This feature, we hope, will help you quickly patch the issue.

Nonetheless, you should ultimately search your code repositories for instances of polyfill.io and replace them with an alternative provider, such as Cloudflare’s secure mirror under cdnjs (https://cdnjs.cloudflare.com/polyfill/). Website owners who are not using Cloudflare should also perform these steps.

The underlying bundle links you should use are:

For minified: https://cdnjs.cloudflare.com/polyfill/v3/polyfill.min.js
For unminified: https://cdnjs.cloudflare.com/polyfill/v3/polyfill.js

Doing this ensures your website is no longer relying on polyfill.io.

Application Security at re:Inforce 2024

Post Syndicated from Daniel Begimher original https://aws.amazon.com/blogs/security/application-security-at-reinforce-2024/

Join us in Philadelphia, Pennsylvania, on June 10–12, 2024, for AWS re:Inforce, a security learning conference where you can enhance your skills and confidence in cloud security, compliance, identity, and privacy. As an attendee, you will have access to hundreds of technical and non-technical sessions, an Expo featuring Amazon Web Services (AWS) experts and AWS Security Competency Partners, and keynote sessions led by industry leaders. AWS re:Inforce offers a comprehensive focus on six key areas, including Application Security.

The Application Security track helps you understand and implement best practices for securing your applications throughout the development lifecycle. This year, we are focusing on several key themes:

  • Building a culture of security – Learn how to define and influence organizational behavior to speed up application development, while reducing overall security risk through implementing best practices, training your internal teams, and defining ownership.
  • Security of the pipeline – Discover how to embed governance and guardrails to allow developer agility, while maintaining security across your continuous integration and delivery (CI/CD) pipelines.
  • Security in the pipeline – Explore tooling and automation to reduce the mean time of security reviews and embed continuous security into each stage of the development pipeline.
  • Supply chain security – Gain improved awareness of how risks are introduced by extension, track dependencies, and identify vulnerabilities used in your software.

Additionally, this year the Application Security track will have sessions focused on generative AI (gen AI), covering how to secure gen AI applications and use gen AI for development. Join these sessions to deepen your knowledge and up-level your skills, so that you can build modern applications that are robust, resilient, and secure.

Breakout sessions, chalk talks, lightning talks, and code talks

APS201 | Breakout session | Accelerate securely: The Generative AI Security Scoping Matrix
As generative AI ignites business innovation, cybersecurity teams need to keep up with the accelerating domain. Security leaders are seeking tools and answers to help drive requirements around governance, compliance, legal, privacy, threat mitigations, resiliency, and more. This session introduces you to the Generative AI Security Scoping Matrix, which is designed to provide a common language and thought model for approaching generative AI security. Leave the session with a framework, techniques, and best practices that you can use to support responsible adoption of generative AI solutions designed to help your business move at an ever-increasing pace.

APS301 | Breakout session | Enhance AppSec: Generative AI integration in AWS testing
This session presents an in-depth look at the AWS Security Testing program, emphasizing its scaling efforts to help ensure new products and services meet a high security bar pre-launch. With a focus on integrating generative AI into its testing framework, the program showcases how AWS anticipates and mitigates complex security threats to maintain cloud security. Learn about AWS’s proactive approaches to collaboration across teams and mitigating vulnerabilities, enriched by case studies that highlight the program’s flexibility and dedication to security excellence. Ideal for security experts and cloud architects, this session offers valuable insights into safeguarding cloud computing technologies.

APS302 | Breakout session | Building a secure MLOps pipeline, featuring PathAI
DevOps and MLOps are both software development strategies that focus on collaboration between developers, operations, and data science teams. In this session, learn how to build modern, secure MLOps using AWS services and tools for infrastructure and network isolation, data protection, authentication and authorization, detective controls, and compliance. Discover how AWS customer PathAI, a leading digital pathology and AI company, uses seamless DevOps and MLOps strategies to run their AISight intelligent image management system and embedded AI products to support anatomic pathology labs and bio-pharma partners globally.

APS401 | Breakout session | Keeping your code secure
Join this session to dive deep into how AWS implemented generative AI tooling in our developer workflows. Learn about the AWS approach to creating the underlying code scanning and remediation engines that AWS uses internally. Also, explore how AWS integrated these tools into the services we offer through reactive and proactive security features. Leave this session with a better understanding of how you can use AWS to secure code and how the code offered to you through AWS generative AI services is designed to be secure.

APS402 | Breakout session | Verifying code using automated reasoning
In this session, AWS principal applied scientists discuss how they use automated reasoning to certify bug-free code mathematically and help secure underlying infrastructure. Explore how to use Kani, an AWS created open source engine that analyzes, verifies, and detects errors in safe and unsafe Rust code. Hear how AWS built and implemented Kani internally with examples taken from real-world AWS open source code. Leave this session with the tools you need to get started using this Rust verification engine for your own workloads.

APS232 | Chalk talk | Successful security team patterns
It’s more common to hear what a security team does than to hear how the security team does it, or with whom the security team works rather than how it was designed to work. Organizational design is often demoted to a secondary consideration behind the goals of a security team, despite intentional design generally being what empowers, or hinders, security teams from achieving their goals. Security must work across the organization, not in isolation. This chalk talk focuses on designing effective security teams for organizations moving to the cloud, which necessitates outlining both what the security team works on and how it achieves that work.

APS331 | Chalk talk | Verifiable and auditable security inside the pipeline
In this chalk talk, explore platform engineering best practices at AWS. AWS deploys more than 150 million times per year while maintaining 143 different compliance framework attestations and certifications. Internally, AWS has learned how to make security easier for builder teams. Learn key risks associated with operating pipelines at scale and Amazonian mechanisms to make security controls inside the pipeline verifiable and auditable so that you can shift compliance and auditing left into the pipeline.

APS233 | Chalk talk | Threat modeling your generative AI workload to evaluate security risk
As the capabilities and possibilities of machine learning continue to expand with advances in generative AI, understanding the security risks introduced by these advances is essential for protecting your valuable AWS workloads. This chalk talk guides you through a practical threat modeling approach, empowering you to create a threat model for your own generative AI applications. Gain confidence to build your next generative AI workload securely on AWS with the help of threat modeling and leave with actionable steps you can take to get started.

APS321 | Lightning talk | Using generative AI to create more secure applications
Generative AI revolutionizes application development by enhancing security and efficiency. This lightning talk explores how Amazon Q, your generative AI assistant, empowers you to build, troubleshoot, and transform applications securely. Discover how its capabilities streamline the process, allowing you to focus on innovation while ensuring robust security measures. Unlock the power of generative AI for helping build secure, cutting-edge applications.

APS341 | Code talk | Shifting left, securing right: Container supply chain security
Supply chain security for containers helps ensure you can detect software security risks in third-party packages and remediate them during the container image build process. This prevents container images with vulnerabilities from being pushed to your container registry and causing potential harm to your production systems. In this code talk, learn how you can apply a shift-left approach to container image security testing in your deployment pipelines.

Hands-on sessions

APS373 | Workshop | Build a more secure generative AI chatbot with security guardrails
Generative AI is an emerging technology that is disrupting multiple industries. An early generative AI use case is interactive chat in customer service applications. As users interact with generative AI chatbots, there are security risks, such as prompt injection and jailbreaking resulting from specially crafted inputs sent to large language models. In this workshop, learn how to build an AI chatbot using Amazon Bedrock and protect it using Guardrails for Amazon Bedrock. You must bring your laptop to participate.

APS351 | Builders’ session | Implement controls for the OWASP Top 10 for LLM applications
In this builders’ session, learn how to implement security controls that address the OWASP Top 10 for LLM applications on AWS. Experts guide you through the use of AWS security tooling to provide practical insights and solutions to mitigate the most critical security risks outlined by OWASP. Discover technical options and choices you can make in cloud infrastructure and large-scale enterprise environments augmented by AWS generative AI technology. You must bring your laptop to participate.

APS271 | Workshop | Threat modeling for builders
In this workshop, learn threat modeling core concepts and how to apply them through a series of group exercises. Key topics include threat modeling personas, key phases, data flow diagrams, STRIDE, and risk response strategies as well as the introduction of a “threat grammar rule” with an associated tool. In exercises, identify threats and mitigations through the lens of each threat modeling persona. Assemble in groups and walk through a case study, with AWS threat modeling experts on hand to guide you and provide feedback. You must bring your laptop to participate.

APS371 | Workshop | Integrating open source security tools with AWS code services
AWS, open source, and partner tooling work together to accelerate your software development lifecycle. In this workshop, learn how to use the Automated Security Helper (ASH), an open source application security tool, to quickly integrate various security testing tools into your software build and deployment flows. AWS experts guide you through the process of security testing locally on your machines and within the AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline services. In addition, discover how to identify potential security issues in your applications through static analysis, software composition analysis, and infrastructure-as-code testing. You must bring your laptop to participate.

This blog post highlighted some of the unique sessions in the Application Security track at the upcoming re:Inforce 2024 conference in Philadelphia. If these sessions pique your interest, register for re:Inforce 2024 to attend them, along with the numerous other Application Security sessions offered at the conference. For a comprehensive overview of sessions across all tracks, explore the AWS re:Inforce catalog preview.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Daniel Begimher
Daniel is a Senior Security Engineer specializing in cloud security and incident response solutions. He holds all AWS certifications and authored the open-source code scanning tool, Automated Security Helper. In his free time, Daniel enjoys gadgets, video games, and traveling.

Ipolitas Dunaravich
Ipolitas is a technical marketing leader for networking and security services at AWS. With over 15 years of marketing experience and more than 4 years at AWS, Ipolitas is the Head of Marketing for AppSec services and curates the security content for re:Inforce and re:Invent.

The art of possible: Three themes from RSA Conference 2024

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/the-art-of-possible-three-themes-from-rsa-conference-2024/

RSA Conference 2024 drew 650 speakers, 600 exhibitors, and thousands of security practitioners from across the globe to the Moscone Center in San Francisco, California from May 6 through 9.

The keynote lineup was diverse, with 33 presentations featuring speakers ranging from WarGames actor Matthew Broderick, to public and private-sector luminaries such as Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly, U.S. Secretary of State Antony Blinken, security technologist Bruce Schneier, and cryptography experts Tal Rabin, Whitfield Diffie, and Adi Shamir.

Topics aligned with this year’s conference theme, “The art of possible,” and focused on actions we can take to revolutionize technology through innovation, while fortifying our defenses against an evolving threat landscape.

This post highlights three themes that caught our attention: artificial intelligence (AI) security, the Secure by Design approach to building products and services, and Chief Information Security Officer (CISO) collaboration.

AI security

Organizations in all industries have started building generative AI applications using large language models (LLMs) and other foundation models (FMs) to enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. So it’s not surprising that AI dominated conversations. Over 100 sessions touched on the topic, and the desire of attendees to understand AI technology and learn how to balance its risks and opportunities was clear.

“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system.” — CISA

FMs and the applications built around them are often used with highly sensitive business data such as personal data, compliance data, operational data, and financial information to optimize the model’s output. As we explore the advantages of generative AI, protecting highly sensitive data and investments is a top priority. However, many organizations aren’t paying enough attention to security.

A joint generative AI security report released by Amazon Web Services (AWS) and the IBM Institute for Business Value during the conference found that 82% of business leaders view secure and trustworthy AI as essential for their operations, but only 24% are actively securing generative AI models and embedding security processes in AI development. In fact, nearly 70% say innovation takes precedence over security, despite concerns over threats and vulnerabilities (detailed in Figure 1).

Figure 1: Generative AI adoption concerns (Source: IBM Security)

Because data and model weights—the numerical values models learn and adjust as they train—are incredibly valuable, organizations need them to stay protected, secure, and private, whether that means restricting access from an organization’s own administrators, customers, or cloud service provider, or protecting data from vulnerabilities in software running in the organization’s own environment.

There is no silver AI-security bullet, but as the report points out, there are proactive steps you can take to start protecting your organization and leveraging AI technology to improve your security posture:

  1. Establish a governance, risk, and compliance (GRC) foundation. Trust in gen AI starts with new security governance models (Figure 2) that integrate and embed GRC capabilities into your AI initiatives, and include policies, processes, and controls that are aligned with your business objectives.

    Figure 2: Updating governance, risk, and compliance models (Source: IBM Security)

    In the RSA Conference session AI: Law, Policy, and Common Sense Suggestions to Stay Out of Trouble, digital commerce and gaming attorney Behnam Dayanim highlighted ethical, policy, and legal considerations—including AI-specific regulations—as well as governance structures such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF 1.0) that can help maximize a successful implementation and minimize potential risk.

  2. Strengthen your security culture. When we think of securing AI, it’s natural to focus on technical measures that can help protect the business. But organizations are made up of people—not technology. Educating employees at all levels of the organization can help avoid preventable harms such as prompt-based risks and unapproved tool use, and foster a resilient culture of cybersecurity that supports effective risk mitigation, incident detection and response, and continuous collaboration.

    “You’ve got to understand early on that security can’t be effective if you’re running it like a project or a program. You really have to run it as an operational imperative—a core function of the business. That’s when magic can happen.” — Hart Rossman, Global Services Security Vice President at AWS
  3. Engage with partners. Developing and securing AI solutions requires resources and skills that many organizations lack. Partners can provide you with comprehensive security support—whether that’s informing and advising you about generative AI, or augmenting your delivery and support capabilities. This can help make your engineers and your security controls more effective.

    While many organizations purchase security products or solutions with embedded generative AI capabilities, nearly two-thirds, as detailed in Figure 3, report that their generative AI security capabilities come through some type of partner.

    Figure 3: More than 90% of security gen AI capabilities come from third-party products or partners (Source: IBM Security)

    Tens of thousands of customers are using AWS, for example, to experiment and move transformative generative AI applications into production. AWS provides AI-powered tools and services, a Generative AI Innovation Center program, and an extensive network of AWS partners that have demonstrated expertise delivering machine learning (ML) and generative AI solutions. These resources can support your teams with hands-on help developing solutions mapped to your requirements, and a broader collection of knowledge they can use to help you make the nuanced decisions required for effective security.

View the joint report and AWS generative AI security resources for additional guidance.

Secure by Design

Building secure software was a popular and related focus at the conference. Insecure design is ranked as the number four critical web application security concern on the Open Web Application Security Project (OWASP) Top 10.

The concept known as Secure by Design is gaining importance in the effort to mitigate vulnerabilities early, minimize risks, and recognize security as a core business requirement. Secure by Design builds on security models such as Zero Trust, and aims to reduce the burden of cybersecurity and break the cycle of constantly creating and applying updates by developing products that are foundationally secure.

More than 60 technology companies—including AWS—signed CISA’s Secure by Design Pledge during RSA Conference as part of a collaborative push to put security first when designing products and services.

The pledge demonstrates a commitment to making measurable progress towards seven goals within a year:

  • Broaden the use of multi-factor authentication (MFA)
  • Reduce default passwords
  • Enable a significant reduction in the prevalence of one or more vulnerability classes
  • Increase the installation of security patches by customers
  • Publish a vulnerability disclosure policy (VDP)
  • Demonstrate transparency in vulnerability reporting
  • Strengthen the ability of customers to gather evidence of cybersecurity intrusions affecting products

“From day one, we have pioneered secure by design and secure by default practices in the cloud, so AWS is designed to be the most secure place for customers to run their workloads. We are committed to continuing to help organizations around the world elevate their security posture, and we look forward to collaborating with CISA and other stakeholders to further grow and promote security by design and default practices.” — Chris Betz, CISO at AWS

The need for security by design applies to AI like any other software system. To protect users and data, we need to build security into ML and AI with a Secure by Design approach that considers these technologies to be part of a larger software system, and weaves security into the AI pipeline.

Since models tend to have very high privileges and access to data, integrating an AI bill of materials (AI/ML BOM) and Cryptography Bill of Materials (CBOM) into BOM processes can help you catalog security-relevant information, and gain visibility into model components and data sources. Additionally, frameworks and standards such as the AI RMF 1.0, the HITRUST AI Assurance Program, and ISO/IEC 42001 can facilitate the incorporation of trustworthiness considerations into the design, development, and use of AI systems.

CISO collaboration

In the RSA Conference keynote session CISO Confidential: What Separates The Best From The Rest, Trellix CEO Bryan Palma and CISO Harold Rivas noted that there are approximately 32,000 global CISOs today—4 times more than 10 years ago. The challenges they face include staffing shortages, liability concerns, and a rapidly evolving threat landscape. According to research conducted by the Information Systems Security Association (ISSA), nearly half of organizations (46%) report that their cybersecurity team is understaffed, and more than 80% of CISOs recently surveyed by Trellix have experienced an increase in cybersecurity threats over the past six months. When asked what would most improve their organizations’ abilities to defend against these threats, their top answer was industry peers sharing insights and best practices.

Building trusted relationships with peers and technology partners can help you gain the knowledge you need to effectively communicate the story of risk to your board of directors, keep up with technology, and build success as a CISO.

AWS CISO Circles provide a forum for cybersecurity executives from organizations of all sizes and industries to share their challenges, insights, and best practices. CISOs come together in locations around the world to discuss the biggest security topics of the moment. With NDAs in place and the Chatham House Rule in effect, security leaders can feel free to speak their minds, ask questions, and get feedback from peers through candid conversations facilitated by AWS Security leaders.

“When it comes to security, community unlocks possibilities. CISO Circles give us an opportunity to deeply lean into CISOs’ concerns, and the topics that resonate with them. Chatham House Rule gives security leaders the confidence they need to speak openly and honestly with each other, and build a global community of knowledge-sharing and support.” — Clarke Rodgers, Director of Enterprise Strategy at AWS

At RSA Conference, CISO Circle attendees discussed the challenges of adopting generative AI. When asked whether CISOs or the business own generative AI risk for the organization, the consensus was that security can help with policies and recommendations, but the business should own the risk and decisions about how and when to use the technology. Some attendees noted that they took initial responsibility for generative AI risk, before transitioning ownership to an advisory board or committee comprised of leaders from their HR, legal, IT, finance, privacy, and compliance and ethics teams over time. Several CISOs expressed the belief that quickly taking ownership of generative AI risk before shepherding it to the right owner gave them a valuable opportunity to earn trust with their boards and executive peers, and to demonstrate business leadership during a time of uncertainty.

Embrace the art of possible

There are many more RSA Conference highlights on a wide range of additional topics, including post-quantum cryptography developments, identity and access management, data perimeters, threat modeling, cybersecurity budgets, and cyber insurance trends. If there’s one key takeaway, it’s that we should never underestimate what is possible from threat actors or defenders. By harnessing AI’s potential while addressing its risks, building foundationally secure products and services, and developing meaningful collaboration, we can collectively strengthen security and establish cyber resilience.

Join us to learn more about cloud security in the age of generative AI at AWS re:Inforce 2024 June 10–12 in Pennsylvania. Register today with the code SECBLOfnakb to receive a limited time $150 USD discount, while supplies last.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS, based in Chicago. She has more than a decade of experience in the security industry, and focuses on effectively communicating cybersecurity risk. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Danielle Ruderman

Danielle is a Senior Manager for the AWS Worldwide Security Specialist Organization, where she leads a team that enables global CISOs and security leaders to better secure their cloud environments. Danielle is passionate about improving security by building company security culture that starts with employee engagement.

Upcoming Let’s Encrypt certificate chain change and impact for Cloudflare customers

Post Syndicated from Dina Kozlov original https://blog.cloudflare.com/upcoming-lets-encrypt-certificate-chain-change-and-impact-for-cloudflare-customers


Let’s Encrypt, a publicly trusted certificate authority (CA) that Cloudflare uses to issue TLS certificates, has been relying on two distinct certificate chains. One is cross-signed with IdenTrust, a globally trusted CA that has been around since 2000, and the other is Let’s Encrypt’s own root CA, ISRG Root X1. Since Let’s Encrypt launched, ISRG Root X1 has been steadily gaining its own device compatibility.

On September 30, 2024, Let’s Encrypt’s certificate chain cross-signed with IdenTrust will expire. To proactively prepare for this change, on May 15, 2024, Cloudflare will stop issuing certificates from the cross-signed chain and will instead use Let’s Encrypt’s ISRG Root X1 chain for all future Let’s Encrypt certificates.

The change in the certificate chain will impact legacy devices and systems, such as Android devices version 7.1.1 or older, as those exclusively rely on the cross-signed chain and lack the ISRG X1 root in their trust store. These clients may encounter TLS errors or warnings when accessing domains secured by a Let’s Encrypt certificate.

According to Let’s Encrypt, more than 93.9% of Android devices already trust the ISRG Root X1 and this number is expected to increase in 2024, especially as Android releases version 14, which makes the Android trust store easily and automatically upgradable.

We took a look at the data ourselves and found that 2.96% of all Android requests come from devices that will be affected by the change. In addition, only 1.13% of all requests from Firefox come from affected versions, which means that most (98.87%) of the requests coming from Firefox on Android will not be impacted.

Preparing for the change

If you’re worried about the change impacting your clients, there are a few things that you can do to reduce the impact of the change. If you control the clients that are connecting to your application, we recommend updating the trust store to include the ISRG Root X1. If you use certificate pinning, remove or update your pin. In general, we discourage all customers from pinning their certificates, as this usually leads to issues during certificate renewals or CA changes.
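If you control the clients, a quick way to see whether a Python client's default trust store already includes ISRG Root X1 is a sketch like the following. This is an illustration, not an official compatibility check; note that certificates loaded from an OpenSSL capath directory may not be listed until they have been used at least once.

```python
import ssl

def trusts_isrg_root_x1() -> bool:
    """Return True if the default trust store contains Let's Encrypt's
    ISRG Root X1 root certificate."""
    ctx = ssl.create_default_context()
    for cert in ctx.get_ca_certs():
        # Each subject is a tuple of relative distinguished names (RDNs),
        # themselves tuples of (attribute, value) pairs.
        for rdn in cert.get("subject", ()):
            for attribute, value in rdn:
                if attribute == "commonName" and value == "ISRG Root X1":
                    return True
    return False

print(trusts_isrg_root_x1())
```

If this returns False, the client may be relying on the expiring cross-signed chain, and updating its trust store is worth considering.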

If you experience issues with the Let’s Encrypt chain change, and you’re using Advanced Certificate Manager or SSL for SaaS on the Enterprise plan, you can choose to switch your certificate to use Google Trust Services as the certificate authority instead.

For more information, please refer to our developer documentation.

While this change will impact a very small portion of clients, we support the shift that Let’s Encrypt is making, as it moves us toward a more secure and agile Internet.

Embracing change to move towards a better Internet

Looking back, there were a number of challenges that slowed down the adoption of new technologies and standards that helped make the Internet faster, more secure, and more reliable.

For starters, before Cloudflare launched Universal SSL, free certificates were not available. Instead, domain owners had to pay around $100 to get a TLS certificate. For a small business, this was a big cost, and without browsers enforcing TLS, it significantly hindered TLS adoption for years. Insecure algorithms have taken decades to deprecate due to the lack of support for new algorithms in browsers and devices. We learned this lesson while deprecating SHA-1.

Supporting new security standards and protocols is vital for us to continue improving the Internet. Over the years, big and sometimes risky changes were made in order for us to move forward. The launch of Let’s Encrypt in 2015 was monumental. Let’s Encrypt allowed every domain to get a TLS certificate for free, which paved the way to a more secure Internet, with now around 98% of traffic using HTTPS.

In 2014, Cloudflare launched elliptic curve digital signature algorithm (ECDSA) support for Cloudflare-issued certificates and made the decision to issue ECDSA-only certificates to free customers. This boosted ECDSA adoption by pressing clients and web operators to make changes to support the new algorithm, which provided the same (if not better) security as RSA while also improving performance. In addition to that, modern browsers and operating systems are now being built in a way that allows them to constantly support new standards, so that they can deprecate old ones.

For us to move forward in supporting new standards and protocols, we need to make the Public Key Infrastructure (PKI) ecosystem more agile. By retiring the cross-signed chain, Let’s Encrypt is pushing devices, browsers, and clients to support adaptable trust stores. This allows clients to support new standards without causing a breaking change. It also lays the groundwork for new certificate authorities to emerge.

Today, one of the main reasons why there’s a limited number of CAs available is that it takes years for them to become widely trusted on their own, that is, without cross-signing with another CA. In 2017, Google launched a new publicly trusted CA, Google Trust Services, that issued free TLS certificates. Even though they launched a few years after Let’s Encrypt, they faced the same challenges with device compatibility and adoption, which led them to cross-sign with GlobalSign’s CA. We hope that, by the time GlobalSign’s cross-signed chain comes up for expiration, almost all traffic will be coming from modern clients and browsers, meaning the impact of the change should be minimal.

Security Week 2024 wrap up

Post Syndicated from Daniele Molteni original https://blog.cloudflare.com/security-week-2024-wrap-up


The next 12 months have the potential to reshape the global political landscape, with elections occurring in more than 80 nations in 2024, while new technologies, such as AI, capture our imagination and pose new security challenges.

Against this backdrop, the role of CISOs has never been more important. Grant Bourzikas, Cloudflare’s Chief Security Officer, shared his views on the biggest challenges currently facing the security industry in the Security Week opening blog.

Over the past week, we announced a number of new products and features that align with what we believe are the most crucial challenges for CISOs around the globe. We released features that span Cloudflare’s product portfolio, ranging from application security to securing employees and cloud infrastructure. We have also published a few stories on how we take a Customer Zero approach to using Cloudflare services to manage security at Cloudflare.

We hope you find these stories interesting and are excited by the new Cloudflare products. In case you missed any of these announcements, here is a recap of Security Week:

Responding to opportunity and risk from AI

  • Cloudflare announces Firewall for AI – Cloudflare announced the development of Firewall for AI, a protection layer that can be deployed in front of Large Language Models (LLMs) to identify abuses and attacks.
  • Defensive AI: Cloudflare’s framework for defending against next-gen threats – Defensive AI is the framework Cloudflare uses when integrating intelligent systems into its solutions. Cloudflare’s AI models look at customer traffic patterns, providing that organization with a tailored defense strategy unique to their environment.
  • Cloudflare launches AI Assistant for Security Analytics – We released a natural language assistant as part of Security Analytics. Now it is easier than ever to get powerful insights about your applications by exploring log and security events using the new natural language query interface.
  • Dispelling the Generative AI fear: how Cloudflare secures inboxes against AI-enhanced phishing – Generative AI is being used by malicious actors to make phishing attacks much more convincing. Learn how Cloudflare’s email security systems are able to see past the deception using advanced machine learning models.

Maintaining visibility and control as applications and clouds change

  • Magic Cloud Networking simplifies security, connectivity, and management of public clouds – Introducing Magic Cloud Networking, a new set of capabilities to visualize and automate cloud networks to give our customers easy, secure, and seamless connection to public cloud environments.
  • Secure your unprotected assets with Security Center: quick view for CISOs – Security Center now includes new tools to address a common challenge: ensuring comprehensive deployment of Cloudflare products across your infrastructure. Gain precise insights into where and how to optimize your security posture.
  • Announcing two highly requested DLP enhancements: Optical Character Recognition (OCR) and Source Code Detections – Cloudflare One now supports Optical Character Recognition and detects source code as part of its Data Loss Prevention service. These two features make it easier for organizations to protect their sensitive data and reduce the risks of breaches.
  • Introducing behavior-based user risk scoring in Cloudflare One – We are introducing user risk scoring as part of Cloudflare One, a new set of capabilities to detect risk based on user behavior, so that you can improve security posture across your organization.
  • Eliminate VPN vulnerabilities with Cloudflare One – The Cybersecurity & Infrastructure Security Agency issued an Emergency Directive due to the Ivanti Connect Secure and Policy Secure vulnerabilities. In this post, we discuss the threat actor tactics exploiting these vulnerabilities and how Cloudflare One can mitigate these risks.
  • Zero Trust WARP: tunneling with a MASQUE – This blog discusses the introduction of MASQUE to Zero Trust WARP and how Cloudflare One customers will benefit from this modern protocol.
  • Collect all your cookies in one jar with Page Shield Cookie Monitor – Protecting online privacy starts with knowing what cookies are used by your websites. Our client-side security solution, Page Shield, extends transparent monitoring to HTTP cookies.
  • Protocol detection with Cloudflare Gateway – Cloudflare Secure Web Gateway now supports the detection, logging, and filtering of network protocols using packet payloads without the need for inspection.
  • Introducing Requests for Information (RFIs) and Priority Intelligence Requirements (PIRs) for threat intelligence teams – Our Security Center now houses Requests for Information and Priority Intelligence Requirements. These features are available via API as well and Cloudforce One customers can start leveraging them today for enhanced security analysis.

Consolidating to drive down costs

  • Log Explorer: monitor security events without third-party storage – With the combined power of Security Analytics and Log Explorer, security teams can analyze, investigate, and monitor logs natively within Cloudflare, reducing time to resolution and overall cost of ownership by eliminating the need of third-party logging systems.
  • Simpler migration from Netskope and Zscaler to Cloudflare: introducing Deskope and a Descaler partner update – Cloudflare expands the Descaler program to Authorized Service Delivery Partners (ASDPs). Cloudflare is also launching Deskope, a new set of tooling to help migrate existing Netskope customers to Cloudflare One.
  • Protecting APIs with JWT Validation – Cloudflare customers can now protect their APIs from broken authentication attacks by validating incoming JSON Web Tokens with API Gateway.
  • Simplifying how enterprises connect to Cloudflare with Express Cloudflare Network Interconnect – Express Cloudflare Network Interconnect makes it fast and easy to connect your network to Cloudflare. Customers can now order Express CNIs directly from the Cloudflare dashboard.
  • Cloudflare treats SASE anxiety for VeloCloud customers – The turbulence in the SASE market is driving many customers to seek help. We’re doing our part to help VeloCloud customers who are caught in the crosshairs of shifting strategies.
  • Free network flow monitoring for all enterprise customers – Announcing a free version of Cloudflare’s network flow monitoring product, Magic Network Monitoring. Now available to all Enterprise customers.
  • Building secure websites: a guide to Cloudflare Pages and Turnstile Plugin – Learn how to use Cloudflare Pages and Turnstile to deploy your website quickly and easily while protecting it from bots, without compromising user experience.
  • General availability for WAF Content Scanning for file malware protection – Announcing the General Availability of WAF Content Scanning, protecting your web applications and APIs from malware by scanning files in-transit.

How can we help make the Internet better?

  • Cloudflare protects global democracy against threats from emerging technology during the 2024 voting season – At Cloudflare, we’re actively supporting a range of players in the election space by providing security, performance, and reliability tools to help facilitate the democratic process.
  • Navigating the maze of Magecart: a cautionary tale of a Magecart impacted website – Learn how a sophisticated Magecart attack was behind a campaign against e-commerce websites. This incident underscores the critical need for a strong client side security posture.
  • Cloudflare’s URL Scanner, new features, and the story of how we built it – Discover the enhanced URL Scanner API, now integrated with the Security Center Investigate Portal. Enjoy unlisted scans, multi-device screenshots, and seamless integration with the Cloudflare ecosystem.
  • Changing the industry with CISA’s Secure by Design principles – Security considerations should be an integral part of software’s design, not an afterthought. Explore how Cloudflare adheres to Cybersecurity & Infrastructure Security Agency’s Secure by Design principles to shift the industry.
  • The state of the post-quantum Internet – Nearly two percent of all TLS 1.3 connections established with Cloudflare are secured with post-quantum cryptography. In this blog post we discuss where we are now in early 2024, what to expect for the coming years, and what you can do today.
  • Advanced DNS Protection: mitigating sophisticated DNS DDoS attacks – Introducing the Advanced DNS Protection system, a robust defense mechanism designed to protect against the most sophisticated DNS-based DDoS attacks.

Sharing the Cloudflare way

  • Linux kernel security tunables everyone should consider adopting – This post illustrates some of the Linux kernel features that are helping Cloudflare keep its production systems more secure. We do a deep dive into how they work and why you should consider enabling them.
  • Securing Cloudflare with Cloudflare: a Zero Trust journey – A deep dive into how we have deployed Zero Trust at Cloudflare while maintaining user privacy.
  • Network performance update: Security Week 2024 – Cloudflare is the fastest provider for 95th percentile connection time in 44% of networks around the world. We dig into the data and talk about how we do it.
  • Harnessing chaos in Cloudflare offices – This blog discusses the new sources of “chaos” that have been added to LavaRand and how you can make use of that harnessed chaos in your next application.
  • Launching email security insights on Cloudflare Radar – The new Email Security section on Cloudflare Radar provides insights into the latest trends around threats found in malicious email, sources of spam and malicious email, and the adoption of technologies designed to prevent abuse of email.

A final word

Thanks for joining us this week, and stay tuned for our next Innovation Week in early April, focused on the developer community.

Application Security Posture Management

Post Syndicated from Rapid7 original https://blog.rapid7.com/2024/01/16/application-security-posture-management/

Accelerating the Remediation of Vulnerabilities From Code To Cloud

Written by Eric Sheridan, Chief Innovation Officer, Tromzo

In this guest blog post by Eric Sheridan, Chief Innovation Officer at valued Rapid7 partner Tromzo, you’ll learn how Rapid7 customers can utilize ASPM solutions to accelerate triaging, prioritization and remediation of findings from security testing products such as InsightAppSec and InsightCloudSec.

Application Security’s Massive Data Problem

Application Security teams have a massive data problem. With the widespread adoption of cloud native architectures and increasing fragmentation of development technologies, many teams amass a wide variety of security scanning tools. These technologies are highly specialized, designed to carry out comprehensive security testing as a means of identifying as many vulnerabilities as possible.

A natural byproduct of their deployment at scale is that, in aggregate, application security (appsec) teams are presented with thousands – if not millions – of vulnerabilities to process. If you’re going to deploy advanced application security testing solutions, then of course a significant amount of vulnerability data is going to be generated. In fact, I’d argue this is a good problem to have. It’s like the old saying goes: You cannot improve what you cannot measure.

Here’s the kicker though: given a backlog of, let’s say, 200k vulnerabilities with a severity of “critical” across the entire product stack, where do you start your remediation efforts and why? Put another way: is this critical more important than that critical? Answering this question requires additional context, which appsec teams often have to obtain manually. And how do you then disseminate that siloed vulnerability and track its remediation workflow to resolution? And can you replicate that for the other 199,999 critical vulnerabilities? This is what I mean when I say appsec teams have a massive data problem. Accelerating remediation, reducing risk, and demonstrating ROI requires us to be able to act on the data we collect at scale.

Introducing Application Security Posture Management

Overcoming Application Security’s massive data problem requires a completely new approach to how we operationalize vulnerability remediation, and this is exactly what Application Security Posture Management (ASPM) is designed to solve. In a recent Innovation Insight, Gartner defined ASPM as follows:

“Application security posture management analyzes security signals across software development, deployment and operation to improve visibility, better manage vulnerabilities and enforce controls. Security leaders can use ASPM to improve application security efficacy and better manage risk.” – Gartner

Obtaining and analyzing “security signals” requires integrations with various third party technologies as a means of deriving the context necessary to better understand the security implications of vulnerabilities within your enterprise and its environment. To see this in action, let’s revisit the question: “Is this critical more important than that critical?” A robust ASPM solution will provide you with context beyond just the vulnerability severity as reported by the security tool. Is this vulnerability associated with an asset that is actually deployed to production? Is the vulnerability internet-facing or internal only? Does either of these vulnerable assets process sensitive data, such as personally identifiable information (PII) or credit card information? By integrating with third party services such as Source Code Management systems and Cloud runtime environments, for example, ASPM is able to enrich vulnerabilities so that appsec teams can make more informed decisions about risk. In fact, with this additional context, an ASPM helps Application Security teams identify those vulnerabilities representing the greatest risk to the organization.
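As a rough sketch of how such enrichment can feed prioritization, consider a toy scoring model. The Finding schema, fields, and weights below are illustrative only, not Tromzo's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str                    # as reported by the scanner
    in_production: bool              # from the cloud runtime integration
    internet_facing: bool
    processes_sensitive_data: bool   # e.g. PII or credit card data

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def risk_score(f: Finding) -> int:
    """Combine scanner-reported severity with environmental context."""
    score = SEVERITY_WEIGHT.get(f.severity, 0)
    if f.in_production:
        score += 3
    if f.internet_facing:
        score += 2
    if f.processes_sensitive_data:
        score += 2
    return score

backlog = [
    Finding("SQLi in admin panel", "critical", False, False, False),
    Finding("SQLi in checkout", "critical", True, True, True),
]
# Two "criticals" with very different real-world risk: context breaks the tie.
backlog.sort(key=risk_score, reverse=True)
```

With this kind of scoring, the production, internet-facing finding that touches sensitive data floats to the top of the backlog even though both findings share the same scanner severity.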

Identifying the most significant vulnerabilities is only the first step, however. The second step is automating the remediation workflow for those vulnerabilities. ASPM enables the scalable dissemination of security vulnerabilities to their respective owners via integration with the ticketing and work management systems already in use by your developers today. Better yet, Application Security teams can monitor the remediation workflow of vulnerabilities to resolution all from within the ASPM. From a collaboration perspective, this is a massive win-win: development teams and appsec teams are able to collaborate on vulnerability remediation using their own respective technologies.

When you put all of this together, you’ll come to understand the greatest value-add provided by ASPM and realized by our customers at Tromzo:

ASPM solutions accelerate the triage and remediation of vulnerabilities representing the greatest risk to the organization at scale.

ASPM Core Capabilities

Effectively delivering on an integrated experience that accelerates the triage and remediation of vulnerabilities representing the greatest risk requires several core capabilities:

  1. The ability to aggregate security vulnerabilities across all scanning tools without impeding your ability to use the best-in-class security testing solutions.
  2. The ability to integrate with and build context from development tools across the CI/CD pipeline.
  3. The ability to derive relationships between the various software assets and security findings from code to cloud.
  4. The ability to express and overlay organizational- as well as team-specific security policies on top of security vulnerabilities.
  5. The ability to derive actions and insights from this metadata that help prioritize the most significant vulnerabilities and drive them to remediation.

Doing this effectively requires a tremendous amount of data, connectivity, analysis, and insight. With integrations across 70+ tools, Tromzo is delivering a best-in-class remediation ASPM solution.

How Rapid7 Customers Benefit from an ASPM Solution

By its very nature, ASPM fulfills the need for automation and efficiency of vulnerability remediation via integration across various security testing solutions and development technologies. With efficiency comes real cost savings. Let’s take a look at how Rapid7 customers can realize operational efficiencies using Tromzo.

Breaking Down Security Solution Silos

Rapid7 customers are already amassing best-in-class security testing solutions, such as InsightAppSec and InsightCloudSec. ASPM enables the integration of not only Rapid7 products but all your other security testing products into a single holistic view, whether it be Software Composition Analysis (SCA), Static Application Security Testing (SAST), Secrets Scanning, etc. This effectively breaks down the silos and operational overhead of individually managing these stand-alone tools. You’re freeing yourself from the need to analyze, triage, and prioritize data from dozens of different security products with different severity taxonomies and different vulnerability models. Instead, it’s: one location, one severity taxonomy, and one data model. This is a clear win for operational efficiency.
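A minimal illustration of the "one severity taxonomy" idea is a per-tool mapping applied at ingestion time. The tool names and native severity labels below are hypothetical:

```python
# Hypothetical per-tool severity mappings onto one shared taxonomy.
NORMALIZED = {
    ("tool_a", "sev1"): "critical",
    ("tool_a", "sev2"): "high",
    ("tool_b", "urgent"): "critical",
    ("tool_b", "important"): "high",
    ("tool_b", "moderate"): "medium",
}

def normalize(tool: str, native_severity: str) -> str:
    """Translate a tool-specific severity label into the shared taxonomy,
    falling back to 'unknown' for unmapped labels."""
    return NORMALIZED.get((tool, native_severity.lower()), "unknown")
```

Once every finding carries a normalized severity, triage queries and SLA policies can be written once instead of per tool.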

Accelerating Vulnerability Remediation Through Deep Environmental and Organizational Context

Typical security teams are dealing with hundreds of thousands of security findings, and this takes us back to our question: “Is this critical more important than that critical?” Rapid7 customers can leverage Application Security Posture Management solutions to derive additional context in a way that allows them to more efficiently triage and remediate vulnerabilities produced by best-of-breed technologies such as InsightAppSec and InsightCloudSec. By way of example, let’s explore how ASPM can be used to answer some common questions raised by appsec teams:

1. Who is the “owner” of this vulnerability?

Security teams spend countless hours trying to identify who introduced a vulnerability so they can identify who needs to fix it. ASPM solutions are able to help identify vulnerability owners via the integration with third party systems such as Source Code Management repositories. This automated attribution serves as a foundation to drive remediation by teams and individuals that own the risk.

No more wasted hours!
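One common attribution signal is a CODEOWNERS file in the Source Code Management system. Here is a small sketch, assuming GitHub-style last-match-wins semantics and using fnmatch as a rough stand-in for the real gitignore-style glob rules:

```python
import fnmatch

def parse_codeowners(text: str):
    """Parse CODEOWNERS-style text into (pattern, owners) rules."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        pattern, *owners = line.split()
        rules.append((pattern, owners))
    return rules

def owners_for(path: str, rules) -> list:
    """Return the owners for a path; the last matching rule wins,
    mirroring GitHub's CODEOWNERS semantics."""
    matched = []
    for pattern, owners in rules:
        if fnmatch.fnmatch(path, pattern) or fnmatch.fnmatch(path, pattern.lstrip("/")):
            matched = owners
    return matched
```

Joining this lookup against the file paths implicated by a vulnerability yields a default owner for routing, without anyone having to ask around.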

2. Which vulnerabilities are actually deployed to our production environment?

One of the most common questions that arises when triaging a vulnerability is whether it is deployed to production. This often leads to additional questions such as whether it is internet-facing, how frequently the asset is being consumed, whether the vulnerability has a known exploit, etc. Obtaining answers to these questions is tedious to say the least.

The “code to cloud” visibility offered by ASPM solutions allows appsec teams to quickly answer these questions. By way of example, consider a CVE vulnerability found within a container hosted in a private registry. The code-to-cloud story would look something like this:

  • A developer wrote a “Dockerfile” or “Containerfile” and stored it in GitHub
  • GitHub Actions built a Container from this file and deployed it to AWS ECR
  • AWS ECS pulled this Container from ECR and deployed it to Production

With an integration into GitHub, AWS ECR, and AWS ECS, we can confidently conclude whether or not the Container hosted in AWS ECR is actually deployed to production via AWS ECS. We can take this even further: by integrating with GitHub, we can map the container back to the corresponding Dockerfile/Containerfile and the team of developers that maintain it.

No more laborious meetings!
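The join behind that conclusion can be sketched as follows. The inventory shapes here are simplified stand-ins for what the GitHub, AWS ECR, and AWS ECS integrations would actually return:

```python
def code_to_cloud_chain(dockerfiles, registry_images, running_digests):
    """Join source, registry, and runtime inventories to flag what is deployed.

    dockerfiles: {repo_path: image_name} from the SCM integration
    registry_images: {image_name: digest} from the registry integration
    running_digests: set of image digests seen in production tasks
    """
    chain = []
    for path, image in dockerfiles.items():
        digest = registry_images.get(image)
        chain.append({
            "source": path,
            "image": image,
            "deployed": digest is not None and digest in running_digests,
        })
    return chain
```

A CVE found in an image that never appears in `running_digests` can be deprioritized, while the same CVE in a deployed image inherits production context automatically.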

3. Does this application process PII or credit card numbers?

Appsec teams have the responsibility of helping their organization achieve compliance with various regulations and industry standards, including GDPR, CCPA, HIPAA, and PCI DSS. These standards place emphasis on the types of data being processed by applications, and hence appsec teams need to understand which applications process which types of sensitive data. Unfortunately, obtaining this visibility requires security teams to create, distribute, collect, and maintain questionnaires that recipients often fail to complete.

ASPM solutions have the ability to derive context around the consumption of sensitive data and use this information to enrich applicable security vulnerabilities. A vulnerability deployed to production that stands to disclose credit card numbers, for example, will likely be treated with the highest priority as a means of avoiding possible fines and other consequences associated with PCI DSS.

No more tedious questionnaires!

4. How do I automate ticket creation for vulnerabilities?

Once you know what needs to be fixed and who needs to fix it, the task of remediating the issue needs to be handed off to the individual or team that can implement a fix. This could involve taking hundreds or thousands of vulnerabilities, de-duplicating them, and grouping them into actionable tasks while automating creation of tickets in a format that is consumable by the receiving team. This is a complex workflow that not only involves automating correctly formatted tickets with the right level of remediation information, but also tracking the entire lifecycle of that ticket until remediation, followed by reporting of KPIs. ASPM solutions like Tromzo are perfectly suited to automate these ticketing and governance workflows, since ASPMs already centralize all vulnerabilities and have the appropriate contextual and ownership metadata.
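The de-duplication and grouping step can be sketched like this; the finding schema and the choice of de-duplication key are illustrative:

```python
from collections import defaultdict

def group_into_tickets(findings):
    """De-duplicate findings and group them per owning team into ticket payloads.

    findings: dicts with 'rule_id', 'asset', 'owner_team' (illustrative schema).
    """
    seen = set()
    by_team = defaultdict(list)
    for f in findings:
        key = (f["rule_id"], f["asset"])  # duplicate = same rule on same asset
        if key in seen:
            continue
        seen.add(key)
        by_team[f["owner_team"]].append(f)
    return [
        {
            "team": team,
            "summary": f"{len(items)} security findings to remediate",
            "findings": items,
        }
        for team, items in by_team.items()
    ]
```

Each payload maps naturally onto one ticket in the receiving team's work management system, and the de-duplication key prevents the same issue from being filed twice across scanner runs.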

Leverage ASPM to Accelerate Vulnerability Remediation

ASPM solutions enable Rapid7 customers to accelerate the remediation of vulnerabilities found by their preferred security testing technologies. With today’s complex hybrid work environments, the increased innovation and sophistication of attackers, and the underlying volatile market, automated code-to-cloud visibility and governance is an absolute must for maximizing operational efficiency and Tromzo is here to help. Check out www.tromzo.com for more information.

NEW RESEARCH: Artificial intelligence and Machine Learning Can Be Used to Stop DAST Attacks Before they Start

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2023/11/09/artificial-intelligence-and-machine-learning-can-be-used-to-stop-dast-attacks-before-they-start/

Within cloud security, one of the most prevalent tools is dynamic application security testing, or DAST. DAST is a critical component of a robust application security framework, identifying vulnerabilities in your cloud applications either pre- or post-deployment that can be remediated for a stronger security posture.

But what if the very tools you use to identify vulnerabilities in your own applications can be used by attackers to find those same vulnerabilities? Sadly, that’s the case with DASTs. The very same brute-force DAST techniques that alert security teams to vulnerabilities can be used by nefarious outfits for that exact purpose.

There is good news, however. A new research paper written by Rapid7’s Pojan Shahrivar and Dr. Stuart Millar and published by the Institute of Electrical and Electronics Engineers (IEEE) shows how artificial intelligence (AI) and machine learning (ML) can be used to thwart unwanted brute-force DAST attacks before they even begin. The paper Detecting Web Application DAST Attacks with Machine Learning was presented yesterday at the specialist AI/ML in Cybersecurity workshop at the 6th annual IEEE Dependable and Secure Computing conference, hosted this year at the University of South Florida (USF) in Tampa.

The team designed and evaluated AI and ML techniques to detect brute-force DAST attacks during the reconnaissance phase, effectively preventing 94% of DAST attacks and eliminating the entire kill-chain at the source. This presents security professionals with an automated way to stop DAST brute-force attacks before they even start. Essentially, AI and ML are being used to keep attackers from even casing the joint in advance of an attack.

This novel work is the first application of AI in cloud security to automatically detect brute-force DAST reconnaissance with a view to an attack. It shows the potential this technology has in preventing attacks from getting off the ground, plus it enables significant time savings for security administrators and lets them complete other high-value investigative work.

Here’s how it is done: using a real-world dataset of millions of events from enterprise-grade apps, a random forest model is trained using tumbling windows of time to generate aggregated event features from source IPs. In this way, the characteristics of a DAST attack, such as the number of unique URLs visited per IP or payloads per session, are learned by the model. This avoids the conventional threshold approach, which is brittle and causes excessive false positives.
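A minimal sketch of that windowing step in plain Python follows. The window length and the two per-IP features below are illustrative stand-ins, not the paper's actual feature set; the resulting rows would then be fed to a random forest classifier (for example, scikit-learn's `RandomForestClassifier`).

```python
from collections import defaultdict

WINDOW = 60  # tumbling (non-overlapping) window length in seconds, illustrative

def aggregate_events(events):
    """Roll raw request events up into per-IP, per-window feature rows.

    Each event is (timestamp, source_ip, url). The two features produced
    here, unique URLs visited and total requests in the window, stand in
    for the aggregated per-IP features described in the paper.
    """
    buckets = defaultdict(lambda: {"urls": set(), "count": 0})
    for ts, ip, url in events:
        key = (ip, int(ts // WINDOW))  # same IP, same tumbling window
        buckets[key]["urls"].add(url)
        buckets[key]["count"] += 1
    return [[len(b["urls"]), b["count"]] for b in buckets.values()]

# A scanner fans out across many unique URLs per window; a normal user does not.
scanner = [(t, "203.0.113.9", f"/page{t}") for t in range(120)]
normal = [(t, "203.0.113.2", "/home") for t in range(0, 120, 10)]

print(aggregate_events(scanner))  # [[60, 60], [60, 60]]
print(aggregate_events(normal))   # [[1, 6], [1, 6]]
```

The contrast between the rows (60 unique URLs per window versus 1) is exactly the kind of separation a classifier can learn without a hand-tuned threshold.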

This is not the first time Millar and team have made major advances in the use of AI and ML to improve the effectiveness of cloud application security. Late last year, Millar published new research at AISec in Los Angeles, the leading venue for AI/ML cybersecurity innovations, into the use of AI/ML to triage vulnerability remediation, reducing false positives by 96%. The team was also delighted to win AISec’s highly coveted Best Paper Award, ahead of the likes of Apple and Microsoft.

A complimentary pre-print version of the paper Detecting Web Application DAST Attacks with Machine Learning is available on the Rapid7 website by clicking here.

How AWS built the Security Guardians program, a mechanism to distribute security ownership

Post Syndicated from Ana Malhotra original https://aws.amazon.com/blogs/security/how-aws-built-the-security-guardians-program-a-mechanism-to-distribute-security-ownership/

Product security teams play a critical role to help ensure that new services, products, and features are built and shipped securely to customers. However, since security teams are in the product launch path, they can form a bottleneck if organizations struggle to scale their security teams to support their growing product development teams. In this post, we will share how Amazon Web Services (AWS) developed a mechanism to scale security processes and expertise by distributing security ownership between security teams and development teams. This mechanism has many names in the industry — Security Champions, Security Advocates, and others — and it’s often part of a shift-left approach to security. At AWS, we call this mechanism Security Guardians.

In many organizations, there are fewer security professionals than product developers. Our experience is that it takes much more time to hire a security professional than other technical job roles, and research conducted by (ISC)2 shows that the cybersecurity industry is short 3.4 million workers. When product development teams continue to grow at a faster rate than security teams, the disparity between security professionals and product developers continues to increase as well. Although most businesses understand the importance of security, frustration and tensions can arise when it becomes a bottleneck for the business and its ability to serve customers.

At AWS, we require the teams that build products to undergo an independent security review with an AWS application security engineer before launching. This is a mechanism to verify that new services, features, solutions, vendor applications, and hardware meet our high security bar. This intensive process impacts how quickly product teams can ship to customers. As shown in Figure 1, we found that as the product teams scaled, so did the problem: there were more products being built than the security teams could review and approve for launch. Because security reviews are required and non-negotiable, this could potentially lead to delays in the shipping of products and features.

Figure 1: More products are being developed than can be reviewed and shipped


How AWS builds a culture of security

Because of its size and scale, many customers look to AWS to understand how we scale our own security teams. To tell our story and provide insight, let’s take a look at the culture of security at AWS.

Security is a business priority

At AWS, security is a business priority. Business leaders prioritize building products and services that are designed to be secure, and they consider security to be an enabler of the business rather than an obstacle.

Leaders also strive to create a safe environment by encouraging employees to identify and escalate potential security issues. Escalation is the process of making sure that the right people know about the problem at the right time. Escalation encompasses “Dive Deep”, which is one of our corporate values at Amazon, because it requires owners and leaders to dive into the details of the issue. If you don’t know the details, you can’t make good decisions about what’s going on and how to run your business effectively.

This aspect of the culture goes beyond intention — it’s embedded in our organizational structure:

CISOs and IT leaders play a key role in demystifying what security and compliance represent for the business. At AWS, we made an intentional choice for the security team to report directly to the CEO. The goal was to build security into the structural fabric of how AWS makes decisions, and every week our security team spends time with AWS leadership to ensure we’re making the right choices on tactical and strategic security issues.

– Stephen Schmidt, Chief Security Officer, Amazon, on Building a Culture of Security

Everyone owns security

Because our leadership supports security, it’s understood within AWS that security is everyone’s job. Security teams and product development teams work together to help ensure that products are built and shipped securely. Despite this collaboration, the product teams own the security of their product. They are responsible for making sure that security controls are built into the product and that customers have the tools they need to use the product securely.

On the other hand, central security teams are responsible for helping developers to build securely and verifying that security requirements are met before launch. They provide guidance to help developers understand what security controls to build, provide tools to make it simpler for developers to implement and test controls, provide support in threat modeling activities, use mechanisms to help ensure that customers’ security expectations are met before launch, and so on.

This responsibility model highlights how security ownership is distributed between the security and product development teams. At AWS, we learned that without this distribution, security doesn’t scale. Regardless of the number of security experts we hire, product teams always grow faster. Although the culture around security and the need to distribute ownership is now well understood, without the right mechanisms in place, this model would have collapsed.

Mechanisms compared to good intentions

Mechanisms are the final pillar of AWS culture that has allowed us to successfully distribute security across our organization. A mechanism is a complete process, or virtuous cycle, that reinforces and improves itself as it operates. As shown in Figure 2, a mechanism takes controllable inputs and transforms them into ongoing outputs to address a recurring business challenge. At AWS, the business challenge that we’re facing is that security teams create bottlenecks for the business. The culture of security at AWS provides support to help address this challenge, but we needed a mechanism to actually do it.

Figure 2: AWS sees mechanisms as a complete process, or virtuous cycle


“Often, when we find a recurring problem, something that happens over and over again, we pull the team together, ask them to try harder, do better – essentially, we ask for good intentions. This rarely works… When you are asking for good intentions, you are not asking for a change… because people already had good intentions. But if good intentions don’t work, what does? Mechanisms work.”

 – Jeff Bezos, February 1, 2008 All Hands.

At AWS, we’ve learned that we can help solve the challenge of scaling security by distributing security ownership with a mechanism we call the Security Guardians program. Like other mechanisms, it has inputs and outputs, and transforms over time.

AWS distributes security ownership with the Security Guardians program

At AWS, the Security Guardians program trains, develops, and empowers developers to be security ambassadors, or Guardians, within the product teams. At a high level, Guardians make sure that security considerations for a product are made earlier and more often, helping their peers build and ship their product faster. They also work closely with the central security team to help ensure that the security bar at AWS is rising and the Security Guardians program is improving over time. As shown in Figure 3, embedding security expertise within the product teams helps products with Guardian involvement move through security review faster.

Figure 3: Security expertise is embedded in the product teams by Guardians


Guardians are informed, security-minded product builders who volunteer to be consistent champions of security on their teams and are deeply familiar with the security processes and tools. They provide security guidance throughout the development lifecycle and are stakeholders in the security of the products being shipped, helping their teams make informed decisions that lead to more secure, on-time launches. Guardians are the security points-of-contact for their product teams.

In this distributed security ownership model, accountability for product security sits with the product development teams. However, the Guardians are responsible for performing the first evaluation of a development team’s security review submission. They confirm the quality and completeness of the new service’s resources, design documents, threat model, automated findings, and penetration test readiness. The development teams, supported by the Guardian, submit their security review to AWS Application Security (AppSec) engineers for the final pre-launch review.

In practice, as part of this development journey, Guardians help ensure that security considerations are made early, when teams are assessing customer requests and the feature or product design. This can be done by starting the threat modeling processes. Next, they work to make sure that mitigations identified during threat modeling are developed. Guardians also play an active role in software testing, including security scans such as static application security testing (SAST) and dynamic application security testing (DAST). To close out the security review, security engineers work with Guardians to make sure that findings are resolved and the product is ready to ship.

Figure 4: Expedited security review process supported by Guardians


Guardians are, after all, Amazonians. Therefore, Guardians exemplify a number of the Amazon Leadership Principles and often have the following characteristics:

  • They are exemplary practitioners for security ownership and empower their teams to own the security of their service.
  • They hold a high security bar and exercise strong security judgment, don’t accept quick or easy answers, and drive continuous improvement.
  • They advocate for security needs in internal discussions with the product team.
  • They are thoughtful yet assertive to make customer security a top priority on their team.
  • They maintain and showcase their security knowledge to their peers, continuously building knowledge from many different sources to gain perspective and to stay up to date on the constantly evolving threat landscape.
  • They aren’t afraid to have their work independently validated by the central security team.

Expected outcomes

AWS has benefited greatly from the Security Guardians program. We’ve had 22.5 percent fewer medium and high severity security findings generated during the security review process and have taken about 26.9 percent less time to review a new service or feature. This data demonstrates that with Guardians involved we’re identifying fewer issues late in the process, reducing remediation work, and as a result securely shipping services faster for our customers. To help both builders and Guardians improve over time, our security review tool captures feedback from security engineers on their inputs. This helps ensure that our security ownership mechanism reinforces and improves itself over time.

AWS and other organizations have benefited from this mechanism because it generates specialized security resources and distributes security knowledge that scales without needing to hire additional staff.

A program such as this could help your business build and ship faster, as it has for AWS, while maintaining an appropriately high security bar that rises over time. By training builders to be security practitioners and advocates within your development cycle, you can increase the chances of identifying risks and security findings earlier. These findings, earlier in the development lifecycle, can reduce the likelihood of having to patch security bugs or even start over after the product has already been built. We also believe that a consistent security experience for your product teams is an important aspect of successfully distributing your security ownership. An experience with less confusion and friction will help build trust between the product and security teams.

To learn more about building positive security culture for your business, watch this spotlight interview with Stephen Schmidt, Chief Security Officer, Amazon.

If you’re an AWS customer and want to learn more about how AWS built the Security Guardians program, reach out to your local AWS solutions architect or account manager for more information.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ana Malhotra


Ana is a Security Specialist Solutions Architect and the Healthcare and Life Sciences (HCLS) Security Lead for AWS Industry, based in Seattle, Washington. As a former AWS Application Security Engineer, Ana loves talking all things AppSec, including people, process, and technology. In her free time, she enjoys tapping into her creative side with music and dance.

Mitch Beaumont


Mitch is a Principal Solutions Architect for Amazon Web Services, based in Sydney, Australia. Mitch works with some of Australia’s largest financial services customers, helping them to continually raise the security bar for the products and features that they build and ship. Outside of work, Mitch enjoys spending time with his family, photography, and surfing.

InsightAppSec Advanced Authentication Settings: Token Replacement

Post Syndicated from Shane Queeney original https://blog.rapid7.com/2023/08/01/insightappsec-advanced-authentication-settings-token-replacement/


There are many different ways to use InsightAppSec to authenticate to web apps, but sometimes you need to go deeper into the advanced settings to fully automate your logins, especially with API scanning. Today, we’ll cover one of those advanced settings: Token Replacement.

InsightAppSec Token Replacement can be used to capture and replay Bearer Authentication tokens, JWT Authentication tokens, or any other type of session token.

The token replacement values are under your scan configs in the following location: Custom Options > Advanced > AuthConfig > TokenReplacementList

When you press Add, the following values can be set.

| Name | Description | Possible Values |
| --- | --- | --- |
| ExtractionTokenLocation | Where the token you want to extract is located. | Request Header, Request Body, Request URL, Response Headers, Response Body |
| ExtractionTokenRegex | Regex used to extract the token. Anything placed in brackets can be returned in the InjectionTokenRegex using @token@. | Any regex, such as: `"token": ?"([^"]*)`, `"access_token": ?"([-a-f0-9]+)"`, `[?]sessionId=([^&]*)` |
| InjectionTokenLocation | Where the captured token should be injected. | Request URL, Request Headers, Request Body |
| InjectionTokenRegex | The format in which the token should be sent to the web app. @token@ is replaced with the value captured by the ExtractionTokenRegex. | Any string containing @token@, such as: `Authorization: Bearer @token@`, `Authorization: Token @token@`, `&sessionId=@token@` |


Why Token Replacement?

Under Custom Options > HTTP Headers > Extra Header, you can manually pass an authentication token to your web app. While this is the easiest way to set up this form of authentication, unless you generate a token that will not expire, you will have to replace this token every scan. Automating this process using token replacement will save you time and effort in the long run, especially if you have multiple apps you need to generate tokens for.

For this example, we will be using the Rapid7 Hackazon web app. If you want to configure your own Hackazon instance, details around installation and setup can be found here.

Alternatively, there are free public test sites you can use instead, such as this one.

The main difference you’ll encounter when using the Hackazon web app is that the API authentication does not have a UI, so we must record and pass a traffic file for InsightAppSec to authenticate.

We will use Postman to send the API request to the web app and Burp Suite to record the traffic. Alternatively, you can use the Rapid7 InsightAppSec Toolkit to record the traffic. Here is a video running through setup using the InsightAppSec Toolkit.

The first step is to set up your proxy settings. In Postman, you can go to Settings by clicking the gear icon in the upper right, and then clicking into the proxy settings. We’re going to set the proxy server to “localhost” and change the port to “5000”.


After setting the proxy in Postman, you must set it up in Burp Suite. In Burp, go to the Proxy tab, then click on Proxy Settings. Next, add a proxy listener, specifying port 5000 to match the setting in Postman. Then, set the interface to Loopback Only.


Go back to Postman, add your basic authentication, and then send the traffic. In Burp, click on the HTTP History tab, right-click on the captured traffic, then click “Save Item”. Make sure you save the traffic as an XML file.


You can also record the traffic using the Rapid7 Insight AppSec Plugin, or from within the Chrome browser. Instructions for how to do this are located under Traffic Authentication or can be found here.


When recording using the Rapid7 AppSec Plugin, make sure that the recording includes the Bearer Auth or Token in the recorded details.


After recording the login, upload the traffic file to Site Authentication. Be sure to adjust the Logged-In Regex as well so that the scan doesn’t fail.


After authenticating to your web app and grabbing the token, the next step is to configure a regex to ensure the token can be extracted. There are a wide variety of ways to test the regex, but we will be using https://regex101.com/ for this example.

We will then grab the web app response containing the token info, paste it into the website, and configure a regular expression so that only the token is selected. In this use case, the expression "token": ?"([^"]*) successfully highlighted only the info we want to extract. Confirm that the token alone lands in capture group 1, as that is the value returned when we specify @token@ under the InjectionTokenRegex.


Next, we want to configure the TokenReplacementList.

| Name | Value | Reason |
| --- | --- | --- |
| ExtractionTokenLocation | Response Body | The token appeared in the body after authenticating |
| ExtractionTokenRegex | `"token": ?"([^"]*)` | This successfully isolated the auth token |
| InjectionTokenLocation | Request Header | Where the web app is expecting the token |
| InjectionTokenRegex | `Authorization: Token @token@` | The header format the web app is expecting |
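Putting those values together, the extract-and-inject cycle can be reproduced in a few lines of Python. The sample response body is fabricated for illustration; InsightAppSec performs the equivalent substitution internally.

```python
import re

# Values from the TokenReplacementList above.
extraction_regex = r'"token": ?"([^"]*)'
injection_template = "Authorization: Token @token@"

# A fabricated authentication response body, for illustration only.
response_body = '{"user": "demo", "token": "abc123def456"}'

match = re.search(extraction_regex, response_body)
assert match is not None
token = match.group(1)  # capture group 1 is what @token@ refers to

# InsightAppSec substitutes the captured value wherever @token@ appears.
header = injection_template.replace("@token@", token)
print(header)  # Authorization: Token abc123def456
```

If the regex fails to match on regex101.com or in a quick script like this, the scan will not be able to inject a valid token, so it is worth verifying before running a full scan.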


Make sure you upload the Swagger API file. You can either upload the file directly or point InsightAppSec to its URL. You can optionally restrict the scan to just the Swagger file for more targeted scanning.


To ensure we were successful, click Download Additional Logs from the Scan Logs page after the scan is complete and open the Operation log file. Look for the log entry “[good]: Added imported token from response body”. Once you see this, you know the token was imported into the scan properly and was used to log in to the API.

For further testing, you can look in the vulnerability traffic requests to ensure the Authorization: Token header has been passed successfully.


To detect if the token has expired, you can modify the sessionLossRegex and sessionLossHeaderRegex under Authentication > Additional Settings, or by using a CanaryPage if that has been set up. When configured correctly, the token replacement will grab the token again, ensuring we stay logged in to your API.

Further information on configuring Scan Authentication can be found here. When in doubt, please reach out to your web app developers and/or Rapid7 support for assistance.

OWASP TOP 10 API Security Risks: 2023!

Post Syndicated from Ray Cochrane original https://blog.rapid7.com/2023/06/08/owasp-top-10-api-security-risks-2023/


The OWASP Top 10 API Security Risks 2023 has arrived! OWASP’s API Top 10 is always a highly anticipated release and can be a key component of API security preparedness for the year. As we discussed in API Security Best Practices for a Changing Attack Surface, API usage continues to skyrocket. As a result, API security coverage must be more advanced than ever.

What are the OWASP Top 10 API Security Risks?

The OWASP Top 10 API Security Risks is a list of the highest-priority API-based threats in 2023. Let’s dig a little deeper into each item on the list to outline the type of threats you may encounter and appropriate responses to curtail each threat.

1. Broken object level authorization

Object level authorization is a control method that restricts access to objects to minimize system exposures. All API endpoints that handle objects should perform authorization checks utilizing user group policies.

We recommend using this authorization mechanism in every function that receives client input to access objects from a data store. As an additional means for hardening, it is recommended to use cryptographically secure random GUID values for object reference IDs.
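As a sketch of both recommendations, an endpoint can key objects by cryptographically secure random GUIDs and still verify ownership on every access. The store and function names are hypothetical.

```python
import uuid

# Hypothetical in-memory store keyed by random GUIDs rather than
# sequential integers, so object IDs cannot be enumerated.
documents = {}

def create_document(owner_id, body):
    doc_id = str(uuid.uuid4())  # uuid4 draws from os.urandom (CSPRNG)
    documents[doc_id] = {"owner": owner_id, "body": body}
    return doc_id

def get_document(requester_id, doc_id):
    doc = documents.get(doc_id)
    # Object-level authorization: verify the requester owns the object.
    # A real application would consult user/group policies here.
    if doc is None or doc["owner"] != requester_id:
        raise PermissionError("not authorized for this object")
    return doc["body"]

doc_id = create_document("alice", "quarterly report")
print(get_document("alice", doc_id))  # quarterly report
# get_document("bob", doc_id) would raise PermissionError
```

Note that the random GUID is hardening only; the ownership check is what actually enforces the authorization.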

2. Broken authentication

Authentication relates to all endpoints and data flows that handle the identity of users or entities accessing an API. This includes credentials, keys, tokens, and even password reset functionality. Broken authentication can lead to many issues such as credential stuffing, brute force attacks, weak unsigned keys, and expired tokens.

Authentication covers a wide range of functionality and requires strict scrutiny and strong practices. Detailed threat modeling should be performed against all authentication functionality to understand data flows, entities, and risks involved in an API. Multi-factor authentication should be enforced where possible to mitigate the risk of compromised credentials.

To prevent brute force and other automated password attacks, rate limiting should be implemented with a reasonable threshold. Weak and expired credentials should not be accepted; this includes JWTs, passwords, and keys. Integrity checks should be performed against all tokens as well, ensuring signature algorithms and values are valid to prevent tampering attacks.
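A sliding-window rate limiter of the kind described can be sketched in a few lines of standard-library Python; the threshold and window length are illustrative.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5  # illustrative threshold

_attempts = defaultdict(deque)  # client id -> timestamps of recent attempts

def allow_login_attempt(client_id, now=None):
    """Sliding-window rate limit on authentication attempts per client."""
    now = time.monotonic() if now is None else now
    window = _attempts[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop attempts older than the window
    if len(window) >= MAX_ATTEMPTS:
        return False  # reject: too many recent attempts from this client
    window.append(now)
    return True

# First five attempts within the window pass; the sixth is rejected.
results = [allow_login_attempt("203.0.113.7", now=float(i)) for i in range(6)]
print(results)  # [True, True, True, True, True, False]
```

In production this state would typically live in a shared store such as Redis rather than process memory, but the windowing logic is the same.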

3. Broken object property level authorization

Related to object level authorization, object property level authorization is another control method to restrict access to specific properties or fields of an object. This category combines aspects of the 2019 OWASP API Security Top 10’s “excessive data exposure” and “mass assignment” categories. If an API endpoint exposes sensitive object properties that should not be read or modified by an unauthorized user, it is considered vulnerable.

The overall mitigation strategy for this is to validate user permissions in all API endpoints that handle object properties. Access to properties and fields should be kept to a bare minimum, on an as-needed basis, scoped to the functionality of a given endpoint.
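One common way to implement this is a per-role field allowlist applied at serialization time, so sensitive properties never leave the server. The roles and field names below are hypothetical.

```python
# Hypothetical per-role field allowlists for a user object; only the
# listed properties are ever serialized into an API response.
FIELD_ALLOWLIST = {
    "public": {"id", "display_name"},
    "owner": {"id", "display_name", "email"},
    "admin": {"id", "display_name", "email", "is_admin"},
}

def serialize_user(user, role):
    allowed = FIELD_ALLOWLIST.get(role, set())  # unknown role -> expose nothing
    return {k: v for k, v in user.items() if k in allowed}

user = {"id": 7, "display_name": "demo", "email": "demo@example.com",
        "is_admin": False, "password_hash": "secret-hash"}

print(serialize_user(user, "public"))  # {'id': 7, 'display_name': 'demo'}
```

Because the allowlist is explicit, a newly added sensitive field (like `password_hash` above) stays hidden by default rather than leaking until someone remembers to exclude it.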

4. Unrestricted resource consumption

API resource consumption pertains to CPU, memory, storage, network, and service provider usage for an API. Denial of service attacks result from overconsumption of these resources, leading to downtime and racked-up service charges.

Setting minimum and maximum limits relative to business functional needs is the overall strategy for mitigating resource consumption risks. API endpoints should limit the rate and maximum number of calls on a per-client basis. For API infrastructure, using containers and serverless code with defined resource limits will mitigate the risk of server resource consumption.

Coding practices that limit resource consumption need to be in place, as well. Limit the number of records returned in API responses with careful use of paging, as appropriate. File uploads should also have size limits enforced to prevent overuse of storage. Additionally, regular expressions and other data-processing means must be carefully evaluated for performance in order to avoid high CPU and memory consumption.
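As an example of one such coding practice, a client-supplied page size can be clamped server-side so a single request can never demand an unbounded result set. The limits below are illustrative.

```python
MAX_PAGE_SIZE = 100    # hard server-side cap, illustrative
DEFAULT_PAGE_SIZE = 25

def clamp_page_size(requested):
    """Never let a client-supplied page size exceed the server's cap."""
    try:
        size = int(requested)
    except (TypeError, ValueError):
        return DEFAULT_PAGE_SIZE  # non-numeric input falls back to the default
    if size < 1:
        return DEFAULT_PAGE_SIZE
    return min(size, MAX_PAGE_SIZE)

print(clamp_page_size("10"))      # 10
print(clamp_page_size("100000"))  # 100, capped regardless of the request
print(clamp_page_size("abc"))     # 25, fall back to the default
```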

5. Broken function level authorization

Lack of authorization checks in controllers or functions behind API endpoints are covered under broken function level authorization. This vulnerability class allows attackers to access unauthorized functionality, whether by changing an HTTP method from a `GET` to a `PUT` to modify data that is not expected to be modified, or by changing a URL string from `user` to `admin`. Proper authorization checks can be difficult due to controller complexities and the number of user groups and roles.

Comprehensive threat modeling against an API architecture and design is paramount in preventing these vulnerabilities. Ensure that API functionality is carefully structured and corresponding controllers are performing authorization checks. For example, all functionality under an `/api/v1/admin` endpoint should be handled by an admin controller class that performs strict authorization checks. When in doubt, access should be denied by default and granted on an as-needed basis.
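A deny-by-default function-level check can be sketched as a decorator; the names are illustrative, and real frameworks typically wire this into middleware or a controller base class.

```python
import functools

def require_role(role):
    """Deny-by-default authorization guard for privileged handlers."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(user, *args, **kwargs):
            # A missing or empty roles list means access is denied.
            if role not in user.get("roles", ()):
                raise PermissionError("forbidden")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_user(current_user, target_id):
    # Hypothetical handler behind an /api/v1/admin endpoint.
    return f"deleted {target_id}"

admin = {"name": "root", "roles": ["admin"]}
print(delete_user(admin, 42))  # deleted 42
# delete_user({"name": "mallory"}, 42) would raise PermissionError
```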

6. Unrestricted Access to Sensitive Business Flows

Automated threats are becoming increasingly difficult to combat and must be addressed on a case-by-case basis. An API is vulnerable if sensitive functionality is exposed in such a way that harm could occur from excessive automated use. There may not be a specific implementation bug, but rather an exposed business flow that can be abused in an automated fashion.

Threat modeling exercises are important as an overall mitigation strategy. Business functionality and all dataflows must be carefully considered, and the excessive automated use threat scenario must be discussed. From an implementation perspective, device fingerprinting, human detection, irregular API flow and sequencing pattern detection, and IP blocking can be implemented on a case-by-case basis.

7. Server side request forgery

Server side request forgery (SSRF) vulnerabilities happen when a client provides a URL or other remote resource as data to an API. The result is a crafted outbound request to that URL on behalf of the API. These are common in redirect URL parameters, webhooks, file fetching functionality, and URL previews.

SSRF can be leveraged by attackers in many ways. Modern usage of cloud providers and containers exposes instance metadata URLs and internal management consoles that can be targeted to leak credentials and abuse privileged functionality. Internal network calls such as backend service-to-service requests, even when protected by service meshes and mTLS, can be exploited for unexpected results. Internal repositories, build tools, and other internal resources can all be targeted with SSRF attacks.

We recommend validating and sanitizing all client provided data to mitigate SSRF vulnerabilities. Strict allow-listing must be enforced when implementing resource-fetching functionality. Allow lists should be granular, restricting all but specified services, URLs, schemes, ports, and media types. If possible, isolate this functionality within a controlled network environment with careful monitoring to prevent probing of internal resources.
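Such a granular allowlist might look like the following sketch, which checks scheme, host, and port together before any outbound fetch; the allowed entry is an example.

```python
from urllib.parse import urlsplit

# Granular allowlist: scheme, host, and port are all checked (example entry).
ALLOWED = {("https", "images.example.com", 443)}

def is_url_allowed(url):
    parts = urlsplit(url)
    # Fill in the default port when the URL does not specify one.
    port = parts.port or {"https": 443, "http": 80}.get(parts.scheme)
    return (parts.scheme, parts.hostname, port) in ALLOWED

print(is_url_allowed("https://images.example.com/cat.png"))        # True
print(is_url_allowed("http://169.254.169.254/latest/meta-data/"))  # False
print(is_url_allowed("https://images.example.com:8443/x.png"))     # False
```

The second example is the classic cloud instance-metadata target; a deny-by-default allowlist rejects it without needing a blocklist of internal addresses.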

8. Security misconfiguration

Misconfigurations in any part of the API stack can result in weakened security. This can be the result of incomplete or inconsistent patching, enabling unnecessary features, or improperly configured permissions. Attackers will enumerate the entire surface area of an API to discover these misconfigurations, which could be exploited to leak data, abuse extra functionality, or find additional vulnerabilities in out-of-date components.

Having a robust, fast, and repeatable hardening process is paramount to mitigating the risk of misconfiguration issues. Security updates must be regularly applied and tracked with a patch management process. Configurations across the entire API stack should be regularly reviewed. Asset Management and Vulnerability Management solutions should be considered to automate this hardening process.

9. Improper inventory management

Complex services with multiple interconnected APIs present a difficult inventory management problem and introduce more exposure to risk. Having multiple versions of APIs across various environments further increases the challenge. Improper inventory management can lead to running unpatched systems and exposing data to attackers. With modern microservices making it easier than ever to deploy many applications, it is important to have strong inventory management practices.

Documentation for all assets including hosts, applications, environments, and users should be carefully collected and managed in an asset management solution. All third-party integrations need to be vetted and documented, as well, to have visibility into any risk exposure. API documentation should be standardized and available to those authorized to use the API. Careful controls over access to and changes of environments, over what is shared externally versus internally, and data protection measures must be in place to ensure that production data does not fall into other environments.

10. Unsafe consumption of APIs

Data consumed from other APIs must be handled with caution to prevent unexpected behavior. Third-party APIs could be compromised and leveraged to attack other API services. Attacks such as SQL injection, XML External Entity injection, deserialization attacks, and more, should be considered when handling data from other APIs.

Careful development practices must be in place to ensure all data is validated and properly sanitized. Evaluate third-party integrations and service providers’ security posture. Ensure all API communications occur over a secure channel such as TLS. Mutual authentication should also be enforced when connections between services are established.
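
The validation guidance above can be sketched in code. Below is a minimal, illustrative Python example of defensively validating a record consumed from a third-party API before use; the field names and limits are assumptions made for the sketch, not part of any specific service.

```python
def validate_partner_record(record: dict) -> dict:
    """Return a sanitized copy of a third-party API record, or raise ValueError."""
    # Never trust upstream types: check them explicitly.
    if not isinstance(record.get("id"), int) or record["id"] < 0:
        raise ValueError("id must be a non-negative integer")
    name = record.get("name")
    if not isinstance(name, str) or not 0 < len(name) <= 100:
        raise ValueError("name must be a non-empty string of at most 100 characters")
    # Allow-list characters instead of trusting the upstream API to have
    # sanitized them; this blunts injection payloads embedded in the data.
    safe_name = "".join(ch for ch in name if ch.isalnum() or ch in " -_.")
    return {"id": record["id"], "name": safe_name}
```

The same idea extends to schema validation libraries and to parsers hardened against XML External Entity and deserialization attacks.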

What’s next?

The OWASP Top 10 API Security Risks template is now ready and available for use within InsightAppSec, mapping each of Rapid7’s API attack modules to their corresponding OWASP categories for ease of reference and enhanced API threat coverage.

Make sure to utilize the new template to ensure best-in-class coverage against API security threats today! And of course, as always, ensure you are following Rapid7’s best practices for securing your APIs.

How to use granular geographic match rules with AWS WAF

Post Syndicated from Mohit Mysore original https://aws.amazon.com/blogs/security/how-to-use-granular-geographic-match-rules-with-aws-waf/

In November 2022, AWS introduced support for granular geographic (geo) match conditions in AWS WAF. This blog post demonstrates how you can use this new feature to customize your AWS WAF implementation and improve the security posture of your protected application.

AWS WAF provides inline inspection of inbound traffic at the application layer. You can use AWS WAF to detect and filter common web exploits and bots that could affect application availability or security, or consume excessive resources. Inbound traffic is inspected against web access control list (web ACL) rules. A web ACL rule consists of rule statements that instruct AWS WAF on how to inspect a web request.

The AWS WAF geographic match rule statement functionality allows you to restrict application access based on the location of your viewers. This feature is crucial for use cases like licensing and legal regulations that limit the delivery of your applications outside of specific geographic areas.

AWS recently released a new feature that you can use to build precise geographic rules based on International Organization for Standardization (ISO) 3166 country and area codes. With this release, you can now manage access at the ISO 3166 region level. This capability is available across AWS Regions where AWS WAF is offered and for all AWS WAF supported services. In this post, you will learn how to use this new feature with Amazon CloudFront and Elastic Load Balancing (ELB) origin types.

Summary of concepts

Before we discuss use cases and setup instructions, make sure that you are familiar with the following AWS services and concepts:

  • Amazon CloudFront: CloudFront is a web service that gives businesses and web application developers a cost-effective way to distribute content with low latency and high data transfer speeds.
  • Amazon Simple Storage Service (Amazon S3): Amazon S3 is an object storage service built to store and retrieve large amounts of data from anywhere.
  • Application Load Balancer: Application Load Balancer operates at the request level (layer 7), routing traffic to targets—Amazon Elastic Compute Cloud (Amazon EC2) instances, IP addresses, and Lambda functions—based on the content of the request.
  • AWS WAF labels: Labels contain metadata that can be added to web requests when a rule is matched. Labels can alter the behavior or default action of managed rules.
  • ISO (International Organization for Standardization) 3166 codes: internationally recognized codes that designate a two- or three-letter combination for every country and most dependent areas. Each ISO 3166-2 subdivision code consists of two parts, separated by a hyphen. For example, in the code AU-QLD, AU is the ISO 3166 alpha-2 code for Australia, and QLD is the subdivision code of the state or territory, in this case Queensland.
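
To make the code structure concrete, here is a small Python helper, purely for illustration, that splits an ISO 3166-2 code such as AU-QLD into its country and subdivision parts:

```python
def parse_iso3166_2(code: str) -> tuple[str, str]:
    """Split an ISO 3166-2 code like 'AU-QLD' into (country, subdivision)."""
    country, sep, subdivision = code.partition("-")
    if not sep or not country or not subdivision:
        raise ValueError(f"not an ISO 3166-2 code: {code!r}")
    return country, subdivision
```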

How granular geo labels work

Previously, geo match statements in AWS WAF were used to allow or block access to applications based on country of origin of web requests. With updated geographic match rule statements, you can control access at the region level.

In a web ACL rule with a geo match statement, AWS WAF determines the country and region of a request based on its IP address. After inspection, AWS WAF adds labels to each request to indicate the ISO 3166 country and region codes. You can use labels generated in the geo match statement to create a label match rule statement to control access.

AWS WAF generates two types of labels, country and region, based on the origin IP or on a forwarded IP configuration defined in the AWS WAF geo match rule.

By default, AWS WAF uses the IP address of the web request’s origin. You can instruct AWS WAF to use an IP address from an alternate request header, like X-Forwarded-For, by enabling forwarded IP configuration in the rule statement settings. For example, the country label for the United States with origin IP and forwarded IP configuration are awswaf:clientip:geo:country:US and awswaf:forwardedip:geo:country:US, respectively. Similarly, the region labels for a request originating in Oregon (US) with origin and forwarded IP configuration are awswaf:clientip:geo:region:US-OR and awswaf:forwardedip:geo:region:US-OR, respectively.
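
The label naming scheme above is regular enough to express as a small helper. The following Python sketch composes the label strings described in this section; it is illustrative only, and assumes that a code containing a hyphen is a region code while one without is a country code.

```python
def geo_label(ip_source: str, code: str) -> str:
    """Compose an AWS WAF geographic label, e.g. awswaf:clientip:geo:region:US-OR."""
    if ip_source not in ("clientip", "forwardedip"):
        raise ValueError("ip_source must be 'clientip' or 'forwardedip'")
    # Region codes (ISO 3166-2) contain a hyphen; country codes do not.
    scope = "region" if "-" in code else "country"
    return f"awswaf:{ip_source}:geo:{scope}:{code}"
```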

To demonstrate this AWS WAF feature, we will outline two distinct use cases.

Use case 1: Restrict content for copyright compliance using AWS WAF and CloudFront

Licensing agreements might prevent you from distributing content in some geographical locations, regions, states, or entire countries. You can deploy the following setup to geo-block content in specific regions to help meet these requirements.

In this example, we will use an AWS WAF web ACL that is applied to a CloudFront distribution with an S3 bucket origin. The web ACL contains a geo match rule to tag requests from Australia with labels, followed by a label match rule to block requests from the Queensland region. All other requests with source IP originating from Australia are allowed.

To configure the AWS WAF web ACL rule for granular geo restriction

  1. Follow the steps to create an Amazon S3 bucket and CloudFront distribution with the S3 bucket as origin.
  2. After the CloudFront distribution is created, open the AWS WAF console.
  3. In the navigation pane, choose Web ACLs, select Global (CloudFront) from the dropdown list, and then choose Create web ACL.
  4. For Name, enter a name to identify this web ACL.
  5. For Resource type, choose the CloudFront distribution that you created in step 1, and then choose Add.
  6. Choose Next.
  7. Choose Add rules, and then choose Add my own rules and rule groups.
  8. For Name, enter a name to identify this rule.
  9. For Rule type, choose Regular rule.
  10. Configure a rule statement for a request that matches the statement Originates from a Country and select the Australia (AU) country code from the dropdown list.
  11. Set the IP inspection configuration parameter to Source IP address.
  12. Under Action, choose Count, and then choose Add Rule.
  13. Create a new rule by following the same actions as in step 7 and enter a name to identify the rule.
  14. For Rule type, choose Regular rule.
  15. Configure a rule statement for a request that matches the statement Has a Label and enter awswaf:clientip:geo:region:AU-QLD for the match key.
  16. Set the action to Block and choose Add rule.
  17. For Actions, keep the default action of Allow.
  18. For Amazon CloudWatch metrics, select the AWS WAF rules that you created in steps 8 and 13.
  19. For Request sampling options, choose Enable sampled requests, and then choose Next.
  20. Review and create the web ACL rule.

After the web ACL is created, you should see the web ACL configuration, as shown in the following figures. Figure 1 shows the geo match rule configuration.

Figure 1: Web ACL rule configuration

Figure 2 shows the Queensland regional geo restriction.

Figure 2: Queensland regional geo restriction – web ACL configuration

The setup is now complete—you have a web ACL with two regular rules. The first rule matches requests that originate from Australia and adds geographic labels automatically. The label match rule statement inspects requests with Queensland granular geo labels and blocks them. To understand where requests are originating from, you can configure logging on the AWS WAF web ACL.
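
A JSON representation of the blocking rule in this use case might look like the following sketch, modeled on the label match rule statement described above (the rule name and metric name are illustrative):

```json
{
  "Name": "QueenslandRegionBlock",
  "Priority": 1,
  "Statement": {
    "LabelMatchStatement": {
      "Scope": "LABEL",
      "Key": "awswaf:clientip:geo:region:AU-QLD"
    }
  },
  "Action": {
    "Block": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "QueenslandRegionBlock"
  }
}
```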

You can test this setup by making requests from Queensland, Australia, to the DNS name of the CloudFront distribution to invoke a block. CloudFront will return a 403 error, similar to the following example.

$ curl -IL https://abcdd123456789.cloudfront.net
HTTP/2 403 
server: CloudFront
date: Tue, 21 Feb 2023 22:06:25 GMT
content-type: text/html
content-length: 919
x-cache: Error from cloudfront
via: 1.1 abcdd123456789.cloudfront.net (CloudFront)
x-amz-cf-pop: SYD1-C1

As shown in these test results, requests originating from Queensland, Australia, are blocked.

Use case 2: Allow incoming traffic from specific regions with AWS WAF and Application Load Balancer

We recently had a customer ask us how to allow traffic from only one region and deny traffic from all other regions within a country. You might have similar requirements; the following section explains how to achieve this. In this example, we will show you how to allow only visitors from Washington state, while blocking traffic from the rest of the US.

This example uses an AWS WAF web ACL applied to an application load balancer in the US East (N. Virginia) Region with an Amazon EC2 instance as the target. The web ACL contains a geo match rule to tag requests from the US with labels. After we enable forwarded IP configuration, we will inspect the X-Forwarded-For header to determine the origin IP of web requests. Next, we will add a label match rule to allow requests from the Washington region. All other requests from the United States are blocked.

To configure the AWS WAF web ACL rule for granular geo restriction

  1. Follow the steps to create an internet-facing application load balancer in the US East (N. Virginia) Region.
  2. After the application load balancer is created, open the AWS WAF console.
  3. In the navigation pane, choose Web ACLs, and then choose Create web ACL in the US East (N. Virginia) Region.
  4. For Name, enter a name to identify this web ACL.
  5. For Resource type, choose the application load balancer that you created in step 1 of this section, and then choose Add.
  6. Choose Next.
  7. Choose Add rules, and then choose Add my own rules and rule groups.
  8. For Name, enter a name to identify this rule.
  9. For Rule type, choose Regular rule.
  10. Configure a rule statement for a request that matches the statement Originates from a Country in, and then select the United States (US) country code from the dropdown list.
  11. Set the IP inspection configuration parameter to IP address in Header.
  12. Enter the Header field name as X-Forwarded-For.
  13. For Match, choose Fallback for missing IP address. Web requests without a valid IP address in the header will be treated as a match and will be allowed.
  14. Under Action, choose Count, and then choose Add Rule.
  15. Create a new rule by following the same actions as in step 7 of this section, and enter a name to identify the rule.
  16. For Rule type, choose Regular rule.
  17. Configure a rule statement for a request that matches the statement Has a Label, and for the match key, enter awswaf:forwardedip:geo:region:US-WA.
  18. Set the action to Allow, and then choose Add rule.
  19. For Default web ACL action for requests that don’t match any rules, set the Action to Block.
  20. For Amazon CloudWatch metrics, select the AWS WAF rules that you created in steps 8 and 15 of this section.
  21. For Request sampling options, choose Enable sampled requests, and then choose Next.
  22. Review and create the web ACL rule.

After the web ACL is created, you should see the web ACL configuration, as shown in the following figures. Figure 3 shows the geo match rule.

Figure 3: Geo match rule

Figure 4 shows the Washington regional geo restriction.

Figure 4: Washington regional geo restriction – web ACL configuration

The following is a JSON representation of the rule:

{
  "Name": "WashingtonRegionAllow",
  "Priority": 1,
  "Statement": {
    "LabelMatchStatement": {
      "Scope": "LABEL",
      "Key": "awswaf:forwardedip:geo:region:US-WA"
    }
  },
  "Action": {
    "Allow": {}
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "USRegionalRestriction"
  }
}

The setup is now complete—you have a web ACL with two regular rules. The first rule matches requests that originate from the US after inspecting the origin IP in the X-Forwarded-For header, and adds geographic labels. The label match rule statement inspects requests with the Washington region granular geo labels and allows these requests.

If a user makes a web request from outside of the Washington region, the request will be blocked and an HTTP 403 error response will be returned, similar to the following.

curl -IL https://GeoBlock-1234567890.us-east-1.elb.amazonaws.com
HTTP/1.1 403 Forbidden
Server: awselb/2.0
Date: Tue, 21 Feb 2023 22:07:54 GMT
Content-Type: text/html
Content-Length: 118
Connection: keep-alive

Conclusion

AWS WAF now supports the ability to restrict traffic based on granular geographic labels. This gives you further control based on geographic location within a country.

In this post, we demonstrated two different use cases that show how this feature can be applied with CloudFront distributions and application load balancers. Note that, apart from CloudFront and application load balancers, this feature is supported by other origin types that are supported by AWS WAF, such as Amazon API Gateway and Amazon Cognito.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS WAF re:Post or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Mohit Mysore

Mohit is a Technical Account Manager with over 5 years of experience working with AWS customers. He is passionate about network and system administration. Outside work, he likes to travel, watch soccer and F1, and spend time with his family.

Troubleshooting InsightAppSec Authentication Issues

Post Syndicated from Shane Queeney original https://blog.rapid7.com/2023/02/02/troubleshooting-insightappsec-authentication-issues/

For complete visibility into the vulnerabilities in your environment, proper authentication to web apps in InsightAppSec is essential. In this article, we’ll look at issues you might encounter with macro, traffic, and selenium authentication and how to troubleshoot them. Additionally, you’ll get practical and actionable tips on using InsightAppSec to its full potential.

The first step in troubleshooting InsightAppSec authentication is to look over the scan logs, which are located under the scan in the upper left-hand corner. The logs can give you useful information, such as whether the authentication failed, whether the website was unavailable, or whether any other problems arose during the scan.

  • Event log: gives you information about the scan itself.
  • Platform event log: gives you information about the scan engine and whether it encountered any issues during the scan.
  • Download additional logs: if you want to dive even deeper into what happened during the scan, you can download and review the full scan logs.

Let’s look at some of the specific issues you might encounter with the different types of authentication noted above.

Macro Authentication

When a macro fails, the logs will give you the specific step where the macro had trouble. For example, in the image below, we can see that the macro failed on step 4, where it could not click on the specific object on the page. This could be caused by the page not loading quickly enough, the name or ID of the element changing, or the web app UI being different.

If you determine that the macro failed because the page isn’t loading fast enough, there are two ways you can slow the macro down.

The first way is to manually add a delay between the steps that are running too quickly. You can copy any of the delays that are currently in the macro, paste them into the spot that you want to slow down, and then change the step numbers. This way you can also set the specific duration for any delays you add into your macro.

The second way is to add additional delays throughout the macro and change the Min Duration so the delays last longer. This is controlled via the export settings menu on the right. The default minimum duration is 3,000 milliseconds (3 seconds). Increasing the duration or adding delays will cause the macro to take longer to authenticate, but when running a scan overnight, an extra few minutes to ensure that the login works is a good tradeoff.

One other potential problem when recording a macro is having a password manager autofill the username and password. Anything that is automatically filled in will not be recorded by the macro. It is recommended to either turn off any password managers when recording a macro, or record in Incognito/private browsing with all other plugins disabled, to ensure nothing can interfere with the recording.

Lastly, if your web app has any events that do not happen every time, such as a prompt to join a mailing list, you can mark the corresponding macro event as optional. If the event is not marked optional, the macro will fail when it is unable to see the element on the page. Simply change the optional flag in the macro recording from 0 to 1 and you’re all set.

Traffic Authentication

While traffic authentication is usually successful when the login is working, there can still be problems with playback. When traffic authentication fails, the scan logs don’t give you specific information as they do with macro authentication; instead, the authentication fails with the LoggedInRegex did not detect logged in state error. If you can’t get traffic authentication working in the Rapid7 AppSec Plugin, you can always record the authentication within your browser.

For Chrome:

  • Click on the hamburger menu in the upper right.
  • Go to More Tools → Developer Tools.
  • Click on the Network tab at the top.
  • Make sure the dot in the upper left is red to signify you are recording.
  • Log in to your web app and when complete, right click on the recorded traffic and click Save all as HAR with content.

This will download the same .HAR file that the Appsec Plugin records, allowing you to use it for scanning.
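
Because a HAR file is plain JSON (HAR 1.2 format), you can sanity-check a recorded login before uploading it. The short Python sketch below, an illustration rather than part of InsightAppSec, lists the URLs of the POST requests a recording contains, which is where the login submission should appear:

```python
import json

def post_urls(har_text: str) -> list[str]:
    """Return the URLs of all POST requests recorded in a HAR file."""
    har = json.loads(har_text)
    return [
        entry["request"]["url"]
        for entry in har["log"]["entries"]
        if entry["request"]["method"] == "POST"
    ]
```

If the login form submission is missing from the output, the recording likely did not capture the authentication and should be redone.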

Depending on how your web app responds, you might need to change the User agent setting for how InsightAppSec interacts with your app.

Under your scan configuration, if you go to Advanced Options → HTTP Headers → User agent, you can change which user agent is used to reach out to your web app. The latest version of Chrome should be fine for most modern web apps, but if you’re scanning a mobile app, or an app that hasn’t been updated in a few years, it might benefit from being changed.

Additional information can be found here.

Selenium Authentication

The third primary type of authentication is Selenium. Selenium is similar to macro authentication in that you record all the actions needed to log in to your web app. Like traffic authentication, however, you will usually receive the LoggedInRegex did not detect logged in state error in the scan logs rather than specific information about the failure.

If the Selenium script could not find specific elements on the web page, you might also receive the Could not execute Selenium script error. This means there is a problem with the script itself, the page didn’t load fast enough, or it couldn’t find the specific element on the web page. If this happens, try re-recording the script or adding a delay.

Using the plugin to record selenium scripts:

  • Click on the selenium plugin and Record a new test in a new project.
  • Give the project a name and enter in the base URL where you want recording to start.
  • In the new window that appears, log in to your web app. Once complete, close out of the window.
  • Before saving, you can click on the play icon to replay and test your selenium script.
  • Review the recording and then click on the save button in the upper right. You can then upload the .side file into InsightAppSec.

Just like macro authentication, if your website takes a while to load and the Selenium script is running too fast, you can add delays to slow it down. There are implicit waits built into the IDE commands, but if those don’t work for you, you can add wait for element commands to your Selenium script after recording the authentication.

  • Right-click on the Selenium recording and click Insert new command.
  • Set the command to wait for element visible.
  • Set the target to the element you want to wait for. In this case, we’re waiting for id=email.
  • By default, the value is set to wait for 30,000 milliseconds (30 seconds).

Alternatively, you can use the pause command and set the value to how long you want the script to pause. However, the wait for element visible command is recommended if the web app responds at different times.
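
For reference, in the exported .side file these steps are stored as JSON command entries. A wait step like the one described above, targeting the id=email element from the example, would look roughly like this (the exact serialization is Selenium IDE’s, so treat this as a sketch):

```json
{
  "command": "waitForElementVisible",
  "target": "id=email",
  "value": "30000"
}
```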

Additional information can be found here.

Logged-In Regex Errors

After ensuring the macro, traffic, and Selenium files are working correctly, the next step in the authentication process is the logged-in regex. After the login is complete, InsightAppSec will look at the web page to find a logout button, or look at the browser header for a session cookie. These settings can be modified by clicking into the scan configuration, navigating to the Authentication tab, and clicking on Additional Settings on the left.

Logged-in Regex

By default, the logged-in regex looks for sign out, sign off, log out and log off, with and without spaces between the words, on the web page.

One common problem is the logged-in regex not seeing the logout button on the page before the authentication recording ends. If the logout button is on another page, or sometimes under a dropdown menu, the logged-in regex won’t detect it, causing the authentication to fail.

Another common issue is when the logout button is an image or otherwise can’t be detected on the page. Since the field takes a regular expression, you can use other words on the page to determine that the login was successful. Make sure the word only appears on the page after logging in, such as the username; otherwise the login might not actually be successful.
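
As an illustration of how such a check behaves, the following Python sketch approximates the default logged-in regex described above; the exact pattern InsightAppSec uses is not published here, so this is an assumption for demonstration:

```python
import re

# Matches "sign out", "sign off", "log out", "log off",
# with or without the space, case-insensitively.
LOGGED_IN_RE = re.compile(r"(sign|log)\s?(out|off)", re.IGNORECASE)

def looks_logged_in(page_html: str) -> bool:
    """Return True if the page appears to contain a logout link."""
    return LOGGED_IN_RE.search(page_html) is not None
```

This also shows why a username or another string that appears only after login works in the field: any regular expression that matches only authenticated pages will do.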

Logged-in Header Regex

Depending on how your web app is structured, you might need to use the logged-in header regex instead. For example, if there is no logout button, or if you have a single-page web app that calls JavaScript files, the logged-in regex is more likely to fail. Instead, you can grab the session ID cookie from the web browser; if that cookie is detected, the regex check will succeed.

In Chrome:

  • Click on the three dots in the upper right corner
  • Then go to More Tools and then Developer Tools.
  • Click on the Application tab at the top, then Cookies on the left, and finally the web app cookie.
  • From there, find the session information cookie that only appears after logging in to the web app. Grab the name of the cookie and place it in the logged-in header regex.

The logged-in regex and logged-in header regex use AND logic, so if you put information in both fields, both need to succeed in order for the login to work. Alternatively, if you remove the regex from both fields, no post-authentication checks will run and the login is assumed successful. This is recommended only as a last resort, because you won’t be alerted if the login starts failing or if there are any other problems.

Other common issues and tricks

One issue you might encounter involves where you start the authentication recording, for example, starting the recording after a page redirect. If your web app redirects to another page or to SSO, and you start the authentication recording after the redirect, InsightAppSec won’t have the session information needed to properly redirect back to the target web app when the recording is replayed during the scan. It is recommended to always start your recording at the root web app directory wherever possible.

You can also scan specific directories rather than the entire web app. To do this, remove the URL from the app’s Target URLs and add it specifically under the scan config. You can then set the target directory in the crawl and attack configs as literal, and add a /* wildcard to hit any subdirectories.

Lastly, there is a way to restrict certain elements on a web page from being scanned. Under Advanced Options → CrawlConfig and AttackerConfig, there’s an option called ScopeConstraintList, where you can explicitly include or exclude specific pages from being scanned. You can take this a step further by adding a httpParameterList to explicitly exclude certain elements on the page. For example, if you have a contact-us page and you don’t want the scanner to hit the submit button, you can add the button to the httpParameterList so it won’t be touched.

Below is an example of what the fields look like in the web page source code, and how it can be configured in IAS.

Email field source code: input type="email" name="contact_email"

Submit button source code: <button id="form-submit"

The entire site is in scope, and we are telling IAS not to hit the submit button or the email field.

You can find the Selenium and AppSec plugins below:

New! Security Analytics provides a comprehensive view across all your traffic

Post Syndicated from Zhiyuan Zheng original https://blog.cloudflare.com/security-analytics/

An application proxying traffic through Cloudflare benefits from a wide range of easy-to-use security features, including WAF, Bot Management, and DDoS mitigation. To understand whether traffic has been blocked by Cloudflare, we have built a powerful Security Events dashboard that allows you to examine any mitigation events. Application owners often wonder, though, what happened to the rest of their traffic: did they block all traffic that was detected as malicious?

Today, along with our announcement of the WAF Attack Score, we are also launching our new Security Analytics.

Security Analytics gives you a security lens across all of your HTTP traffic, not only mitigated requests, allowing you to focus on what matters most: traffic deemed malicious but potentially not mitigated.

Detect then mitigate

Imagine you have just onboarded your application to Cloudflare. Without any additional effort, each HTTP request is analyzed by the Cloudflare network, and analytics are therefore enriched with attack analysis, bot analysis, and any other security signal provided by Cloudflare.

Right away, without any risk of causing false positives, you can view the entirety of your traffic to explore what is happening, when and where.

This allows you to dive straight into analyzing the results of these signals, shortening the time taken to deploy active blocking mitigations and boosting your confidence in making decisions.

We are calling this approach “detect then mitigate” and we have already received very positive feedback from early access customers.

In fact, Cloudflare’s Bot Management has been using this model for the past two years. We constantly hear from our customers that, with greater visibility, they have high confidence in our bot scoring solution. To further support this new way of securing your web applications, and to bring together all our intelligent signals, we have designed and developed the new Security Analytics, which starts bringing signals from the WAF and other security products into this model.

New Security Analytics

Built on top of the success of our analytics experiences, the new Security Analytics employs existing components such as top statistics and in-context quick filters, with a new page layout that allows for rapid exploration and validation. The following sections break down this new page layout, which forms a high-level workflow.

The key difference between Security Analytics and Security Events is that the former is based on HTTP requests, covering your site’s entire traffic, while Security Events uses a different dataset that visualizes every match with an active security rule.

Define a focus

The new Security Analytics visualizes the dataset of sampled HTTP requests across your entire application, the same as bot analytics. When validating the “detect then mitigate” model with selected customers, a common behavior we observed was using the top N statistics to quickly narrow down to either obvious anomalies or certain parts of the application. Based on this insight, the page starts with selected top N statistics covering both request sources and request destinations, and can be expanded to view all available statistics. Questions like “How well is my application admin’s area protected?” can be answered with one or two quick-filter clicks in this area.

After a preliminary focus is defined, the core of the interface is dedicated to plotting trends over time. The time series chart has proven to be a powerful tool for spotting traffic anomalies, and it allows plotting based on different criteria. Whenever there is a spike, it is likely that an attack or attack attempt has happened.

As mentioned above, unlike Security Events, the dataset used on this page is HTTP requests, which includes both mitigated and non-mitigated requests. By mitigated requests, we mean any HTTP request that had a “terminating” action applied by the Cloudflare platform. The remaining requests that have not been mitigated are either served from Cloudflare’s cache or reach the origin. For a spike in non-mitigated requests with no corresponding spike in mitigated requests, one assumption could be that an attack occurred that did not match any active WAF rule. In this example, you can filter on non-mitigated requests with one click, right in the chart, which will update all the data visualized on this page to support further investigation.

In addition to the default plotting of mitigated and not mitigated requests, you can also plot trends from either attack analysis or bot analysis, allowing you to spot anomalies in attack or bot behavior.

Zoom in with analysis signals

One of the analysis signals our customers love and trust most is the bot score. With the latest addition of the WAF Attack Score and content scanning, we are bringing these signals together into one analytics page, helping you zoom further into your traffic. Combining these signals enables you to answer questions that were not possible to answer until now:

  • Attack requests made by (definite) automated sources
  • Likely attack requests made by humans
  • Content uploaded by bots, with or without malicious payloads

Once a scenario is filtered on, the data visualizations across the entire page, including the top N statistics, the HTTP request trend, and the sampled logs, are updated, allowing you to spot anomalies in either the top N statistics or the time-based HTTP request trend.
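As a rough illustration of how such a scenario filter combines signals, here is a hedged sketch. It assumes the usual directions of the scores (a higher bot score means more likely human, a lower attack score means more likely attack); the record shape and the thresholds are illustrative, not Cloudflare’s.

```python
# "Likely attack requests made by humans": combine a high bot score
# (likely human) with a low attack score (likely attack). Thresholds
# and field names are illustrative assumptions.
def likely_human_attacks(requests, bot_threshold=70, attack_threshold=30):
    return [
        r for r in requests
        if r["bot_score"] >= bot_threshold and r["attack_score"] <= attack_threshold
    ]

sample = [
    {"id": 1, "bot_score": 95, "attack_score": 10},   # human, likely attack
    {"id": 2, "bot_score": 5,  "attack_score": 10},   # bot, likely attack
    {"id": 3, "bot_score": 95, "attack_score": 90},   # human, clean
]
print([r["id"] for r in likely_human_attacks(sample)])  # [1]
```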

Review sampled logs

After zooming into a specific part of your traffic that may be an anomaly, sampled logs provide a detailed, per-request view to verify your findings. This is a crucial step in a security review workflow, as shown by the high engagement with these logs in Security Events. As we add more data to each log entry, the expanded log view has become less readable over time. We have therefore redesigned the expanded view: it starts with how Cloudflare responded to a request, followed by our analysis signals, and ends with the key components of the raw request itself. By reviewing these details, you can validate your hypothesis about an anomaly and decide whether any mitigation action is required.

Handy insights to get started

When testing the prototype of this analytics dashboard internally, we learned that all of this flexibility comes with a steeper learning curve. To help you get started, we designed a handy insights panel. These insights are crafted to highlight specific perspectives on your total traffic. A single click on any insight applies a preset of filters, zooming directly onto the portion of your traffic you are interested in. From there, you can review the sampled logs or fine-tune any of the applied filters. Further internal studies showed this to be a highly efficient workflow, and in many cases it will be your starting point when using this dashboard.

How can I get it?

The new Security Analytics is being gradually rolled out to all Enterprise customers who have purchased the new Application Security Core or Advanced Bundles. We plan to roll this out to all other customers in the near future. This new view will be alongside the existing Security Events dashboard.

What’s next

We are still at an early stage of the move towards the “detect then mitigate” model, empowering you with greater visibility and intelligence to better protect your web applications. While we work on enabling more detection capabilities, please share your thoughts and feedback with us to help us improve the experience. If you want to get access sooner, reach out to your account team to get started!

Rapid7 Takes Home 2 Awards and a Highly Commended Recognition at the 2022 Belfast Telegraph IT Awards

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/11/16/rapid7-takes-home-2-awards-and-a-highly-commended-recognition-at-the-2022-belfast-telegraph-it-awards/

Rapid7 was honored at the Belfast Telegraph’s annual IT Awards on Friday, taking home a pair of awards: the coveted “Best Place to Work in IT” award in the large-company category and the “Cyber Security Project of the Year” award for groundbreaking machine learning research in application security. That research was conducted in collaboration with The Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast.

The team also took home a Highly Commended recognition for Best Use of Cloud services at the event.

The ability to work on meaningful projects that positively impact customers, support from a range of professional development opportunities, a culture rooted in connection and collaboration, and the invitation to explore new ways of thinking all came together in the submission that earned the team the “Best Place to Work” title.

Belfast has been regarded as one of the UK’s fastest growing technology hubs. There are now more than 300,000 people working in the technology sector of Northern Ireland, according to the Telegraph. For Rapid7, an impressive business environment and the work being done in IT Security at the local universities were significant factors in the decision to join the Belfast community in 2014. This move was in line with Rapid7’s goal of creating exceptional career experiences for their people and expanding operations to address a growing global customer base. In 2021, Rapid7 relocated to their newest office space and announced the addition of more than 200 new roles to the region.

Rapid7’s win for Cybersecurity Project of the Year focused on the cutting-edge area of machine learning in application security. Their research sought to reduce the high level of false positives generated by vulnerability scanners — a pain point that has become all too common in today’s digital environment. Rapid7’s multi-disciplinary Machine Learning (ML) team in Belfast was able to create a way to automatically prioritize real vulnerabilities and reduce false positive friction for customers. Their work has been peer-reviewed by industry experts, published in academic journals, and accepted for presentation at AISEC’s November 2022 event — where it was recognized with their “Best Paper Award.” AISEC is the leading venue for ML cybersecurity innovations.

Rounding out the evening was a Highly Commended recognition from the Telegraph for “Best Use of Cloud Services.” The scale and speed of cloud adoption over the last few years has driven exponential growth in complex security challenges. Rapid7 showcased how their team in Belfast partnered with global colleagues to create an innovative, multi-faceted solution to manage Cloud Identity Risk across three major Cloud Service Providers (CSPs): AWS, Azure, and GCP. Their work has created a positive impact for Rapid7 customers by enabling secure cloud adoption faster than ever before.

Rapid7 is a company that is firmly rooted in their company values. Employees are encouraged to challenge conventional ways of thinking, work together to create impact, be advocates for customers, bring their authentic selves and experiences to the table, and embrace the spirit of continuous learning and growth. The work represented in these awards is a testament to the incredible opportunities and experiences that are possible when these values are clearly modeled, celebrated and practiced in pursuit of a shared mission — creating a safer digital future for all.

For more information about working at Rapid7, check out rapid7.careers.com

For more information on the Belfast Telegraph IT Awards and other winners, click here.

GraphQL Security: The Next Evolution in API Protection

Post Syndicated from Ray Cochrane original https://blog.rapid7.com/2022/11/14/graphql-security-the-next-evolution-in-api-protection/

GraphQL is an open-source data query and manipulation language that can be used to build application program interfaces (APIs). Since its initial inception by Facebook in 2012 and subsequent release in 2015, GraphQL has grown steadily in popularity. Some estimate that by 2025, more than 50% of enterprises will use GraphQL in production, up from less than 10% in 2021.

Unlike REST APIs, which return all the information from a called endpoint and require the user to extract the applicable pieces, GraphQL allows the user to query specific data from a GraphQL schema and return precise results.
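A toy sketch of that contrast: a REST-style response hands back the whole resource, while a GraphQL-style projection returns only the fields the client asked for. The data and field names below are made up for illustration.

```python
# REST: the caller gets the full resource and filters client-side.
full_user = {"id": 7, "name": "Ada", "email": "ada@example.com", "phone": "555-0100"}

def project(resource, fields):
    """Return only the requested fields, GraphQL-style."""
    return {f: resource[f] for f in fields if f in resource}

rest_response = full_user                               # everything comes back
graphql_response = project(full_user, ["name", "email"])  # only what was queried
print(graphql_response)  # {'name': 'Ada', 'email': 'ada@example.com'}
```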

Although GraphQL is relatively new and allows you to query exactly what you require, it is still prone to the same common vulnerabilities as other APIs. There are weaknesses that attackers can exploit to gain access to sensitive data, making securing GraphQL extremely important. The ability to scan a GraphQL schema will help to remediate those weaknesses and provide additional API security coverage.

Why GraphQL security is important

While there are numerous benefits to adopting GraphQL, the security implications are less well understood. Can functionality be abused? What problems come with querying flexibility? Which vulnerabilities can be exploited? These are all points of concern for its user base.

GraphQL is also no different from other APIs in terms of potential attack vectors. Indeed, it has its own unique security vulnerabilities on top of those you would encounter through a REST API.

As we discussed in our recent post on API security best practices, APIs are a lucrative target that can allow hackers to gain access to an otherwise secure system and exploit vulnerabilities. Not only do APIs often suffer from the same vulnerabilities as web applications — like broken access controls, injections, security misconfigurations, and vulnerabilities inherited from other dependent code libraries — but they are also more susceptible to resource consumption and rate limiting issues due to the automated nature of their users.

Best practices for securing GraphQL

The first step in securing your GraphQL endpoint is to familiarize yourself with some of the most common vulnerabilities and the best practices that protect against potential exposure. The most common are injection vulnerabilities – such as SQL injection, OS command injection, and server-side request forgery – where data provided in the arguments of a GraphQL request is injected into commands, queries, and other executable entities by the application code. Another common weakness is a lack of resource management, which can enable a Denial of Service (DoS) attack due to general graph/query complexity and the potential for large batch requests. Finally, broken access control vulnerabilities exist in GraphQL APIs in much the same way as in other applications and services, but they can be exacerbated by the segmented nature of GraphQL query resolvers.

There are several best-practice recommendations that can be used to counter such attacks.

  • Only allow valid values to be passed – Values should be controlled via allow lists, custom validators and correct definitions.
  • Depth limiting – Restricting the depth of a query only to predetermined levels will allow control over the expense of a query and avoid tying up your back end unnecessarily.
  • Amount limiting – Restricting the amount of a particular object in a query will reduce the expense of the query by not allowing more than x objects to be called.
  • Query cost analysis – Checking how expensive a query may be before you allow it to run is a useful additional step to block expensive or malicious queries.
  • Control input rejections – Ensure you don’t overly expose information about the API during input rejections.
  • Turn off introspection – Introspection is enabled by default in GraphQL, but simply disabling it restricts what information a consumer can access and prevents them from learning everything about your API.
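As one example, the depth-limiting recommendation above can be sketched as follows. This is a simplified illustration that measures nesting by brace balance; a production implementation would walk the parsed query AST with a GraphQL library instead.

```python
# Measure the maximum nesting depth of a GraphQL query by brace balance,
# and reject queries deeper than a predetermined cap.
def query_depth(query: str) -> int:
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

def enforce_depth_limit(query: str, limit: int = 4) -> None:
    d = query_depth(query)
    if d > limit:
        raise ValueError(f"query depth {d} exceeds limit {limit}")

deep = "{ user { posts { comments { author { name } } } } }"
print(query_depth(deep))  # 5
```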

OWASP has also produced a really neat cheat sheet series, which provides an introduction to GraphQL, as well as a detailed rundown of best practices and common GraphQL attacks, to help teams with upskilling and securing GraphQL.

How to secure GraphQL

The second step in securing your GraphQL endpoint is right here with Rapid7! While almost every modern DAST solution can properly parse and understand requests to and responses from web applications and, in most cases, APIs, that doesn’t mean all those tools will specifically understand GraphQL. That’s why InsightAppSec has specifically added support for parsing GraphQL requests, responses, and schemas, so that it can properly scan GraphQL-based APIs. This new feature provides customers with the ability to scan GraphQL endpoints to identify and then remediate any vulnerabilities encountered.

Initial support will be provided to identify the following vulnerabilities:

  • SQL injection
  • Blind SQL injection
  • OS commanding
  • Server-side request forgery
  • Local file inclusion/remote file inclusion  

To find out how to execute a GraphQL scan, check out our doc on the feature in InsightAppSec for additional information, support, and guidance.

New Research: Optimizing DAST Vulnerability Triage with Deep Learning

Post Syndicated from Tom Caiazza original https://blog.rapid7.com/2022/11/09/new-research-optimizing-dast-vulnerability-triage-with-deep-learning/

On November 11th 2022, Rapid7 will for the first time publish and present state-of-the-art machine learning (ML) research at AISec, the leading venue for AI/ML cybersecurity innovations. Led by Dr. Stuart Millar, Senior Data Scientist, Rapid7’s multi-disciplinary ML group has designed a novel deep learning model to automatically prioritize application security vulnerabilities and reduce false positive friction. Partnering with The Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast, this is the first deep learning system to optimize DAST vulnerability triage in application security. CSIT is the UK’s Innovation and Knowledge Centre for cybersecurity, recognised by GCHQ and EPSRC as a Centre of Excellence for cybersecurity research.

Security teams struggle tremendously with prioritizing risk and managing a high level of false positive alerts, while the rise of the cloud post-Covid means web application security is more crucial than ever. Web attacks continue to be the most common type of compromise; however, high levels of false positives generated by vulnerability scanners have become an industry-wide challenge. To combat this, Rapid7’s innovative ML architecture optimizes vulnerability triage by utilizing the structure of traffic exchanges between a DAST scanner and a given web application. Leveraging convolutional neural networks and natural language processing, we designed a deep learning system that encapsulates internal representations of request and response HTTP traffic before fusing them together to make a prediction of a verified vulnerability or a false positive. This system learns from historical triage carried out by our industry-leading SMEs in Rapid7’s Managed Services division.

Given the skillset, time, and cognitive effort required to review high volumes of DAST results by hand, the addition of this deep learning capability to a scanner creates a hybrid system that enables application security analysts to rank scan results, deprioritise false positives, and concentrate on likely real vulnerabilities. With the system able to make hundreds of predictions per second, productivity is improved and remediation time reduced, resulting in stronger customer security postures. A rigorous evaluation of this machine learning architecture across multiple customers shows that 96% of false positives on average can automatically be detected and filtered out.
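The resulting hybrid triage workflow can be sketched as follows. This is an illustrative outline only: the finding IDs, probabilities, and the 0.2 cutoff are made up, not Rapid7’s actual model or thresholds.

```python
# The scanner emits findings; the model scores each as P(real vulnerability).
# Analysts review the ranked list while low-probability findings are
# deprioritized as likely false positives.
def triage(findings, fp_cutoff=0.2):
    ranked = sorted(findings, key=lambda f: f["p_real"], reverse=True)
    review = [f for f in ranked if f["p_real"] >= fp_cutoff]
    deprioritized = [f for f in ranked if f["p_real"] < fp_cutoff]
    return review, deprioritized

findings = [
    {"id": "sqli-1", "p_real": 0.97},
    {"id": "xss-3",  "p_real": 0.05},
    {"id": "ssrf-2", "p_real": 0.61},
]
review, deprioritized = triage(findings)
print([f["id"] for f in review])  # ['sqli-1', 'ssrf-2']
```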

Rapid7’s deep learning model uses convolutional neural networks and natural language processing to represent the structure of client-server web traffic. Neither the model nor the scanner require source code access — with this hybrid approach first finding potential vulnerabilities using a scan engine, followed by the model predicting those findings as real vulnerabilities or false positives. The resultant solution enables the augmentation of triage decisions by deprioritizing false positives. These time savings are essential to reduce exposure and harden security postures — considering the average time to detect a web breach can be several months, the sooner a vulnerability can be discovered, verified and remediated, the smaller the window of opportunity for an attacker.

Now recognized as state-of-the-art research after expert peer review, Rapid7 will introduce the work at AISec on Nov 11th 2022 at the Omni Los Angeles Hotel at California Plaza. Watch this space for further developments, and download a copy of the pre-print publication here.

How to deploy AWS Network Firewall by using AWS Firewall Manager

Post Syndicated from Harith Gaddamanugu original https://aws.amazon.com/blogs/security/how-to-deploy-aws-network-firewall-by-using-aws-firewall-manager/

AWS Network Firewall helps make it easier for you to secure virtual networks at scale inside Amazon Web Services (AWS). You can now deploy Network Firewall with the AWS Firewall Manager service without having to worry about availability, scalability, or network performance. Firewall Manager allows administrators in your organization to apply network firewalls across accounts. This post will take you through the different deployment models and demonstrate, with step-by-step instructions, how to set each one up.

Here’s a quick overview of the services used in this blog post:

  • Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated virtual network. It has inbuilt network security controls and routing between VPC subnets by design. An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet.
  • AWS Transit Gateway is a service that connects your VPCs to each other, to on-premises networks, to virtual private networks (VPNs), and to the internet through a central hub.
  • AWS Network Firewall is a service that secures network traffic at the organization and account levels. AWS Network Firewall policies govern the monitoring and protection behavior of these firewalls. The specifics of these policies are defined in rule groups. A rule group consists of rules that define reusable criteria for inspecting and processing network traffic. Network Firewall can support thousands of rules that can be based on a domain, port, protocol, IP address, or pattern matching.
  • AWS Firewall Manager is a security management service that acts as a central place for you to configure and deploy firewall rules across AWS Regions, accounts, and resources in AWS Organizations. Firewall Manager helps you to ensure that all firewall rules are consistently enforced, even as new accounts and resources are created. Firewall Manager integrates with AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall, AWS WAF, AWS Shield Advanced, and Amazon VPC security groups.

Deployment models overview

When it comes to securing multiple AWS accounts, security teams categorize firewall deployment into centralized or distributed deployment models. Firewall Manager supports Network Firewall deployment in both modes. There are multiple additional deployment models available with Network Firewall. For more information about these models, see the blog post Deployment models for AWS Network Firewall.

Centralized deployment model

Network Firewall can be centrally deployed as an Amazon VPC attachment to a transit gateway that you set up with AWS Transit Gateway. Transit Gateway acts as a network hub and simplifies the connectivity between VPCs as well as on-premises networks. Transit Gateway also provides inter-Region peering capabilities to other transit gateways to establish a global network by using the AWS backbone. In a centralized transit gateway model, Firewall Manager can create one or more firewall endpoints for each Availability Zone within an inspection VPC. Network Firewall deployed in a centralized model covers the following use cases:

  • Filtering and inspecting traffic within a VPC or in transit between VPCs, also known as east-west traffic.
  • Filtering and inspecting ingress and egress traffic to and from the internet or on-premises networks, also known as north-south traffic.

Distributed deployment model

With the distributed deployment model, Firewall Manager creates endpoints into each VPC that requires protection. Each VPC is protected individually and VPC traffic isolation is retained. You can either customize the endpoint location by specifying which Availability Zones to create firewall endpoints in, or Firewall Manager can automatically create endpoints in those Availability Zones that have public subnets. Each VPC does not require connectivity to any other VPC or transit gateway. Network Firewall configured in a distributed model addresses the following use cases:

  • Protect traffic between a workload in a public subnet (for example, an EC2 instance) and the internet. Note that the only recommended workloads that should have a network interface in a public subnet are third-party firewalls, load balancers, and so on.
  • Protect and filter traffic between an AWS resource (for example Application Load Balancers or Network Load Balancers) in a public subnet and the internet.

Deploying Network Firewall in a centralized model with Firewall Manager

The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a centralized model, as shown in Figure 1.

Overview of how to configure a centralized model

  1. Complete the steps described in the AWS Firewall Manager prerequisites.
  2. Create an Inspection VPC in each Firewall Manager member account. Firewall Manager will use these VPCs to create firewalls. Follow the steps to create a VPC.
  3. Create the stateless and stateful rule groups that you want to centrally deploy as an administrator. For more information, see Rule groups in AWS Network Firewall.
  4. Build and deploy Firewall Manager policies for Network Firewall, based on the rule groups you defined previously. Firewall Manager will now create firewalls across these accounts.
  5. Finish deployment by updating the related VPC route tables in the member account, so that traffic gets routed through the firewall for inspection.
    Figure 1: Network Firewall centralized deployment model
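For teams automating the same setup, the console steps that follow can also be expressed through the Firewall Manager API. The following is a hedged sketch using boto3’s `fms.put_policy`; the `ManagedServiceData` JSON is abbreviated and illustrative, so consult the Firewall Manager documentation for the full, current schema before relying on it.

```python
import json

# Illustrative, abbreviated ManagedServiceData for a Network Firewall policy.
# Field names here sketch the documented schema; rule group references,
# logging, and orchestration configuration are omitted.
service_data = {
    "type": "NETWORK_FIREWALL",
    "networkFirewallStatelessDefaultActions": ["aws:forward_to_sfe"],
    "networkFirewallStatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
}

policy = {
    "PolicyName": "central-network-firewall",
    "SecurityServicePolicyData": {
        "Type": "NETWORK_FIREWALL",
        "ManagedServiceData": json.dumps(service_data),
    },
    "ResourceType": "AWS::EC2::VPC",
    "ExcludeResourceTags": False,
    "RemediationEnabled": True,
}

# To deploy from the delegated administrator account (requires credentials):
# import boto3
# boto3.client("fms", region_name="us-east-1").put_policy(Policy=policy)
print(policy["SecurityServicePolicyData"]["Type"])  # NETWORK_FIREWALL
```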

The following steps provide a detailed description of how to configure Network Firewall with Firewall Manager in a centralized model.

To deploy network firewall policy centrally with Firewall Manager (console)

  1. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  2. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  3. On the Filter menu, select the AWS Region where your application is hosted, and choose Create policy. In this example, we choose US East (N. Virginia).
  4. As shown in Figure 2, under Policy details, choose the following:
    1. For AWS services, choose AWS Network Firewall.
    2. For Deployment model, choose Centralized.
      Figure 2: Network Firewall Manager policy type and Region for centralized deployment

  5. Choose Next.
  6. Enter a policy name.
  7. In the AWS Network Firewall policy configuration pane, you can choose to configure both stateless and stateful rule groups along with their logging configurations. In this example, we are not creating any rule groups and keep the default configurations, as shown in Figure 3. If you would like to add a rule group, you can create rule groups here and add them to the policy.
    Figure 3: AWS Network Firewall policy configuration

  8. Choose Next.
  9. For Inspection VPC configuration, select the account and add the VPC ID of the inspection VPC in each of the member accounts that you previously created, as shown in Figure 4. In the centralized model, you can only select one VPC under a specific account as the inspection VPC.
    Figure 4: Inspection VPC configuration

  10. For Availability Zones, select the Availability Zones in which you want to create the Network Firewall endpoint(s), as shown in Figure 5. You can select by Availability Zone name or Availability Zone ID. Optionally, if you want to specify the CIDR for each Availability Zone, or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
    Figure 5: Network Firewall endpoint Availability Zones configuration

  11. Choose Next.
  12. For Policy scope, choose VPC, as shown in Figure 6.
    Figure 6: Firewall Manager policy scope configuration

  13. For Resource cleanup, choose Automatically remove protections from resources that leave the policy scope. When you select this option, Firewall Manager will automatically remove Firewall Manager managed protections from your resources when a member account or a resource leaves the policy scope. Choose Next.
  14. For Policy tags, you don’t need to add any tags. Choose Next.
  15. Review the security policy, and then choose Create policy.
  16. To route traffic for inspection, you manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall.

Note: In current versions of Firewall Manager, a centralized policy supports only one inspection VPC per account. If you want multiple inspection VPCs in an account, you cannot deploy all of the corresponding firewalls through a Firewall Manager centralized policy; you must deploy the network firewalls in the additional inspection VPCs manually.
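The final routing step (step 16 above) can likewise be scripted. The sketch below only builds the arguments for `ec2.create_route`, which accepts a `VpcEndpointId` so a route can point at a firewall endpoint; the IDs are placeholders, and the right destination CIDRs depend entirely on your architecture (see "Route table configurations for AWS Network Firewall").

```python
# Build the arguments for routing traffic through a firewall endpoint by
# adding a route to a member-account route table. IDs are placeholders.
def firewall_route_args(route_table_id, firewall_endpoint_id,
                        destination_cidr="0.0.0.0/0"):
    """Arguments for ec2.create_route pointing a route at a firewall endpoint."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": destination_cidr,
        "VpcEndpointId": firewall_endpoint_id,
    }

args = firewall_route_args("rtb-0123456789abcdef0", "vpce-0123456789abcdef0")
# To apply (requires credentials in the member account):
# import boto3
# boto3.client("ec2").create_route(**args)
print(args["VpcEndpointId"])  # vpce-0123456789abcdef0
```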

Deploying Network Firewall in a distributed model with Firewall Manager

The following steps provide a high-level overview of how to configure Network Firewall with Firewall Manager in a distributed model, as shown in Figure 7.

Overview of how to configure a distributed model

  1. Complete the steps described in the AWS Firewall Manager prerequisites.
  2. Create a new VPC with a desired tag in each Firewall Manager member account. Firewall Manager uses these VPC tags to create network firewalls in tagged VPCs. Follow these steps to create a VPC.
  3. Create the stateless and stateful rule groups that you want to centrally deploy as an administrator. For more information, see Rule groups in AWS Network Firewall.
  4. Build and deploy Firewall Manager policy for network firewalls into tagged VPCs based on the rule groups that you defined in the previous step.
  5. Finish deployment by updating the related VPC route tables in the member accounts to begin routing traffic through the firewall for inspection.
    Figure 7: Network Firewall distributed deployment model

The following steps provide a detailed description how to configure Network Firewall with Firewall Manager in a distributed model.

To deploy Network Firewall policy distributed with Firewall Manager (console)

  1. Create new VPCs in member accounts and tag them. In this example, you launch VPCs in the US East (N. Virginia) Region. Create a new VPC in a member account by using the VPC wizard, as follows.
    1. Choose VPC with a Single Public Subnet. For this example, select a subnet in the us-east-1a Availability Zone.
    2. Add a desired tag to this VPC. For this example, use the key Network Firewall and the value yes. Make note of this tag key and value, because you will need this tag to configure the policy in the Policy scope step.
  2. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  3. In the navigation pane, under AWS Firewall Manager, choose Security policies.
  4. On the Filter menu, select the AWS Region where you created VPCs previously and choose Create policy. In this example, you choose US East (N. Virginia).
    1. For AWS services, choose AWS Network Firewall.
    2. For Deployment model, choose Distributed, and then choose Next.
      Figure 8: Network Firewall Manager policy type and Region for distributed deployment

  5. Enter a policy name.
  6. On the AWS Network Firewall policy configuration page, you can configure both stateless and stateful rule groups, along with their logging configurations. In this example you are not creating any rule groups, so you choose the default configurations, as shown in Figure 9. If you would like to add a rule group, you can create rule groups here and add them to the policy.
    Figure 9: Network Firewall policy configuration

  7. Choose Next.
  8. In the Configure AWS Network Firewall Endpoint section, as shown in Figure 10, you can choose Custom endpoint configuration or Automatic endpoint configuration. In this example, you choose Custom endpoint configuration and select the us-east-1a Availability Zone. Optionally, if you want to specify the CIDR for each Availability Zone or specify the subnets for firewall subnets, then you can add the CIDR blocks. If you don’t provide CIDR blocks, Firewall Manager queries your VPCs for available IP addresses to use. If you provide a list of CIDR blocks, Firewall Manager searches for new subnets only in the CIDR blocks that you provide.
    Figure 10: Network Firewall endpoint Availability Zones configuration

  9. Choose Next.
  10. For AWS Network Firewall route configuration, choose the following options, as shown in Figure 11. This will monitor the route configuration using the administrator account, to help ensure that traffic is routed as expected through the network firewalls.
    1. For Route management, choose Monitor.
    2. Under Traffic type, for Internet gateway, choose Add to firewall policy.
    3. Select the checkbox for Allow required cross-AZ traffic, and then choose Next.
      Figure 11: Network Firewall route management configuration

  11. For Policy scope, select the following options to create network firewalls in previously tagged VPCs, as shown in Figure 12.
    1. For AWS accounts this policy applies to, choose All accounts under my AWS organization.
    2. For Resource type, choose VPC.
    3. For Resources, choose Include only resources that have the specified tags.
    4. For Key, enter Network Firewall. For Value, enter yes. The tag you are using here is the same tag you defined in step 1.
      Figure 12: AWS Firewall Manager policy scope configuration

      Important: Be careful when defining the policy scope. Each policy creates Network Firewall endpoints in all the VPCs and their Availability Zones that are within the policy scope. If you select an inappropriate scope, it could result in the creation of a large number of network firewalls and incur significant charges for AWS Network Firewall.

  12. For Resource cleanup, select the Automatically remove protections from resources that leave the policy scope check box, and then choose Next.
    Figure 13: Firewall Manager Resource cleanup configuration

  13. For Policy tags, you don’t need to add any tags. Choose Next.
  14. Review the security policy, and then choose Create policy.
  15. To route traffic for inspection, you need to manually update the route configuration in the member accounts. Exactly how you do this depends on your architecture and the traffic that you want to filter. For more information, see Route table configurations for AWS Network Firewall.
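The console steps above can also be scripted with the Firewall Manager API. The following is a minimal sketch of building the `Policy` argument for `fms.put_policy()`; the policy name, rule group ARN, and CIDR block are illustrative placeholders, not values from this post, and the exact `ManagedServiceData` fields you need will depend on your rule groups and endpoint configuration.

```python
import json

# Placeholder ARN -- substitute a stateless rule group from your account.
STATELESS_RULE_GROUP_ARN = (
    "arn:aws:network-firewall:us-east-1:123456789012:stateless-rulegroup/example"
)

def build_policy(name="NetworkFirewallPolicy"):
    """Build the Policy argument for fms.put_policy(), mirroring steps 7-14."""
    managed_service_data = {
        "type": "NETWORK_FIREWALL",
        "networkFirewallStatelessRuleGroupReferences": [
            {"resourceARN": STATELESS_RULE_GROUP_ARN, "priority": 1}
        ],
        "networkFirewallStatelessDefaultActions": ["aws:forward_to_sfe"],
        "networkFirewallStatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "networkFirewallStatelessCustomActions": [],
        "networkFirewallStatefulRuleGroupReferences": [],
        # Custom endpoint configuration (step 8): one endpoint per AZ, with
        # Firewall Manager searching the listed CIDRs for firewall subnets.
        "networkFirewallOrchestrationConfig": {
            "singleFirewallEndpointPerVPC": False,
            "allowedIPV4CidrList": ["10.0.0.0/28"],
            # Route management: Monitor, for internet gateway traffic (step 10).
            "routeManagementAction": "MONITOR",
            "routeManagementTargetTypes": ["InternetGateway"],
        },
    }
    return {
        "PolicyName": name,
        "SecurityServicePolicyData": {
            "Type": "NETWORK_FIREWALL",
            # Firewall Manager expects this field as a JSON *string*.
            "ManagedServiceData": json.dumps(managed_service_data),
        },
        "ResourceType": "AWS::EC2::VPC",
        # Policy scope (step 11): only VPCs carrying the tag from step 1.
        "ResourceTags": [{"Key": "Network Firewall", "Value": "Yes"}],
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
        # Resource cleanup (step 12): remove protections when VPCs leave scope.
        "DeleteUnusedFMManagedResources": True,
    }

# From the delegated administrator account, roughly:
#   import boto3
#   boto3.client("fms", region_name="us-east-1").put_policy(Policy=build_policy())
```

Keeping the `put_policy` call separate from the pure policy-building function makes it easy to review the generated JSON before applying it to your organization.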

Clean up

To avoid incurring future charges, delete the resources that you created for this solution.

To delete Firewall Manager policy (console)

  1. Sign in to your Firewall Manager delegated administrator account and open the Firewall Manager console under AWS WAF and Shield services.
  2. In the navigation pane, choose Security policies.
  3. Choose the option next to the policy that you want to delete.
  4. Choose Delete all policy resources, and then choose Delete. If you do not select Delete all policy resources, only the firewall policy in the administrator account is deleted; the network firewalls deployed in the other accounts in your organization remain in place.
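The same cleanup can be done programmatically. This is a sketch, assuming delegated administrator credentials; `find_policy_id` is a hypothetical helper (not part of the boto3 API) that locates the policy by name, and the boto3 calls are shown commented because they act on your organization.

```python
def find_policy_id(policy_summaries, name):
    """Return the PolicyId whose PolicyName matches, or None if absent.

    policy_summaries is the PolicyList returned by fms.list_policies().
    """
    for summary in policy_summaries:
        if summary.get("PolicyName") == name:
            return summary.get("PolicyId")
    return None

# With credentials for the delegated administrator account, roughly:
#   import boto3
#   fms = boto3.client("fms", region_name="us-east-1")
#   policies = fms.list_policies()["PolicyList"]
#   policy_id = find_policy_id(policies, "NetworkFirewallPolicy")
#   if policy_id:
#       # Equivalent to selecting "Delete all policy resources" in the
#       # console: also removes the network firewalls in member accounts.
#       fms.delete_policy(PolicyId=policy_id, DeleteAllPolicyResources=True)
```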

To delete the VPCs you created as prerequisites

In each member account, delete the VPCs that you tagged for this walkthrough. Before you can delete a VPC, you must delete its dependencies, such as subnets and internet gateways. For more information, see Delete your VPC in the Amazon VPC User Guide.
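Because the prerequisite VPCs all carry the `Network Firewall: Yes` tag, you can locate them programmatically. A minimal sketch follows; `vpc_tag_filters` is an illustrative helper, and the commented boto3 calls assume the VPC's dependencies (subnets, gateways, endpoints) have already been removed, since `delete_vpc` fails otherwise.

```python
def vpc_tag_filters(key="Network Firewall", value="Yes"):
    """Build the DescribeVpcs Filters list for the tag used in this post."""
    return [{"Name": f"tag:{key}", "Values": [value]}]

# Roughly, in each member account:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   for vpc in ec2.describe_vpcs(Filters=vpc_tag_filters())["Vpcs"]:
#       ec2.delete_vpc(VpcId=vpc["VpcId"])
```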

Conclusion

In this blog post, you learned how you can use either a centralized or a distributed deployment model for Network Firewall, so developers in your organization can build firewall rules, create security policies, and enforce them in a consistent, hierarchical manner across your entire infrastructure. As new applications are created, Firewall Manager makes it easier to bring new applications and resources into a consistent state by enforcing a common set of security rules.

For information about pricing, see the pages for AWS Firewall Manager pricing and AWS Network Firewall pricing. For more information, see the other AWS Network Firewall posts on the AWS Security Blog. Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Firewall Manager re:Post or contact AWS Support.

Harith Gaddamanugu

Harith works at AWS as a Sr. Edge Specialist Solutions Architect. He stays motivated by solving problems for customers across AWS Perimeter Protection and Edge services. When he is not working, he enjoys spending time outdoors with friends and family.

Yang Liu

Yang works as cloud support engineer II with AWS. On a daily basis, he provides solutions for customers’ cloud architecture questions related to networking infrastructure and the security domain. Outside of work, Yang loves traveling with his family and two Corgis, Cookie and Cache.