
AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/

Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year who specialize in computing or logic, which makes the announcement quite special.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give each a home and common meeting place. The Royal Academy of Music, for example, is near Regent’s Park in West London. The Royal Academy of Engineering’s building is on Carlton House Terrace, one of the most exclusive locations in central London, near Pall Mall and St James’s Park. I’ve been to a number of lectures and events in that space. For example, it’s where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the World Wide Web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all possible configurations that a computer program might reach. When I founded Amazon’s Automated Reasoning Group, I named it that because my ambition was to automate all of the reasoning performed during formal verification.
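To give a flavor of what “all possible configurations” means formally, here is a sketch in standard notation (my own summary, not an AWS-specific formalism): if Init is the set of states a program can start in, → its transition relation, and Safe the property being verified, then a safety proof establishes

$$\forall s_0 \in \mathit{Init},\ \forall s:\quad s_0 \rightarrow^{*} s \;\Longrightarrow\; s \in \mathit{Safe},$$

that is, every state reachable from any starting state, under any input, satisfies the property.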

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, “Will the program eventually halt?” This is the famous “Halting problem,” proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem which is popular now, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there’s a connection between diseases like leukemia and the Halting problem: leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that’s stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than Terminator. I think that this is what the Royal Academy means when it says I moved the area from “academic hypothesis to mechanized industrial reality.”

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000-2006, I built tools that made NP feel equal to P in certain limited circumstances to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.
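To give a feel for what tools like Z3 and CVC4 do, here is a minimal sketch using Z3’s Python bindings. It is an illustrative example of mine, not an AWS tool: the solver is asked whether any 32-bit input can violate a tiny assertion, and the answer `unsat` means no such input exists, so the property is proved for all 2^32 values.

```python
# Minimal illustration of SMT-based proof with the Z3 Python bindings.
# Claim: for every 32-bit x, the expression (x & 7) is less than 8, so an
# assertion `assert (x & 7) < 8` can never fail.
from z3 import BitVec, ULT, Not, Solver, unsat

x = BitVec("x", 32)          # a symbolic 32-bit value, standing for *every* possible input
s = Solver()
s.add(Not(ULT(x & 7, 8)))    # ask the solver for an input that makes the assertion false

if s.check() == unsat:
    print("Proved: (x & 7) < 8 holds for all 2**32 inputs")
else:
    print("Counterexample:", s.model())
```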

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on the project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain them, or maintain the rooms they would sit in and the A/C systems that would keep them cool. I was also able to easily provide our customers with access to the tool via the cloud, because it’s all on the internet: I just pointed people to an endpoint and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers, and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious “yes,” but then I thought a minute about the overhead required to manage a fleet of 5,000 machines and reluctantly replied, “No, but thank you very much for the offer!” With AWS, it’s not even a question. Anyone with an AWS account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don’t often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: formal verification is not yet an industry standard. The full power of automated reasoning is not yet available to everyone, because today’s tools are either difficult to use or too weak. The engineering challenge is to make them both powerful and easy to use. With that, I believe they’ll become a key part of every software engineer’s daily routine. What excites me is that Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again, and that’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

Links:
Provable security webpage
Lecture: Fundamentals for Provable Security at AWS
Lecture: The evolution of Provable Security at AWS
Lecture: Automating compliance verification using provable security
Lecture: Byron speaks about Terminator at University of Colorado
https://biomodelanalyzer.org/

If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

New whitepaper: Achieving Operational Resilience in the Financial Sector and Beyond

Post Syndicated from Rahul Prabhakar original https://aws.amazon.com/blogs/security/new-whitepaper-achieving-operational-resilience-in-the-financial-sector-and-beyond/

AWS has released a new whitepaper, Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS cloud. We’re constantly amazed at the applications our customers build using AWS services — including what our financial services customers have built, from credit risk simulations to mobile banking applications. Depending on their internal and regulatory requirements, financial services companies may need to meet specific resiliency objectives and withstand low-probability events that could otherwise disrupt their businesses. We know that financial regulators are also interested in understanding how the AWS cloud allows customers to meet those objectives. This new whitepaper addresses these topics.

The paper walks through the AWS global infrastructure and how we build to withstand failures. Reflecting how AWS and customers share responsibility for resilience, the paper also outlines how a financial institution can build mission-critical applications to leverage, for example, multiple Availability Zones to improve their resiliency compared to a traditional, on-premises environment.

Security and resiliency remain our highest priority. We encourage you to check out the paper and provide feedback. We’d love to hear from you, so don’t hesitate to get in touch with us by reaching out to your account executive or contacting AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

A simpler way to assess the network exposure of EC2 instances: AWS releases new network reachability assessments in Amazon Inspector

Post Syndicated from Catherine Dodge original https://aws.amazon.com/blogs/security/amazon-inspector-assess-network-exposure-ec2-instances-aws-network-reachability-assessments/

Performing network security assessments allows you to understand your cloud infrastructure and identify risks, but this process traditionally takes a lot of time and effort. You might need to run network port-scanning tools to test routing and firewall configurations, then validate what processes are listening on your instance network ports, before finally mapping the IPs identified in the port scan back to the host’s owner. To make this process simpler for our customers, AWS recently released the Network Reachability rules package in Amazon Inspector, our automated security assessment service that enables you to understand and improve the security and compliance of applications deployed on AWS. The existing Amazon Inspector host assessment rules packages check the software and configurations on your Amazon Elastic Compute Cloud (Amazon EC2) instances for vulnerabilities and deviations from best practices.

The new Network Reachability rules package analyzes your Amazon Virtual Private Cloud (Amazon VPC) network configuration to determine whether your EC2 instances can be reached from external networks such as the Internet, a virtual private gateway, AWS Direct Connect, or from a peered VPC. In other words, it informs you of potential external access to your hosts. It does this by analyzing all of your network configurations—like security groups, network access control lists (ACLs), route tables, and internet gateways (IGWs)—together to infer reachability. No packets need to be sent across the VPC network, nor must attempts be made to connect to EC2 instance network ports—it’s like packet-less network mapping and reconnaissance!

This new rules package is the first Amazon Inspector rules package that doesn’t require an Amazon Inspector agent to be installed on your Amazon EC2 instances. If you do optionally install the Amazon Inspector agent on your EC2 instances, the network reachability assessment will also report on the processes listening on those ports. In addition, installing the agent allows you to use Amazon Inspector host rules packages to check for vulnerabilities and security exposures in your EC2 instances.

To determine what is reachable, the Network Reachability rules package uses the latest technology from the AWS Provable Security initiative, referring to a suite of AWS technology powered by automated reasoning. It analyzes your AWS network configurations such as Amazon Virtual Private Clouds (VPCs), security groups, network access control lists (ACLs), and route tables to prove reachability of ports. What is automated reasoning, you ask? It’s fancy math that proves things are working as expected. In more technical terms, it’s a method of formal verification that automatically generates and checks mathematical proofs, which help to prove systems are functioning correctly. Note that Network Reachability only analyzes network configurations, so any other network controls, like on-instance IP filters or external firewalls, are not accounted for in the assessment. See documentation for more details.
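To give a sense of what “packet-less” analysis of network configuration means, here is a toy sketch in Python. It is purely illustrative and is not the algorithm behind Tiros or the Network Reachability rules package: it checks only three simplified layers of configuration, and every field name in it is an assumption made for the example.

```python
# Toy illustration: infer whether a port on an instance is open to the internet
# from configuration data alone, without sending any packets. The data shapes
# below are simplified assumptions, not AWS data structures.
def reachable_from_internet(port, security_group, network_acl, route_table):
    sg_allows = any(
        r["from_port"] <= port <= r["to_port"] and r["cidr"] == "0.0.0.0/0"
        for r in security_group["ingress"]
    )
    acl_allows = any(
        r["action"] == "allow" and r["from_port"] <= port <= r["to_port"]
        for r in network_acl["ingress"]
    )
    has_igw_route = any(
        r["destination"] == "0.0.0.0/0" and r["target"].startswith("igw-")
        for r in route_table["routes"]
    )
    # Reachable only if every layer of configuration allows the traffic.
    return sg_allows and acl_allows and has_igw_route

# Example: SSH open to the world, a permissive NACL, and a default route to an IGW.
sg  = {"ingress": [{"from_port": 22, "to_port": 22, "cidr": "0.0.0.0/0"}]}
acl = {"ingress": [{"action": "allow", "from_port": 0, "to_port": 65535}]}
rt  = {"routes":  [{"destination": "0.0.0.0/0", "target": "igw-1234abcd"}]}
print(reachable_from_internet(22, sg, acl, rt))  # True
```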

Tim Kropp, Technology & Security Lead at Bridgewater Associates, talked about how Bridgewater benefited from the Network Reachability rules package. “AWS provides tools for organizations to know if all their compliance, security, and availability requirements are being met. Technology created by the AWS Automated Reasoning Group, such as the Network Reachability Rules, allow us to continuously evaluate our live networks against these requirements. This grants us peace of mind that our most sensitive workloads exist on a network that we deeply understand.”

Network reachability assessments are priced per instance per assessment (instance-assessment). The free trial offers the first 250 instance-assessments for free within your first 90 days of usage. After the free trial, pricing is tiered based on your monthly volume. You can see pricing details here.

Using the Network Reachability rules package

Amazon Inspector makes it easy for you to run agentless network reachability assessments on all of your EC2 instances. You can do this with just a couple of clicks on the Welcome page of the Amazon Inspector console. First, use the check box to Enable network assessments, then select Run Once to run a single assessment or Run Weekly to run a weekly recurring assessment.
 

Figure 1: Assessment setup

Customizing the Network Reachability rules package

If you want to target a subset of your instances or modify the recurrence of assessments, you can select Advanced setup for guided steps to set up and run a custom assessment. For full customization options including getting notifications for findings, select Cancel and use the following steps.

  1. Navigate to the Assessment targets page of the Amazon Inspector console to create an assessment target. You can select the option to include all instances within your account and AWS region, or you can assess a subset of your instances by adding tags to them in the EC2 console and inputting those tags when you create the assessment target. Give your target a name and select Save.
     
    Figure 2: Assessment target

    Optional agent installation: To get information about the processes listening on reachable ports, you’ll need to install the Amazon Inspector agent on your EC2 instances. If your instances allow the Systems Manager Run command, you can select the Install Agents option while creating your assessment target. Otherwise, you can follow the instructions here to install the Amazon Inspector agent on your instances before setting up and running the Amazon Inspector assessments using the steps above. In addition, installing the agent allows you to use Amazon Inspector host rules packages to check for vulnerabilities and security exposures in your EC2 instances.

  2. Go to the Assessment templates page of the Amazon Inspector console. In the Target name field, select the assessment target that you created in step 1. From the Rules packages drop-down, select the Network Reachability-1.1 rules package. You can also set up a recurring schedule and notifications to your Amazon Simple Notification Service topic. (Learn more about Amazon SNS topics here). Now, select Create and Run. That’s it!

    Alternatively, you can run the assessment by selecting the template you just created from the Assessment templates page and then selecting Run, or by using the Amazon Inspector API.

You can view your findings on the Findings page in the Amazon Inspector console. You can also download a CSV of the findings from Amazon Inspector by using the Download button on the page, or you can use the AWS application programming interface (API) to retrieve findings in another application.
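If you prefer to drive this workflow programmatically, the following is a minimal boto3 sketch (not an official AWS sample). The rules package ARN and account ID are placeholders; look up the Network Reachability package ARN for your region with list_rules_packages. Error handling and pagination are kept to a minimum.

```python
# A minimal boto3 sketch: create a target covering all EC2 instances, attach the
# Network Reachability rules package, run the assessment, and print the findings.
# ARNs below are placeholders; look up the real rules package ARN for your region.
import time
import boto3

inspector = boto3.client("inspector", region_name="us-east-1")

NETWORK_REACHABILITY_ARN = "arn:aws:inspector:us-east-1:111122223333:rulespackage/0-EXAMPLE"

target = inspector.create_assessment_target(
    assessmentTargetName="all-instances"   # omitting resourceGroupArn targets every instance
)
template = inspector.create_assessment_template(
    assessmentTargetArn=target["assessmentTargetArn"],
    assessmentTemplateName="network-reachability",
    durationInSeconds=3600,
    rulesPackageArns=[NETWORK_REACHABILITY_ARN],
)
run = inspector.start_assessment_run(
    assessmentRunName="network-reachability-run",
    assessmentTemplateArn=template["assessmentTemplateArn"],
)

# Poll until the run finishes, then fetch the findings in small batches.
while inspector.describe_assessment_runs(
    assessmentRunArns=[run["assessmentRunArn"]]
)["assessmentRuns"][0]["state"] not in ("COMPLETED", "COMPLETED_WITH_ERRORS"):
    time.sleep(60)

finding_arns = inspector.list_findings(
    assessmentRunArns=[run["assessmentRunArn"]], maxResults=500
)["findingArns"]
for i in range(0, len(finding_arns), 10):
    batch = inspector.describe_findings(findingArns=finding_arns[i:i + 10])
    for finding in batch["findings"]:
        print(finding["severity"], finding["title"])
```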

Note: You can create a CloudWatch Events rule and add your Amazon Inspector assessment as the target using the assessment template’s Amazon Resource Name (ARN), which is available in the console. You can use CloudWatch Events rules to automatically trigger assessment runs on a schedule or based on any other event. For example, you can trigger a network reachability assessment whenever there is a change to a security group or another VPC configuration, allowing you to automatically be alerted about insecure network exposure.
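As a concrete illustration of that note, here is a hedged boto3 sketch (not an official AWS sample) that creates a rule which starts an assessment whenever a security group ingress rule changes. It assumes CloudTrail is enabled in the region, and the template and role ARNs are placeholders; the role must allow CloudWatch Events to call inspector:StartAssessmentRun.

```python
# Sketch: run a network reachability assessment whenever a security group
# ingress rule changes. ARNs are placeholders; CloudTrail must be enabled.
import json
import boto3

events = boto3.client("events", region_name="us-east-1")

TEMPLATE_ARN = "arn:aws:inspector:us-east-1:111122223333:target/0-EXAMPLE/template/0-EXAMPLE"
ROLE_ARN = "arn:aws:iam::111122223333:role/inspector-event-role"

events.put_rule(
    Name="run-network-reachability-on-sg-change",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {"eventName": ["AuthorizeSecurityGroupIngress",
                                 "RevokeSecurityGroupIngress"]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="run-network-reachability-on-sg-change",
    Targets=[{"Id": "inspector-template", "Arn": TEMPLATE_ARN, "RoleArn": ROLE_ARN}],
)
```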

Understanding your EC2 instance network exposure

You can use this new rules package to analyze the accessibility of critical ports, as well as all other network ports. For critical ports, Amazon Inspector will show the exposure of each and will offer findings per port. When critical, well-known ports (based on Amazon’s standard guidance) are reachable, findings will be created with higher severities. When the Amazon Inspector agent is installed on the instance, each reachable port with a listener will also be reported. The following examples show network exposure from the Internet. There are analogous findings for network exposure via VPN, Direct Connect, or VPC peering. Read more about the finding types here.

Example finding for a well-known port open to the Internet, without installation of the Amazon Inspector Agent:
 

Figure 3: Finding for a well-known port open to the Internet

Example finding for a well-known port open to the Internet, with the Amazon Inspector Agent installed and a listening process (SSH):
 

Figure 4: Finding for a well-known port open to the Internet, with the Amazon Inspector Agent installed and a listening process (SSH)

Note that the findings provide the details on exactly how network access is allowed, including which VPC and subnet the instance is in. This makes tracking down the root cause of the network access straightforward. The recommendation includes information about exactly which Security Group you can edit to remove the access. And like all Amazon Inspector findings, these can be published to an SNS topic for additional processing, whether that’s to a ticketing system or to a custom AWS Lambda function. (See our blog post on automatic remediation of findings for guidance on how to do this.) For example, you could use Lambda to automatically remove ingress rules in the Security Group to address a network reachability finding.
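For example, a remediation Lambda function subscribed to that SNS topic might look like the following sketch. It is illustrative only: the way the security group ID and port are pulled from the message is an assumption made for the example, and a real handler would parse the actual finding attributes delivered for your rules package.

```python
# Sketch of automatic remediation: a Lambda handler that receives an Inspector
# finding via SNS and revokes the world-open ingress rule it reports.
import json
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    for record in event["Records"]:
        finding = json.loads(record["Sns"]["Message"])

        # Assumed message shape, for illustration only.
        group_id = finding.get("securityGroupId")   # e.g. "sg-0123456789abcdef0"
        port = finding.get("port")                   # e.g. 22

        if not group_id or port is None:
            continue

        # Remove the 0.0.0.0/0 ingress rule that the finding reported.
        ec2.revoke_security_group_ingress(
            GroupId=group_id,
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": port,
                "ToPort": port,
                "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
            }],
        )
```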

Summary

With this new functionality from Amazon Inspector, you now have an easy way of assessing the network exposure of your EC2 instances and identifying and resolving unwanted exposure. We’ll continue to tailor findings to align with customer feedback. We encourage you to try out the Network Reachability Rules Package yourself and post any questions in the Amazon Inspector forum.

Want more AWS Security news? Follow us on Twitter.

Author

Catherine Dodge

Catherine is a Senior Technical Program Manager in AWS Security. She helps teams use cutting edge AI technology to build security products to delight customers. She has over 15 years of experience in the cybersecurity field, mostly spent at the assembly level, either pulling apart malware or piecing together shellcode. In her spare time, she’s always tearing something down around the house, preferably ivy or drywall.

Author

Stephen Quigg

Stephen — known as “Squigg,” internally — is a Principal Security Solutions Architect at AWS. His job is helping customers understand AWS security and how they can meet their most demanding security requirements when using the AWS platform. It’s not all about solving hard problems though, he gets just as much delight when an AWS customer creates their first VPC! When he’s not with his customers, you can find him up in his loft making bleeping noises on a bunch of old synthesizers.

Daniel Schwartz-Narbonne shares how automated reasoning is helping achieve the provable security of AWS boot code

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/automated-reasoning-provable-security-of-boot-code/

I recently sat down with Daniel Schwartz-Narbonne, a software development engineer in the Automated Reasoning Group (ARG) at AWS, to learn more about the groundbreaking work his team is doing in cloud security. The team uses automated reasoning, a technology based on mathematical logic, to prove that key components of the cloud are operating as intended. ARG recently hit a milestone by leveraging this technology to prove the memory safety of boot code components. Boot code is the foundation of the cloud. Proving the memory safety of boot code is akin to verifying that the foundation of your house is secure—it lets you build upon it without worry. Daniel shared details with the AWS Security Blog team about the project’s accomplishments and how it will help solve cloud security challenges.

Daniel Schwartz-Narbonne discusses how automated reasoning, a branch of AI tech, can help prove the security of boot code

Tell me about yourself: what made you decide to become a software engineer with the Automated Reasoning Group?

I wanted to become an engineer because I like building things and understanding how they work. I get satisfaction out of producing something tangible. I went into cloud security because I believe it’s a major area of opportunity in the computing industry right now. As the cloud continues to scale in response to customer demand, we’re going to need more and more automation around security to meet this demand.

I was offered the opportunity to work with ARG after I finished up my post-doc at NYU. Byron Cook, the director of ARG, was starting up a team with the mission of using formal reasoning methods to solve real-world problems in the cloud. Joining ARG was an opportunity for me to help pioneer the use of automated reasoning for cloud security.

How would you describe automated reasoning?

Automated reasoning uses mathematical analysis to understand what’s happening in a complex computer system. The technique takes a system and a question you might have about the system—like “is the system memory safe?”—and reformulates the question as a set of mathematical properties. Then it uses automated reasoning tools called “constraint solvers” to analyze these properties and provide an answer. We’re using this technology to provide higher levels of cloud security assurance for AWS customers via features that protect key components of the cloud, including IAM permissions, networking controls, verification for security protocols and source code of foundational software packages in use at AWS. Links to this work can be found at the bottom of this post.

What is the Boot Code Verification Project?

The Boot Code Verification Project is one of several ARG projects that apply automated reasoning techniques to the foundational elements of cloud security. In this case, we’re looking at boot code. Boot code is the first code that starts when you turn on a computer. It’s the foundation for all computer code, which makes its security critical. This is joint work with my ARG colleagues Michael Tautschnig and Mark Tuttle and with infrastructure engineers.

Why is boot code so difficult to secure?

Ensuring boot code security by using traditional techniques, such as penetration testing and unit testing, is hard. You can only achieve visibility into code execution via debug ports, which means you have almost no ability to single-step the boot code for debugging. You often can’t instrument the boot code, either, because this can break the build process: the increased size of the instrumented code may be larger than the size of the ROM targeted by the build process. Extracting the data collected by instrumentation is also difficult because the boot code has no access to a file system to record the data, and memory available for storing the data may be limited.

Our aim is to gain increased confidence in the correctness of the boot code by using automated reasoning, instead. Applying automated reasoning to boot code has challenges, however. A big one is that boot code directly interfaces with hardware. Hardware can, for example, modify the value of memory locations through the use of memory-mapped input/output (IO). We developed techniques for modeling the effect that hardware can have on executing boot code. One technique we successfully tried is using model checking to symbolically represent all the effects that hardware could have on the memory state of the boot code. This required close collaboration with our product teams to understand AWS data center hardware and then design and validate a model based on these specifications. To ensure future code revisions maintain the properties we have validated, our analysis is embedded into the continuous integration flow. In such a workflow, each change by the developers triggers automated verification.

We published the full technical details, including the process by which we were able to prove the memory safety of boot code, in Model Checking Boot Code from AWS Data Centers, a peer-reviewed scientific publication at the Computer-Aided Verification Conference, the leading academic conference on automated reasoning.

You mention model checking. Can you explain what that is?

A software model checker is a tool that examines every path through a computer program from every possible input. There are different kinds of model checkers, but our model checker is based on a constraint solver (called a SAT solver, or a Satisfiability solver) that can test whether a given set of constraints is satisfiable. To understand how it works, first remember that each line of a computer program describes a particular change in the state of the computer (for example, turning on the device). Our model checker describes each change as an equation that shows how the computer’s state has changed. If you describe each line of code in a program this way, the result is a set of equations that describes a set of constraints upon all the ways that the program can change the state of the computer. We hand these constraints and a question (“Is there a bug?”) to a constraint solver, which then determines if the computer can ever reach a state in which the question (“Is there a bug?”) is true.
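To make that encoding concrete, here is a toy worked example of my own (not one taken from our paper). Consider a two-line program over an 8-element buffer, `y = x + 1; buf[y] = 0;`. Each line becomes an equation over the program state, and “Is there a bug?” becomes a bounds check on the write:

$$y_1 = x_0 + 1 \qquad \text{(effect of line 1)}$$
$$\mathit{bug} \iff \neg\,(0 \le y_1 < 8) \qquad \text{(out-of-bounds write on line 2)}$$

The solver is asked whether $y_1 = x_0 + 1 \wedge \mathit{bug}$ is satisfiable. It is: $x_0 = 7$ gives $y_1 = 8$, a concrete counterexample. If the write is guarded by `if (y < 8)`, the added constraint $y_1 < 8$ makes the formula unsatisfiable, and that unsatisfiability is the proof that the bug can never occur on any input.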

What is memory safety? Why is it so crucial to prove the memory safety of boot code?

A proof of memory safety gives you assurance that certain security issues cannot arise. Memory safety states that every operation in a program can only write to the variables it has access to, within the bounds of those variables. A classic example is a buffer that stores data coming in from a message on the network. If the message is larger than the buffer in which it’s stored, then you’ll overwrite the buffer, as well as whatever comes after the buffer. If trusted data stored after the buffer is overwritten, then the value of this trusted data is under the control of the adversary inducing the buffer overflow—and your system’s security is now at risk.

Boot code is written in C, a language that does not have the dynamic run-time support for memory safety found in other programming languages. The Boot Code Verification Project uses automated reasoning technology to prove memory safety of the boot code for every possible input.

What has the Boot Code Verification Project accomplished?

We’ve achieved two major accomplishments. The first is the concrete proof we’ve delivered. We have demonstrated that for every boot configuration, device configuration, possible boot source, and second stage binary, AWS boot code is memory safe.

The second accomplishment is more forward-looking. We haven’t just validated a piece of code—we’ve validated a methodology for testing security critical C code at AWS. As we describe in our paper, completing this proof required us to make significant advances in program analysis tooling, ranging from the way we handle memory-mapped IO, to a more efficient symbolic implementation of memcpy, to new tooling that can analyze the unusual linker configurations used in boot code. We made the tooling easier to use, with AWS Batch scripts that allow automatic proof re-runs, and HTML-based reports that make it easy to dive in and understand code. We expect to build on these improvements as we continue to apply automated reasoning to the AWS cloud.

Is your work open source?

We use the model checker CBMC (C Bounded Model Checker), which is available on GitHub under the open source Berkeley Software Distribution license. AWS is committed to the open source community, and we have a number of other projects that you can also find on GitHub.

Overall, how does the Boot Code Verification Project benefit customers?

Customers ask how AWS secures their data. This project is a part of the answer, providing assurance for how AWS protects low-level code in customer data centers running on AWS. Given all systems and processes run on top of this code, customers need to know measures are in place to keep it continuously safe.

We also believe that technology powered by automated reasoning has wider applicability. Our team has created tools like Zelkova, which we’ve embedded in a variety of AWS services to help customers validate their security-critical code. Because the Boot Code Verification Project is based on an existing open source project, wider applications of our methodology have also been documented in a variety of scientific publications that you can find on the AWS Provable Security page under “Insight papers.” We encourage customers to check out our resources and comment below!

Want more AWS Security news? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

Podcast: We developed Amazon GuardDuty to meet scaling demands, now it could assist with compliance considerations such as GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-we-developed-amazon-guardduty-to-meet-scaling-demands-now-it-could-assist-with-compliance-considerations-such-as-gdpr/

It isn’t simple to meet the scaling requirements of AWS when creating a threat detection monitoring service. Our service teams have to maintain the ability to deliver at a rapid pace. That led to the question: What can be done to make a security service as frictionless as possible for business demands?

Core parts of our internal solution can now be found in Amazon GuardDuty, which doesn’t require deployment of software or security infrastructure. Instead, GuardDuty uses machine learning to monitor metadata for access activity such as unusual API calls. This method turned out to be highly effective. Because it worked well for us, we thought it would work well for our customers, too. Additionally, when we externalized the service, we enabled it to be turned on with a single click. The customer response to Amazon GuardDuty has been positive, with rapid adoption since its launch in late 2017.

The service’s monitoring capabilities and threat detections could become increasingly helpful to customers concerned with data privacy or facing regulations such as the EU’s General Data Protection Regulation (GDPR). Listen to the podcast with Senior Product Manager Michael Fuller to learn how Amazon GuardDuty could be leveraged to meet your compliance considerations.

How AWS Meets a Physical Separation Requirement with a Logical Separation Approach

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/how-aws-meets-a-physical-separation-requirement-with-a-logical-separation-approach/

We have a new resource available to help you meet a requirement for physically-separated infrastructure using logical separation in the AWS cloud. Our latest guide, Logical Separation: An Evaluation of the U.S. Department of Defense Cloud Security Requirements for Sensitive Workloads, outlines how AWS meets the U.S. Department of Defense’s (DoD) stringent physical separation requirement by pioneering a three-pronged logical separation approach that leverages virtualization, encryption, and deployment of compute to dedicated hardware.

This guide helps you understand logical separation in the cloud and demonstrates its advantages over a traditional physical separation model. Embracing this approach can help organizations confidently meet or exceed security requirements found in traditional on-premises environments, while also providing increased security control and flexibility.

Logical Separation is the second guide in the AWS Government Handbook Series, which examines cybersecurity policy initiatives and identifies best practices.

If you have questions or want to learn more, contact your account executive or AWS Support.

Announcing the new AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-the-new-aws-certified-security-specialty-exam/

Good news for cloud security experts: following our most popular beta exam ever, the AWS Certified Security – Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

About the exam
The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

The exam validates:

  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

How to prepare
We have training and other resources to help you prepare for the exam:

AWS Training (aws.amazon.com/training)

Additional Resources

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Announcing the new AWS Certified Security – Specialty exam

Post Syndicated from Ozlem Yilmaz original https://aws.amazon.com/blogs/security/announcing-the-new-aws-certified-security-specialty-exam/

Good news for cloud security experts: the AWS Certified Security — Specialty exam is here. This new exam allows experienced cloud security professionals to demonstrate and validate their knowledge of how to secure the AWS platform.

About the exam

The security exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. The exam is open to anyone who currently holds a Cloud Practitioner or Associate-level certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

The exam validates your understanding of:

  • Specialized data classifications and AWS data protection mechanisms
  • Data encryption methods and AWS mechanisms to implement them
  • Secure Internet protocols and AWS mechanisms to implement them
  • AWS security services and features of services to provide a secure production environment
  • Making tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements
  • Security operations and risk

How to prepare

We have training and other resources to help you prepare for the exam.

AWS Training that includes:

Additional Resources

Learn more and register here, and please contact us if you have questions about exam registration.

Want more AWS Security news? Follow us on Twitter.

Red Hat Enterprise Linux 7.5 is out

Post Syndicated from ris original https://lwn.net/Articles/751457/rss

Red Hat has announced
the general availability
of Red Hat Enterprise Linux 7.5. This version
features enhanced hybrid cloud security and compliance, improved storage
performance and efficiency, simplified management, and production-ready
Linux containers. RHEL 7.5 is available for x86, IBM Power, IBM z Systems, and 64-bit Arm. This release also brings support for single-host KVM virtualization and Open Container Initiative (OCI)-formatted runtime environment and base image to IBM z Systems.

Addressing Data Residency with AWS

Post Syndicated from Min Hyun original https://aws.amazon.com/blogs/security/addressing-data-residency-with-aws/

Whitepaper image

AWS has released a new whitepaper that has been requested by many AWS customers: AWS Policy Perspectives: Data Residency. Data residency is the requirement that all customer content processed and stored in an IT system must remain within a specific country’s borders, and it is one of the foremost concerns of governments that want to use commercial cloud services. General cybersecurity concerns and concerns about government requests for data have contributed to a continued focus on keeping data within countries’ borders. In fact, some governments have determined that mandating data residency provides an extra layer of security.

This approach, however, is counterproductive to the data protection objectives and the IT modernization and global economic growth goals that many governments have set as milestones. This new whitepaper addresses the real and perceived security risks expressed by governments when they demand in-country data residency by identifying the most likely and prevalent IT vulnerabilities and security risks, explaining the native security embedded in cloud services, and highlighting the roles and responsibilities of cloud service providers (CSPs), governments, and customers in protecting data.

Large-scale, multinational CSPs, often called hyperscale CSPs, represent a transformational disruption in technology because of how they support their customers with high degrees of efficiency, agility, and innovation as part of world-class security offerings. The whitepaper explains how hyperscale CSPs, such as AWS, that might be located out of country provide their customers the ability to achieve high levels of data protection through safeguards on their own platform and with turnkey tooling for their customers. They do this while at the same time preserving nation-state regulatory sovereignty.

The whitepaper also considers the commercial, public-sector, and economic effects of data residency policies and offers considerations for governments to evaluate before enforcing requirements that can unintentionally limit public-sector digital transformation goals, in turn possibly leading to increased cybersecurity risk.

AWS continues to engage with governments around the world to hear and address their top-of-mind security concerns. We take seriously our commitment to advocate for our customers’ interests and enforce security from “ground zero.” This means that when customers use AWS, they can have the confidence that their data is protected with a level of assurance that meets, if not exceeds, their needs, regardless of where the data resides.

– Min Hyun, Cloud Security Policy Strategist

EU Compliance Update: AWS’s 2017 C5 Assessment

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/eu-compliance-update-awss-2017-c5-assessment/

C5 logo

AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in German member states can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.

Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year’s scope:

The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.

– Oliver

Announcing our new beta for the AWS Certified Security – Specialty exam

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/announcing-our-new-beta-for-the-aws-certified-security-specialty-exam/

Take the AWS Certified Security – Specialty beta exam for the chance to be among the first to hold this new AWS Certification. This beta exam allows experienced cloud security professionals to demonstrate and validate their expertise. Register today – this beta exam will only be available from January 15 to March 2!

About the exam

This beta exam validates that the successful candidate can effectively demonstrate knowledge of how to secure the AWS platform. The exam covers incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

The exam validates:

  • Familiarity with regional- and country-specific security and compliance regulations and meta issues that these regulations embody.
  • An understanding of specialized data classifications and AWS data protection mechanisms.
  • An understanding of data encryption methods and AWS mechanisms to implement them.
  • An understanding of secure Internet protocols and AWS mechanisms to implement them.
  • A working knowledge of AWS security services and features of services to provide a secure production environment.
  • Competency gained from two or more years of production deployment experience using AWS security services and features.
  • Ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements.
  • An understanding of security operations and risk.

Learn more and register >>

Who is eligible

The beta is open to anyone who currently holds an Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

We have training and other resources to help you prepare for the beta exam:

AWS Security Fundamentals Digital | 3 Hours
This course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS Classroom | 3 Days
This course demonstrates how to efficiently use AWS security services to stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your data and systems in the cloud. The course highlights the security features of AWS key services including compute, storage, networking, and database services.

Online resources for Cloud Security and Compliance

Review documentation, whitepapers, and articles & tutorials related to cloud security and compliance.

Learn more and register >>

Please contact us if you have questions about exam registration.

Good luck!

Validate Your IT Security Expertise with the New AWS Certified Security – Specialty Beta Exam

Post Syndicated from Sara Snedeker original https://aws.amazon.com/blogs/security/validate-your-it-security-expertise-with-the-new-aws-certified-security-specialty-beta-exam/

AWS Training and Certification image

If you are an experienced cloud security professional, you can demonstrate and validate your expertise with the new AWS Certified Security – Specialty beta exam. This exam allows you to demonstrate your knowledge of incident response, logging and monitoring, infrastructure security, identity and access management, and data protection. Register today – this beta exam will be available only from January 15 to March 2, 2018.

By taking this exam, you can validate your:

  • Familiarity with region-specific and country-specific security and compliance regulations and meta issues that these regulations include.
  • Understanding of data encryption methods and secure internet protocols, and the AWS mechanisms to implement them.
  • Working knowledge of AWS security services to provide a secure production environment.
  • Ability to make trade-off decisions with regard to cost, security, and deployment complexity when given a set of application requirements.

See the full list of security knowledge you can validate by taking this beta exam.

Who is eligible?

The beta exam is open to anyone who currently holds an AWS Associate or Cloud Practitioner certification. We recommend candidates have five years of IT security experience designing and implementing security solutions, and at least two years of hands-on experience securing AWS workloads.

How to prepare

You can take the following courses and use AWS cloud security resources and compliance resources to prepare for this exam.

AWS Security Fundamentals (digital, 3 hours)
This digital course introduces you to fundamental cloud computing and AWS security concepts, including AWS access control and management, governance, logging, and encryption methods. It also covers security-related compliance protocols and risk management strategies, as well as procedures related to auditing your AWS security infrastructure.

Security Operations on AWS (classroom, 3 days)
This instructor-led course demonstrates how to efficiently use AWS security services to help stay secure and compliant in the AWS Cloud. The course focuses on the AWS-recommended security best practices that you can implement to enhance the security of your AWS resources. The course highlights the security features of AWS compute, storage, networking, and database services.

If you have questions about this new beta exam, contact us.

Good luck with the exam!

– Sara

Join Us for AWS Security Week November 6–9 in New York City

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/join-us-for-aws-security-week-november-6-9-in-new-york-city/

Want to learn how to securely deploy applications and services in the AWS Cloud? Join us in New York City at the AWS Pop-up Loft for AWS Security Week, November 6–9. At this free technical event, you will learn security concepts and strategies from AWS security professionals in sessions, demos, and labs.

Here is a sampling of the security offerings during the week:

  • Become a Cloud Security Ninja
  • Data Protection in Transit and at Rest
  • Soup to Nuts: Identity Federation for AWS
  • Brewing an Effective Cloud Security Strategy

Learn more about the available sessions and register!

– Craig

AWS Earns Department of Defense Impact Level 5 Provisional Authorization

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-earns-department-of-defense-impact-level-5-provisional-authorization/

AWS GovCloud (US) Region image

The Defense Information Systems Agency (DISA) has granted the AWS GovCloud (US) Region an Impact Level 5 (IL5) Department of Defense (DoD) Cloud Computing Security Requirements Guide (CC SRG) Provisional Authorization (PA) for six core services. This means that AWS’s DoD customers and partners can now deploy workloads for Controlled Unclassified Information (CUI) exceeding IL4 and for unclassified National Security Systems (NSS).

We have supported sensitive Defense community workloads in the cloud for more than four years, and this latest IL5 authorization is complementary to our FedRAMP High Provisional Authorization that covers 18 services in the AWS GovCloud (US) Region. Our customers now have the flexibility to deploy any range of IL2, IL4, or IL5 workloads by leveraging AWS’s services, attestations, and certifications. For example, when the US Air Force needed compute scale to support the Next Generation GPS Operational Control System Program, they turned to AWS.

In partnership with a certified Third Party Assessment Organization (3PAO), an independent validation was conducted to assess both our technical and nontechnical security controls to confirm that they meet the DoD’s stringent CC SRG standards for IL5 workloads. Effective immediately, customers can begin leveraging the IL5 authorization for the following six services in the AWS GovCloud (US) Region:

AWS has been a long-standing industry partner with DoD, federal-agency customers, and private-sector customers to enhance cloud security and policy. We continue to collaborate on the DoD CC SRG, Defense Acquisition Regulation Supplement (DFARS) and other government requirements to ensure that policy makers enact policies to support next-generation security capabilities.

In an effort to reduce the authorization burden on our DoD customers, we’ve worked with DISA to port our assessment results into a format that can be easily ingested by the Enterprise Mission Assurance Support Service (eMASS) system. Additionally, we undertook a separate effort to empower our industry partners and customers to efficiently solve their compliance, governance, and audit challenges by launching the AWS Customer Compliance Center, a portal providing a breadth of AWS-specific compliance and regulatory information.

We look forward to providing sustained cloud security and compliance support at scale for our DoD customers and adding additional services within the IL5 authorization boundary. See AWS Services in Scope by Compliance Program for updates. To request access to AWS’s DoD security and authorization documentation, contact AWS Sales and Business Development. For a list of frequently asked questions related to AWS DoD SRG compliance, see the AWS DoD SRG page.

To learn more about the announcement in this post, tune in for the AWS Automating DoD SRG Impact Level 5 Compliance in AWS GovCloud (US) webinar on October 11, 2017, at 11:00 A.M. Pacific Time.

– Chris Gile, Senior Manager, AWS Public Sector Risk & Compliance

 

 

The AWS EU (London) Region Achieves Public Services Network (PSN) Assurance

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/aws-uk-region-achieves-public-services-network-psn-assurance/

UK flag

AWS is excited to announce that the AWS EU (London) Region has achieved Public Services Network (PSN) assurance. This means that the EU (London) Region can now be connected to the PSN (or PSN customers) by PSN-certified AWS Direct Connect partners. PSN assurance demonstrates to our UK Public Sector customers that the EU (London) Region has met the stringent requirements of PSN and provides an assured platform on which to build UK Public Sector services. Customers are required to ensure that applications and configurations applied to their AWS instances meet the PSN standards, and they must undertake PSN certification for the content, platform, applications, systems, and networks they run on AWS (but no longer need to include AWS infrastructure and products in their certification).

In conjunction with our Standardized Architecture for UK-OFFICIAL, PSN assurance enables UK Public Sector organizations to move their UK-OFFICIAL classified data to the EU (London) Region in a controlled and risk-managed manner. AWS has also created a UK-OFFICIAL on AWS Quick Start, which provisions an environment suitable for UK-OFFICIAL classified data. This Quick Start includes guidance and controls that help public sector organizations manage risks and ensure security when handling UK-OFFICIAL information assets.

You can download the EU (London) Region PSN Code of Connection and Service Compliance certificates through AWS Artifact. For further information about using AWS in the context of the National Cyber Security Centre (NCSC) UK’s Cloud Security Principles, see Using AWS in the Context of NCSC UK’s Cloud Security Principles.

– Oliver