AWS Security Profile: Ron Cully, Principal Product Manager, AWS Identity

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-ron-cully-principal-product-manager-aws-identity/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for nearly four years. I’m a Principal Product Manager in AWS Identity. I spent most of my time covering our Managed Active Directory products, and over the past year I’ve taken on management for AWS Single Sign-On and AWS Identity and Access Management (IAM).

How do you explain your job to non-tech friends?

Identity is what people use when they sign in to their services. What we work on is the back-end systems that authenticate and manage access so that people have secure access to their services.

What are you currently working on that you’re excited about?

Wow, it’s hard to pick just one. So, I’d say I’m most excited about the work that we’re doing so that customers can use identities that they already have across all of AWS.

What’s the most challenging part of your job?

Making sure that we deliver the most important features that customers want, in the right sequence, as quickly as possible. To do that, we need to focus on the key pain points customers have right now and resolve those pain points in ways that are the most meaningful to them. We also need to make sure that we have the right roadmap and keep doing that on an iterative basis.

What’s your favorite part of your job?

I get to work with some incredibly smart people inside and outside of Amazon. It's a really interesting space to be in. There's a lot happening at the industry level, and we're trying to sort out the puzzle of how we bring things together given what customers have and use today. Customers have all of this existing technology that they want to use, and they have a lot of investments in it. We want to make it possible for them to use those investments in new innovative ways that make their lives easier.

The AWS Identity team is growing rapidly. What are some of the biggest challenges that teams face during rapid growth?

One key challenge is hiring. How do we find great people? Amazon has a pretty high bar, and we need to find the right people who can ramp up quickly to help us solve the challenges that we want to go fix. The other thing is making sure that we stay on the same page. There's a lot of work that we're doing across a lot of different areas, so it's important to stay coordinated so that we deliver the most important things that solve our customers' current pain points.

What advice would you give to people coming on board the AWS Identity team?

Make sure that you're highly customer focused. Dive deep because we really need to understand the details of what's going on and what customers are trying to accomplish. Be a really effective communicator by breaking things down into the simplest terms. I find that often, people get so caught up in the technology that they get lost in it. It's really important to remember that we're solving problems that are very visceral to human beings. In order to get the correct results, you need to be able to communicate in a way that makes sense to anybody.

Which Amazon leadership principles have you relied on the most in your own career at AWS?

Certainly Customer Obsession. That’s absolutely imperative. Dive Deep of course. Learn and Be Curious is huge. But also a less popular principle: Have Backbone; Disagree and Commit. It’s important that we have healthy discussions. This principle isn’t about being confrontational. It’s about being smart about how you synthesize the information that you learn from your customers and bring forth your ideas and opinions in a respectful way. It’s important to have a healthy conversational debate about what’s right for customers, so that we can drive important things forward when they need to be done. At the same time, we must recognize that not all ideas or their timing are right. It’s important to understand the bigger picture of what’s going on, understand that a different approach might be better in that particular moment, and commit to moving forward as a team after the debate is finished.

What’s the most common misperception you encounter about AWS Identity?

I think there’s a huge amount of confusion in the Active Directory area about what you can and can’t do, and how it relates to what customers are doing with Azure AD. We probably have the best managed active directory in the cloud. But, people sometimes confuse Active Directory with Azure AD, which are completely different technologies. So, we try to help customers understand how our product works relative to Azure AD. They are complementary; they can work together.

Another area that's confusing for customers is choosing which AWS identity system to use today. AWS identity systems have grown organically over time. We've listened to customers and added features, and so now we have a couple of different ways of approaching identity. We started out with IAM users and groups. Then over the past few years, we've made it possible to use Active Directory identities in AWS. We've also been embracing the use of standards-based federation. Federation enables customers who use identity systems like Okta, Ping, Google, or Azure AD to use those identities to sign in to AWS. Because of this organic growth, customers can choose between managing identities in IAM, creating them in AWS SSO, bringing them in from Active Directory by using AWS SSO, or using SAML federation through IAM. We also have the Cognito product that people have been adapting to use with IAM federation. Given where the technologies are now, it can be confusing for customers to know which identity system is the right one to use so that they're on the right path going into the future. This is an area we are working hard to simplify and clarify for our customers.

What do you think is the biggest challenge facing the identity space right now?

I think it’s helping customers understand how to use the identity system that they have now—broadly, across all of the applications and services that they want to use—and how to provide them with a consistent experience. I think that’s one of the key industry challenges. We’ve come a long way, but there’s still a lot of road ahead of us to make that all possible at the industry level.

Looking to the future, how do you think the authorization and authentication landscape will evolve?

I think we’ll start to see more convergence on interoperable technologies for authentication. There’s some evolution already happening between the SAML model of authentication and OIDC (OpenID Connect). And I think we’ll start to see more convergence. One sticky spot in the industry right now is how to set up federation. It can be complicated and time consuming to set up, and there’s work that we’re doing in this space to help make it easier. We did a technology demonstration at identiverse last June using the Fast Federation standards draft to connect IDPs and service providers together. In our demonstration, we showed how Fast Fed makes it possible to connect AWS SSO to Google in a couple of clicks. That enables customers to use the identities they already have and use AWS SSO as their AWS integrated permissions management tool to grant access to resources across all their AWS accounts. I think Fast Fed will really help customers because today it’s so complicated to try and connect identity providers to tens or hundreds of applications.

What does identity mean to you on a personal level?

When I think about identity, it's about who I am, and there are different contexts for that, such as who I am as a consumer or who I am as an employee. Let's focus on who I am as an employee: Today I may have different user identities and credentials, each for a different system. I also have to manage my passwords for each of those identities. If I make a mistake and use the wrong sign-in or password, I get blocked, and I might get locked out. These things get in the way of focusing on my job. Another example is that if I change my role within a company, I need access to new resources, and there are old resources that I should no longer be able to access. It's really a pain today for people to navigate getting their access to resources set up correctly. It can take a month before you have all of the different permissions to access the things you need. So when I look at what I want to do for customers, it's about "how do I make it really easy for people to get access to the things they need without compromising security?" I want to make it so that people can have one identity to use, and when there's a change to their identity, the system automatically gives them access to what they need and removes access to what they don't need. People shouldn't have to go through all the painful processes of going to websites and talking to managers to get them to change group membership.

Will you be doing anything at re:Invent this year?

I’m involved in a few sessions.

I’ll be talking about our single sign-on product, AWS Single Sign-On. It enables customers to centrally manage access to the AWS Console, accounts, roles, and applications using identities from their Active Directory, or identities they create in AWS SSO. We’ll be talking about some exciting new features that we’ve released in that product area since the last re:Invent.

I’m also involved in a session about how enterprises can use Active Directory in the cloud. Customers have a lot of investment in their Windows environments on premises, and they’re migrating their workloads into the cloud. As they do that, those Windows workloads in the cloud need access to Active Directory. Customers often don’t want to manage the Active Directory infrastructure in the cloud. The operational pain of doing that detracts from what they’re trying to do, which is to get to the cloud and actually convert into server-less technologies where they get better economies of scale and more flexibility. AWS offers a managed Active Directory solution that customers can use with their Windows workloads while eliminating the overhead of operating Active Directory domain controllers in the cloud.

What are you hoping that your audience will do differently as a result of attending?

I would love to see customers realize they can take advantage of the services we offer in new ways, and then go home and deploy them. I would hope that they go back and do a proof of concept—go play with it and understand what it can do, see what kind of value it can bring, and then build out from there. Armed with the right information, I think customers can streamline some processes in terms of how to get on to the cloud and take advantage of the cloud faster.

What do you recommend that first-time attendees do at re:Invent?

There’s so much amazing content that’s there, you won’t be able to get it all. So, get clear about what information you’re after, go through the session list, and get registered for the sessions. Sometimes these fill up fast! If you’re coming with a team, divide and conquer. But also leave some time to learn something new in an area you’re less familiar with. Also, take advantage of the presenters. Ask us questions! We’re here to help customers learn as much as they can. If you see me there, stop me and ask your questions!

If you had to pick any other job, what would you want to do with your life?

I would probably want to be in food safety. I used to not care about food at all. Then, I went to an event where I made a life decision that made me think about my health and made me think about my food. So I started understanding more about food. I began realizing how much happens with our food today that we just don’t know about. There are a lot of things that I really don’t align with. I would love to see more transparency about our food so that we could have the ability to pick and choose what we want to eat based upon our values. If it wasn’t food safety, maybe politics.

Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Ron Cully

Ron Cully is a Principal Product Manager at AWS where he leads feature and roadmap planning for workforce identity products at AWS. Ron has over 20 years of industry experience in product and program management of networking and directory related products. He is passionate about delivering secure, reliable solutions that help make it easier for customers to migrate directory aware applications and workloads to the cloud.

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 18 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-jab-high-moderate-provisional-authorization-18-services/

It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations.

Today, we’ve achieved FedRAMP authorizations for 6 services in our AWS US East/West Regions:

We also received 14 service authorizations in our AWS GovCloud (US) Regions:

In total, we now offer 48 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 43 services authorized in our AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.

Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

We care deeply about our customers’ needs, and compliance is my team’s priority. As we expand in the federal space, we want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


Amendaze Thomas

Amendaze is the manager of AWS Security’s Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the Federal government, and over 13 years’ experience supporting CISO teams with risk management framework (RMF) activities.

Updated whitepaper available: “Navigating GDPR Compliance on AWS”

Post Syndicated from Carmela Gambardella original https://aws.amazon.com/blogs/security/updated-whitepaper-available-navigating-gdpr-compliance-on-aws/

The European Union’s General Data Protection Regulation 2016/679 (GDPR) safeguards EU citizens’ fundamental right to privacy and to personal data protection. In order to make local regulations coherent and homogeneous, the GDPR introduces and defines stringent new standards in terms of compliance, security and data protection.

The updated version of our Navigating GDPR Compliance on AWS whitepaper (.pdf) explains the role that AWS plays in your GDPR compliance process and shows how AWS can help your organization accelerate the process of aligning your compliance programs to the GDPR by using AWS cloud services.

AWS compliance, data protection, and security experts work with customers across the world to help them run workloads in the AWS Cloud, including customers who must operate within GDPR requirements. AWS teams also review what AWS is responsible for to make sure that our operations comply with the requirements of the GDPR so that customers can continue to use AWS services. The whitepaper provides guidelines to better orient you to the wide variety of AWS security offerings and to help you identify the service that best suits your GDPR compliance needs.

If you have feedback about this blog post, please submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Carmela Gambardella

Carmela graduated in Computer Science at the Federico II University of Naples, Italy. She has worked in a variety of roles at large IT companies, including as a software engineer, security consultant, and security solutions architect. Her areas of interest include data protection, security and compliance, application security, and software engineering. In April 2018, she joined the AWS Public Sector Solution Architects team in Italy.


Giuseppe Russo

Giuseppe is a Security Assurance Manager for AWS in Italy. He has a Master's Degree in Computer Science with a specialization in cryptography, security and coding theory. Giuseppe is a seasoned information security practitioner with many years of experience engaging key stakeholders, developing guidelines, and influencing the security market on strategic topics such as privacy and critical infrastructure protection.

AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/


Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year that specialize in computing or logic, making the announcement really unique.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give them a location and common meeting place. The Royal Academy of Music, for example, is near Regent's Park in West London. The Royal Academy of Engineering's building is in Carlton Place, one of the most exclusive locations in central London, near Pall Mall and St. James's Park. I've been to a number of lectures and events in that space. For example, it's where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the World Wide Web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all of the possible configurations that a computer program might reach. When I founded Amazon's Automated Reasoning Group, I named it that because my ambition was to automate all of the reasoning performed during formal verification.

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, "Will the program eventually halt?" This is the famous "Halting problem," proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem, now popular, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there's a connection between diseases like leukemia and the Halting problem: leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that's stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than the Terminator. I think that this is what the Royal Academy is talking about when they say I moved the area from "academic hypothesis to mechanized industrial reality."

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000 to 2006, I built tools that made NP feel equal to P in certain limited circumstances, to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on this project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain the machines, maintain the rooms that the machines would sit in, the A/C system that would keep them cool, etc. I was also able to easily provide our customers with access to the tool, because it's all delivered over the internet. I just pointed people to the endpoint and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious "yes," but then I thought a minute about the amount of overhead required to manage a fleet of 5,000 machines and reluctantly replied "No, but thank you very much for the offer!" With AWS, it's not even a question. Anyone with an Amazon account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don't often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: it’s not yet an industrial standard. The full power of automated reasoning is not yet available to everyone because today’s tools are either difficult to use or weak. The engineering challenge is to make them both powerful and easy to use. With that I believe that they’ll become a key part of every software engineer’s daily routine. What excites me is that I believe that Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again. That’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

Links:
Provable security webpage
Lecture: Fundamentals for Provable Security at AWS
Lecture: The evolution of Provable Security at AWS
Lecture: Automating compliance verification using provable security
Lecture: Byron speaks about Terminator at University of Colorado
https://biomodelanalyzer.org/

If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Tips for building a cloud security operating model in the financial services industry

Post Syndicated from Stephen Quigg original https://aws.amazon.com/blogs/security/tips-for-building-a-cloud-security-operating-model-in-the-financial-services-industry/

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their AWS Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that's good; it's traditional network security, and it remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include:

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which provides a log of the AWS API calls made in your account.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.
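
To make that concrete, here's a minimal sketch, using the AWS SDK for Python (Boto3), of enabling GuardDuty in a single Region and counting its findings. It assumes your credentials are configured and that no detector exists yet in that Region; treat it as a starting point, not a hardened rollout.

import boto3

# Enable GuardDuty in one Region (create_detector fails if a detector
# already exists there), then count the findings produced so far.
guardduty = boto3.client("guardduty", region_name="eu-west-1")
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
print(f"Detector {detector_id}: {len(finding_ids)} findings so far")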

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, there is a real advantage if you have coding skills. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions, including finding a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence of AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

Using new vCPU-based On-Demand Instance limits with Amazon EC2

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/preview-vcpu-based-instance-limits/

This post is contributed by Saloni Sonpal, Senior Product Manager, Amazon EC2

As an Amazon EC2 customer running On-Demand Instances, you can increase or decrease your compute capacity depending on your application’s needs, and only pay for what you use. EC2 implements instance limits, which give you a highly elastic experience while protecting you from unintentional spending or abuse.

To streamline launching On-Demand Instances, AWS is introducing simplified EC2 limits on September 24, 2019, based on the number of virtual central processing units (vCPUs) attached to your running instances. In addition to now measuring usage in number of vCPUs, there will be only five different On-Demand Instance limits: one limit that governs the usage of standard instance families such as A, C, D, H, I, M, R, T, and Z, and one limit per accelerated instance family for FPGA (F), graphics-intensive (G), general-purpose GPU (P), and special memory-optimized (X) instances. Because your limits are expressed in numbers of vCPUs, you can launch any combination of instance types that meets your changing application needs. For example, with a Standard instance family limit of 256 vCPUs, you could launch 32 m5.2xlarge instances (32 x 8 vCPUs), 16 c5.4xlarge instances (16 x 16 vCPUs), or any combination of applicable instance types and sizes that totals 256 vCPUs. For more information on vCPU mapping for instance types, see Amazon EC2 Instance Types.
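
If it helps to see that arithmetic in code, here's a small Python sketch of the calculation. The per-type vCPU counts match the two instance types used in the example above; treat the fleet mix as an illustration, not a recommendation.

# Check a planned fleet against a vCPU-based On-Demand limit.
# vCPU counts per instance type are from the EC2 instance type docs.
VCPUS = {"m5.2xlarge": 8, "c5.4xlarge": 16}

def total_vcpus(fleet):
    # fleet maps instance type -> number of instances to launch
    return sum(VCPUS[itype] * count for itype, count in fleet.items())

limit = 256
fleet = {"m5.2xlarge": 16, "c5.4xlarge": 8}  # 16*8 + 8*16 = 256 vCPUs
print(total_vcpus(fleet) <= limit)  # True: this mix fits within the limit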

During a transition period from September 24, 2019, through October 24, 2019, you can opt in to receive vCPU-based instance limits. When you opt in, EC2 automatically computes your new limits, giving you access to launch at least the same number of instances (if not more) than you do currently. Beginning October 24, 2019, all accounts will switch to vCPU-based instance limits, and the current count-based instance limits will no longer be supported. Although the switchover will not impact your ability to launch EC2 instances, you should familiarize yourself with the new On-Demand Instance limits experience and opt into vCPU limits at a time of your choosing.

You can always view and manage your limits from the Limits page in the Amazon EC2 console and from the EC2 Service page in the Service Quotas console. With Amazon CloudWatch metrics integration, you can monitor EC2 usage against limits as well as configure alarms to warn about approaching limits. For more information, see EC2 On-Demand Instance limits. If you have any questions, contact the AWS support team on the community forums and via AWS Premium Support.
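
If you prefer to check limits programmatically, here's a minimal sketch using the AWS SDK for Python (Boto3) and the Service Quotas API. The quota code below is the one commonly listed for Running On-Demand Standard instances, but treat it as an assumption and confirm the code shown for your account in the Service Quotas console.

import boto3

# Read the vCPU-based limit for the Standard (A, C, D, H, I, M, R, T, Z)
# instance family. Quota code L-1216C47A is an assumption; verify it in
# the Service Quotas console before depending on it.
quotas = boto3.client("service-quotas", region_name="us-east-1")
quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
print(quota["Quota"]["QuotaName"], "=", quota["Quota"]["Value"], "vCPUs")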

AWS and the European Banking Authority Guidelines on Outsourcing

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-european-banking-authority-guidelines-on-outsourcing/

Financial institutions across the globe use AWS to transform the way they do business. It’s exciting to watch our customers in the financial services industry innovate on AWS in unique ways, across all geos and use cases. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it easier than ever before for customers to comply with different regulations and frameworks around the world.

The European Banking Authority (EBA), an EU financial supervisory authority, recently provided EU financial institutions (which includes credit institutions, certain investment firms, and payment institutions) with new outsourcing guidelines (PDF), which also apply to the use of cloud services. We’re ready and able to support our customers’ compliance with their obligations under the EBA Guidelines and to help meet and exceed their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with the new guidelines, which take effect on September 30, 2019.

What do the EBA Guidelines mean for AWS customers?

The EBA Guidelines establish technology-neutral outsourcing requirements for EU financial institutions, and there is a particular focus on the outsourcing of “critical or important functions.” For AWS and our customers, the key takeaway is that the EBA Guidelines allow for EU financial institutions to use cloud services for material, regulated workloads. When considering or using third-party services, many EU financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to those processes laid out in the EBA Guidelines. To meet and exceed the EBA Guidelines’ requirements on security, resiliency, and assurance, EU financial institutions can use a variety of AWS security and compliance services.

Risk-based approach

The EBA Guidelines incorporate a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with any outsourcing arrangement. The risk-based approach outlined in the EBA Guidelines is consistent with the long-standing AWS shared responsibility model. This approach applies throughout the EBA Guidelines, including the areas of risk assessment, contractual and audit requirements, data location and transfer, and security implementation.

  • Risk assessment: The EBA Guidelines emphasize the need for EU financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach because it illustrates how their security and management responsibilities change depending on the AWS services they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS services help customers assess and improve their risk profile relative to traditional, on-premises environments.
  • Contractual and audit requirements: The EBA Guidelines lay out requirements for the written agreement between an EU financial institution and its service provider, including access and audit rights. For EU financial institutions running regulated workloads on AWS services, we offer the EBA Financial Services Addendum to address the EBA Guidelines’ contractual requirements. We also provide these institutions the ability to comply with the audit requirements in the EBA Guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To align with regulatory requirements and expectations, our EBA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. EU financial services customers interested in learning more about the addendum or about the audit engagements offered by AWS can reach out to their AWS account teams.
  • Data location and transfer: The EBA Guidelines do not put restrictions on where an EU financial institution can store and process its data, but rather state that EU financial institutions should “adopt a risk-based approach to data storage and data processing location(s) (i.e. country or region) and information security considerations.” Our customers can choose which AWS Regions they store their content in, and we will not move or replicate customer content outside of the customer’s chosen Regions unless instructed to do so. Customers can replicate and back up their customer content in more than one AWS Region to meet a variety of objectives, such as availability goals and geographic requirements.
  • Security implementation: The EBA Guidelines require EU financial institutions to consider, implement, and monitor various security measures. Using AWS services, customers can meet this requirement in a scalable and cost-effective way while improving their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting. As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity. Customers can also enhance their security by using AWS Key Management Service (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (filtering of malicious web traffic). These are just a few of the 500+ services and features we offer that enable strong availability, security, and compliance for our customers.

As reflected in the EBA Guidelines, it’s important to take a balanced approach when evaluating responsibilities in a cloud implementation. We are responsible for the security of the AWS Global Infrastructure. In the EU, we currently operate AWS Regions in Ireland, Frankfurt, London, Paris, and Stockholm, with our new Milan Region opening soon. For all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year.

Conclusion

We encourage customers to learn about how the EBA Guidelines apply to their organization. Our teams of security, compliance, and legal experts continue to work with our EU financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how regulatory authorities apply the EBA Guidelines locally and will provide further updates as needed. If you have any questions about compliance with the EBA Guidelines and their application to your use of AWS, or if you require the EBA Financial Services Addendum, please reach out to your account representative or request to be contacted.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR, and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager, working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Master of Information Systems Management and a Bachelor of Accounting from Brigham Young University, Utah. Follow Chad on Twitter.

64 AWS services achieve HITRUST certification

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/64-aws-services-achieve-hitrust-certification/

We’re excited to announce that 64 AWS services are now certified for the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF).

The full list of AWS services that were audited by a third party auditor and certified under HITRUST CSF is available on our Services in Scope by Compliance Program page. You can view and download our HITRUST CSF certification here:

The HITRUST certification allows AWS customers to tailor their security control baselines to a variety of factors including, but not limited to, regulatory requirements and organization type.

The HITRUST Alliance has established the CSF as a certifiable framework that organizations can leverage to comply with ISO/IEC 27000-series and HIPAA-related requirements. The HITRUST CSF is already widely adopted by leading organizations in a variety of industries as part of their approach to security and privacy. Please visit the HITRUST Alliance website for more information.

As always, we value your feedback and questions and commit to helping customers achieve and maintain the highest standard of security and compliance. Please feel free to reach out to the team through the AWS Compliance Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nine AWS Security Hub best practices

Post Syndicated from Ketan Srivastava original https://aws.amazon.com/blogs/security/nine-aws-security-hub-best-practices/

AWS Security Hub is a security and compliance service that became generally available on June 25, 2019. It provides you with extensive visibility into your security and compliance status across multiple AWS accounts, in a single dashboard per region. The service helps you monitor critical settings to ensure that your AWS accounts remain secure, allowing you to notice and react quickly to any changes in your environment.

AWS Security Hub aggregates, organizes, and prioritizes security findings from supported AWS services—that’s Amazon GuardDuty, Amazon Inspector, and Amazon Macie at the time this post was published—and from various AWS partner security solutions. AWS Security Hub also generates its own findings, based on automated, resource-level and account-level configuration and compliance checks using service-linked AWS Config rules plus other analytic techniques. These checks help you keep your AWS accounts compliant with industry standards and best practices, such as the Center for Internet Security (CIS) AWS Foundations standard.

In this post, I’ll provide nine best practices to help you use AWS Security Hub as effectively as possible.

1. Use the AWS Labs script to turn on Security Hub in all your AWS accounts in all regions and to establish your existing Amazon GuardDuty master/member hierarchy

As a best practice, you should continuously monitor all regions across all of your AWS accounts for unauthorized behavior or misconfigurations, even in regions that you don’t use heavily. AWS already recommends that you do this when using monitoring services like AWS Config and AWS CloudTrail. I recommend that you enable Security Hub in every region available in your AWS accounts.

In addition, you can also invite other AWS accounts to enable Security Hub and share findings with your AWS account. If you send an invitation and it is accepted by the other account owner, your Security Hub account is designated as the master account, and any associated Security Hub accounts become your member accounts. Users from the master account will then be able to view Security Hub findings from member accounts.

To simplify these configurations, you can utilize the AWS Labs script available on GitHub, which provides a step-by-step guide to automate this process. This script allows you to enable (and disable) AWS Security Hub simultaneously across a list of associated AWS accounts and bulk-add them to become your Security Hub members; it sends invitations from the master account and automatically accepts invitations in all member accounts. To run the script, you must have the AWS account IDs and root email addresses of the AWS accounts that you want as your Security Hub members. (Note that you should only share your root email address and account ID with AWS accounts that you trust. Visit the IAM best practices page to learn more about how to keep access to your AWS accounts secured.)
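
If you'd rather see the individual API calls than run the whole script, here's a minimal Boto3 sketch of one Region's worth of that workflow: enable Security Hub in the master account, register a member, and send the invitation. The account ID and email address are placeholders, and the member account must still accept the invitation (which the script automates for you).

import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Enable Security Hub in this account and Region (the master account).
securityhub.enable_security_hub()

# Register a member account and send it an invitation. The account ID
# and email below are placeholders; substitute your member's values.
securityhub.create_members(
    AccountDetails=[{"AccountId": "111122223333", "Email": "member@example.com"}]
)
securityhub.invite_members(AccountIds=["111122223333"])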

By default, the Security Hub master/member association is independent of the relationships that you’ve established between your Amazon GuardDuty or Amazon Macie accounts and other associated accounts. If you have an existing master/member hierarchy in GuardDuty or Macie, you can export that list of accounts into a CSV file and then use it with the script. For example, in GuardDuty, use the ListMembers API to export the AWS account ID and email of all member accounts, as follows:

aws guardduty list-members --detector-id <Detector ID> --query "Members[].[AccountId, Email]" --output text | awk '{print $1 "," $2}'

The output of the above command will be your GuardDuty member account IDs and their corresponding root email addresses, one per line and separated with a comma as shown below:

12345678910,[email protected]
98765432101,[email protected]

2. Enable AWS Config in all AWS accounts and regions and leave the AWS CIS Foundations standard check enabled

When you enable Security Hub in any region, the AWS CIS standard checks are enabled by default. I recommend leaving them enabled; they are important security measures that are applicable to all AWS accounts.

To run most of these checks, Security Hub uses service-linked AWS Config rules. Because of this, you should make sure that AWS Config is turned on and recording all supported resources, including global resources, in all accounts and regions where Security Hub is deployed. You are not charged by AWS Config for these service-linked rules. You are only charged via Security Hub’s pricing model.
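
One quick way to verify that, account by account and Region by Region, is a check along these lines (a sketch using Boto3):

import boto3

# Report the recording status of every AWS Config recorder in a Region.
config = boto3.client("config", region_name="us-east-1")
response = config.describe_configuration_recorder_status()
for status in response["ConfigurationRecordersStatus"]:
    state = "recording" if status["recording"] else "NOT recording"
    print(f"{status['name']}: {state}")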

3. Use specific managed IAM policies for different types of Security Hub users

You can choose to allow a large group of users to access List and Read Security Hub actions, which will permit them to view your security findings. However, you should allow only a small group of users to access the Security Hub Write actions. This will permit only authorized users to archive, resolve, or remediate the findings.

You can use AWS managed policies to give your employees the permissions they need to get started. These policies are already available in your account and are maintained and updated by AWS. To grant more granular permission to your Security Hub users, I recommend that you create your own customer managed policies. A great place to start with this is to import an existing AWS managed policy. That way, you know that the policy is initially correct, and all you need to do is customize it for your environment.
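
As an illustration of that approach, the sketch below creates a customer managed policy that allows only List/Read-style Security Hub actions. The action patterns and policy name are examples, not a verbatim copy of AWSSecurityHubReadOnlyAccess; import and compare against the real managed policy in your account before you rely on it.

import boto3
import json

# Example view-only policy: List/Read-style Security Hub actions only.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "securityhub:Get*",
            "securityhub:List*",
            "securityhub:Describe*",
        ],
        "Resource": "*",
    }],
}

boto3.client("iam").create_policy(
    PolicyName="SecurityHubViewOnly",  # example name
    PolicyDocument=json.dumps(policy_document),
)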

AWS categorizes each service action into one of five access levels based on what each action does: List, Read, Write, Permissions management, or Tagging. To determine which access level to include in the IAM policies that you assign to your users, you can view the policy summary by navigating from the IAM Console to Policies, then selecting any AWS managed or customer managed policy. Next, on the Summary page, under the Permissions tab, select Policy summary (see Figure 1). For more details and examples of access level classification, see Understanding Access Level Summaries Within Policy Summaries.
 

Figure 1: Policy summary of AWSSecurityHubReadOnlyAccess AWS managed policy

4. Use tags for access controls and cost allocation

A SecurityHub::Hub resource represents the implementation of the AWS Security Hub service per Region in your AWS account. Security Hub allows you to assign metadata to your SecurityHub::Hub resource in the form of tags. Each tag is a user-defined key plus an optional value, which makes it easier for you to identify and manage the AWS resources in your environment.

You can control access permissions by using tags on your SecurityHub::Hub resource. For example, you can allow a group of developer IAM entities to manage and update only the SecurityHub::Hub resources that have the tag key developer associated with them. This can help you restrict access to your production SecurityHub::Hub resources, while allowing your developers to continue testing in their developer environment.
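
As a sketch of what applying such a tag looks like with Boto3 (using the developer tag key from the scenario above):

import boto3

# Look up this Region's Hub ARN, then tag it so that tag-based IAM
# policy conditions can match it.
securityhub = boto3.client("securityhub", region_name="us-east-1")
hub_arn = securityhub.describe_hub()["HubArn"]
securityhub.tag_resource(ResourceArn=hub_arn, Tags={"developer": "true"})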

For more information on the supported tag-based conditions which you can use with the Security Hub APIs, refer to Condition Keys for AWS Security Hub. Please note that when you use tag-based conditions for access control, you must define who can modify those tags.

To make it easier to categorize and track your AWS costs, you can also activate cost allocation tags. This helps you organize your SecurityHub::Hub resource costs. AWS generates a cost allocation report as a CSV file, with your usage and costs grouped according to your active tags. You can apply tags that represent business categories (such as cost centers, application names, or project environments) to organize your costs.

For more information on commonly used tagging categories and effective tagging strategies, read about AWS Tagging Strategies.

5. Integrate and enable your existing security products (with 34 integrations today and more to come)

Numerous tools can help you understand the security and compliance posture of your AWS accounts, but these tools generate their own set of findings, often in different formats. Security Hub normalizes the findings.

With Security Hub, findings generated from integrated providers (both third-party services and AWS services) are ingested using a standard findings format, which eliminates the need for security teams to convert the data. You can currently integrate 34 findings providers to import and/or export findings with Security Hub. Some partner products, like PagerDuty, Splunk, and Slack, can receive findings from Security Hub, although they don’t generate findings.

If you want to add a third-party partner product to your AWS environment, you can choose the Purchase link from the Security Hub console’s Integrations page and navigate to AWS Marketplace. Once purchased, choose the Configure link to navigate to step-by-step instructions to install the product and configure its integration with Security Hub. Then choose Enable integration to create a product subscription in your account for that third-party provider (see Figure 2).

After you enable a subscription, a resource policy is automatically attached to it. The resource policy defines the permissions that Security Hub needs to accept and process the product’s findings. You can also enable the subscription via the API and CloudFormation.
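
For example, here's a minimal Boto3 sketch of enabling a product subscription. The product ARN below is a placeholder in the documented format; the real ARNs are listed on the Integrations page:

import boto3

securityhub = boto3.client('securityhub')

# Placeholder product ARN; copy the real one from the Integrations page.
product_arn = 'arn:aws:securityhub:us-east-1::product/example-company/example-product'

# Creates the product subscription so Security Hub can accept the product's findings.
response = securityhub.enable_import_findings_for_product(ProductArn=product_arn)
print(response['ProductSubscriptionArn'])

You can confirm which subscriptions are active with list_enabled_products_for_import.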
 

Figure 2: Integrating partner findings provider with Security Hub

6. Build out customized remediation playbooks using Amazon CloudWatch Events, AWS Systems Manager Automation documents, and AWS Step Functions to automatically resolve findings that don’t require human intervention

Security Hub automatically sends all findings to Amazon CloudWatch Events. This integration helps you automate your response to threat incidents by allowing you to take specific actions using AWS Systems Manager Automation documents, OpsItems, and AWS Step Functions. Using these tools, you can create your own incident handling plan. This will allow your security team to focus on strengthening the security of your AWS environments rather than on remediating the current findings.
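
As a sketch of that wiring in Boto3 (the rule name and function ARN are placeholders), the following creates a rule that matches all imported Security Hub findings and targets a Lambda function:

import json
import boto3

events = boto3.client('events')

# Match every finding that Security Hub sends to CloudWatch Events.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"]
}

events.put_rule(
    Name='SecurityHubAllFindings',  # hypothetical rule name
    EventPattern=json.dumps(event_pattern)
)

# Route matched findings to a remediation Lambda function (placeholder ARN).
events.put_targets(
    Rule='SecurityHubAllFindings',
    Targets=[{
        'Id': 'RemediationLambda',
        'Arn': 'arn:aws:lambda:us-east-1:012345678910:function:my-remediation'
    }]
)

Remember that CloudWatch Events also needs permission to invoke the function, which you can grant with the Lambda add_permission API.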
 

Figure 3: Creating a CloudWatch Events Rule for sending matched Security Hub findings to specific Targets

7. Create custom actions to send a copy of a Security Hub finding to a resource that is internal or external to your AWS account, enabling additional visibility and remediation options for the finding

Because of its integration with CloudWatch Events, you can use Security Hub to create custom actions that will send specific findings to ticketing, chat, email, or automated remediation systems. Custom actions can also be sent to your own AWS resources, such as AWS Systems Manager OpsCenter, AWS Lambda or Amazon Kinesis, allowing you to do your own remediation or data capture related to the finding.
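
Custom actions can be created from the console or the API. Here's a minimal Boto3 sketch (the name, description, and ID are illustrative):

import boto3

securityhub = boto3.client('securityhub')

# Creates a custom action that appears in the console's Actions menu.
response = securityhub.create_action_target(
    Name='Remediate CIS 2.4',  # hypothetical action name
    Description='Send CIS 2.4 findings to the remediation Lambda function',
    Id='RemediateCIS24'
)
print(response['ActionTargetArn'])

Selecting this action on a finding emits a CloudWatch Events event with the detail-type “Security Hub Findings - Custom Action”, which your rules can match on.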

For an in-depth look at this architecture, plus specific examples of how to implement custom actions, see How to Integrate AWS Security Hub Custom Actions with PagerDuty and How to Enable Custom Actions in AWS Security Hub.

In addition, Security Hub gives you the option to choose a language-specific AWS SDK so that you can use custom actions to resolve findings programmatically. Below, I’ll demonstrate how you can implement this using AWS Lambda and AWS SDK for Python (Boto3). In my example, I’ll remediate the finding generated by Security Hub for CIS check 2.4, “Ensure CloudTrail trails are integrated with Amazon CloudWatch Logs.” For this walk-through, I assume that you have the necessary AWS IAM permissions to work with Security Hub, CloudWatch Events, Lambda and AWS CloudTrail.
 

Figure 4: Data flow supporting remediation of Security Hub findings using custom actions

As shown in Figure 4:

  1. When findings against CIS check 2.4 are generated in Security Hub, Security Hub will send them to CloudWatch Events using custom actions that I’ll describe below.
  2. CloudWatch Events will send the findings to a Lambda function that has been configured as the target.
  3. The Lambda function runs a Python script that checks whether the finding was generated against CIS check 2.4. If it was, the Lambda function identifies the affected CloudTrail trail and configures it to deliver events to a CloudWatch Logs log group.

Prerequisites

  1. You must configure an IAM Role for AWS CloudTrail to assume so that it can deliver events to your CloudWatch Logs log group. For more information about how to do this, refer to the AWS CloudTrail documentation. I’ll refer to this role as the CloudTrail role.
  2. To deploy the Lambda function, you must configure an IAM Role for the Lambda function to assume. I’ll refer to this role as the Lambda execution role. The following sample policy includes the permissions that you’ll assign to it for this example. Please replace <CloudTrail_CloudWatchLogs_Role> with the CloudTrail role that you created in the previous step. Depending on your use case, you can restrict this IAM policy further to grant least privilege, which is a recommended IAM Best Practice.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "cloudtrail:UpdateTrail",
                "iam:GetRole"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::012345678910:role/<CloudTrail_CloudWatchLogs_Role>"
        }
    ]
}     

Solution deployment

  1. Create a custom action in AWS Security Hub and associate it with a CloudWatch Events rule that you configure for your Security Hub findings. Follow the instructions laid out in the Security Hub user guide for the exact steps to do this.
  2. Create a Lambda Function, which will complete the auto-remediation of the CIS 2.4 findings:
    1. Open the Lambda Console and select Create function.
    2. On the next page, choose Author from scratch.
    3. Under Basic information, enter a name for your function. For Runtime, select Python 3.7.
       
      Figure 5: Updating basic information to create the Lambda function

    4. Under Permissions, expand Choose or create an execution role.
    5. Under Execution role, select the drop down menu and change the setting to Use an existing role.
    6. Under Existing role, select the Lambda execution role that you created earlier, then select Create function.
       
      Figure 6: Updating basic information and permissions to create the Lambda function

    7. Delete the default function code and paste the code I’ve provided below:
      
              import json, boto3
              cloudtrail_client = boto3.client('cloudtrail')
              cloudwatchlogs_client = boto3.client('logs')
              iam_client = boto3.client('iam')
              
              role_details = iam_client.get_role(RoleName='<CloudTrail_CloudWatchLogs_Role>')
              
              def lambda_handler(event, context):
                  # First of all, let's see if the JSON sent by CloudWatch Events has any Security Hub findings.
                  if 'detail' in event.keys() and 'findings' in event['detail'].keys() and len(event['detail']['findings']) > 0:
                      print("There are some findings. Let's check them!")
                      print("Number of findings: %i" % len(event['detail']['findings']))
              
                      # Then we need to filter out the findings. In this code snippet, we'll handle only findings related to CloudTrail trails for integration with CloudWatch Logs.
                      for finding in event['detail']['findings']:
                          if 'Title' in finding.keys():
                              if 'Ensure CloudTrail trails are integrated with CloudWatch Logs' in finding['Title']:
                                  print("There's a CloudTrail-related finding. I can handle it!")
              
                                  if 'Compliance' in finding.keys() and 'Status' in finding['Compliance'].keys():
                                      print("Compliance Status: %s" % finding['Compliance']['Status'])
              
                                      # We can skip compliant findings, and evaluate only the non-compliant ones.                        
                                      if finding['Compliance']['Status'] == 'PASSED':
                                          continue
              
                                      # For each non-compliant finding, we need to get specific pieces of information so as to create the correct log group and update the CloudTrail trail.                        
                                      for resource in finding['Resources']:
                                          resource_id = resource['Id']
                                          cloudtrail_name = resource['Details']['Other']['name']
                                          loggroup_name = 'CloudTrail/' + cloudtrail_name
                                          print("ResourceId for the finding is %s" % resource_id)
                                          print("LogGroup name: %s" % loggroup_name)
              
                                          # At this point, we can create the log group using the name extracted from the finding.
                                          try:
                                              response_logs = cloudwatchlogs_client.create_log_group(logGroupName=loggroup_name)
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
              
                                          # For updating the CloudTrail trail, we need to have the ARN of the log group. Let's retrieve it now.                            
                                          response_logsARN = cloudwatchlogs_client.describe_log_groups(logGroupNamePrefix = loggroup_name)
                                          print("LogGroup ARN: %s" % response_logsARN['logGroups'][0]['arn'])
                                          print("The role used by CloudTrail is: %s" % role_details['Role']['Arn'])
              
                                          # Finally, let's update the CloudTrail trail so that it sends logs to the new log group created.
                                          try:
                                              response_cloudtrail = cloudtrail_client.update_trail(
                                                  Name=cloudtrail_name,
                                                  CloudWatchLogsLogGroupArn = response_logsARN['logGroups'][0]['arn'],
                                                  CloudWatchLogsRoleArn = role_details['Role']['Arn']
                                              )
                                          except Exception as e:
                                              print("Exception: %s" % str(e))
                              else:
                                  print("Title: %s" % finding['Title'])
                                  print("This type of finding cannot be handled by this function. Skipping it…")
                          else:
                              print("This finding doesn't have a title and so cannot be handled by this function. Skipping it…")
                  else:
                      print("There are no findings to remediate.")            
              

    8. After pasting the code, replace <CloudTrail_CloudWatchLogs_Role> with your CloudTrail role and select Save to save your Lambda function.
       
      Figure 7: Editing the Lambda code to reference the correct CloudTrail role

  3. Go to your CloudWatch console and select Rules in the navigation pane on the left.
    1. From the list of CloudWatch rules that you see, select the rule which you created in Step 1 of this solution deployment.
    2. Then, select Actions on the top right of the page and choose Edit.
    3. On the Step 1: Create rule page, under Targets, choose Lambda function and select the Lambda function you created in Step 2.
    4. Select Configure details.
    5. On the Step 2: Configure rule details page, select Update rule.
       
      Figure 8: Adding your Lambda function as the target for the CloudWatch rule

  4. Configuration is now complete, and you can test your rule. Go to your AWS Security Hub console and select Compliance standards in the navigation pane.
    1. Next, select CIS AWS Foundations.
       
      Figure 9: Compliance standards page in the Security Hub console

    2. Search for the rule 2.4 Ensure CloudTrail trails are integrated with CloudWatch Logs and select it.
       
      Figure 10: Locating CIS check 2.4 in the Security Hub console

    3. If you’ve left the default AWS Security Hub CIS checks enabled (along with the AWS Config service in the same region), and if you have CloudTrail trails in that region that are not yet configured to deliver events to CloudWatch Logs, you should see a low-severity finding with a Failed compliance status.
    4. Select the failed finding by selecting the checkbox and choosing the Actions button.
    5. Finally, from the dropdown menu, select the custom action that you created in Step 1 to send the finding to CloudWatch Events. CloudWatch Events will send the finding to your Lambda function, which you configured as the target for the rule in step 3. The Lambda function will automatically identify the affected CloudTrail trail and configure a CloudWatch Logs log group for you. The log group will have the same name as your trail for identification purposes. You can modify the code to further suit your needs.

    Note: There may be a delay before the compliance status of the remediated resource changes. Once the CIS AWS Foundations Standard is enabled, Security Hub will run the checks within 2 hours. After that, the checks are automatically run once every 24 hours.

     

    Figure 11: Findings generated against CIS check 2.4 in the Security Hub console

8. Customize your insights using the default “managed insights” as templates and use them to prioritize resources and findings to act upon

A Security Hub “insight” is a collection of related findings to which one or more Security Hub filters have been applied. Insights can help you organize your findings and identify security risks that need immediate attention.

Security Hub offers several managed (default) insights. You can use these as templates for new insights and modify them to fit your use case. You can save the modified queries as new custom insights to gain even greater visibility into your AWS accounts. Refer to the documentation for step-by-step instructions on how to create custom insights.
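
Insights can also be created programmatically. Here's a rough Boto3 sketch of a custom insight, with illustrative filter values, that groups active high-severity findings by resource:

import boto3

securityhub = boto3.client('securityhub')

# A custom insight: active, high-severity findings, grouped by resource ID.
response = securityhub.create_insight(
    Name='High severity findings by resource',  # hypothetical insight name
    Filters={
        'SeverityLabel': [{'Value': 'HIGH', 'Comparison': 'EQUALS'}],
        'RecordState': [{'Value': 'ACTIVE', 'Comparison': 'EQUALS'}]
    },
    GroupByAttribute='ResourceId'
)
print(response['InsightArn'])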
     

Figure 12: Creating a Security Hub custom insight

9. Use the free trial to evaluate what your costs could be

Security Hub provides a 30-day free trial for all AWS accounts and regions. The trial is a good way to evaluate how much Security Hub will cost, on average, to monitor threats and compliance in your environments. You can view an estimate by navigating from the Security Hub console to Settings, then Usage (see Figure 13).
     

Figure 13: Estimating your Security Hub costs

Conclusion

AWS Security Hub gives you more visibility into the security and compliance status of your AWS environments. Using the Security Hub best practices discussed here, security teams can spend more time on incident remediation and recovery and less time on incident detection and organization. Security Hub has undergone HIPAA, ISO, PCI, and SOC certification. To learn more about Security Hub, refer to the AWS Security Hub documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the AWS Security Hub forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Ketan Srivastava

Ketan is a Cloud Support Engineer at AWS. He enjoys the fact that, at AWS, there are always so many opportunities to build things better for our customers and learn from these opportunities. Outside of work, he plays MOBAs and travels to new places with his wife. He holds a Master of Science degree from Rochester Institute of Technology.

Architecture Monthly Magazine for July: Machine Learning

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-for-july-machine-learning/

Every month, AWS publishes the AWS Architecture Monthly Magazine (available for free on Kindle and Flipboard) that curates some of the best technical and video content from around AWS.

In the June edition, we offered several pieces of content related to Internet of Things (IoT). This month we’re talking about artificial intelligence (AI), namely machine learning.

Machine Learning: Let’s Get it Started

Alan Turing, the British mathematician whose life and work were documented in the movie The Imitation Game, was a pioneer of theoretical computer science and AI. He was the first to put forth the idea that machines can think.

Jump ahead 80 years to this month, when researchers asked four-time World Poker Tour title holder Darren Elias to play Texas Hold’em with Pluribus, a poker-playing bot (actually, five of these bots were at the table). Pluribus learns by playing against itself over and over and remembering which strategies worked best. The bot became a world-class poker player in a matter of days. Read about it in the journal Science.

If AI is about making a machine more human, its subset, machine learning, involves the techniques that allow these machines to make sense of the data we feed them. Machine learning mimics how humans learn, and Pluribus actually learns from itself.

From self-driving cars, medical diagnostics, and facial recognition to our helpful (and sometimes nosy) pals Siri, Alexa, and Cortana, all these smart machines are constantly improving from the moment we unbox them. We humans are teaching the machines to think like us.

For July’s magazine, we assembled architectural best practices about machine learning from all over AWS, and we’ve made sure that a broad audience can appreciate it.

  • Interview: Mahendra Bairagi, Solutions Architect, Artificial Intelligence
  • Training: Getting in the Voice Mindset
  • Quick Start: Predictive Data Science with Amazon SageMaker and a Data Lake on AWS
  • Blog post: Amazon SageMaker Neo Helps Detect Objects and Classify Images on Edge Devices
  • Solution: Fraud Detection Using Machine Learning
  • Video: Viz.ai Uses Deep Learning to Analyze CT Scans and Save Lives
  • Whitepaper: Power Machine Learning at Scale

We hope you find this edition of Architecture Monthly useful, and we’d like your feedback. Please give us a star rating and your comments on Amazon. You can also reach out to [email protected] anytime. Check back in a month to discover what the August magazine will offer.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.

Spring 2019 PCI DSS report now available, 12 services added in scope

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/spring-2019-pci-dss-report-now-available-12-services-added-in-scope/

At AWS Security, continuously raising the cloud security bar for our customers is central to all that we do. Part of that work is focused on our formal compliance certifications, which enable our customers to use the AWS cloud for highly sensitive and/or regulated workloads. We see our customers constantly developing creative and innovative solutions—and in order for them to continue to do so, we need to increase the availability of services within our certifications. I’m pleased to tell you that in the past year, we’ve increased our Payment Card Industry – Data Security Standard (PCI DSS) certification scope by 79%, from 62 services to 111 services, including 12 newly added services in our latest PCI report (listed below), and we were audited by our third-party auditor, Coalfire.

The PCI DSS report and certification cover the 111 services currently in scope that are used by our customers to architect a secure Cardholder Data Environment (CDE) to protect important workloads. The full list of PCI DSS certified AWS services is available on our Services in Scope by Compliance program page. The 12 newly added services for our Spring 2019 report are:

Our compliance reports, including this latest PCI report, are available on-demand through AWS Artifact.

To learn more about our PCI program and other compliance and security programs, please visit the AWS Compliance Programs page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile: Rustan Leino, Senior Principal Applied Scientist

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-rustan-leino-senior-principal-applied-scientist/



I recently sat down with Rustan from the Automated Reasoning Group (ARG) at AWS to learn more about the prestigious Computer Aided Verification (CAV) Award that he received, and to understand the work that led to the prize. CAV is a top international conference on formal verification of software and hardware. It brings together experts in this field to discuss groundbreaking research and applications of formal verification in both academia and industry. Rustan received this award as a result of his work developing program-verification technology. Rustan and his team have taken his research and applied it in unique ways to protect AWS core infrastructure on which customers run their most sensitive applications. He shared details about his journey in the formal verification space, the significance of the CAV award, and how he plans to continue scaling formal verification for cloud security at AWS.

Congratulations on your CAV Award! Can you tell us a little bit about the significance of the award and why you received it?

Thanks! I am thrilled to jointly receive this award with Jean-Christophe Filliâtre, who works at the CNRS Research Laboratory in France. The CAV Award recognizes fundamental contributions to program verification, that is, the field of mathematically proving the correctness of software and hardware. Jean-Christophe and I were recognized for building intermediate verification languages (IVLs), which are a central building block of modern program verifiers.

It’s like this: the world relies on software, and the world relies on that software to function correctly. Software is written by software engineers using some programming language. If the engineers want to check, with mathematical precision, that a piece of software always does what it is intended to do, then they use a program verifier for the programming language at hand. IVLs have accelerated the building of program verifiers for many languages. So, IVLs aid the construction of program verifiers which, in turn, improve software quality that, in turn, makes technology more reliable for all.

What is your role at AWS? How are you applying technologies you’ve been recognized by CAV for at AWS?

I am building and applying proof tools to ensure the correctness and security of various critical components of AWS. This lets us deliver a better and safer experience for our customers. Several tools that we apply are based on IVLs. Among them are the SideTrail verifier for timing-based attacks, the VCC verifier for concurrent systems code, and the verification-aware programming language Dafny, all of which are built on my IVL named Boogie.

What does an automated program verification tool do?

An automated program verifier is a tool that checks if a program behaves as intended. More precisely, the verifier tries to construct a correctness proof that shows that the code meets the given specification. Specifications include things like “data at rest on disk drives is always encrypted,” or “the event-handler always eventually returns control back to the caller,” or “the API method returns a properly formatted buffer encrypted under the current session key.” If the verifier detects a discrepancy (that is, a bug), the developer responds by fixing the code. Sometimes, the verifier can’t determine what the answer is. In this case, the developer can respond by helping the tool with additional information, so-called proof hints, until the tool is able to complete the correctness proof or find another discrepancy.

For example, picture a developer who is writing a program. The program is like a letter written in a word processor, but the letter is written in a language that the computer can understand. For cloud security, say the program manages a set of data keys and takes requests to encrypt data under those keys. The developer writes down the intention that each encryption request must use a different key. This is the specification: the what.

Next, the developer writes code that instructs the computer how to respond to a request. The code separates the keys into two lists. An encryption request takes a key from the “not used” list, encrypts the given data, and then places the key on the “used” list.

To see that the code in this example meets the specification, it is crucial to understand the roles of the two lists. A program verifier might not figure this out by itself and would then indicate the part of the code it can’t verify, much like a spell-checker underlines spelling and grammar mistakes in a letter you write. To help the program verifier along, the developer provides a proof hint that says that the keys on the “not used” list have never been returned. The verifier checks that the proof hint is correct and then, using this hint, is able to construct the proof that the code meets the specification.
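
[Editor’s note: as a rough illustration only, the example might be modeled in Python as below, with run-time assertions standing in for the formal specification. In Dafny, the specification and proof hints are checked statically by the verifier rather than at run time.]

# Toy model of the key-manager example; names are illustrative.
class KeyManager:
    def __init__(self, keys):
        self.not_used = list(keys)  # proof hint: keys here have never been returned
        self.used = []

    def encrypt(self, data):
        # Specification: every encryption request must use a different key.
        key = self.not_used.pop()
        assert key not in self.used, "specification violated: key reuse"
        self.used.append(key)
        return (key, 'ciphertext-of-' + str(data))  # stand-in for real encryption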

You’ve designed several verification tools in your career. Can you share how you’re using verification tools such as Dafny and Boogie to provide higher assurances for AWS infrastructure?

Dafny is a Java-like programming language that was designed with verification in mind. Whereas most programming languages only allow you to write code, Dafny allows you to write specifications and code at the same time. In addition, Dafny allows you to write proof hints (in fact, you can write entire proofs). Having specifications, code, and proofs in one language sets you up for an integrated verification experience. But this would remain an intellectual exercise without an automated program verifier. The Dafny language was designed alongside its automated program verifier. When you write a Dafny program, the verifier constantly runs in the background and points out mistakes as you go along, very much like the spell-checker underlines I alluded to. Internally, the Dafny verifier is based on the Boogie IVL.

At AWS, we’re currently using Dafny to write and prove a variety of security-critical libraries. For example: encryption libraries. Encryption is vital for keeping customer data safe, so it makes for a great place to focus energy on formal verification.

You spent time in scientific research roles before joining AWS. Has your experience at AWS caused you to see scientific challenges in a different way now?

I began my career in 1989 in the Microsoft Windows LAN Manager team. Based on my experiences helping network computers together, I became convinced that formally proving the correctness of programs was going to go from a “nice to have” to a “must have” in the future, because of the need for more security in a world where computers are so interconnected. At the time, the tools and techniques for proving programs correct were so rudimentary that the only safe harbor for this type of work was in esoteric research laboratories. Thus, that’s where I could be found. But these days, the tools are increasingly scalable and usable, so finally I made the jump back into development where I’m leading efforts to apply and operationalize this approach, and also to continue my research based on the problems that arise as we do so.

One of the challenges we had in the 1990s and 2000s was that few people knew how to use the tools, even if they did exist. Thus, while in research laboratories, an additional focus of mine has been on making tools that are so easy to use that they can be used in university education. Now, with dozens of universities using my tools and after several eye-opening successes with the Dafny language and verifier, I’m scaling these efforts up with development teams in AWS that can hire the students who are trained with Dafny.

I alluded to continuing research. There are still scientific challenges to make specifications more expressive and more concise, to design programming languages more streamlined for verification, and to make tools more automated, faster, and more predictable. But there’s an equally large challenge in influencing the software engineering process. The two are intimately linked, and cannot be teased apart. Only by changing the process can we hope for larger improvements in software engineering. Our application of formal verification at AWS is teaching us a lot about this challenge. We like to think we’re changing the software engineering world.

What are the next big challenges that we need to tackle in cloud security? How will automated reasoning play a role?

There is a lot of important software to verify. This excites me tremendously. As I see it, the only way we can scale is to distribute the verification effort beyond the verification community, and to get usable verification tools into the hands of software engineers. Tooling can help put the concerns of security engineers into everyday development. To meet this challenge, we need to provide appropriate training and we need to make tools as seamless as possible for engineers to use.

I hear your YouTube channel, Verification Corner, is loved by engineering students. What’s the next video you’ll be creating?

[Rustan laughs] Yes, Verification Corner has been a fun way for me to teach about verification and I receive appreciation from people around the world who have learned something from these videos. The episodes tend to focus on learning concepts of program verification. These concepts are important to all software engineers, and Verification Corner shows the concepts in the context of small (and sometimes beautiful) programs. Beyond learning the concepts in isolation, it’s also important to see the concepts in use in larger programs, to help engineers apply the concepts. I want to devote some future Verification Corner episodes to showing verification “in the trenches;” that is, the application of verification in larger, real-life (and sometimes not so beautiful) programs for cloud security, as we’re continuing to do at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

How to get specific security information about AWS services

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/how-to-get-specific-security-information-about-aws-services/

We’re excited to announce the launch of dedicated security chapters in the AWS documentation for over 40 services. Security is a key component of your decision to use the cloud. These chapters can help your organization get in-depth information about both the built-in and the configurable security of AWS services. This information goes beyond “how-to.” It can help developers—as well as Security, Risk Management, Compliance, and Product teams—assess a service prior to use, determine how to use a service securely, and get updated information as new features are released.

This initiative is a direct result of customer requests for easy-to-find, easy-to-consume security documentation. Our new chapters provide information about the security of the cloud and in the cloud, as outlined in the AWS Shared Responsibility Model, for each service. The chapters align with the Cloud Adoption Framework: Security Perspective and include information about the following topics, as applicable:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

You can find links to the security chapters on the AWS Security Documentation page, which will be updated as more security chapters become available. Here are links to the new Security chapters we’ve released so far:

You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use this information within your organization and how we can continue to provide useful resources to you.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.

AWS Security Profile: John Backes, Senior Software Development Engineer

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profile-john-backes-senior-software-development-engineer/



AWS scientists and engineers believe in partnering closely with the academic and research community to drive innovation in a variety of areas of our business, including cloud security. One of the ways they do this is through participating in and sponsoring scientific conferences, where leaders in fields such as automated reasoning, artificial intelligence, and machine learning come together to discuss advancements in their field. The International Conference on Computer Aided Verification (CAV) is one such conference, sponsored and—this year—co-chaired by the AWS Automated Reasoning Group (ARG). CAV is dedicated to the advancement of the theory and practice of computer-aided formal analysis methods for hardware and software systems. This conference will take place next week, July 13-18, 2019 at The New School in New York City.

CAV covers the spectrum from theoretical results to concrete applications, with an emphasis on practical verification tools and the algorithms and techniques that are needed for their implementation. CAV also publishes scientific papers from the research community that it considers vital to continue spurring advances in hardware and software verification. One of the papers accepted this year, Reachability Analysis for AWS-based Networks, was co-authored by John Backes of AWS. I sat down with him to talk about the unique network-based analysis service, Tiros, that’s described in the paper and how it’s helping to set new standards for cloud network security.

Tell me about yourself: what made you decide to become a software engineer in the automated reasoning space?

It sounds cliche, but I have wanted to work with computers since I was a child. I recently was looking through my old school work, and I found an assignment from the second grade where I wrote about “What I wanted to be when I grow up.” I had drawn a crude picture of someone working on a computer and wrote “I want to be a computer programmer.” At university, I took a class on discrete mathematics where I learned about mathematical induction for the first time; it seemed like magic to me. I struggled a bit to develop proofs for the homework assignments and tests in the course. So the idea of writing a program to perform induction for me automatically became very compelling.

I decided to go to graduate school to do research related to proving the correctness of digital circuits. After graduating, I built automated reasoning tools for proving the correctness of software that controls airplanes and helicopters. I joined AWS because I wanted to prove properties about systems that are used by almost everyone.

I understand that your research paper on Tiros was recently published by CAV. What does the research paper cover?

Many influential papers in the space of automated reasoning have been published in CAV over the past three decades. We are publishing a paper at CAV 2019 about three different types of automated reasoning tools we used in the development of Tiros. It discusses the different formal reasoning tools and techniques we used, which of them were able to scale, and which were not. The paper gives readers a blueprint for how they could build their own automated reasoning services on AWS.

What is Tiros? How is it being used in Amazon Inspector?

Tiros answers reachability questions about Amazon Virtual Private Cloud (Amazon VPC) networks. It allows customers to answer questions like “Which of my EC2 instances are reachable from the internet?” and “Is it possible for this Elastic Network Interface (ENI) to send traffic to that ENI?” Amazon Inspector uses Tiros to power its recently launched Network Reachability Rules package. Customers can use this rules package to produce findings about how traffic originating from outside their accounts can reach their Amazon EC2 instances (for example, via an internet gateway, elastic load balancer, or virtual private gateway) and via which ports. Inspector also makes suggestions about how to remediate findings that a customer would like to eliminate. For example, if a customer has an EC2 instance that has port 22 (commonly associated with SSH) open to the internet, Amazon Inspector will suggest what security group needs to be changed to eliminate this finding.

Why are networks difficult to understand? How is Tiros helping to solve that problem?

As customers add more components and open them up to access from more addresses, the number of possible paths that traffic can take through a network increases exponentially. It may be feasible to test all of the paths through a network with a dozen computers, but it would take longer than the heat death of the universe to test all possible paths of a network with hundreds of components (elastic load balancers, NAT gateways, network access control lists, EC2 instances, and so on). Tiros reasons about all possible network paths completely, using “symbolic methods,” where it does not send any packets but instead treats the network as a mathematical object. It does this by gathering information about how a VPC is configured using the describe APIs of relevant services. It takes this information and generates a set of logical constraints. It then proves properties about these sets of constraints using something called an SMT solver [Editor’s note: discussed below].

Tiros relies on the use of automated reasoning techniques and SMT solvers to provide customers with a better understanding of potential network vulnerabilities. Can you explain what these concepts are and how they’re being used in Tiros?

SMT stands for Satisfiability Modulo Theories. SMT solvers are general purpose software tools that solve a collection of mathematical constraints. The algorithms and heuristics that power these tools have been steadily improving over the past three decades. This means that if you can translate a problem into a form that can solved by an SMT solver then you can take advantage of highly optimized algorithms that have been continuously improved over decades. There are tutorials online about how to use SMT solvers to provide solutions to all sorts of interesting constraints problems. Another AWS service called Zelkova uses SMT solvers to answer questions about IAM policies. Tiros uses an SMT solver called MonoSAT to encode reachability constraints about VPC networks. The figure below shows how we encode constraints about what types of packets are allowed to flow from a subnet to an ENI:
 
Figure: The reachability constraints that Tiros generates for packets flowing between subnets and ENIs

This diagram is from the CAV paper. It illustrates the constraints that Tiros generates to reason about packets moving from subnets to ENIs. Informally, these constraints say that a packet is allowed to flow from an ENI out to its subnet’s route table if the source IP address of the packet is the same as the source IP address of the ENI. Likewise, a packet can flow from a subnet to an ENI if the destination IP address of the packet is the same as that of the ENI.

Tiros generates all sorts of constraints like this to represent the rules of routing in VPCs. If the SMT solver is able to find a solution to satisfy all of the constraints, then this corresponds to a valid path that a packet can flow through the VPC from some source to some destination. Someone using Tiros can then inspect these paths to determine the source of a potential network misconfiguration.
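
[Editor’s note: as a toy illustration of this style of encoding, here is a sketch using the Z3 SMT solver’s Python bindings. Tiros itself uses MonoSAT and a far richer network model, so the variables and constraints below are illustrative only.]

from z3 import BitVec, BitVecVal, Bool, Solver, Implies, sat

s = Solver()

# Symbolic packet header fields: IPv4 addresses as 32-bit bit-vectors.
src = BitVec('packet_src', 32)
dst = BitVec('packet_dst', 32)

eni_ip = BitVecVal(0x0A000005, 32)  # illustrative ENI address, 10.0.0.5

# Flow variables: may the packet traverse each hop?
eni_to_subnet = Bool('eni_to_subnet')
subnet_to_eni = Bool('subnet_to_eni')

# In the spirit of the paper's constraints: a packet may flow from an ENI to
# its subnet only if its source IP matches the ENI, and from a subnet to an
# ENI only if its destination IP matches the ENI.
s.add(Implies(eni_to_subnet, src == eni_ip))
s.add(Implies(subnet_to_eni, dst == eni_ip))

# Ask the solver: is there a packet that makes both hops?
s.add(eni_to_subnet, subnet_to_eni)
if s.check() == sat:
    model = s.model()
    print('Example packet:', model[src], '->', model[dst])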

Is Tiros helping customers meet their compliance requirements? How?

Many customers need to meet compliance standards such as PCI, FedRAMP, and HIPAA. The requirements in these standards call for evidence of properly configured network controls. For example, Requirement 11 of the PCI DSS gives guidance to regularly perform penetration testing and network vulnerability scans. Customers can use Amazon Inspector to automatically schedule assessments on a regular cadence to generate evidence that they can use to help meet this requirement.

What do you tell your friends and family about what you do?

I tell them that AWS is responsible for the security of the cloud, and AWS customers are responsible for their security in the cloud. AWS refers to this concept as the Shared Responsibility Model. I explain that I work on a technology called Tiros that automatically produces mathematical proofs to enable AWS customers to build secure applications in the cloud.

What’s next for Tiros? For automated reasoning at AWS?

AWS is constantly adding new networking features. For example, we recently announced support for Direct Connect in Transit Gateway. Tiros is continuously updated to reason about these new services and features so customers who use the service can see new reachability results as they use new VPC features. Right now, we are really focused on how Tiros can be used to help customers with compliance. We plan to integrate Tiros results into other services to help produce evidence of compliance that customers can provide to auditors.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

Top 10 Security Blog posts in 2019 so far

Post Syndicated from Tom Olsen original https://aws.amazon.com/blogs/security/top-10-security-blog-posts-in-2019-so-far/

Twice a year, we like to share what’s been popular to let you know what everyone’s reading and so you don’t miss something interesting.

One of the top posts so far this year has been the registration announcement for the re:Inforce conference that happened last week. We hope you attended or watched the keynote live stream. Because the conference is now over, we omitted this from the list.

As always, let us know what you want to read about in the Comments section below – we read them all and appreciate the feedback.

The top 10 posts from 2019 based on page views

  1. How to automate SAML federation to multiple AWS accounts from Microsoft Azure Active Directory
  2. How to centralize and automate IAM policy creation in sandbox, development, and test environments
  3. AWS awarded PROTECTED certification in Australia
  4. Setting permissions to enable accounts for upcoming AWS Regions
  5. How to use service control policies to set permission guardrails across accounts in your AWS Organization
  6. Alerting, monitoring, and reporting for PCI-DSS awareness with Amazon Elasticsearch Service and AWS Lambda
  7. Updated whitepaper now available: Aligning to the NIST Cybersecurity Framework in the AWS Cloud
  8. How to visualize Amazon GuardDuty findings: serverless edition
  9. Guidelines for protecting your AWS account while using programmatic access
  10. How to quickly find and update your access keys, password, and MFA setting using the AWS Management Console

If you’re new to AWS and are just discovering the Security Blog, we’ve also compiled a list of older posts that customers continue to find useful.

The top 10 posts of all time based on page views

  1. Where’s My Secret Access Key?
  2. Writing IAM Policies: How to Grant Access to an Amazon S3 Bucket
  3. How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
  4. Securely Connect to Linux Instances Running in a Private Amazon VPC
  5. Writing IAM Policies: Grant Access to User-Specific Folders in an Amazon S3 Bucket
  6. Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article
  7. How to Connect Your On-Premises Active Directory to AWS Using AD Connector
  8. IAM Policies and Bucket Policies and ACLs! Oh, My! (Controlling Access to S3 Resources)
  9. A New and Standardized Way to Manage Credentials in the AWS SDKs
  10. How to Control Access to Your Amazon Elasticsearch Service Domain

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tom Olsen

Tom shares responsibility for the AWS Security Blog with Becca Crockett. If you’ve got feedback about the blog, he wants to hear it in the Comments here or in any post. In his free time, you’ll either find him hanging out with his wife and their frog, in his woodshop, or skateboarding.

author photo

Becca Crockett

Becca co-manages the Security Blog with Tom Olsen. She enjoys guiding first-time blog contributors through the writing process, and she likes to interview people. In her free time, she drinks a lot of coffee and reads things. At work, she also drinks a lot of coffee and reads things.

Re:Inforce 2019 wrap-up and session links

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/reinforce-2019-wrap-up-and-session-links/


A big thank you to the attendees of the inaugural AWS re:Inforce conference for two successful days of cloud security learning. As you head home and look toward next steps for your organization (or if you weren’t able to attend and want to know what all the fuss was about), check out some of the session videos. You can watch the keynote to hear from our AWS CISO Steve Schmidt, view the full list of recorded conference sessions on the AWS YouTube channel, or check out popular sessions by track below.

Re:Inforce leadership sessions

Listen to cloud security leaders talk about key concepts from each track:

Popular sessions by track

View sessions that you might have missed or want to re-watch. (“Popular” determined by number of video views at the time this post was published.)

Security Deep Dive

View the full list of Security Deep Dive break-out sessions.

The Foundation

View the full list of The Foundation break-out sessions.

Governance, Risk & Compliance

View the full list of Governance, Risk & Compliance break-out sessions.

Security Pioneers

View the full list of Security Pioneers break-out sessions.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profiles: Mark Ryland, Director, Office of the CISO

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-mark-ryland-director-office-of-the-ciso/


Mark Ryland at the AWS Summit Berlin keynote

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS and what’s your current role?

I’ve been at AWS for almost eight years. For the first six and a half years, I built the Solutions Architecture and Professional Services teams for AWS’s worldwide public sector sales organization—from five people when I joined, to many hundreds some years later. It was an amazing ride to build such a great team of cloud technology experts.

About a year and a half ago, I transitioned to the AWS Security team. On the Security team, I run a much smaller team called the Office of the CISO. We help manage interaction between our customers and the leadership team for AWS Security. In addition, we have a number of internal projects that we work on to improve interaction and information flow between the Security team and various AWS service teams, and between the AWS security team and the Amazon.com security team.

Why is your team called “the Office of the CISO”?

A lot of people want to talk to Steve Schmidt, our Chief Information Security Officer (CISO) at AWS. If you want to talk to him, it’s very likely that you’re going to talk to me or to my team as a part of that process. There’s only one of him, and there are a few of us. We help Steve scale a bit, and help more customers have direct interaction with senior leadership in AWS Security.

We also provide guidance and leadership to the broader AWS security community, especially to the customer-facing side of AWS. For example, we’re leaders of the Security and Compliance Technical Field Community (TFC) for AWS. The Security TFC is made up of subject matter experts in solutions architecture, professional services, technical account management, and other technical disciplines. We help them to understand and communicate effectively with customers about important security and compliance topics, and to gather customer requirements and funnel them to the right places.

What’s your favorite part of your job?

I love communicating about technology — first diving deep to figure it out for myself, and then explaining it to others. And I love interacting with our customers, both to explain our platform and what we do, and, equally important, to get their feedback. We constantly get great input and great ideas from customers, and we try to leverage that feedback into continuous improvement of our products and services.

What does cloud security mean to you, personally? Why is it a topic you’re passionate about?

I remember being at a private conference on cybersecurity. It was government-oriented, and organized by a Washington DC-based think-tank. A number of senior government officials were talking about challenges in cybersecurity. In the middle of an intense discussion about the big challenges facing the industry, a former, very senior official in the U.S. Government intelligence community said (using a golfing colloquialism), “The great thing about the cloud is that it’s a Mulligan; it’s a do-over. When we make the cloud transition, we can finally do the right things when it comes to cybersecurity.”

There’s a lot of truth to that, just in terms of general IT modernization. The cloud simply makes security easier. Not “easy” — there are still challenges. But you’re much more equipped to do the right thing—to build automation, to build tooling, and to take full advantage of the base protections that are built into the platform. With a little bit of care, what you do is going to be better than what you did before. The responsibility that remains for you as the customer is still significant, but because everything is software-defined, you get far more visibility and control. Because everything is API-driven, you can automate just about everything.

Challenges remain; I want to reiterate that it’s never easy to do security right. But it’s so much easier when you don’t have to run the entire stack from the concrete floor up to the application, and when you can rely on the inherent visibility and control provided by a software-defined environment. In short, cloud migration represents the industry’s best opportunity for making big improvements in IT security. I love being in the center of that change for the better, and helping to make it real.

What initiatives are you currently working on that you’re particularly excited about?

Two things. First, we’re laser-focused on improving our AWS Identity and Access Management capabilities. They’re already very sophisticated and very powerful, but they are somewhat uneven across our services, and not as easy to use as they should be. I’m on the periphery of that work, but I’m actively involved in scoping out improvements. One recent example is a big advance in the capabilities of Service Control Policies (SCPs) within AWS Organizations. These now allow extremely fine-grained controls — as expressive as IAM policies — that can easily be applied globally across dozens or hundreds of AWS accounts. For example, you can express a global policy like “nobody but [some very special role] can attach an internet gateway to my VPCs, full stop.”
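
[Editor’s note: as an illustrative sketch of that example, an SCP like the following denies attaching an internet gateway to anyone except one designated role. The role name, policy name, and exact policy content are hypothetical, not an AWS-published policy.]

import json
import boto3

organizations = boto3.client('organizations')

# Illustrative SCP: deny attaching internet gateways unless the request
# comes from one designated role (hypothetical role name).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:AttachInternetGateway",
        "Resource": "*",
        "Condition": {
            "StringNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/NetworkAdmin"
            }
        }
    }]
}

organizations.create_policy(
    Name='DenyAttachIgwExceptNetworkAdmin',  # hypothetical policy name
    Description='Only NetworkAdmin may attach internet gateways',
    Type='SERVICE_CONTROL_POLICY',
    Content=json.dumps(scp)
)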

I’m also a networking geek, and another area I’ve been actively working on is improvements to our built-in networking security features. People have been asking for greater visibility and control over their VPCs. We have a lot of great features like security groups and network ACLs, but there’s a lot more we can and will do. For example, customers are looking for more visibility into what’s going on inside their VPCs beyond our existing VPC Flow Logs feature. We have an exciting announcement at our re:Inforce conference this week about some new capabilities in this area!
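For readers who haven’t tried the existing feature, here’s a minimal sketch of enabling VPC Flow Logs for a single VPC with boto3, delivering records to an S3 bucket. The VPC ID and bucket ARN are placeholders.

```python
# Minimal sketch: turn on VPC Flow Logs for one VPC, delivered to S3.
# The VPC ID and bucket ARN below are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],                   # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",                                       # accepted and rejected traffic
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",   # placeholder bucket ARN
)
```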

You’ll be speaking at re:Inforce about the security benefits of running EC2 instances on the AWS Nitro architecture. At a high level, what’s so innovative about Nitro, and how does it enable better security?

The EC2 Nitro architecture is a fundamental re-imagining of the best way to build a secure virtualization platform. I don’t think there’s anything else like it in the industry. We’ve taken a lot of the complicated software that’s needed for virtualization, which normally runs in a privileged copy of an operating system — the “domain 0,” or “dom0” to use Xen terminology, but present in all modern hypervisors — and we’ve completely eliminated it. All those features are now implemented by custom software and hardware in a set of powerful co-processor computers inside the same physical box as the main Intel processor system board. The Nitro computers present virtual devices to the mainboard as if they were actual hardware devices. You might say the main system board — despite its powerful Intel Xeon processor and big chunks of memory — is really the “co-processor” in these systems; I call it the “customer workload co-processor!” It’s the main Nitro controller and not the system mainboard that’s fundamentally in charge of the overall system, providing a root of trust and a secure layer between the mainboard and the outside world.

There are a bunch of great security benefits that flow from this redesign. For example, with the elimination of the dom0 trusted operating system running on the mainboard, we’ve completely eliminated interactive access to these hosts. There’s no SSH, no RDP, no interactive software mechanisms that allow direct human access. I could go on and on, but I’ll stop there — you’ll have to come to my talk on Wednesday! And of course, we’ll post the video online afterward.

You’re also involved with a session to encourage customers to set up “state-of-the-art encryption.” In your view, what are some of the key elements of a “state-of-the-art” approach to encryption?

I came up with the original idea for the session, but was able to hand it off to an even better-suited speaker, so now I’ll just be there to enjoy it. Colm MacCarthaigh will be presenting. Colm is a senior principal engineer on the EC2 networking team, but he’s also the genius behind a number of important innovations in security and networking across AWS. For example, he did some of the original design work on the “shuffle sharding” techniques we use broadly across AWS to improve availability and resiliency for multi-tenanted services. Later, he came up with the idea for, and in a few weeks of intense coding wrote the first version of, s2n, our open source TLS implementation that provides far better security than the implementations typically used in the industry. He was also a significant contributor to the TLS 1.3 specification. I encourage everyone to follow him on Twitter, where you’ll learn all kinds of interesting things about cryptography, networking, and the like.

Now, to finally answer your question: Colm will be talking about how AWS does more and more encryption for you automatically, and how multiple layers of encryption can help address different kinds of threats. For example, without actually breaking TLS encryption, researchers have shown that they can figure out the content of an encrypted voice over IP (VoIP) call simply by analyzing the timing and size of the packets. So wrapping TLS sessions inside other encryption layers is a really good idea. Colm will talk about the importance of layered encryption, plus a bunch of other great topics: how AWS makes it easy to use encryption; where we do it automatically even if you don’t ask for it; how we’re inventing new, more secure means for key distribution; and fun stuff like that. It will be a blast!
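As a rough illustration of the layering idea (my own sketch, not material from the talk), the following encrypts a payload with a KMS-generated data key before it ever crosses a TLS connection, so the transport layer isn’t the only protection. The key alias is a placeholder, and it assumes boto3 plus the cryptography package.

```python
# Sketch of layered encryption: encrypt the payload itself with a KMS data key
# before sending it over TLS. Key alias is a placeholder; requires boto3 and
# the 'cryptography' package.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")

# Ask KMS for a fresh 256-bit data key; only the encrypted copy should be stored.
key = kms.generate_data_key(KeyId="alias/example-app-key", KeySpec="AES_256")

nonce = os.urandom(12)
ciphertext = AESGCM(key["Plaintext"]).encrypt(nonce, b"payload bytes", None)

# Send (key["CiphertextBlob"], nonce, ciphertext) over the TLS channel; the
# receiver calls kms.decrypt() on CiphertextBlob to recover the data key,
# then decrypts the payload locally.
```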

What changes do you hope we’ll see across the global security and compliance landscape over the next 5 years?

I think that with innovations like the Nitro architecture for EC2, and with our commitment to continually improving and strengthening other security features and enabling greater automation around things like identity management and anomaly detection, we will come to a point where people will realize that the cloud, in almost every case, is more secure than an on-premises environment. I don’t mean to say that you couldn’t go outside of the cloud and build something secure (as long as you are willing to spend a ton of money). But as a general matter, cloud will become the default option for secure processing of very sensitive data.

We’re not quite there yet, in terms of widespread perception and understanding. There are still quite a few people who haven’t dug very far below the surface of “what is cloud.” There is still a common, visceral reaction to the idea of “public cloud” as being risky. People object to ideas like multitenancy, where you’re sharing physical infrastructure with other customers, as if it’s somehow inherently risky. There are risks, but they are so well mitigated, and we have so much experience controlling those risks, that they’re far outweighed by the big security benefits. Very consistently, as customers become more educated and experienced with the cloud, they tell us that they feel more secure in their cloud infrastructure than they did in their on-premises world. Still, that’s not currently the first reaction. People still start by thinking of the cloud as risky, and it takes time to educate them and change that perspective. So there’s still some important work ahead of us.

What’s your favorite way to relax?

It’s funny: now that I’m getting old, I’m reverting to some of the pursuits and hobbies of my youth. When I was a teenager I was passionate about cycling. I raced bicycles extensively at the regional and national level, on both road and track, from ages 14 to 18. A few minutes of my claim to 15 minutes of Warholian fame were used up by being in a two-man breakaway with 17-year-old Greg LeMond in a road race in Arizona, although he beat me and everyone else resoundingly in the end! I’ve ridden road bikes and done a bit of mountain biking over the years, but I’m getting back into it now and enjoying it immensely. Of course, there’s far more technology to play with these days, and I can’t resist. I splurged on an expensive pair of pedals with power meters built in, so now I get detailed data from every ride that I can analyze to prove mathematically that I’m not in very good shape.

One of my other hobbies back in my teenage years was playing guitar — mostly folk-rock acoustic, but also electric and bass guitar in garage bands. That’s another activity I’ve started again. Fortunately, my kids, who are now around college-age plus or minus, all love the music from the 60s and 70s that I dust off and play, and they have great voices, so we have a lot of fun jamming and singing harmonies together.

What’s one thing that a visitor to your hometown of Washington, DC should experience?

The Washington DC area is famous for lots of great tourist attractions. But if you enjoy Michelin Guide-level dining experiences, I’d recommend a restaurant right in my neighborhood. It’s called L’Auberge Chez François, and it’s quite famous. It features Alsatian food (from the eastern region of France, along the German border). It’s an amazing restaurant that’s been there for almost 50 years, and it continues to draw a clientele from across the region and around the world. It’s always packed, so get reservations well in advance!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has more than 28 years of experience in the technology industry and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization and public policy. Prior to his current role, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.

Singapore financial services: new resources for customer side of the shared responsibility model

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/singapore-financial-services-new-resources-for-customer-side-of-shared-responsibility-model/

Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond.