Tag Archives: Security, Identity & Compliance

Trimming AWS WAF logs with Amazon Kinesis Firehose transformations

Post Syndicated from Tino Tran original https://aws.amazon.com/blogs/security/trimming-aws-waf-logs-with-amazon-kinesis-firehose-transformations/

In an earlier post, Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight, published on March 28, 2019, the authors showed you how to stream WAF logs with Amazon Kinesis Firehose for visualization using QuickSight. This approach used no filtering of the logs so that you could visualize the full data set. However, you are often only interested in seeing specific events. Or you might be looking to minimize log size to save storage costs. In this post, I show you how to apply rules in Amazon Kinesis Firehose to trim down logs. You can then apply the same visualizations you used in the previous solution.

AWS WAF is a web application firewall that supports full logging of all the web requests it inspects. For each request, AWS WAF logs the raw HTTP/S headers along with information on which AWS WAF rules were triggered. Having complete logs is useful for compliance, auditing, forensics, and troubleshooting custom and Managed Rules for AWS WAF. However, for some use cases, you might not want to log all of the requests inspected by AWS WAF. For example, to reduce the volume of logs, you might only want to log the requests blocked by AWS WAF, or you might want to remove certain HTTP header or query string parameter values from your logs. In many cases, unblocked requests are already captured in your CloudFront access logs or web server logs, so keeping them in your AWS WAF logs produces redundant data, while logging blocked traffic helps you identify bad actors and root cause false positives.

In this post, I’ll show you how to create an Amazon Kinesis Data Firehose stream to filter out unneeded records, so that you only retain log records for requests that were blocked by AWS WAF. From here, the logs can be stored in Amazon S3 or directed to SIEM (Security information and event management) and log analysis tools.

To simplify things, I’ll provide you with a CloudFormation template that will create the resources highlighted in the diagram below:
 

Figure 1: Solution architecture

  1. A Kinesis Data Firehose delivery stream that receives log records from AWS WAF.
  2. An IAM role for the Kinesis Data Firehose delivery stream, with permissions needed to invoke Lambda and write to S3.
  3. A Lambda function used to filter out WAF records matching the default action before the records are written to S3.
  4. An IAM role for the Lambda function, with the permissions needed to create CloudWatch logs (for troubleshooting).
  5. An S3 bucket where the WAF logs will be stored.

Prerequisites and assumptions

  • In this post, I assume that the AWS WAF default action is configured to allow requests that don't explicitly match a blocking WAF rule, so I'll show you how to omit any records matching the WAF default action. (If you want to confirm your WebACL's default action first, see the CLI check that follows this list.)
  • You need to already have an AWS WAF WebACL created. In this example, you'll use a WebACL generated from the AWS WAF OWASP Top 10 template. For more information on deploying AWS WAF to a CloudFront or ALB resource, see the Getting Started page.
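If you'd like to confirm the default action of your WebACL before you begin, the following AWS CLI sketch shows one way to do it. It assumes an AWS WAF Classic WebACL on CloudFront; the WebACL ID is a placeholder you'd replace with one of your own, and you'd use the waf-regional commands instead if your WebACL is attached to an Application Load Balancer.

    # List your WAF Classic WebACLs (use "aws waf-regional" for ALB-attached WebACLs)
    aws waf list-web-acls --limit 10

    # Inspect the default action of a specific WebACL; replace the ID with one
    # returned by the previous command. For this walkthrough, the output should be ALLOW.
    aws waf get-web-acl --web-acl-id <your-web-acl-id> \
        --query 'WebACL.DefaultAction.Type'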

Step 1: Create a Kinesis Data Firehose delivery stream for AWS WAF logs

In this step, you’ll use the following CloudFormation template to create a Kinesis Data Firehose delivery stream that writes logs to an S3 bucket. The template also creates a Lambda function that omits AWS WAF records matching the default action.

Here’s how to launch the template:

  1. Open CloudFormation in the AWS console.
  2. For WAF deployments on Amazon CloudFront, select region US-EAST-1. Otherwise, create the stack in the same region in which your AWS WAF Web ACL is deployed.
  3. Select the Create Stack button.
  4. In the CloudFormation wizard, select Specify an Amazon S3 template URL and copy and paste the following URL into the text box, then select Next:
    https://s3.amazonaws.com/awsiammedia/public/sample/TrimAWSWAFLogs/KinesisWAFDeliveryStream.yml
  5. On the options page, leave the default values and select Next.
  6. Specify the following and then select Next:
    1. Stack name: (for example, kinesis-waf-logging). Make sure to note your stack name, as you’ll need to provide it later in the walkthrough.
    2. Buffer size: The size, in MB, of incoming records that Kinesis Data Firehose buffers before processing them.
    3. Buffer interval: The interval, in seconds, for which Kinesis Data Firehose buffers incoming records before processing them.

    Note: Kinesis Data Firehose triggers data delivery based on whichever buffer condition is satisfied first. This CloudFormation template sets the default buffer size to 3 MB and the buffer interval to 900 seconds, to match the maximum transformation buffer size and interval that the template configures. To learn more about Kinesis Data Firehose buffer conditions, read this documentation.

     

    Figure 2: Specify the stack name, buffer size, and buffer interval

  7. Select the check box for I acknowledge that AWS CloudFormation might create IAM resources and choose Create.
  8. Wait for the template to finish creating the resources. This will take a few minutes. On the CloudFormation dashboard, the status next to your stack should say CREATE_COMPLETE.
  9. From the AWS Management Console, open Amazon Kinesis and find the Data Firehose delivery stream on the dashboard. Note that the name of the stream will start with aws-waf-logs- and end with the name of the CloudFormation stack. This prefix is required in order to configure AWS WAF to write logs to the Kinesis stream.
  10. From the AWS Management Console, open AWS Lambda and view the Lambda function created from the CloudFormation template. The function name should start with the Stack name from the CloudFormation template. I included the function code generated from the CloudFormation template below so you can see what’s going on.

    Note: Through CloudFormation, the code is deployed without indentation. To format it for readability, I recommend using the code formatter built into Lambda under the edit tab. This code can easily be modified for custom record filtering or transformations.

    
        'use strict';
    
        exports.handler = (event, context, callback) => {
            /* Process the list of records and drop those containing Default_Action */
            const output = event.records.map((record) => {
                const entry = Buffer.from(record.data, 'base64').toString('utf8');
                if (!entry.match(/Default_Action/g)){
                    return {
                        recordId: record.recordId,
                        result: 'Ok',
                        data: record.data,
                    };
                } else {
                    return {
                        recordId: record.recordId,
                        result: 'Dropped',
                        data: record.data,
                    };
                }
            });
        
            console.log(`Processing completed. Records processed: ${output.length}.`);
            callback(null, { records: output });
        };
        

You now have a Kinesis Data Firehose stream that AWS WAF can use for logging records.
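If you want to verify the filtering behavior before wiring up AWS WAF, a quick smoke test is to push a couple of hand-crafted records directly into the delivery stream and then check what lands in S3 once the buffer interval elapses. The sketch below is assumption-heavy: the stream and bucket names are placeholders, the sample JSON only mimics the terminatingRuleId field of real AWS WAF logs, and it assumes AWS CLI v2 on Linux, which expects the record data to be base64-encoded.

    STREAM=aws-waf-logs-<your-stack-name>

    # This record mimics one allowed by the default action; the Lambda function should drop it.
    aws firehose put-record \
        --delivery-stream-name "$STREAM" \
        --record "Data=$(echo -n '{"action":"ALLOW","terminatingRuleId":"Default_Action"}' | base64 -w 0)"

    # This record mimics a blocked request; it should be delivered to S3.
    aws firehose put-record \
        --delivery-stream-name "$STREAM" \
        --record "Data=$(echo -n '{"action":"BLOCK","terminatingRuleId":"ExampleBlockRule"}' | base64 -w 0)"

    # After the 900-second buffer interval, only the second record should appear
    # under the date-based prefix in your delivery bucket.
    aws s3 ls s3://<your-delivery-bucket>/ --recursive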

Cost Considerations

This template sets the Kinesis transformation buffer size to 3MB and buffer interval to 900 seconds (the maximum values) in order to reduce the number of Lambda invocations used to process records. On average, an AWS WAF record is approximately 1-1.5KB. With a buffer size of 3MB, Kinesis will use 1 Lambda invocation per 2000-3000 records. Visit the AWS Lambda website to learn more about pricing.

Step 2: Configure AWS WAF Logging

Now that you have an active Amazon Kinesis Firehose delivery stream, you can configure your AWS WAF WebACL to turn on logging.

  1. From the AWS Management Console, open WAF & Shield.
  2. Select the WebACL for which you would like to enable logging.
  3. Select the Logging tab.
  4. Select the Enable Logging button.
  5. Next to Amazon Kinesis Data Firehose, select the stream that was created from the CloudFormation template in Step 1 (for example, aws-waf-logs-kinesis-waf-stream) and select Create.

Congratulations! Your AWS WAF WebACL is now configured to send records of requests inspected by AWS WAF to Kinesis Data Firehose. From there, records that match the default action will be dropped, and the remaining records will be stored in S3 in JSON format.
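If you'd rather enable logging from the command line than the console, AWS WAF Classic exposes the same operation through put-logging-configuration. This is a sketch with placeholder ARNs; use the waf-regional variant (in the WebACL's Region) if your WebACL is attached to an Application Load Balancer.

    aws waf put-logging-configuration \
        --logging-configuration '{
            "ResourceArn": "<your-web-acl-arn>",
            "LogDestinationConfigs": ["<your-aws-waf-logs-delivery-stream-arn>"]
        }'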

Below is a sample of the logs generated from this example. Notice that there are only blocked records in the logs.
 

Figure 3: Sample logs

Conclusion

In this blog post, I've provided you with a CloudFormation template to generate a Kinesis Data Firehose stream that can be used to log requests blocked by AWS WAF, omitting requests matching the default action. By omitting the default action, I have reduced the number of log records that must be reviewed to identify bad actors, tune new WAF rules, and root cause false positives. For unblocked traffic, consider using CloudFront's access logs with Amazon Athena or CloudWatch Logs Insights to query and analyze the data. To learn more about AWS WAF logs, read our developer guide for AWS WAF.

If you have feedback about this blog post, please submit it in the Comments section below. If you have issues with AWS WAF, start a thread on the AWS WAF forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tino Tran

Tino is a Senior Edge Specialized Solutions Architect based out of Florida. His main focus is to help companies deliver online content in a secure, reliable, and fast way using AWS Edge Services. He is an experienced technologist with a background in software engineering, content delivery networks, and security.

AWS Security Profiles: CJ Moses, Deputy CISO and VP of Security Engineering

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/aws-security-profiles-cj-moses-deputy-ciso-and-vp-of-security-engineering/

We recently sat down with CJ Moses, Deputy Chief Information Security Officer (CISO), to learn about his day-to-day as a cybersecurity executive. He also shared more about his passion for racecar driving and why AWS is partnering with the SRO GT World Challenge America series this year.


How long have you been with AWS, and what is your role?

I’ve been with AWS since December 2007. I came to AWS from the FBI, along with current AWS CISO Steve Schmidt and VP/Distinguished Engineer Eric Brandwine. Together, we started the east coast AWS office. I’m now the Deputy CISO and VP of Security Engineering at AWS.

What excites you most about your role?

I like that every day brings something new and different. In the security space, it’s a bit of a cat and mouse game. It’s our job to be very much ahead of the adversaries. Continuing to engineer and innovate to keep our customers’ data secure allows us to do that. I also love providing customers things that didn’t exist in the past. The very first initiative that Steve, Eric and I worked on when we came from the government sector was Amazon Virtual Private Cloud (Amazon VPC), as well as the underlying network that gives customers virtually isolated network environments. By the end of 2011, the entire Amazon.com web fleet was running on this VPC infrastructure. Creating this kind of scalable offering and allowing our customers to use it was, as we like to say, making history.

We continue to work on new features and services that make it easier to operate more securely in the cloud than in on-premises datacenters. Over the past several years, we’ve launched services like Amazon GuardDuty (threat detection), Amazon Macie (sensitive data classification), and most recently AWS Security Hub (comprehensive security and compliance status monitoring). I love working for a company that’s committed to paying attention to customer feedback—and then innovating to not only meet those needs, but exceed them.

What’s the most challenging part of your role?

Juggling all the different aspects of it. I have responsibility for a lot of things, from auditing the physical security of our data centers to the type of hypervisor we need to use when thinking about new services and features. In the past, it was a Xen hypervisor based on open source software, but today AWS has built hardware and software components to eliminate the previous virtualization overhead. The Nitro system, as we call it, has had performance, availability, and security engineered into it from the earliest design phases, which is the right way to do it. Being able to go that full breadth of physical to virtual is challenging, but these challenges are what energize and drive me.

You’re an avid racecar driver. Tell us a bit about why you got into racing. What are the similarities between racecar driving and your job?

I got into racecar driving years ago, by accident. I bought an Audi A4 and it came with a membership to the Quattro club and they sent me a flyer for a track day, which is essentially a “take your streetcar to the track” event. I was hooked and began spending more and more time at the track. I’ve found that racecar drivers are extremely type A and very competitive. And because Amazon is very much a driven, fast-moving company, I think that what you need to have in place to succeed at Amazon is similar to what you need to be great on the racetrack. I like to tell my AWS team that they should be tactically impatient, yet strategically patient, and that applies to motorsports equally. You can’t win the race if you wreck in the first turn, but you also can’t win if you never get off the starting line.

Another similarity is the need to be laser-focused on the task at hand. In both environments, you need to be able to clear your mind of distractions and think from a new perspective. At AWS, our customer obsession drives what we do, the services and offerings we create, and our company culture. When I get in a racecar, there’s no time to think about anything except what’s at hand. When I’m streaming down the straightaway doing 180 mph, I need to focus on when to hit the brakes or when to make the next turn. When I get out of that car, I can then re-focus and bring new perspective to work and life.

AWS is the official cloud and machine learning provider of the SRO GT World Challenge America series this year. What drove the decision to become a partner?

We co-sponsored executive summits with our partner, CrowdStrike, at the SRO Motorsports race venues last year and are doing the same thing this year. But this year, we thought that it made sense to increase brand awareness and gain access to the GT Paddock Club for our guests. Paddock access allows you to see the cars up close and talk to drivers. It’s like a backstage pass at a concert.

In the paddock, every single car and team has sponsors or is self-funded—it’s like a small business-to-business environment. During our involvement last year, we didn’t see those businesses connecting with each other. What we hope to do this year is elevate the cybersecurity learning experience. We’re bringing in cybersecurity executives from across a range of companies to start dialogues with one another, with us, and with the companies represented in the paddock. The program is designed to build meaningful relationships and cultivate a shared learning experience on cybersecurity and AWS services in a setting where we can provide a once-in-a-lifetime experience for our guests. The cybersecurity industry is driven by trust-based relationships and genuinely being there for our customers. I believe our partnership with the SRO GT World Challenge series will provide a platform that helps us reinforce this.

What’s the connection between racecars and cloud security? How are AWS Security services being used at the racetrack?

With racing, there are tremendous workloads, such as telemetry and data acquisition, that you can stream from a car—essentially, hundreds of channels of data. There are advanced processing requirements for computational fluid dynamics, for example, both of air dynamics around the outside of the car and of air intake into and exhaust out of the engine. All these workloads and all this data are proprietary to the racing teams: The last thing you want is a competing racing team getting that data. This issue is analogous to data protection concerns amongst today’s traditional businesses. Also similar to traditional businesses, many racing teams need to be able to share data with each other in specific ways, to meet specific needs. For example, some teams might have multiple cars and drivers. Each of those drivers will need varying levels of access to data. AWS enables them to share that data in isolation, ensuring teams share only what’s needed. This kind of isolation would be difficult to achieve with a traditional data center or in many other environments.

AWS is also being used in new ways by GT World Challenge to help racecar drivers and partners make more real-time, data-driven decisions. For the first time, drivers and other racing partners will be able to securely stream telemetry directly to the AWS cloud. This will help drivers better analyze their driving and which parts of the course they need to improve upon. Crew chiefs and race engineers will have the data along with the advanced analytics to help them make informed decisions, such as telling drivers when it’s the most strategic time to make a pit stop.

This data will also help enhance the fan experience later this year. Spectators will be privy to some of the data being streamed through AWS and used by drivers, giving them a more intimate understanding of the velocity at which decisions need to be made on the track. We hope fans will be excited by this innovation.

What do you hope AWS customers will gain from attending GT World Challenge races?

I think the primary value is the opportunity to build relationships with experts and executives in the cybersecurity space, while enjoying the racetrack experience. We want to continue operating at the speed of innovation for our customers, and being able to build trust with them face-to-face helps enable this. We also keenly value the opportunity for customers to provide us feedback to influence how we think and what we offer at AWS, and I believe these events will provide opportunities where these conversations can easily take place.

In addition, we’ll be teasing out information about AWS re:Inforce (our upcoming security conference in Boston this June) at the GT Paddock Club. This includes information about the content, what to expect at the conference, and key dates. For anyone who wants to learn more about this event, I encourage them to visit https://reinforce.awsevents.com/, where you can read about our different session tracks, Steve Schmidt’s keynote, and other educational experiences we have planned.

How are you preparing for the races you’ll be in this year?

I’ve never raced professionally before, so driving the Audi R8 LMS GT4 this season has been a new experience for me. I’ve got my second race coming up this weekend (April 12th-14th), at the Pirelli GT4 Challenge at Long Beach, where AWS is the title sponsor. To prepare at home, I have a racing simulator that uses AWS services, as well as three large monitors, a steering wheel, pedals, and a moving and vibrating seat that I train on. I take what I learn from this simulation to the track in the actual car any chance I get. As the late Jacob Levnon, a VP at AWS, was famous for saying, “There is no compression algorithm for experience.” I also do a lot of mental preparation by reading lap notes, watching videos of professional drivers, and working with my coaches. I’m grateful for the opportunity to be able to race this season and thank those who have helped me on this journey, both at AWS Security and on the racetrack.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profiles: Olivier Klein, Head of Emerging Technologies in the APAC region

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-olivier-klein-head-of-emerging-technologies-in-the-apac-region/

Leading up to AWS Summit Singapore, we’re sharing our conversation with keynote speaker Olivier Klein about his work with emerging technology and about the overlap between “emerging technology” and “cloud security.”


You’re the “Head of Emerging Technologies in the APAC region” on your team at AWS. What kind of work do you do?

I continuously explore new technologies. By “technologies”, I don’t only mean AWS services, but also technologies that exist in the wider market. My goal is to understand how these developments can help our customers chart the course of their own digital transformation. There’s a lot happening—including advances in AWS offerings. I’m seeing evolution in terms of core AWS compute, storage and database services all the way up to higher-level services such as AI, machine learning services, deep learning, augmented or virtual reality, and even cryptographically verifiable distributed data stores (such as blockchain). My role involves taking in all these various facets of “emerging technology” and answering the question: how do these innovations help our customers solve problems or improve their businesses? Then I work to provide best practices around which type of technology is best at solving which particular type of challenge.

Given the rapid pace of technological development, how do you keep track of what’s happening in the space?

My approach is two-fold. First, there’s the element of exploring new technologies and trying to wrap my own head around them to see how they can be useful. But I’m guided by the Amazon way of approaching a challenge: that is, I work backward from the customer. I try to closely monitor the types of challenges our customers are facing. Based on what I’ve seen first-hand, plus the feedback I’ve received from the rest of my team and from Solutions Architects out in the field on what customers are struggling with, I figure out which technologies would help address those pain points.

I’m a technologist; I get excited by technology. But I don’t believe in using new technologies just for the sake of using them. The technology should solve for a particular business outcome. For example, two of my recent areas of focus are artificial intelligence and machine learning. A lot of companies are either already using some form of machine learning or AI, or looking into it. But it’s not something you should do just for the sake of doing it. You need to figure out the specific business outcomes you want to achieve, and then decide which kinds of technology can help. Maybe machine learning is part of it. In the space of computer vision and natural language processing I’ve seen a lot of recent advancement that allows you to tackle new use cases and scenarios. But machine learning won’t always be the right kind of technology for you. So my primary focus is on helping customers make sense of what types of tech they should be using to address specific scenarios and solve for specific business outcomes.

What’s your favorite part of your job?

It’s really exciting to see how technology can come to life and solve interesting problems at scale across many different industries, and for customers of all sizes, from startups to medium-sized businesses to large enterprises. They all face interesting challenges and it’s rewarding to be able to assist in that problem-solving process.

How would you describe the relationship between “emerging tech” and “cloud security”?

Security is a changing landscape. Earlier, I mentioned that machine learning and AI are an area of emerging tech that I focus on and that a lot of customers are getting on top of. I think similar trends are happening in the cloud security space. Traditionally, if you think of “security,” you probably think about physical boundaries, firewalls, and boxes that you need to protect. But when you move to the cloud, you have to rethink that model—the cloud offers all sorts of new capabilities. Take for example Amazon Macie, a service that allows you to use machine learning to understand data access patterns, to classify your data, to understand which data sets have personally identifiable information, and to potentially serve as a protection mechanism to ensure your data privacy.

More broadly, a cloud environment fundamentally changes what you’re able to do with security: Everything is programmable. Everything can be event-driven. Everything is code. An entire infrastructure can be put together as code. By this, I mean that you have the ability to detect and understand changes within your environment as they happen. You can have automated rules, automated account configurations, and machine learning algorithms that verify any kind of change. These systems can not only make your environment fully auditable, they can prevent changes as they’re happening, whether that’s a potential threat or an alteration to the environment that could carry security risk. Before this, securing your environment meant going through approvals, setting and configuring servers, routers and firewalls, and putting a lot of boundaries around them. That approach can work, but it doesn’t scale well, and it doesn’t always accommodate this new world where people want to experiment and be agile without compromising on security.

Security remains the most important consideration—but if you move to the cloud, you have a plethora of services that enable you to create a controlled environment where any activity can immediately be checked against your security posture. Ultimately, this allows security professionals to become enablers. They can help people build effectively and securely, instead of the more traditional model of, “Here’s a list of all the things you can’t do.”

What are some of the most common misperceptions that you encounter about cloud security?

The cloud takes away the heavy lifting traditionally associated with security, and I’ve found that for some people, this is a difficult mental shift. AWS removes the entire problem of the physical boundaries and protections that you need to put in place to secure your servers and your data centers, and instead allows you to focus on securing and building applications.

Physical environments tend to foster a more reactive way of thinking about security. For example, you can log everything, and if something goes wrong, you can go back and check the logs to see what happened—but because there are so many manual interactions involved, it’s probably not a fast process. You’re always a little behind. AWS enables you to be much more proactive. For example, you might use AWS CloudTrail, which logs any kind of activity in your account against the entire AWS platform, and you might combine CloudTrail with AWS Config, which allows you to look at any configurational change within your environment and track it over a period of time. Combined, these services allow you to say, “If any change within my environment matches X set of rules, I want to be notified. If the change is compliant with the rules that I’ve set up, great—carry on. If it isn’t, I want to immediately revoke or remove the change, or maybe revoke the permissions or the credentials of the individual responsible for the change.” And we can give you a bunch of predefined rule sets that are ready to be compliant with certain scenarios, or you can build your own. Compare this to a physical data center: If someone goes and cuts a cable, how do you look into that? How long does it take? On AWS, any change can immediately be verified against your rule sets. You can immediately know what happened and can immediately and automatically take action. That’s a fundamental game changer for security—the ability to react as it happens. This difference is something that I really try to emphasize for our customers.

In your experience, how does the cloud adoption landscape differ between APAC and other markets?

In the Asia Pacific market, we have a lot of new companies starting to pop up and build against the entire global ecosystem, and against an entire global platform from their regions, which I think is really exciting. It’s a very fast-moving market. One of the key benefits of using AWS products and services is the tremendous agility that you get. You have the ability to build things fairly easily, create platforms and services at very large scale across multiple geographies, build them up, tear them down, and experiment with them using a plethora of services. I think in 2018, on average, we had three to five new capabilities made available to any developer—to any builder out there—every single day. That’s a fundamental game changer in the way you build systems. It’s really exciting to see what our customers are doing with all the new services and features.

What are some of the challenges—security or otherwise—that you see customers frequently face as they move to the cloud?

I think it comes back to that same challenge of showing customers that they don’t just need to take an existing model—whether their security infrastructure or anything else—and move it into a cloud environment without making any changes. You can do that, if you want. We provide you with many migration and integration services to do so effectively. But I really encourage customers to ask themselves, “How can I re-architect to optimize the benefits I’m getting from AWS for the specific use cases and applications I’m building?”

I believe that the true benefits of cloud computing come if you build in a way that’s either cloud-native or optimized for best practices. AWS allows you to build applications in a very agile, but also very lean manner. Look at concepts such as containerization, or even the idea of deploying applications that are completely serverless on top of AWS—you basically just deploy the code pieces for your application, we fully run and manage it, and you only pay for the execution time. Or look at storage or databases: traditionally, it might have made sense to put everything into a relational database. But if you want to build in a really agile, scalable, and cost-effective manner, that might not be the best option. And again, AWS provides you with so many choices: you can choose a database that allows you to look at relations, a database that allows you to run at hyper-growth scale across multiple geographies at the same time, a database for key-value pair stores, graph or time-series focused databases, and so on. There are many different ways you can build on AWS to optimize for your particular use case. Initially it might be hard to wrap your head around this new way of building modern applications, but I believe that the benefits in terms of agility, cost effectiveness, and sheer possible hyperscale without headaches are worth it.

You’re one of the keynote speakers at the Singapore Summit. What are some themes you’ll be focusing on?

I’ll be speaking on the first day, which is what we call the Tech Fest. My keynote will primarily focus on the technology behind AWS products and services, and on how we build modern applications and modern data architectures. By that, I mean that we’ll take a look into what modern application architectures look like, how to effectively make use of data and how to build highly-scalable applications that are portable. In the Asia Pacific region, there’s a strong interest in mobile-first or web-first design. So how do we build effectively for those platforms? I’ll use my talk to look at some of the elements of distributed computing: how do you effectively build for a global, large-scale user base? How does distributed computing work? How do you use the appropriate AWS services and techniques to ensure that your last mile, even in remote areas, is done correctly?

I’ll also talk about the concept of data analytics and how to build effective data analysis on top of AWS to get meaningful insights, potentially in real-time. Beyond insights, we’ll have a look at using AI and machine learning to further create better customer experiences. Then I’ll wrap up with a look into robotics. We’ll have a variety of different interesting live demos across all of these topics.

What are you hoping that your audience will do differently as a result of your keynote?

One of the things I want people to take away is that, while there are numerous options for exactly how you build on AWS, there are some very common patterns for applying best practices. Don’t just build your applications and platforms in the old, traditional way of monolithic applications and physical blocks of services and firewalls. Instead, ask yourself, “How do I design modern-day application architectures? How do I make use of the information and data I’m collecting to build a better customer experience? How do I choose the best tools for my use case?” These are all things that we’ll talk about during the keynote. The workshops and bootcamps during the rest of the event are then designed to give you hands-on experience figuring out how to make use of various AWS services and techniques so that you can build in a cloud-native manner.

What else should visitors to Singapore take the time to do or see while they’re there?

I used to live in Singapore (although I currently live in Hong Kong). So if you’re visiting, one thing you should definitely check out is the hawker centers, which are the local food courts, and where you can try some great local delicacies. One of my personal favorites is a dish called bak kut teh. If you’re into an herbal soup experience, you should check it out. And if you’ve never been to Singapore, go to the Marina Bay, take a picture with the Merlion, which is the national symbol of Singapore, and enjoy the wonderful landscape and skyline.

You have an advanced certification with the Professional Association of Diving Instructors. Where is your favorite place to dive?

I live very close to some of the Southeast Asian seas, which have wonderful dive spots all over. It’s hard to pick a favorite. But one that stands out is a place called Sipadan. Sipadan was one of my most amazing dive experiences: I did one of those morning dives where you go out on the boat, the sun is just about to come up, you jump into the sea, and the entire marine world wakes up. It’s a natural marine park, so even if you don’t scuba dive and just snorkel, there’s probably no place you can go to see more fish, and sharks, and turtles.

If you’ve never tried scuba diving, I’d recommend it. Snorkeling is great, but scuba diving gives you a fundamentally different experience. It’s much more calming. While snorkeling, you hear your breathing as you swim around. But if you scuba dive, and you’ve got good control of your buoyancy, you can just hover in the water and quietly watch aquatic life pass around you. With quiet like that, marine life is less afraid and approaches you more easily.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Olivier Klein

Olivier is a hands-on technologist with more than 10 years of experience in the industry and has been working for AWS across APAC and Europe to help customers build resilient, scalable, secure, and cost-effective applications and create innovative and data-driven business models.

Provable security podcast: automated reasoning’s past, present, and future with Moshe Vardi

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/provable-security-podcast-automated-reasonings-past-present-and-future-with-moshe-vardi/

AWS just released the first podcast of a new miniseries called Provable Security: Conversations on Next Gen Security. We published a podcast on provable security last fall, and, due to high customer interest, we decided to bring you a regular peek into this AWS initiative. This series will explore the unique intersection between academia and industry in the cloud security space. Specifically, the miniseries will cover how the traditionally academic field of automated reasoning is being applied at AWS at scale to help provide higher assurances for our customers, regulators, and the broader cloud industry. We’ll talk to individuals whose minds helped shape the history of automated reasoning, as well as learn from engineers and scientists who are applying automated reasoning to help solve pressing security and privacy challenges in the cloud.

This first interview is with Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology. Moshe describes the history of logic, automated reasoning, and formal verification. He discusses modern day applications in software and how principles of automated reasoning underlie core aspects of computer science, such as databases. The podcast interview highlights Moshe’s standout contributions to the formal verification and automated reasoning space, as well as the number of awards he’s received for his work. You can learn more about Moshe Vardi here.

Byron Cook, Director of the AWS Automated Reasoning Group, interviews Moshe and will be featured throughout the miniseries. Byron is leading the provable security initiative at AWS, which is a collection of technologies that provide higher security assurance to customers by giving them a deeper understanding of their cloud architecture.

You can listen to or download the podcast above, or visit this link. We’ve also included links below to many of the technology and references Moshe discusses in his interview.

We hope you enjoy the podcast and this new miniseries! If you have feedback, let us know in the Comments section below.

Automated reasoning public figures:

Automated techniques and algorithms:

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

AWS Security releases IoT security whitepaper

Post Syndicated from Momena Cheema original https://aws.amazon.com/blogs/security/aws-security-releases-iot-security-whitepaper/

We’ve published a whitepaper, Securing Internet of Things (IoT) with AWS, to help you understand and address data security as it relates to your IoT devices and the data generated by them. The whitepaper is intended for a broad audience interested in learning about AWS IoT security capabilities at a service-specific level, as well as for compliance, security, and public policy professionals.

IoT technologies connect devices and people in a multitude of ways and are used across industries. For example, IoT can help manage thermostats remotely across buildings in a city, efficiently control hundreds of wind turbines, or operate autonomous vehicles more safely. With all of the different types of devices and the data they transmit, security is a top concern.

The specific challenges that IoT technologies present have piqued the interest of governments worldwide, which are currently assessing what, if any, new regulatory requirements should take shape to keep pace with IoT innovation and the general problem of securing data. As a specific example, this whitepaper covers recent IoT-focused developments published by the National Institute of Standards and Technology (NIST) and the United Kingdom’s Code of Practice.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Momena Cheema

Momena is passionate about evangelizing the security and privacy capabilities of AWS services through the lens of global emerging technology and trends, such as Internet of Things, artificial intelligence, and machine learning through written content, workshops, talks, and educational campaigns. Her goal is to bring the security and privacy benefits of the cloud to customers across industries in both public and private sectors.

How to run AWS CloudHSM workloads on Docker containers

Post Syndicated from Mohamed AboElKheir original https://aws.amazon.com/blogs/security/how-to-run-aws-cloudhsm-workloads-on-docker-containers/

AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to generate and use your own encryption keys on the AWS Cloud. With CloudHSM, you can manage your own encryption keys using FIPS 140-2 Level 3 validated HSMs. Your HSMs are part of a CloudHSM cluster. CloudHSM automatically manages synchronization, high availability, and failover within a cluster.

CloudHSM is part of the AWS Cryptography suite of services, which also includes AWS Key Management Service (KMS) and AWS Certificate Manager Private Certificate Authority (ACM PCA). KMS and ACM PCA are fully managed services that are easy to use and integrate. You’ll generally use AWS CloudHSM only if your workload needs a single-tenant HSM under your own control, or if you need cryptographic algorithms that aren’t available in the fully-managed alternatives.

CloudHSM offers several options for you to connect your application to your HSMs, including PKCS#11, Java Cryptography Extensions (JCE), or Microsoft CryptoNG (CNG). Regardless of which library you choose, you’ll use the CloudHSM client to connect to all HSMs in your cluster. The CloudHSM client runs as a daemon, locally on the same Amazon Elastic Compute Cloud (EC2) instance or server as your applications.

The deployment process is straightforward if you’re running your application directly on your compute resource. However, if you want to deploy applications using the HSMs in containers, you’ll need to make some adjustments to the installation and execution of your application and the CloudHSM components it depends on. Docker containers don’t typically include access to an init process like systemd or upstart. This means that you can’t start the CloudHSM client service from within the container using the general instructions provided by CloudHSM. You also can’t run the CloudHSM client service remotely and connect to it from the containers, as the client daemon listens to your application using a local Unix Domain Socket. You cannot connect to this socket remotely from outside the EC2 instance network namespace.

This blog post discusses the workaround that you’ll need in order to configure your container and start the client daemon so that you can utilize CloudHSM-based applications with containers. Specifically, in this post, I’ll show you how to run the CloudHSM client daemon from within a Docker container without needing to start the service. This enables you to use Docker to develop, deploy and run applications using the CloudHSM software libraries, and it also gives you the ability to manage and orchestrate workloads using tools and services like Amazon Elastic Container Service (Amazon ECS), Kubernetes, Amazon Elastic Container Service for Kubernetes (Amazon EKS), and Jenkins.

Solution overview

My solution shows you how to create a proof-of-concept sample Docker container that is configured to run the CloudHSM client daemon. When the daemon is up and running, it runs the AESGCMEncryptDecryptRunner Java class, available on the AWS CloudHSM Java JCE samples repo. This class uses CloudHSM to generate an AES key, then it uses the key to encrypt and decrypt randomly generated data.

Note: In my example, you must manually enter the crypto user (CU) credentials as environment variables when running the container. For any production workload, you’ll need to carefully consider how to provide, secure, and automate the handling and distribution of your HSM credentials. You should work with your security or compliance officer to ensure that you’re using an appropriate method of securing HSM login credentials for your application and security needs.
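For example, one common pattern is to keep the CU password out of your shell history and container definitions by storing it in AWS Secrets Manager and fetching it at launch time. The sketch below is only illustrative: the secret name is hypothetical, it assumes the instance profile is allowed to call secretsmanager:GetSecretValue, and the docker run command mirrors the one used later in step 9 of this walkthrough.

    # Fetch a CU password stored as a plaintext secret (hypothetical secret name)
    HSM_PASSWORD=$(aws secretsmanager get-secret-value \
        --secret-id cloudhsm/cu-password \
        --query SecretString --output text)

    # Pass the credentials to the container at run time (image built later in this post)
    sudo docker run --env HSM_PARTITION=PARTITION_1 \
        --env HSM_USER=<your-cu-username> \
        --env HSM_PASSWORD="$HSM_PASSWORD" \
        jce_sample_client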

Figure 1: Architectural diagram

Prerequisites

To implement my solution, I recommend that you have basic knowledge of the following:

  • CloudHSM
  • Docker
  • Java

Here’s what you’ll need to follow along with my example:

  1. An active CloudHSM cluster with at least one active HSM. You can follow the Getting Started Guide to create and initialize a CloudHSM cluster. (Note that for any production cluster, you should have at least two active HSMs spread across Availability Zones.)
  2. An Amazon Linux 2 EC2 instance in the same Amazon Virtual Private Cloud in which you created your CloudHSM cluster. The EC2 instance must have the CloudHSM cluster security group attached—this security group is automatically created during the cluster initialization and is used to control access to the HSMs. You can learn about attaching security groups to allow EC2 instances to connect to your HSMs in our online documentation.
  3. A CloudHSM crypto user (CU) account created on your HSM. You can create a CU by following these user guide steps; a condensed sketch of the process is shown after this list.
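As a quick reference, creating a CU with cloudhsm_mgmt_util typically looks like the condensed sketch below. Treat it as an approximation of the user guide steps rather than a substitute for them: the CO and CU names and passwords are placeholders, and the exact commands can vary with your client version.

    # Start the management utility on an instance that already has the CloudHSM client configured
    /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg

    # Then, at the cloudhsm_mgmt_util prompt:
    #   enable_e2e
    #   loginHSM CO admin <co-password>
    #   createUser CU <cu-username> <cu-password>
    #   quit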

Solution details

  1. On your Amazon Linux EC2 instance, install Docker:
    
            # sudo yum -y install docker
            

  2. Start the docker service:
    
            # sudo service docker start
            

  3. Create a new directory and step into it. In my example, I use a directory named “cloudhsm_container.” You’ll use the new directory to configure the Docker image.
    
            # mkdir cloudhsm_container
            # cd cloudhsm_container           
            

  4. Copy the CloudHSM cluster’s CA certificate (customerCA.crt) to the directory you just created. You can find the CA certificate on any working CloudHSM client instance under the path /opt/cloudhsm/etc/customerCA.crt. This certificate is created during initialization of the CloudHSM Cluster and is needed to connect to the CloudHSM cluster.
  5. In your new directory, create a new file with the name run_sample.sh that includes the contents below. The script starts the CloudHSM client daemon, waits until the daemon process is running and ready, and then runs the Java class that is used to generate an AES key to encrypt and decrypt your data.
    
            #! /bin/bash
    
            # start cloudhsm client
            echo -n "* Starting CloudHSM client ... "
            /opt/cloudhsm/bin/cloudhsm_client /opt/cloudhsm/etc/cloudhsm_client.cfg &> /tmp/cloudhsm_client_start.log &
            
            # wait for startup
            while true
            do
                if grep 'libevmulti_init: Ready !' /tmp/cloudhsm_client_start.log &> /dev/null
                then
                    echo "[OK]"
                    break
                fi
                sleep 0.5
            done
            echo -e "\n* CloudHSM client started successfully ... \n"
            
            # start application
            echo -e "\n* Running application ... \n"
            
            java -ea -Djava.library.path=/opt/cloudhsm/lib/ -jar target/assembly/aesgcm-runner.jar --method environment
            
            echo -e "\n* Application completed successfully ... \n"                      
            

  6. In the new directory, create another new file and name it Dockerfile (with no extension). This file will specify that the Docker image is built with the following components:
    • The AWS CloudHSM client package.
    • The AWS CloudHSM Java JCE package.
    • OpenJDK 1.8. This is needed to compile and run the Java classes and JAR files.
    • Maven, a build automation tool that is needed to assist with building the Java classes and JAR files.
    • The AWS CloudHSM Java JCE samples that will be downloaded and built.
  7. Cut and paste the contents below into Dockerfile.

    Note: Make sure to replace the HSM_IP line with the IP of an HSM in your CloudHSM cluster. You can get your HSM IPs from the CloudHSM console, or by running the describe-clusters AWS CLI command.

    
            # Use the amazon linux image
            FROM amazonlinux:2
            
            # Install CloudHSM client
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-latest.el7.x86_64.rpm
            
            # Install CloudHSM Java library
            RUN yum install -y https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-jce-latest.el7.x86_64.rpm
            
            # Install Java, Maven, wget, unzip and ncurses-compat-libs
            RUN yum install -y java maven wget unzip ncurses-compat-libs
            
            # Create a work dir
            WORKDIR /app
            
            # Download sample code
            RUN wget https://github.com/aws-samples/aws-cloudhsm-jce-examples/archive/master.zip
            
            # unzip sample code
            RUN unzip master.zip
            
            # Change to the unzipped sample code directory
            WORKDIR aws-cloudhsm-jce-examples-master
            
            # Build JAR files
            RUN mvn validate && mvn clean package
            
            # Set HSM IP as an environmental variable
            ENV HSM_IP <insert the IP address of an active CloudHSM instance here>
            
            # Configure cloudhsm-client
            COPY customerCA.crt /opt/cloudhsm/etc/
            RUN /opt/cloudhsm/bin/configure -a $HSM_IP
            
            # Copy the run_sample.sh script
            COPY run_sample.sh .
            
            # Run the script
            CMD ["bash","run_sample.sh"]                        
            

  8. Now you’re ready to build the Docker image. Use the following command, with the name jce_sample_client. This command will use the Dockerfile you created in steps 6 and 7 to create the image.
    
            # sudo docker build -t jce_sample_client .
            

  9. To run a Docker container from the Docker image you just created, use the following command. Make sure to replace the user and password with your actual CU username and password. (If you need help setting up your CU credentials, see prerequisite 3. For more information on how to provide CU credentials to the AWS CloudHSM Java JCE Library, refer to the steps in the CloudHSM user guide.)
    
            # sudo docker run --env HSM_PARTITION=PARTITION_1 \
            --env HSM_USER=<user> \
            --env HSM_PASSWORD=<password> \
            jce_sample_client
            

    If successful, the output should look like this:

    
            * Starting cloudhsm-client ... [OK]
            
            * cloudhsm-client started successfully ...
            
            * Running application ...
            
            ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors 
            to the console.
            70132FAC146BFA41697E164500000000
            Successful decryption
                SDK Version: 2.03
            
            * Application completed successfully ...          
            

Conclusion

My solution provides an example of how to run CloudHSM workloads on Docker containers. You can use it as a reference to implement your cryptographic application in a way that benefits from the high availability and load balancing built in to AWS CloudHSM without compromising on the flexibility that Docker provides for developing, deploying, and running applications. If you have comments about this post, submit them in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mohamed AboElKheir

Mohamed AboElKheir joined AWS in September 2017 as a Security CSE (Cloud Support Engineer) based in Cape Town. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

Enabling serverless security analytics using AWS WAF full logs, Amazon Athena, and Amazon QuickSight

Post Syndicated from Umesh Ramesh original https://aws.amazon.com/blogs/security/enabling-serverless-security-analytics-using-aws-waf-full-logs/

Traditionally, analyzing data logs required you to extract, transform, and load your data before using a number of data warehouse and business intelligence tools to derive business intelligence from that data—on top of maintaining the servers that ran behind these tools.

This blog post will show you how to analyze AWS Web Application Firewall (AWS WAF) logs and quickly build multiple dashboards, without booting up any servers. With the new AWS WAF full logs feature, you can now log all traffic inspected by AWS WAF into Amazon Simple Storage Service (Amazon S3) buckets by configuring Amazon Kinesis Data Firehose. In this walkthrough, you’ll create an Amazon Kinesis Data Firehose delivery stream to which AWS WAF full logs can be sent, and you’ll enable AWS WAF logging for a specific web ACL. Then you’ll set up an AWS Glue crawler job and an Amazon Athena table. Finally, you’ll set up Amazon QuickSight dashboards to help you visualize your web application security. You can use these same steps to build additional visualizations to draw insights from AWS WAF rules and the web traffic traversing the AWS WAF layer. Security and operations teams can monitor these dashboards directly, without needing to depend on other teams to analyze the logs.

The following architecture diagram highlights the AWS services used in the solution:

Figure 1: Architecture diagram

Figure 1: Architecture diagram

AWS WAF is a web application firewall that lets you monitor HTTP and HTTPS requests that are forwarded to an Amazon API Gateway API, to Amazon CloudFront or to an Application Load Balancer. AWS WAF also lets you control access to your content. Based on conditions that you specify—such as the IP addresses from which requests originate, or the values of query strings—API Gateway, CloudFront, or the Application Load Balancer responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked.

Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon Redshift, Amazon Elasticsearch Service, and Splunk. With Kinesis Data Firehose, you don’t need to write applications or manage resources. You configure your data producers to send data to Kinesis Data Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Kinesis Data Firehose to transform your data before delivering it.

AWS Glue can be used to run serverless queries against your Amazon S3 data lake. AWS Glue can catalog your S3 data, making it available for querying with Amazon Athena and Amazon Redshift Spectrum. With crawlers, your metadata stays in sync with the underlying data (more details about crawlers later in this post). Amazon Athena and Amazon Redshift Spectrum can directly query your Amazon S3 data lake by using the AWS Glue Data Catalog. With AWS Glue, you access and analyze data through one unified interface without loading it into multiple data silos.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
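To make the later steps more concrete, here is a hedged sketch of what the Glue crawler and a first Athena query might look like from the AWS CLI. All of the names are assumptions for illustration (the crawler, IAM role, Glue database, table, and buckets), and the query assumes the crawler has mapped the AWS WAF JSON fields such as action and httprequest.clientip in the usual lowercase form.

    # Create and start a Glue crawler over the S3 bucket that receives the WAF logs
    aws glue create-crawler \
        --name waf-logs-crawler \
        --role AWSGlueServiceRole-WafLogs \
        --database-name waf_logs_db \
        --targets '{"S3Targets":[{"Path":"s3://<your-waf-logs-bucket>/"}]}'

    aws glue start-crawler --name waf-logs-crawler

    # Once the crawler has built a table (assumed here to be "waf_logs"),
    # count blocked requests by client IP with Athena
    aws athena start-query-execution \
        --query-string "SELECT httprequest.clientip AS client_ip, count(*) AS blocked_requests
                        FROM waf_logs_db.waf_logs
                        WHERE action = 'BLOCK'
                        GROUP BY httprequest.clientip
                        ORDER BY blocked_requests DESC LIMIT 10" \
        --result-configuration OutputLocation=s3://<your-athena-results-bucket>/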

Amazon QuickSight is a business analytics service you can use to build visualizations, perform one-off analysis, and get business insights from your data. It can automatically discover AWS data sources and also works with your own data sources. Amazon QuickSight enables organizations to scale to hundreds of thousands of users and delivers responsive performance by using a robust in-memory engine called SPICE.

SPICE stands for Super-fast, Parallel, In-memory Calculation Engine. SPICE supports rich calculations to help you derive insights from your analysis without worrying about provisioning or managing infrastructure. Data in SPICE is persisted until it is explicitly deleted by the user. SPICE also automatically replicates data for high availability and enables Amazon QuickSight to scale to hundreds of thousands of users who can all simultaneously perform fast interactive analysis across a wide variety of AWS data sources.

Step one: Set up a new Amazon Kinesis Data Firehose delivery stream

  1. In the AWS Management Console, open the Amazon Kinesis Data Firehose service and choose the button to create a new stream.
    1. In the Delivery stream name field, enter a name for your new stream that starts with aws-waf-logs- as shown in the screenshot below. AWS WAF only displays delivery streams whose names start with aws-waf-logs. Note the name of your stream since you’ll need it again later in the walkthrough.
    2. For Source, choose Direct PUT, since AWS WAF logs will be the source in this walkthrough.

      Figure 2: Select the delivery stream name and source

  2. Next, you have the option to enable AWS Lambda if you need to transform your data before transferring it to your destination. (You can learn more about data transformation in the Amazon Kinesis Data Firehose documentation.) In this walkthrough, there are no transformations that need to be performed, so for Record transformation, choose Disabled.
    Figure 3: Select "Disabled" for record transformations

    Figure 3: Select “Disabled” for record transformations

    1. You’ll have the option to convert the JSON object to Apache Parquet or Apache ORC format for better query performance. In this example, you’ll be reading the AWS WAF logs in JSON format, so for Record format conversion, choose Disabled.

      Figure 4: Choose "Disabled" to not convert the JSON object

      Figure 4: Choose “Disabled” to not convert the JSON object

  3. On the Select destination screen, for Destination, choose Amazon S3.
    Figure 5: Choose the destination

    1. For the S3 destination, you can either enter the name of an existing S3 bucket or create a new S3 bucket. Note the name of the S3 bucket since you’ll need the bucket name in a later step in this walkthrough.
    2. For Source record S3 backup, choose Disabled, because the destination in this walkthrough is an S3 bucket.

      Figure 6: Enter the S3 bucket name, and select “Disabled” for the source record S3 backup

  4. On the next screen, leave the default settings for Buffer size, Buffer interval, S3 compression, and S3 encryption as they are. However, we recommend that you set Error logging to Enabled initially, for troubleshooting purposes.
    1. For IAM role, select Create new or choose. This opens up a new window that will prompt you to create firehose_delivery_role, as shown in the following screenshot. Choose Allow in this window to accept the role creation. This grants the Kinesis Data Firehose service access to the S3 bucket.

      Figure 7: Select "Create new or choose" for IAM Role

      Figure 7: Select “Allow” to create the IAM role “firehose_delivery_role”

  5. On the last step of configuration, review all the options you’ve chosen, and then select Create delivery stream. This will cause the delivery stream to display as “Creating” under Status. In a couple of minutes, the status will change to “Active,” as shown in the screenshot below. (If you prefer to script this setup, see the boto3 sketch after these steps.)

    Figure 8: Review the options you selected
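
If you manage your infrastructure as code rather than through the console, you can create the same kind of delivery stream with the AWS SDK. The following is a minimal boto3 sketch, not the exact stream created above: the stream name, IAM role ARN, and bucket ARN are placeholders, and the role must already grant Kinesis Data Firehose permission to write to the bucket.

```python
# A minimal sketch; the role ARN and bucket ARN are placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="aws-waf-logs-us-east-1",   # must start with aws-waf-logs-
    DeliveryStreamType="DirectPut",                # AWS WAF sends records directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose_delivery_role",  # placeholder
        "BucketARN": "arn:aws:s3:::your-waf-logs-bucket",                     # placeholder
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
        "CompressionFormat": "UNCOMPRESSED",
    },
)
```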

Step two: Enable AWS WAF logging for a specific Web ACL

  1. From the AWS Management Console, open the AWS WAF service and choose Web ACLs. Open your Web ACL resource, which can either be deployed on a CloudFront distribution or on an Application Load Balancer.
    1. Choose the Web ACL for which you want to enable logging. (In the screenshot below, we’ve selected a Web ACL in the US East Region.)
    2. On the Logging tab, choose Enable Logging.

      Figure 9: Choose "Enable Logging"

      Figure 9: Choose “Enable Logging”

  2. The next page displays all the delivery streams that start with aws-waf-logs. Choose the Amazon Kinesis Data Firehose delivery stream that you created for AWS WAF logs at the start of this walkthrough. (In the screenshot below, our example stream name is “aws-waf-logs-us-east-1.”)
    1. You can also choose to redact certain fields that you wish to exclude from being captured in the logs. In this walkthrough, you don’t need to choose any fields to redact.
    2. Select Create.

      Figure 10: Choose your delivery stream, and select “Create”

After a couple of minutes, you’ll be able to inspect the S3 bucket that you defined in the Kinesis Data Firehose delivery stream. The log files are created in directories by year, month, day, and hour.
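
Enabling logging can also be scripted. The following boto3 sketch assumes a web ACL on AWS WAF Classic; the web ACL ARN and delivery stream ARN are placeholders.

```python
# A sketch only; the web ACL ARN and delivery stream ARN are placeholders.
import boto3

# WAF Classic for CloudFront is a global service; use the waf-regional client
# instead if your web ACL is attached to an Application Load Balancer.
waf = boto3.client("waf", region_name="us-east-1")

waf.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": "arn:aws:waf::111122223333:webacl/EXAMPLE-WEB-ACL-ID",  # placeholder
        "LogDestinationConfigs": [
            "arn:aws:firehose:us-east-1:111122223333:deliverystream/aws-waf-logs-us-east-1"  # placeholder
        ],
        # "RedactedFields": [...],  # optionally redact fields, as in step 2.1
    }
)
```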

Step three: Set up an AWS Glue crawler job and Amazon Athena table

The purpose of a crawler within your Data Catalog is to traverse your data stores (such as S3) and extract the metadata fields of the files. The output of the crawler consists of one or more metadata tables that are defined in your Data Catalog. When the crawler runs, the first classifier in your list to successfully recognize your data store is used to create a schema for your table. AWS Glue provides built-in classifiers to infer schemas from common files with formats that include JSON, CSV, and Apache Avro.
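
The console steps below create and run this crawler interactively. If you prefer to script it, a minimal boto3 sketch might look like the following; the crawler name, IAM role, database, table prefix, and S3 path are placeholders rather than values created in this post.

```python
# A sketch only; names, the role, and the S3 path are placeholders.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="waf-logs-crawler",                                             # placeholder
    Role="arn:aws:iam::111122223333:role/AWSGlueServiceRole-waf-logs",   # placeholder
    DatabaseName="sampledb",            # database where the Athena table is created
    TablePrefix="jsonwaflogs_",         # prefix to identify the table easily
    Targets={"S3Targets": [{"Path": "s3://your-waf-logs-bucket/"}]},     # placeholder
)

# Equivalent to choosing "Run crawler" with the "Run on demand" frequency.
glue.start_crawler(Name="waf-logs-crawler")
```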

  1. In the AWS Management Console, open the AWS Glue service and choose Crawlers to set up a crawler job.
  2. Choose Add crawler to launch a wizard to set up the crawler job. For Crawler name, enter a relevant name. Then select Next.

    Figure 11: Enter "Crawler name," and select "Next"

    Figure 11: Enter “Crawler name,” and select “Next”

  3. For Choose a data store, select S3 and include the path of the S3 bucket that stores your AWS WAF logs, which you made note of in step 1.3. Then choose Next.

    Figure 12: Choose a data store

  4. When you’re given the option to add another data store, choose No.
  5. Then, choose Create an IAM role and enter a name. The role grants access to the S3 bucket for the AWS Glue service to access the log files.

    Figure 13: Choose "Create an IAM role," and enter a name

    Figure 13: Choose “Create an IAM role,” and enter a name

  6. Next, set the frequency to Run on demand. You can also schedule the crawler to run periodically to make sure any changes in the file structure are reflected in your data catalog.

    Figure 14: Set the “Frequency” to “Run on demand”

  7. For output, choose the database in which the Athena table is to be created and add a prefix to identify your table name easily. Select Next.

    Figure 15: Choose the database, and enter a prefix

  8. Review all the options you’ve selected for the crawler job and complete the wizard by selecting the Finish button.
  9. Now that the crawler job parameters are set up, on the left panel of the console, choose Crawlers to select your job and then choose Run crawler. The job creates an Amazon Athena table. The duration depends on the size of the log files.

    Figure 16: Choose "Run crawler" to create an Amazon Athena table

    Figure 16: Choose “Run crawler” to create an Amazon Athena table

  10. To see the Amazon Athena table created by the AWS Glue crawler job, from the AWS Management Console, open the Amazon Athena service. You can filter by your table name prefix.
      1. To view the data, choose Preview table. This displays the table data, with certain fields shown as JSON objects. (A scripted alternative using the Athena API is sketched after these steps.)
    Figure 17: Choose "Preview table" to view the data

    Figure 17: Choose “Preview table” to view the data
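
As an alternative to previewing the table in the console, you can run a quick aggregation through the Athena API. The following boto3 sketch is illustrative only; the database, table name, and query results location are placeholders (the table name mirrors the sampledb.jsonwaflogs_useast1 table used in the custom SQL later in this post).

```python
# A sketch only; database, table, and results bucket are placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString=(
        "SELECT action, count(*) AS requests "
        "FROM jsonwaflogs_useast1 "          # placeholder table created by the crawler
        "GROUP BY action"
    ),
    QueryExecutionContext={"Database": "sampledb"},                              # placeholder
    ResultConfiguration={"OutputLocation": "s3://your-athena-query-results/"},   # placeholder
)
print(response["QueryExecutionId"])
```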

Step four: Create visualizations using Amazon QuickSight

  1. From the AWS Management Console, open Amazon QuickSight.
  2. In the Amazon QuickSight window, in the top left, choose New Analysis. Choose New Data set, and for the data source choose Athena. Enter an appropriate name for the data source and choose Create data source.

    Figure 18: Enter the “Data source name,” and choose “Create data source”

  3. Next, choose Use custom SQL to extract all the fields in the JSON object using the following SQL query:
    
        ```
        with d as (select
            waf.timestamp,
            waf.formatversion,
            waf.webaclid,
            waf.terminatingruleid,
            waf.terminatingruletype,
            waf.action,
            waf.httpsourcename,
            waf.httpsourceid,
            waf.HTTPREQUEST.clientip as clientip,
            waf.HTTPREQUEST.country as country,
            waf.HTTPREQUEST.httpMethod as httpMethod,
            map_agg(f.name,f.value) as kv
        from sampledb.jsonwaflogs_useast1 waf,
        UNNEST(waf.httprequest.headers) as t(f)
        group by 1,2,3,4,5,6,7,8,9,10,11)
        select d.timestamp,
            d.formatversion,
            d.webaclid,
            d.terminatingruleid,
            d.terminatingruletype,
            d.action,
            d.httpsourcename,
            d.httpsourceid,
            d.clientip,
            d.country,
            d.httpMethod,
            d.kv['Host'] as host,
            d.kv['User-Agent'] as UA,
            d.kv['Accept'] as Acc,
            d.kv['Accept-Language'] as AccL,
            d.kv['Accept-Encoding'] as AccE,
            d.kv['Upgrade-Insecure-Requests'] as UIR,
            d.kv['Cookie'] as Cookie,
            d.kv['X-IMForwards'] as XIMF,
            d.kv['Referer'] as Referer
        from d;
        ```        
        

  4. To extract individual fields, copy the previous SQL query and paste it in the New custom SQL box, then choose Edit/Preview data.
    Figure 19: Paste the SQL query in “New custom SQL query”

    1. In the Edit/Preview data view, for Data source, choose SPICE, then choose Finish.

      Figure 20: Choose "Spice" and then "Finish"

      Figure 20: Choose “Spice” and then “Finish”

  5. Back in the Amazon QuickSight console, under the Fields section, select the drop-down menu next to the timestamp field and change its data type to Date.

    Figure 21: In the Amazon QuickSight console, change the data type to “Date”

  6. After you see the Date column appear, enter an appropriate name for the visualizations at the top of the page, then choose Save.

    Figure 22: Enter the name for the visualizations, and choose “Save”

  7. You can now create various visualization dashboards with multiple visual types by using the drag-and-drop feature. You can drag and drop combinations of fields such as Action, Client IP, Country, Httpmethod, and User Agents. You can also add filters on Date to view dashboards for a specific timeline. Here are some sample screenshots:
    Figure 23a: Visualization dashboard samples

    Figure 23b: Visualization dashboard samples

    Figure 23c: Visualization dashboard samples

    Figure 23d: Visualization dashboard samples

Conclusion

By configuring Amazon Kinesis Data Firehose, you can stream AWS WAF full logs to Amazon S3 buckets and analyze them as they arrive. You can further enhance this solution by automating the streaming of data and using AWS Lambda for any data transformations based on your specific requirements. Using Amazon Athena and Amazon QuickSight makes it easy to analyze logs and build visualizations and dashboards for executive leadership teams. Using these solutions, you can go serverless and let AWS do the heavy lifting for you.

Umesh Kumar Ramesh

Umesh is a Cloud Infrastructure Architect with Amazon Web Services. He delivers proof-of-concept projects and topical workshops, and leads implementation projects for various AWS customers. He holds a Bachelor’s degree in Computer Science & Engineering from the National Institute of Technology, Jamshedpur (India). Outside of work, Umesh enjoys watching documentaries, biking, and practicing meditation.

Muralidhar Ramarao

Muralidhar is a Data Engineer with the Amazon Payment Products Machine Learning Team. He has a Bachelor’s degree in Industrial and Production Engineering from the National Institute of Engineering, Mysore, India. Outside of work, he loves to hike. You will find him with his camera or snapping pictures with his phone, and always looking for his next travel destination.

How to use service control policies to set permission guardrails across accounts in your AWS Organization

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/

AWS Organizations provides central governance and management for multiple accounts. Central security administrators use service control policies (SCPs) with AWS Organizations to establish controls that all IAM principals (users and roles) adhere to. Now, you can use SCPs to set permission guardrails with the fine-grained control supported in the AWS Identity and Access Management (IAM) policy language. This makes it easier for you to fine-tune policies to meet the precise requirements of your organization’s governance rules.

Now, using SCPs, you can specify Conditions, Resources, and NotAction to deny access across accounts in your organization or organizational unit. For example, you can use SCPs to restrict access to specific AWS Regions, or prevent your IAM principals from deleting common resources, such as an IAM role used for your central administrators. You can also define exceptions to your governance controls, restricting service actions for all IAM entities (users, roles, and root) in the account except a specific administrator role.

To implement permission guardrails using SCPs, you can use the new policy editor in the AWS Organizations console. This editor makes it easier to author SCPs by guiding you to add actions, resources, and conditions. In this post, I review SCPs, walk through the new capabilities, and show how to construct an example SCP you can use in your organization today.

Overview of Service Control Policy concepts

Before I walk through some examples, I’ll review a few features of SCPs and AWS Organizations.

SCPs offer central access controls for all IAM entities in your accounts. You can use them to enforce the permissions you want everyone in your business to follow. Using SCPs, you can give your developers more freedom to manage their own permissions because you know they can only operate within the boundaries you define.

You create and apply SCPs through AWS Organizations. When you create an organization, AWS Organizations automatically creates a root, which forms the parent container for all the accounts in your organization. Inside the root, you can group accounts in your organization into organizational units (OUs) to simplify management of these accounts. You can create multiple OUs within a single organization, and you can create OUs within other OUs to form a hierarchical structure. You can attach SCPs to the organization root, OUs, and individual accounts. SCPs attached to the root and OUs apply to all OUs and accounts inside of them.

SCPs use the AWS Identity and Access Management (IAM) policy language; however, they do not grant permissions. SCPs enable you to set permission guardrails by defining the maximum available permissions for IAM entities in an account. If an SCP denies an action for an account, none of the entities in the account can take that action, even if their IAM permissions allow them to do so. The guardrails set in SCPs apply to all IAM entities in the account, which include all users, roles, and the account root user.

Policy Elements Available in SCPs

The table below summarizes the IAM policy language elements available in SCPs. You can read more about the different IAM policy elements in the IAM JSON Policy Reference.

The Supported Statement Effect column describes the effect type you can use with each policy element in SCPs.

Policy Element | Definition | Supported Statement Effect
Statement | Main element for a policy. Each policy can have multiple statements. | Allow, Deny
Sid | (Optional) Friendly name for the statement. | Allow, Deny
Effect | Define whether an SCP statement allows or denies actions in an account. | Allow, Deny
Action | List the AWS actions the SCP applies to. | Allow, Deny
NotAction (New) | (Optional) List the AWS actions exempt from the SCP. Used in place of the Action element. | Deny
Resource (New) | List the AWS resources the SCP applies to. | Deny
Condition (New) | (Optional) Specify conditions for when the statement is in effect. | Deny

Note: Some policy elements are only available in SCPs that deny actions.

You can use the new policy elements in new or existing SCPs in your organization. In the next section, I use the new elements to create an SCP using the AWS Organizations console.

Create an SCP in the AWS Organizations console

In this section, you’ll create an SCP that restricts IAM principals in accounts from making changes to a common administrative IAM role created in all accounts in your organization. Imagine your central security team uses these roles to audit and make changes to AWS settings. For the purposes of this example, you have a role in all your accounts named AdminRole that has the AdministratorAccess managed policy attached to it. Using an SCP, you can restrict all IAM entities in the account from modifying AdminRole or its associated permissions. This helps you ensure this role is always available to your central security team. Here are the steps to create and attach this SCP.

  1. Ensure you’ve enabled all features in AWS Organizations and SCPs through the AWS Organizations console.
  2. In the AWS Organizations console, select the Policies tab, and then select Create policy.

    Figure 1: Select "Create policy" on the "Policies" tab

    Figure 1: Select “Create policy” on the “Policies” tab

  3. Give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: DenyChangesToAdminRole
    • Description: Prevents all IAM principals from making changes to AdminRole.

     

    Figure 2: Give the policy a name and description

  4. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.

    Figure 3: SCP editor tool

  5. Change the Statement ID to describe what the statement does. For this example, I reused the name of the policy, DenyChangesToAdminRole, because this policy has only one statement.

    Figure 4: Change the Statement ID

  6. Next, add the actions you want to restrict. Using the left panel, select the IAM service. You’ll see a list of actions. To learn about the details of each action, you can hover over the action with your mouse. For this example, we want to allow principals in the account to view the role, but restrict any actions that could modify or delete it. We use the new NotAction policy element to deny all actions except the view actions for the role. Select the following view actions from the list:
    • GetContextKeysForPrincipalPolicy
    • GetRole
    • GetRolePolicy
    • ListAttachedRolePolicies
    • ListInstanceProfilesForRole
    • ListRolePolicies
    • ListRoleTags
    • SimulatePrincipalPolicy
  7. Now position your cursor at the Action element and change it to NotAction. After you perform the steps above, your policy should look like the one below.

    Figure 5: An example policy

  8. Next, apply these controls to only the AdminRole role in your accounts. To do this, use the Resource policy element, which now allows you to provide specific resources.
      1. On the left, near the bottom, select the Add Resources button.
      2. In the prompt, select the IAM service from the dropdown menu.
      3. Select the role as the resource type, and then type “arn:aws:iam::*:role/AdminRole” in the resource ARN prompt.
      4. Select Add resource.

    Note: The AdminRole has a common name in all accounts, but the account IDs will be different for each individual role. To simplify the policy statement, use the * wildcard in place of the account ID to account for all roles with this name regardless of the account.

  9. Your policy should look like this:
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags",
            "iam:SimulatePrincipalPolicy"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ]
        }
      ]
    }
    

  10. Select the Save changes button to create your policy. You can see the new policy in the Policies tab.

    Figure 6: The new policy on the “Policies” tab

  11. Finally, attach the policy to the AWS account where you want to apply the permissions.

When you attach the SCP, it prevents changes to the role’s configuration. The central security team that uses the role might want to make changes later on, so you may want to allow the role itself to modify the role’s configuration. I’ll demonstrate how to do this in the next section.
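
If you manage your organization programmatically, you can create and attach the same SCP with the AWS Organizations API. The following boto3 sketch assumes your credentials belong to the organization’s management account; the target account ID is a placeholder.

```python
# A sketch only; the target account ID is a placeholder.
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyChangesToAdminRole",
        "Effect": "Deny",
        "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags",
            "iam:SimulatePrincipalPolicy",
        ],
        "Resource": ["arn:aws:iam::*:role/AdminRole"],
    }],
}

policy = org.create_policy(
    Name="DenyChangesToAdminRole",
    Description="Prevents all IAM principals from making changes to AdminRole.",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",   # placeholder: an account ID, OU ID, or root ID
)
```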

Grant an exception to your SCP for an administrator role

In the previous section, you created an SCP that prevented all principals from modifying or deleting the AdminRole IAM role. Administrators from your central security team may need to make changes to this role in your organization, without lifting the protection of the SCP. In this next example, I build on the previous policy to show how to exclude the AdminRole from the SCP guardrail.

  1. In the AWS Organizations console, select the Policies tab, select the DenyChangesToAdminRole policy, and then select Policy editor.
  2. Select Add Condition. You’ll use the new Condition element of the policy, using the aws:PrincipalARN global condition key, to specify the role you want to exclude from the policy restrictions.
  3. The aws:PrincipalARN condition key returns the ARN of the principal making the request. You want to ignore the policy statement if the requesting principal is the AdminRole. Using the StringNotLike operator, assert that this SCP is in effect if the principal ARN is not the AdminRole. To do this, fill in the following values for your condition.
    1. Condition key: aws:PrincipalARN
    2. Qualifier: Default
    3. Operator: StringNotLike
    4. Value: arn:aws:iam::*:role/AdminRole
  4. Select Add condition. The following policy will appear in the edit window.
    
    {    
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyChangesToAdminRole",
          "Effect": "Deny",
          "NotAction": [
            "iam:GetContextKeysForPrincipalPolicy",
            "iam:GetRole",
            "iam:GetRolePolicy",
            "iam:ListAttachedRolePolicies",
            "iam:ListInstanceProfilesForRole",
            "iam:ListRolePolicies",
            "iam:ListRoleTags"
          ],
          "Resource": [
            "arn:aws:iam::*:role/AdminRole"
          ],
          "Condition": {
            "StringNotLike": {
              "aws:PrincipalARN":"arn:aws:iam::*:role/AdminRole"
            }
          }
        }
      ]
    }
    
    

  5. After you validate the policy, select Save. If you already attached the policy in your organization, the changes will immediately take effect.

Now, the SCP denies all principals in the account from updating or deleting the AdminRole, except the AdminRole itself.

Next steps

You can now use SCPs to restrict access to specific resources, or define conditions for when SCPs are in effect. You can use the new functionality in your existing SCPs today, or create new permission guardrails for your organization. I walked through one example in this blog post, and there are additional use cases for SCPs that you can explore in the documentation. Below are a few that we have heard from customers that you may want to look at.

  • Account may only operate in certain AWS Regions (example)
  • Account may only deploy certain EC2 instance types (example)
  • Account requires MFA to be enabled before taking an action (example)

You can start applying SCPs using the AWS Organizations console, CLI, or API. See the Service Control Policies Documentation or the AWS Organizations Forums for more information about SCPs, how to use them in your organization, and additional examples.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

AWS Security Profiles: Nathan Case, Senior Security Specialist, Solutions Architect

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-nathan-case-senior-security-specialist-solutions-architect/

Leading up to the AWS Santa Clara Summit, we’re sharing our conversation with Nathan Case, who will be presenting at the event, so you can learn more about him and some of the interesting work that he’s doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for three years, and I’m a Senior Security Specialist, Solutions Architect. I started out working on our Commercial Sector team, but I moved fairly quickly to Public Sector, where I was the tech lead for our work with the U.S. Department of Defense (DOD) and the first consultant in our U.S. DOD practice. I was offered a position back on the commercial side of the company, which entailed building out how we, as AWS, think about incident response and threat detection. I took that job because it was way too interesting to pass up, and it gave me an opportunity to have more impact for our customers. I love doing incident response and threat detection, so I had that moment where I thought, “Really? You’re going to pay me to do this?” I couldn’t turn it down. It did break my heart a little to step away from the public sector, but it’s great getting to work more intimately with some of our commercial customers.

What do you wish more people understood about incident response?

Because of my role, I generally talk with customers after one of their applications has been breached or something has been broken into. The thing I wish more people knew is that it will be okay. This happens to a lot of people, and it’s not the end of the world. Life will go on.

That said, the process does work much better if you call me before an incident happens. Prevention is so much better than the cure. I’m happy to help during an incident, but there are lots of ways we can proactively make things better.

What’s your favorite part of your job?

I think people like myself, who enjoy incident response, have a slight hero complex: You get to jump in, get involved, and make a difference for somebody. I love walking away from an engagement where something bad has happened, knowing that I’ve made a difference for that person and they’re now a happy customer again.

I also enjoy getting to do the pre-work sessions. While I have to make sure that customers understand that security is something they have to do, I help them reach the point where they’re happy about that fact. I get to help them realize that it’s something they’re capable of doing and it’s not as scary as they thought.

What’s the most challenging part of your job?

It’s that moment when I get the call—maybe in the middle of the night—and somebody says this thing has just happened, and can I help them?

The hardest aspect of that conversation is working through the event with the CISO or the individual who’s in charge of the response and convincing them that all the steps they’ll need to take will still be there tomorrow and that there’s nothing else they can do in the moment. It’s difficult to watch the pain that accompanies that realization. There’s eventually a certain catharsis at the end of the conversation, as the customer starts to see the light at the end of the tunnel and to understand that everything is going to be all right. But that first moment, when the pit has dropped out of someone’s stomach and I have to watch it on their face—that’s hard.

What’s the most common misperception about cloud security that you encounter?

I used to work in data centers, so I have a background that’s steeped in building out networking switches, and stacks, and points of presence, and so on. I spent a lot of time protecting and securing these things, and doing some impressive data center implementations. But now that I work in the cloud, I look back at that whole experience and ask, “Why?”

I think the misconception still exists that it’s easier to protect a data center than the cloud. But frankly, I wouldn’t be doing this job if I thought data centers were more secure. They’re not. There are so many more things that you can see and take care of in a cloud environment. You’re able to detect more threats than you could in a data center, and there’s so much more instrumentation to enable you to keep track of all of those threats.

What does cloud security mean to you, personally?

I view my current role as a statement of my belief in cloud security; it’s a way for me to offer help to a large number of people.

When I worked for the U.S. Department of Defense, through AWS, it was really important to me to help protect the country and to make sure that we were safe. That’s still really important to me—and I believe the cloud can help achieve that. If you look at the world as a whole, I think there’s evidence of a nefarious substructure that operates in a manner similar to organized crime: It exists, but it’s hopefully not something that most people have to see or interact with. I feel a certain calling to be one of the individuals that helps shield others from these influences that they generally wouldn’t have the knowledge to protect themselves against. For example, I’ve done work that helps protect people from attacks by nation states. It’s very satisfying to be able to help defend and protect customers from things like that.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think that cloud security will begin shifting toward the question, “How did you implement? Is your architecture correct?” Right now, I hear this statement a lot: “We built this application like we have for the last [X] years. It’s fine!” And I believe that attitude will disappear as it becomes painfully obvious that it’s not fine, and it wasn’t fine. The way we architect and build and secure applications will change dramatically. Security will be included to begin with, and designing for failure will become the norm. We’ll see more people building security and detection in layers so that attackers’ actions can be seen and responded to automatically. With the services that are coming into being now, the options for new applications are just so different. It’s very exciting to see what they are and how we can secure applications and infrastructure.

You’re hosting a session at the Santa Clara summit titled “Threat detection and mitigation at AWS.” Where did the idea for this topic come from?

There’s no incident response without the ability to detect the threat. As AWS (and, frankly, as technology professionals), we need to teach people how to detect threats. That includes teaching them appropriate habits and appropriate architectures that allow for detecting, rather than simply accepting the attitude that “whatever happens, happens.”

My talk focuses on describing how you need to architect your environment so that you’re able to see a threat when it’s present. This enables you to know that there’s an issue in advance, rather than finding out two and half years later that a threat has been present all along and you just didn’t know about it. That’s an untenable scenario to me. If we begin to follow appropriate cloud hygiene, then that risk goes way down.

What are some of the most common challenges customers face when it comes to threat detection in the cloud?

I often see customers struggling to let go of the idea that a human has to touch production to make it work correctly. I think you can trace this back to the fact that people are used to having a rack down in the basement that they can go play with. As humans, we get locked in this “we’re used to it” concept. Change is scary! Technology is evolving and people need to change with it and move forward along the technical path. There are so many opportunities out there for someone who takes the time to learn about new technologies.

What are you hoping your audience will do differently as a result of your session?

Let me use sailing a boat as an example: If you don’t have a complex navigation system and you can’t tell exactly which course you’re on, there are times when you pick something off in the distance and steer toward that. You’ll probably have to correct course as you go. If the wind blows heavily, you might have to swing left or right before making your way back to your original course. But you have something to steer toward.

I hope that my topic gives people that end-point, that place in the distance to travel toward. I don’t think the talk will make everyone suddenly jump up and take action—although it would be great if that happens! But I’d settle for the realization that, “Gee, wouldn’t it be nice if we could get to the place Nathan is talking about?” Simply figuring out what to steer toward is the obstacle standing in the way of a lot of people.

You’re known for your excellent BBQ. Can you give us some tips on cooking a great brisket?

I generally cook brisket for about 18 hours, between 180 – 190 degrees Fahrenheit, using a homemade dry rub, heavy on salt, sugar, and paprika. I learned this technique (indirectly: https://rudysbbq.com/) from a guy named Rudy, who lived in San Antonio and opened a restaurant called Rudy’s Bar-B-Q (the “worst BBQ in Texas”) that I used to visit every summer. If you’re using a charcoal grill, maintaining 180 – 190 degrees for eighteen hours is a real pain in the butt—so I cheat and use an electric smoker. But if you do this a fair bit, you’ll notice that 180 – 190 isn’t hot enough to generate enough smoke, and you generally want brisket to be smoky. I add some smoldering embers to the smoke tray to keep it smoking. (I know that an electric smoker is cheating. I’m sure Rudy would be horribly offended.)

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Nathan Case

Nathan is a Senior Security Specialist, Solutions Architect. He joined AWS in 2016.

Setting permissions to enable accounts for upcoming AWS Regions

Post Syndicated from Sulay Shah original https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/

The AWS Cloud spans 61 Availability Zones within 20 geographic regions around the world, and has announced plans to expand to 12 more Availability Zones and four more Regions: Hong Kong, Bahrain, Cape Town, and Milan. Customers have told us that they want an easier way to control the Regions where their AWS accounts operate. Based on this feedback, AWS is changing the default behavior for these four and all future Regions: customers will opt in each account they want to operate in a new Region. For new AWS Regions, Identity and Access Management (IAM) resources such as users and roles will only be propagated to the Regions that you enable. When the next Region launches, you can enable this Region for your account using the AWS Regions setting under My Account in the AWS Management Console. You will need to enable a new Region for your account before you can create and manage resources in that Region. At this time, there are no changes to existing AWS Regions.

We recommend that you review who in your account will have access to enable and disable AWS Regions. Additionally, you can prepare for this change by setting permissions so that only approved account administrators can enable and disable AWS Regions. Starting today, you can use IAM permissions policies to control which IAM principals (users and roles) can perform these actions.

In this post, I describe the new account permissions for enabling and disabling new AWS Regions. I also describe the updates we’ve made to deny these permissions in the AWS-managed PowerUserAccess policy that many customers use to restrict access to administrative actions. For customers who use custom policies to manage administrative access, I show how to secure access to enable and disable new AWS Regions using IAM permissions policies and Service Control Policies in AWS Organizations. Finally, I explain the compatibility of Security Token Service (STS) session tokens with Regions.

IAM Permissions to enable and disable new AWS Regions for your account

To control access to enable and disable new AWS Regions for your account, you can set IAM permissions using two new account actions. By default, IAM denies access to new actions unless you have explicitly allowed these permissions in an existing policy. You can use IAM permissions policies to allow or deny the IAM principals in your account access to the actions that enable and disable AWS Regions. The new actions are:

Action | Description
account:EnableRegion | Allows you to opt in an account to a new AWS Region (for Regions launched after March 20, 2019). This action propagates your IAM resources such as users and roles to the Region.
account:DisableRegion | Allows you to opt out an account from a new AWS Region (for Regions launched after March 20, 2019). This action removes your IAM resources such as users and roles from the Region.

When granting permissions using IAM policies, some administrators may have granted full access to AWS services except for administrative services such as IAM and Organizations. These IAM principals will automatically get access to the new administrative actions in your account to enable and disable AWS Regions. If you prefer not to grant these principals permissions to enable or disable AWS Regions, we recommend that you add a statement to your policies that denies access to account permissions. To do this, you can add a deny statement for account:*. As new Regions launch, you will be able to specify the Regions where these permissions are granted or denied.

At this time, the account actions to enable and disable AWS Regions apply to all upcoming AWS Regions launched after March 20, 2019. To learn more about managing access to existing AWS Regions, review my post, Easier way to control access to AWS regions using IAM policies.

Updates to AWS managed PowerUserAccess Policy

If you’re using the AWS managed PowerUserAccess policy to grant permissions to AWS services without granting access to administrative actions for IAM and Organizations, we have updated this policy as shown below to exclude access to account actions to enable and disable new AWS Regions. You do not need to take further action to restrict these actions for any IAM principals for which this policy applies. We updated the first policy statement, which now allows access to all existing and future AWS service actions except for IAM, AWS Organizations, and account. We also updated the second policy statement to allow the read-only action for listing Regions. The rest of the policy remains unchanged.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "account:*"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "iam:CreateServiceLinkedRole",
                "iam:DeleteServiceLinkedRole",
                "iam:ListRoles",
                "organizations:DescribeOrganization",
                "account:ListRegions"
            ],
            "Resource": "*"
        }
    ]
}

Restrict Region permissions across multiple accounts using Service Control Policies in AWS Organizations

You can also centrally restrict access to enable and disable Regions for all principals across all accounts in AWS Organizations using Service Control Policies (SCPs). You would use SCPs to restrict this access if you do not anticipate using new Regions. SCPs enable administrators to set permission guardrails that apply to accounts in your organization or an organization unit. To learn more about SCPs and how to create and attach them, read About Service Control Policies.

Next, I show how to restrict the Region enable and disable actions for accounts in an AWS organization using an SCP. In the policy below, I use the Effect element of the policy statement to explicitly deny these actions. In the Action element, I add the new permissions account:EnableRegion and account:DisableRegion.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "account:EnableRegion",
                "account:DisableRegion"
            ],
            "Resource": "*"
        }
    ]
}

Once you create the policy, you can attach this policy to the root of your organization. This will restrict permissions across all accounts in your organization.
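
If you script your organization setup, attaching the policy to the root can also be done with the AWS Organizations API. The following boto3 sketch assumes the SCP already exists; its policy ID is a placeholder.

```python
# A sketch only; the policy ID is a placeholder for the SCP you created.
import boto3

org = boto3.client("organizations")

root_id = org.list_roots()["Roots"][0]["Id"]   # the organization root

org.attach_policy(
    PolicyId="p-examplepolicyid",   # placeholder
    TargetId=root_id,               # applies the guardrail to every account
)
```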

Check whether users have permissions to enable or disable new AWS Regions in your account

You can use the IAM Policy Simulator to check if any IAM principal in your account has access to the new account actions for enabling and disabling Regions. The simulator evaluates the policies that you choose for a user or role and determines the effective permissions for each of the actions that you specify. Learn more about using the IAM Policy Simulator.
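
You can also run this check programmatically with the policy simulator API. The following boto3 sketch evaluates the two new account actions for a single principal; the role ARN is a placeholder for the principal you want to check.

```python
# A sketch only; the role ARN is a placeholder for the principal to evaluate.
import boto3

iam = boto3.client("iam")

results = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::111122223333:role/ExampleRole",  # placeholder
    ActionNames=["account:EnableRegion", "account:DisableRegion"],
)

for result in results["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny"
    print(result["EvalActionName"], result["EvalDecision"])
```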

Region compatibility of AWS STS session tokens

For new AWS Regions, we’re also changing region compatibility for session tokens from the AWS Security Token Service (STS) global endpoint. As a best practice, we recommend using the regional STS endpoints to reduce latency. If you’re using regional STS endpoints or don’t plan to operate in new AWS Regions, then the following change doesn’t apply to you and no action is required.

If you’re using the global STS endpoint (https://sts.amazonaws.com) for session tokens and plan to operate in new AWS Regions, the session token size is going to increase. This may impact functionality if you store session tokens in any of your systems. To ensure your systems work with this change, we recommend that you update your existing systems to use regional STS endpoints using the AWS SDK.
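
For example, with the AWS SDK for Python you can point the STS client at a regional endpoint instead of the global one; the Region in this sketch is only an example.

```python
# A sketch only; the Region shown is just an example.
import boto3

sts = boto3.client(
    "sts",
    region_name="us-east-2",
    endpoint_url="https://sts.us-east-2.amazonaws.com",  # regional STS endpoint
)

print(sts.get_caller_identity()["Arn"])
```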

Summary

AWS is changing the default behavior for all new Regions going forward. For new AWS Regions, you will opt in to enable your account to operate in those Regions. This makes it easier for you to select the Regions where you can create and manage AWS resources. To prepare for upcoming Region launches, we recommend that you set permissions so that only approved IAM principals can enable and disable AWS Regions for your account.

If you have comments about this post, submit them in the Comments section below. If you have questions about or suggestions for this solution, start a new thread on the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

The author

Sulay Shah

Sulay is the product manager for the Identity and Access Management service at AWS. He strongly believes in the customer-first approach and is always looking for new opportunities to assist customers. Outside of work, Sulay enjoys playing soccer and watching movies. Sulay holds a master’s degree in computer science from North Carolina State University.

A cybersecurity strategy to thwart advanced attackers

Post Syndicated from Tim Rains original https://aws.amazon.com/blogs/security/a-cybersecurity-strategy-to-thwart-advanced-attackers/

Today, many Chief Information Security Officers and cybersecurity practitioners are looking for an effective cybersecurity strategy that will help them achieve measurably better security for their organizations. AWS has released two new whitepapers to help customers plan and implement a strategy that has helped many organizations protect, detect, and respond to modern-day attacks.

  • Breaking Intrusion Kill Chains with AWS provides context and shows you, in detail, how to mitigate advanced attackers’ favorite strategies and tactics using the AWS cloud platform. It also offers advice on how to measure the effectiveness of this approach.
  • Breaking Intrusion Kill Chains with AWS Reference Material contains a detailed example of how AWS services, features, functionality, and AWS Partner offerings can be used together to safeguard your organization’s data and cloud infrastructure. This paper will save you time and effort by providing you with a comprehensive AWS security control mapping to each phase of advanced attacks, which you’d otherwise have to do on your own.

    This document provides a list of some of the key AWS security controls, organized in an easy-to-understand format, and it includes a mapping to the AWS Cloud Adoption Framework (CAF). Many organizations use the CAF to build a comprehensive approach to cloud computing across their organization. If your organization uses the CAF, and you decide to implement some or all of the controls described in Breaking Intrusion Kill Chains with AWS, then the reference material in this whitepaper can be used to cross-reference with your other CAF efforts, potentially increasing your ROI.

For a high-level introduction, check out my webinar recording. In it, I discuss cybersecurity strategies, explain the framework that’s used, and talk about how to implement it on AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tim Rains

Tim is the Regional Leader for Security and Compliance in Europe, Africa, and the Middle East for AWS. He helps federal, regional and local governments, in addition to non-profit organizations, education and health care customers with their security and compliance needs. Tim is a frequent speaker at AWS and industry events. Prior to joining AWS, Tim held a variety of executive-level cybersecurity strategy positions at global companies.

How to rotate Amazon DocumentDB and Amazon Redshift credentials in AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-rotate-amazon-documentdb-and-amazon-redshift-credentials-in-aws-secrets-manager/

Using temporary credentials is an AWS Identity and Access Management (IAM) best practice. Even Dilbert is learning to set up temporary credentials. Today, AWS Secrets Manager made it easier to follow this best practice by launching support for rotating credentials for Amazon DocumentDB and Amazon Redshift automatically. Now, with a few clicks, you can configure Secrets Manager to rotate these credentials automatically, turning a typical, long-term credential into a temporary credential.

In this post, I summarize the key features of AWS Secrets Manager. Then, I show you how to store a database credential for an Amazon DocumentDB cluster and how your applications can access this secret. Finally, I show you how to configure AWS Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications, turning long-term secrets into temporary secrets. Secrets Manager natively supports rotating secrets for all Amazon database services—Amazon RDS, Amazon DocumentDB, and Amazon Redshift—that require a user name and password. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets.
  • Manage access with fine-grained policies. You can store all your secrets centrally and control access to these securely using fine-grained AWS Identity and Access Management (IAM) policies and resource-based policies. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Audit and monitor secrets centrally. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure Amazon CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.
  • Compliance. You can use AWS Secrets Manager to manage secrets for workloads that are subject to U.S. Health Insurance Portability and Accountability Act (HIPAA), Payment Card Industry Data Security Standard (PCI-DSS), and ISO/IEC 27001, ISO/IEC 27017, ISO/IEC 27018, or ISO 9001.

Phase 1: Store a secret in Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a DocumentDB cluster. To demonstrate how to retrieve and use the secret, I use a Python application running on Amazon EC2 that requires this database credential to access the DocumentDB cluster. Finally, I show how to configure Secrets Manager to rotate this database credential automatically.

  1. In the Secrets Manager console, select Store a new secret.
     
    Figure 1: Select "Store a new secret"

    Figure 1: Select “Store a new secret”

  2. Next, select Credentials for DocumentDB database. For this example, I store the credentials for the database masteruser. I start by securing the masteruser because it’s the most powerful database credential and has full access over the database.
     
    Figure 2: Select "Credentials for DocumentDB database"

    Figure 2: Select “Credentials for DocumentDB database”

    Note: To follow along, you need the AWSSecretsManagerReadWriteAccess managed policy because this policy grants permissions to store secrets in Secrets Manager. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. By default, Secrets Manager creates a unique encryption key for each AWS region and AWS account where you use Secrets Manager. I chose to encrypt this secret with the default encryption key.
     
    Figure 3: Select the default or your CMK

  4. Next, I view the list of DocumentDB clusters in my account and select the database this credential accesses. For this example, I select the DB instance documentdb-instance, and then select Next.
     
    Figure 4: Select the instance you created

  5. In this step, specify values for Secret Name and Description. Based on where you will use this secret, give it a hierarchical name, such as Applications/MyApp/Documentdb-instance, and then select Next.
     
    Figure 5: Provide a name and description

  6. For the next step, I chose to keep the Disable automatic rotation default setting because in my example my application that uses the secret is running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. Select Next.
     
    Figure 6: Choose to either enable or disable automatic rotation

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See the AWS Secrets Manager getting started guide on rotation for details.

  7. Review the information on the next screen and, if everything looks correct, select Store. You’ve now successfully stored a secret in Secrets Manager.
  8. Next, select See sample code in Python.
     
    Figure 7: Select the "See sample code" button

    Figure 7: Select the “See sample code” button

  9. Finally, take note of the code samples provided. You will use this code to update your application to retrieve the secret using Secrets Manager APIs. (A scripted alternative to these console steps is sketched after this list.)
     
    Figure 8: Copy the code sample for use in your application
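
If you’d rather store the secret with the AWS SDK than through the console, a minimal boto3 sketch is shown below. Every value is a placeholder; the JSON keys follow the general structure that Secrets Manager’s DocumentDB rotation support expects (engine, host, username, password, and so on), so adjust them to your cluster.

```python
# A sketch only; every value below is a placeholder.
import json
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-west-2")

secretsmanager.create_secret(
    Name="Applications/MyApp/Documentdb-instance",
    Description="masteruser credential for documentdb-instance",
    SecretString=json.dumps({
        "engine": "mongo",
        "host": "documentdb-instance.cluster-example.us-west-2.docdb.amazonaws.com",  # placeholder
        "username": "masteruser",
        "password": "EXAMPLE-PASSWORD",               # placeholder
        "dbClusterIdentifier": "documentdb-instance",
        "port": 27017,
    }),
)
```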

Phase 2: Update an application to retrieve a secret from Secrets Manager

Now that you’ve stored the secret in Secrets Manager, you can update your application to retrieve the database credential from Secrets Manager instead of hard-coding this information in a configuration file or source code. For this example, I show how to configure a Python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from a configuration file. Below is the source code for my application.
    
        import DocumentDB
        import config
        
        def no_secrets_manager_sample():
        
            # Get the user name, password, and database connection information from a config file.
            database = config.database
            user_name = config.user_name
            password = config.password
        
            # Use the user name, password, and database connection information to connect to the database
            db = Database.connect(database.endpoint, user_name, password, database.db_name, database.port)
        

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client, then retrieves and decrypts the secret Applications/MyApp/Documentdb-instance. I’ve added comments to the code to make the code easier to understand.
    
        # Use this code snippet in your app.
        # If you need more information about configurations or implementing the sample code, visit the AWS docs:   
        # https://aws.amazon.com/developers/getting-started/python/
        
        import boto3
        import base64
        from botocore.exceptions import ClientError
        
        
        def get_secret():
        
            secret_name = "Applications/MyApp/Documentdb-instance"
            region_name = "us-west-2"
        
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
        
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
        
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
            except ClientError as e:
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                else:
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    
            # Your code goes here.                          
        

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read a secret from Secrets Manager. This policy also uses the resource element to limit my application to read only the Applications/MyApp/Documentdb-instance secret from Secrets Manager. You can visit the AWS Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Sid": "RetrieveDbCredentialFromSecretsManager",
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/Documentdb-instance"
            }
        }                   
        
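If you’d rather script this step, the same inline policy can be attached with the AWS SDK for Python. This is a minimal sketch, not part of the original walkthrough; the role name MyAppEc2Role is a hypothetical placeholder for whatever IAM role your EC2 instance uses.

import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": {
        "Sid": "RetrieveDbCredentialFromSecretsManager",
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/Documentdb-instance"
    }
}

# Attach the policy inline to the EC2 instance role (hypothetical role name).
iam.put_role_policy(
    RoleName="MyAppEc2Role",
    PolicyName="RetrieveDbCredentialFromSecretsManager",
    PolicyDocument=json.dumps(policy),
)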

Phase 3: Enable rotation for your secret

Rotating secrets regularly is a security best practice. Secrets Manager makes it easier to follow this security best practice by offering built-in integrations and supporting extensibility with Lambda. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions, and you should only grant this access to trusted individuals. To grant these permissions, you can use the AWS IAMFullAccess managed policy.

Now, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/Documentdb-instance automatically.

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in phase 1, Applications/MyApp/Documentdb-instance.
     
    Figure 9: Choose the secret from Phase 1

  2. Scroll to Rotation configuration, and then select Edit rotation.
     
    Figure 10: Select the Edit rotation configuration

  3. To enable rotation, select Enable automatic rotation, and then choose how frequently Secrets Manager rotates this secret. For this example, I set the rotation interval to 30 days. Then, choose Create a new Lambda function to perform rotation and give the function an easy-to-remember name. For this example, I choose the name RotationFunctionforDocumentDB.
     
    Figure 11: Choose to enable automatic rotation, select a rotation interval, create a new Lambda function, and give it a name

  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the master user database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use this secret, and then select Save.
     
    Figure 12: Select credentials for Secrets Manager to use

  5. The banner on the next screen confirms that I successfully configured rotation and the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 30 days.
     
    Figure 13: The banner at the top of the screen will show the status of the rotation
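
If you prefer to configure rotation from code instead of the console, the same settings can be applied through the Secrets Manager API. This is a minimal sketch that assumes the rotation Lambda function already exists; the account ID in the function ARN is a hypothetical placeholder.

import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-west-2")

# Enable rotation every 30 days using an existing rotation Lambda function.
# The function ARN below is a hypothetical placeholder.
secretsmanager.rotate_secret(
    SecretId="Applications/MyApp/Documentdb-instance",
    RotationLambdaARN="arn:aws:lambda:us-west-2:111122223333:function:RotationFunctionforDocumentDB",
    RotationRules={"AutomaticallyAfterDays": 30},
)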

Summary

I explained the key benefits of AWS Secrets Manager and showed how you can use it to securely store and automatically rotate the credentials for your Amazon DocumentDB clusters. You can follow similar steps to rotate credentials for Amazon Redshift.

Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and on-going maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, read the Secrets Manager documentation. If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

AWS Heroes: Putting AWS security services to work for you

Post Syndicated from Mark Nunnikhoven original https://aws.amazon.com/blogs/aws/aws-heroes-putting-aws-security-services-to-work-for-you/

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with.

Nothing could be further from the truth.

At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility

It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. As I move from infrastructure services toward more managed service types, more and more of my responsibilities shift to AWS.

My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model.

But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services

The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and helping teach these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility.

A few of these categories are already well understood.

  • Authentication services help me identify my users.
  • Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions.
  • Protected stores allow me to encrypt sensitive data and regulate access to it.

Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement

Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections.

By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me.

AWS CloudTrail logs nearly every API action in an account but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.

Amazon EC2 instances have the biggest potential for security challenges as they are running a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications that include recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven or web attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to help highlight areas of concern for taking action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services along with data generated by key APN Partner solutions.

The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized in the Amazon Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses, but as with anything in the AWS Cloud, automation is the key to success.

Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security. This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top
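
To make that concrete, here’s a minimal sketch of an automated response, assuming findings are forwarded to Lambda through a CloudWatch Events rule for “Security Hub Findings - Imported” and that high-severity findings should be published to an SNS topic (the topic ARN environment variable is a hypothetical placeholder).

import json
import os

import boto3

sns = boto3.client("sns")

# Hypothetical SNS topic for security notifications, supplied via an environment variable.
ALERT_TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]


def handler(event, context):
    """Invoked by a CloudWatch Events rule for 'Security Hub Findings - Imported'."""
    findings = event.get("detail", {}).get("findings", [])
    for finding in findings:
        severity = finding.get("Severity", {}).get("Normalized", 0)
        if severity >= 70:  # arbitrary threshold chosen for this sketch
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject="High-severity Security Hub finding",
                Message=json.dumps({
                    "title": finding.get("Title"),
                    "severity": severity,
                    "resources": [r.get("Id") for r in finding.get("Resources", [])],
                }),
            )
    return {"processed": len(findings)}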

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.

New Whitepaper: Active Directory Domain Services on AWS

Post Syndicated from Vinod Madabushi original https://aws.amazon.com/blogs/architecture/new-whitepaper-active-directory-domain-services-on-aws/

The cloud is now at the center of most enterprise IT strategies, and a well-planned move to the cloud can result in immediate business payoff. To achieve that success, it’s important to run Microsoft Active Directory (AD), the foundation of many large enterprise Windows and .NET applications, in a secure, scalable, and highly available manner within the AWS Cloud.

AWS offers flexible options for running AD, so as a customer it’s essential to select an architecture well suited to your applications. AWS offers a fully managed option, AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD), which enables your directory-aware workloads to use a managed Active Directory in AWS. You can also run Active Directory on Amazon Elastic Compute Cloud (Amazon EC2) and manage both the EC2 instances and Active Directory yourself, which provides the flexibility needed to extend an existing Active Directory domain to the AWS infrastructure.

In this regard, we are very excited to release the Active Directory Domain Services on AWS whitepaper. It describes best practices for running Active Directory on AWS, including different architectural approaches for running AWS Managed Microsoft AD and Active Directory on EC2 instances. In addition, it discusses the design considerations, security, network connectivity, and multi-region deployment of Active Directory for both scenarios.

Read the whitepaper: Active Directory on AWS.

About the author

Vinod Madabushi is an Enterprise Solutions Architect and subject matter expert in Microsoft technologies, including Active Directory. He works with customers on building highly available, scalable, and resilient applications on AWS Cloud. He’s passionate about solving technology challenges and helping customers with their cloud journey.

 

Registration for AWS re:Inforce 2019 now open!

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/registration-for-aws-reinforce-2019-now-open/

AWS re:Inforce

In late November, I announced AWS re:Inforce, a standalone conference where we will deep dive into the latest approaches to security, identity, and risk management utilizing AWS services, features, and tools. Now, after months of planning, the time has arrived to open registration! Ticket sales begin on March 12th at 10:00am PDT, and you can access the ticket sales website here. We do expect to sell out, so please consider registering soon to also secure a hotel (as well as take advantage of our travel discounts). In celebration, we are offering a limited $300 discount, while supplies last, on the full conference ticket price of $1,099. Register with code RFSAL19 to take advantage of this limited offer.

The benefits of attending AWS re:Inforce 2019 are considerable. The conference will be built around gaining hands-on tactical knowledge of cloud security, identity, and compliance. Over 100 security-specific AWS Partners will be featured in our learning hub to help you tackle all manner of security concerns. Additionally, we’ll have bootcamps where you can meet with likeminded professionals to learn skills that are applicable to your individual job scope. More details about specific session offerings will be announced in the next few weeks, but you can already find details on the track types and session levels here.

Taking a step back for a moment, creating a conference focused on cloud security was important to AWS because, as we’ve often stated, security is job zero for us. While re:Invent is a great opportunity to check in yearly with customers on our new features and services, we felt a conference tailored specifically to cloud security & identity professionals offered a great opportunity for everyone to strengthen their own security program from the ground up.

We’ll have four tracks, geared for those just starting out all the way up to next generation aspirational security. We want to be at the forefront of an industry shift from reactive to proactive security, and our inaugural re:Inforce gathering is a great chance for us to hear from customers about their real-world concerns, from encryption to resiliency.

We also think building an ongoing community of security stakeholders is critical—we know that excellent guidance for customers doesn’t always come directly from AWS. It can also spring forth from peer conversations and networking opportunities. The strength of the AWS cloud is customers. Our customers see use cases every day that both inform our security roadmap and make our cloud stronger for everyone. Simply put, there is no AWS security story without the tremendous diligence of customers and partners.

Creating a space where all parties can come together to exchange knowledge and ideas, whether in a formal session or at a casual dinner, was at the forefront of our thinking when we first considered launching re:Inforce. Seeing the threads and details come together on this re:Inforce has been personally exciting and professionally validating; I can’t wait to see you all there in late June.

Purchase tickets for AWS re:Inforce via the ticket sales website here.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Follow Steve on Twitter.

Author

Steve Schmidt

Steve is Vice President and Chief Information Security Officer for AWS. His duties include leading product design, management, and engineering development efforts focused on bringing the competitive, economic, and security benefits of cloud computing to business and government customers. Prior to AWS, he had an extensive career at the Federal Bureau of Investigation, where he served as a senior executive and section chief. He currently holds five patents in the field of cloud security architecture.

How to visualize Amazon GuardDuty findings: serverless edition

Post Syndicated from Ben Romano original https://aws.amazon.com/blogs/security/how-to-visualize-amazon-guardduty-findings-serverless-edition/

Note: This blog provides an alternate solution to Visualizing Amazon GuardDuty Findings, in which the authors describe how to build an Amazon Elasticsearch Service-powered Kibana dashboard to ingest and visualize Amazon GuardDuty findings.

Amazon GuardDuty is a managed threat detection service powered by machine learning that can monitor your AWS environment with just a few clicks. GuardDuty can identify threats such as unusual API calls or potentially unauthorized users attempting to access your servers. Many customers also like to visualize their findings in order to generate additional meaningful insights. For example, you might track resources affected by security threats to see how they evolve over time.

In this post, we provide a solution to ingest, process, and visualize your GuardDuty finding logs in a completely serverless fashion. Serverless applications automatically run and scale in response to events you define, rather than requiring you to provision, scale, and manage servers. Our solution covers how to build a pipeline that ingests findings into Amazon Simple Storage Service (Amazon S3), transforms their nested JSON structure into tabular form using Amazon Athena and AWS Glue, and creates visualizations using Amazon QuickSight. We aim to provide both an easy-to-implement and cost-effective solution for consuming and analyzing your GuardDuty findings, and to more generally showcase a repeatable example for processing and visualizing many types of complex JSON logs.

Many customers already maintain centralized logging solutions using Amazon Elasticsearch Service (Amazon ES). If you want to incorporate GuardDuty findings with an existing solution, we recommend referencing this blog post to get started. If you don’t have an existing solution or previous experience with Amazon ES, if you prefer to use serverless technologies, or if you’re familiar with more traditional business intelligence tools, read on!

Before you get started

To follow along with this post, you’ll need to enable GuardDuty in order to start generating findings. See Setting Up Amazon GuardDuty for details if you haven’t already done so. Once enabled, GuardDuty will automatically generate findings as events occur. If you have public-facing compute resources in the same region in which you’ve enabled GuardDuty, you may soon find that they are being scanned quite often. All the more reason to continue reading!

You’ll also need Amazon QuickSight enabled in your account for the visualization sections of this post. You can find instructions in Setting Up Amazon QuickSight.

Architecture from end to end

 

Figure 1: Complete architecture from findings to visualization

Figure 1 highlights the solution architecture, from finding generation all the way through final visualization. The steps are as follows:

  1. Deliver GuardDuty findings to Amazon CloudWatch Events
  2. Push GuardDuty Events to S3 using Amazon Kinesis Data Firehose
  3. Use AWS Lambda to reorganize S3 folder structure
  4. Catalog your GuardDuty findings using AWS Glue
  5. Configure Views with Amazon Athena
  6. Build a GuardDuty findings dashboard in Amazon QuickSight

Below, we’ve included an AWS CloudFormation template to launch a complete ingest pipeline (Steps 1-4) so that we can focus this post on the steps dedicated to building the actual visualizations (Steps 5-6). We cover steps 1-4 briefly in the next section to provide context, and we provide links to the pertinent pages in the documentation for those of you interested in building your own pipeline.
 
Select this image to open a link that starts building the CloudFormation stack

Ingest (Steps 1-4): Get Amazon GuardDuty findings into Amazon S3 and AWS Glue Data Catalog

 

Figure 2: In this section, we’ll cover the services highlighted in blue

Step 1: Deliver GuardDuty findings to Amazon CloudWatch Events

GuardDuty integrates with Amazon CloudWatch Events and can deliver findings to it. To set this up manually, follow the instructions in Creating a CloudWatch Events Rule and Target for GuardDuty.

Step 2: Push GuardDuty events to Amazon S3 using Kinesis Data Firehose

Amazon CloudWatch Events can write to a Kinesis Data Firehose delivery stream to store your GuardDuty events in S3, where you can use AWS Lambda, AWS Glue, and Amazon Athena to build the queries you’ll need to visualize the data. You can create your own delivery stream by following the instructions in Creating a Kinesis Data Firehose Delivery Stream and then adding it as a target for CloudWatch Events.
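
If you’d rather wire up steps 1 and 2 with the SDK instead of the console or the provided template, the sketch below shows the general shape; the delivery stream ARN and the IAM role that lets CloudWatch Events write to Firehose are hypothetical placeholders you would replace with your own.

import json

import boto3

events = boto3.client("events")

# Match all GuardDuty findings delivered to CloudWatch Events.
events.put_rule(
    Name="guardduty-findings-to-firehose",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Send matched events to an existing Kinesis Data Firehose delivery stream.
# Both ARNs below are hypothetical placeholders.
events.put_targets(
    Rule="guardduty-findings-to-firehose",
    Targets=[{
        "Id": "firehose-delivery-stream",
        "Arn": "arn:aws:firehose:us-east-1:111122223333:deliverystream/guardduty-findings",
        "RoleArn": "arn:aws:iam::111122223333:role/cwe-to-firehose-role",
    }],
)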

Step 3: Use AWS Lambda to reorganize Amazon S3 folder structure

Kinesis Data Firehose will automatically create a datetime-based file hierarchy to organize the findings as they come in. Due to the variability of the GuardDuty finding types, we recommend reorganizing the file hierarchy with a folder for each finding type, with separate datetime subfolders for each. This will make it easier to target findings that you want to focus on in your visualization. The provided AWS CloudFormation template utilizes an AWS Lambda function to rewrite the files in a new hierarchy as new files are written to S3. You can use the code provided in it along with Using AWS Lambda with S3 to trigger your own function that reorganizes the data. Once the Lambda function has run, the S3 bucket structure should look similar to the structure we show in figure 3.
 

Figure 3: Sample S3 bucket structure
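
To illustrate the kind of function the template runs (this is a simplified sketch, not the template’s actual code), the handler below assumes each delivered S3 object contains one JSON event per line and rewrites the records under a per-finding-type prefix.

import json
import urllib.parse

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    """Copy each incoming Firehose object into a per-finding-type prefix."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Group the individual events by GuardDuty finding type (detail.type).
        by_type = {}
        for line in filter(None, body.splitlines()):
            finding = json.loads(line)
            finding_type = finding.get("detail", {}).get("type", "unknown")
            by_type.setdefault(finding_type, []).append(line)

        # Write one object per finding type, keeping the original datetime path as a suffix.
        for finding_type, lines in by_type.items():
            safe_type = finding_type.replace(":", "_").replace("/", "_").lower()
            new_key = "findings/{}/{}".format(safe_type, key)
            s3.put_object(Bucket=bucket, Key=new_key, Body="\n".join(lines).encode("utf-8"))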

Step 4: Catalog the GuardDuty findings using AWS Glue

With the reorganized findings stored in S3, use an AWS Glue crawler to scan and catalog each finding type. The CloudFormation template we provided schedules the crawler to run once a day. You can also run it on demand as needed. To build your own crawler, refer to Cataloging Tables with a Crawler. Assuming GuardDuty has generated findings in your account, you can navigate to the GuardDuty findings database in the AWS Glue Data Catalog. It should look something like figure 4:
 

Figure 4: List of finding type tables in the AWS Glue Catalog

Note: Because AWS Glue crawlers will attempt to combine similar data into one table, you might need to generate sample findings to ensure enough variability for each finding type to have its own table. If you only intend to build your dashboard from a small subset of finding types, you can opt to just edit the crawler to have multiple data sources and specify the folder path for each desired finding type.

Explore the table structure

Before moving on to the next step, take some time to explore the schema structure of the tables. Selecting one of the tables will bring you to a page that looks like what’s shown in figure 5.
 

Figure 5: Schema information for a single finding table

You should see that most of the columns contain basic information about each finding, but there’s a column named detail that is of type struct. Select it to expand, as shown in figure 6.
 

Figure 6: The “detail” column expanded

Ah, this is where the interesting information is tucked away! The tables for each finding may differ slightly, but in all cases the detail column will hold the bulk of the information you’ll want to visualize. See GuardDuty Active Finding Types for information on what you should expect to find in the logs for each finding type. In the next step, we’ll focus on unpacking detail to prepare it for visualization!

Process (Step 5): Unpack nested JSON and configure views with Amazon Athena

 

Figure 7: In this section, we’ll cover the services highlighted in blue

Note: This step picks up where the CloudFormation template finishes

Explore the table structure (again) in the Amazon Athena console

Begin by navigating to Athena from the AWS Management Console. Once there, you should see a drop-down menu with a list of databases. These are the same databases that are available in the AWS Glue Data Catalog. Choose the database with your GuardDuty findings and expand a table.
 

Figure 8: Expanded table in the Athena console

This should look very familiar to the table information you explored in step 4, including the detail struct!

You’ll need a method to unpack the struct in order to effectively visualize the data. There are many methods and tools to approach this problem. One that we recommend (and will show) is to use SQL queries within Athena to construct tabular views. This approach will allow you to push the bulk of the processing work to Athena. It will also allow you to simplify building visualizations when using Amazon QuickSight by providing a more conventional tabular format.

Extract details for use in visualization using SQL

The following examples contain SQL statements that will provide everything necessary to extract the necessary fields from the detail struct of the Recon:EC2/PortProbeUnprotectedPort finding to build the Amazon QuickSight dashboard we showcase in the next section. The examples also cover most of the operations you’ll need to work with the elements found in GuardDuty findings (such as deeply nested data with lists), and they serve as a good starting point for constructing your own custom queries. In general, you’ll want to traverse the nested layers (i.e. root.detail.service.count) and create new records for each item in an embedded list that you want to target using the UNNEST function. See this blog for even more examples of constructing queries on complex JSON data using Amazon Athena.

Simply copy the SQL statements that you want into the Athena query field to build the port_probe_geo and affected_instances views.

Note: If your account has yet to generate Recon:EC2/PortProbeUnprotectedPort findings, you can generate sample findings to follow along.


CREATE OR REPLACE VIEW "port_probe_geo" AS

WITH getportdetails AS (
    SELECT id, portdetails
    FROM by_finding_type
    CROSS JOIN UNNEST(detail.service.action.portProbeAction.portProbeDetails) WITH ORDINALITY AS p (portdetails, portdetailsindex)
)

SELECT 
    root.id AS id,
    root.region AS region,
    root.time AS time,
    root.detail.type AS type,
    root.detail.service.count AS count, 
    portdetails.localportdetails.port AS localport, 
    portdetails.localportdetails.portname AS localportname, 
    portdetails.remoteipdetails.geolocation.lon AS longitude, 
    portdetails.remoteipdetails.geolocation.lat AS latitude, 
    portdetails.remoteipdetails.country.countryname AS country, 
    portdetails.remoteipdetails.city.cityname AS city 

FROM 
    by_finding_type  as root, getPortDetails
    
WHERE 
    root.id = getportdetails.id

CREATE OR REPLACE VIEW "affected_instances" AS

SELECT 
    max(root.detail.service.count) AS count,
    date_parse(root.time,'%Y-%m-%dT%H:%i:%sZ') as time,
    root.detail.resource.instancedetails.instanceid

FROM 
    recon_ec2_portprobeunprotectedport  AS root

GROUP BY  
    root.detail.resource.instancedetails.instanceid, 
    time

Visualize (Step 6): Build a GuardDuty findings dashboard in Amazon QuickSight

 

Figure 9: In this section we will cover the services highlighted in blue

Now that you’ve created tabular views using Athena, you can jump into Amazon QuickSight from the AWS Management Console and begin visualizing! If you haven’t already done so, enable Amazon QuickSight in your account by following the instructions for Setting Up Amazon QuickSight.

For this example, we’ll leverage the port_probe_geo view to build a geographic visualization and see the locations from which nefarious actors are launching port probes.

Creating an analysis

In the upper left-hand corner of the Amazon QuickSight console select New analysis and then New data set.
 

Figure 10: Create a new analysis

To utilize the views you built in the previous step, select Athena as the data source. Give your data source a name (in our example, we use “port probe geo”), and select the database that contains the views you created in the previous section. Then select Visualize.
 

Figure 11: Available data sources in Amazon QuickSight. Be sure to choose Athena!

 

Figure 12: Select the “port_probe_geo” view you created in step 5

Viz time!

From the Visual types menu in the bottom left corner, select the globe icon to create a map. Then select the latitude and longitude geospatial coordinates. Choose count (with a max aggregation) for size. Finally, select localportname to break the data down by color.
 

Figure 13: A visual containing a map of port probe scans in Amazon QuickSight

Voila! A detailed map of your environment’s attackers!

Build out a dashboard

Once you like how everything looks, you can move on to adding more visuals to create a full monitoring dashboard.

To add another visual to the analysis, select Add and then Add visual.
 

Figure 14: Add another visual using the ‘Add’ option from the Amazon QuickSight menu bar

If the new visual will use the same dataset, then you can immediately start selecting fields to build it. If you want to create a visual from a different data set (our example dashboard below adds the affected_instances view), follow the Creating Data Sets guide to add a new data set. Then return to the current analysis and associate the data set with the analysis by selecting the pencil icon shown below and selecting Add data set.
 

Figure 15: Adding a new data set to your Amazon QuickSight analysis

Repeat this process until you’ve built out everything you need in your monitoring dashboard. Once it’s completed, you can publish the dashboard by selecting Share and then Publish dashboard.
 

Figure 16: Publish your dashboard using the “Share” option of the Amazon QuickSight menu

Here’s an example of a dashboard we created using the port_probe_geo and affected_instances views:
 

Figure 17: An example dashboard created using the “port_probe_geo” and “affected_instances” views

What does something like this cost?

To get an idea of the scale of the cost, we’ve provided a small pricing example (accurate as of the writing of this blog) that assumes 10,000 GuardDuty findings per month with an average payload size of 5KB.

Service | Pricing Structure | Amount Consumed | Total Cost
Amazon CloudWatch Events | $1 per million events | 10,000 events | $0.01
Amazon Kinesis Data Firehose | $0.029 per GB ingested | 0.05 GB ingested | $0.00145
Amazon S3 | $0.029 per GB stored per month | 0.1 GB stored | $0.00230
AWS Lambda | First million invocations free | ~200 invocations | $0
Amazon Athena | $5 per TB scanned | 0.003 TB scanned (assumes 2 full data scans per day to refresh views) | $0.015
AWS Glue | $0.44 per DPU-hour (2 DPU minimum, 10 minute minimum) = $0.15 per crawler run | 30 crawler runs | $4.50
Total processing cost | | | $4.53

Oh, the joys of a consumption-based model: Less than five dollars per month for all of that processing!

From here, all that remains are your visualization costs using Amazon QuickSight. This pricing is highly dependent upon your number of users and their respective usage patterns. See the Amazon QuickSight pricing page for more specific details.

Summary

In this post, we demonstrated how you can ingest your GuardDuty findings into S3, process them with AWS Glue and Amazon Athena, and visualize with Amazon QuickSight. All serverless! Each portion of what we showed can be used in tandem or on its own for this or many other data sets. Go launch the template and get started monitoring your AWS environment!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Ben Romano

Ben is a Solutions Architect in AWS supporting customers in their journey to the cloud with a focus on big data solutions. Ben loves to delight customers by diving deep on AWS technologies and helping them achieve their business and technology objectives.

Author

Jimmy Boyle

Jimmy is a Solutions Architect in AWS with a background in software development. He enjoys working with all things serverless because he doesn’t have to maintain infrastructure. Jimmy enjoys delighting customers to drive their business forward and design solutions that will scale as their business grows.

2018 C5 attestation is now available

Post Syndicated from Gerald Boyne original https://aws.amazon.com/blogs/security/2018-c5-attestation-is-now-available/

AWS has completed its 2018 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Germany’s national cybersecurity authority—Bundesamt für Sicherheit in der Informationstechnik (BSI)—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in German member states can use the work performed under this BSI compliance catalog to comply with stringent local requirements.

AWS has added the Irish region DUB and 29 services to this year’s scope:

  • AWS AppSync
  • AWS Batch
  • AWS Certificate Manager
  • AWS CodeBuild
  • AWS CodeCommit
  • AWS Config
  • AWS Firewall Manager
  • AWS IoT Device Management
  • AWS Managed Services
  • AWS OpsWorks
  • AWS Service Catalog
  • AWS Snowball
  • AWS Snowball Edge
  • AWS Snowmobile
  • AWS WAF
  • AWS X-Ray
  • Amazon Kinesis Video Streams
  • Amazon Athena
  • Amazon Cloud Directory
  • Amazon Inspector
  • Amazon MQ
  • Amazon Polly
  • Amazon QuickSight
  • Amazon Rekognition
  • Amazon SageMaker
  • Amazon Simple Email Service
  • Amazon SimpleDB
  • Amazon WorkDocs
  • Amazon WorkMail

AWS now has 71 services in scope of C5. In addition, AWS has included the C5 aspects of “Confidentiality” and “Availability” as advanced C5 testing. The Confidentiality testing further supports compliance with GDPR by assessing the Technical and Organizational Measures (TOMs), and the Availability testing gives customers higher independent assurance of the availability of AWS services.

For more information, German readers can take a look at these resources:

The English version of the C5 report is available through AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Guidelines for protecting your AWS account while using programmatic access

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/guidelines-for-protecting-your-aws-account-while-using-programmatic-access/

One of the most important things you can do as a customer to ensure the security of your resources is to maintain careful control over who has access to them. This is especially true if any of your AWS users have programmatic access. Programmatic access allows you to invoke actions on your AWS resources either through an application that you write or through a third-party tool. You use an access key ID and a secret access key to sign your requests for authorization to AWS. Programmatic access can be quite powerful, so implementing best practices to protect access key IDs and secret access keys is important in order to prevent accidental or malicious account activity. In this post, I’ll highlight some general guidelines to help you protect your account, as well as some of the options you have when you need to provide programmatic access to your AWS resources.

Protect your root account

Your AWS root account—the account that’s created when you initially sign up with AWS—has unrestricted access to all your AWS resources. There’s no way to limit permissions on a root account. For this reason, AWS always recommends that you do not generate access keys for your root account. This would give your users the power to do things like close the entire account—an ability that they probably don’t need. Instead, you should create individual AWS Identity and Access Management (IAM) users, then grant each user permissions based on the principle of least privilege: Grant them only the permissions required to perform their assigned tasks. To more easily manage the permissions of multiple IAM users, you should assign users with the same permissions to an IAM group.

Your root account should always be protected by Multi-Factor Authentication (MFA). This additional layer of security helps protect against unauthorized logins to your account by requiring two factors: something you know (a password) and something you have (for example, an MFA device). AWS supports virtual and hardware MFA devices, U2F security keys, and SMS text message-based MFA.

Decide how to grant access to your AWS account

To allow users access to the AWS Management Console and AWS Command Line Interface (AWS CLI), you have two options. The first one is to create identities and allow users to log in using a username and password managed by the IAM service. The second approach is to use federation to allow your users to use their existing corporate credentials to log into the AWS console and CLI.

Each approach has its use cases. Federation is generally better for enterprises that have an existing central directory or plan to need more than the current limit of 5,000 IAM users.

Note: Access to all AWS accounts is managed by AWS IAM. Regardless of the approach you choose, make sure to familiarize yourself with and follow IAM best practices.

Decide when to use access keys

Applications running outside of an AWS environment will need access keys for programmatic access to AWS resources. For example, monitoring tools running on-premises and third-party automation tools will need access keys.

However, if the resources that need programmatic access are running inside AWS, the best practice is to use IAM roles instead. An IAM role is a defined set of permissions—it’s not associated with a specific user or group. Instead, any trusted entity can assume the role to perform a specific business task.

By utilizing roles, you can grant a resource access without hardcoding an access key ID and secret access key into the configuration file. For example, you can grant an Amazon Elastic Compute Cloud (EC2) instance access to an Amazon Simple Storage Service (Amazon S3) bucket by attaching a role with a policy that defines this access to the EC2 instance. This approach improves your security, as IAM will dynamically manage the credentials for you with temporary credentials that are rotated automatically.
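
As a rough illustration of what this looks like from application code, the following sketch runs on an EC2 instance with a role attached and never references an access key; boto3 picks up the temporary role credentials from the instance profile automatically (the bucket name is a hypothetical placeholder).

import boto3

# No access key ID or secret access key appears in the code or configuration:
# boto3 automatically uses the temporary credentials provided by the EC2
# instance profile associated with the attached IAM role.
s3 = boto3.client("s3")

# Hypothetical bucket name used for illustration only.
s3.put_object(
    Bucket="examplebucket",
    Key="reports/latest.txt",
    Body=b"uploaded using role-based temporary credentials",
)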

Grant least privileges to service accounts

If you decided to create service accounts (that is, accounts used for programmatic access by applications running outside of the AWS environment) and generate access keys for them, you should create a dedicated service account for each use case. This will allow you to restrict the associated policy to only the permissions needed for the particular use case, limiting the blast radius if the credentials are compromised. For example, if a monitoring tool and a release management tool both require access to your AWS environment, create two separate service accounts with two separate policies that define the minimum set of permissions for each tool.

In addition to this, it’s also a best practice to add conditions to the policy that further restrict access—such as restricting access to only the source IP address range of your clients.

Below is an example policy that represents least privilege. It grants the needed permission (PutObject) on a specific resource (an S3 bucket named “examplebucket”) while adding further conditions (the client must come from the IP range 203.0.113.0/24).


{
    "Version": "2012-10-17",
    "Id": "S3PolicyRestrictPut",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"}
            }
        }
    ]
}

Use temporary credentials from AWS STS

AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for use in your code, CLI, or third-party tools. It allows you to assume an IAM role with which you have a trusted relationship and then generate temporary, time-limited credentials based on the permissions associated with the role. These credentials can only be used during the validity period, which reduces your risk.

There are two ways to generate temporary credentials. You can generate them from the CLI, which is helpful when you need credentials for testing from your local machine or from an on-premises or third-party tool. You can also generate them from code using one of the AWS SDKs. This approach is helpful if you need credentials in your application, or if you have multiple user types that require different permission levels.

Create temporary credentials using the CLI

If you have access to the AWS CLI, you can use it to generate temporary credentials with limited permissions to use in your local testing or with third-party tools. To be able to use this approach, here’s what you need:

  • Access to the AWS CLI through your primary user account or through federation. To learn how to configure CLI access using your IAM credentials, follow this link. If you use federation, you still can use the CLI by following the instructions in this blog post.
  • An IAM role that represents the permissions needed for your test client. In the example below, I use “s3-read”. This role should have a policy attached that grants the least privileges needed for the use case.
  • A trusted relationship between the service role (“s3-read”) and your user account, to allow you to assume the service role and generate temporary credentials. Visit this link for the steps to create this trust relationship.

The example command below will generate a temporary access key ID and secret access key that are valid for 15 minutes, based on permissions associated with the role named “s3-read”. You can replace the values below with your own account number, service role, and duration, then use the secret access key and access key ID in your local clients.


aws sts assume-role --role-arn <arn:aws:iam::AWS-ACCOUNT-NUMBER:role/s3-read> --role-session-name <s3-access> --duration-seconds <900>

Here are my results from running the command:


{
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAIEGLQIIQUSJ2I5XRM:s3-access",
        "Arn": "arn:aws:sts::AWS-ACCOUNT-NUMBER:assumed-role/s3-read/s3-access"
    },
    "Credentials": {
        "SecretAccessKey": "wZJph6PX3sn0ZU4g6yfXdkyXp5m+nwkEtdUHwC3w",
        "SessionToken": "FQoGZXIvYXdzENr//////////<<REST-OF-TOKEN>>",
        "Expiration": "2018-11-02T16:46:23Z",
        "AccessKeyId": "ASIAXQZXUENECYQBAAQG"
    }
}

Create temporary credentials from your code

If you have an application that already uses the AWS SDK, you can use AWS STS to generate temporary credentials right from the code instead of hard-coding credentials into your configurations. This approach is recommended if you have client-side code that requires credentials, or if you have multiple types of users (for example, admins, power-users, and regular users) since it allows you to avoid hardcoding multiple sets of credentials for each user type.

For more information about using temporary credentials from the AWS SDK, visit this link.
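
As a minimal sketch of that approach, assuming your default credentials are allowed to assume the role (the role ARN, session name, and bucket name are placeholders):

import boto3

sts = boto3.client("sts")

# Assume a role with limited permissions; the ARN is a hypothetical placeholder.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/s3-read",
    RoleSessionName="s3-access",
    DurationSeconds=900,
)
creds = response["Credentials"]

# Build a client that uses only the temporary, time-limited credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="examplebucket", MaxKeys=5))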

Utilize Access Advisor

The IAM console provides information about when an AWS service was last accessed by different principals. This information is called service last accessed data.

Using this tool, you can view when an IAM user, group, role, or policy last attempted to access services to which they have permissions. Based on this information, you can decide if certain permissions need to be revoked or restricted further.

Make this tool part of your periodic security check. Use it to evaluate the permissions of all your IAM entities and to revoke unused permissions until they’re needed. You can also automate the process of periodic permissions evaluation using Access Advisor APIs. If you want to learn how, this blog post is a good starting point.
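
If you want to script that periodic review, the service last accessed APIs follow an asynchronous job pattern. The sketch below is one way to list services an entity has never used; the role ARN is a hypothetical placeholder.

import time

import boto3

iam = boto3.client("iam")

# Start an asynchronous job for the entity you want to review
# (the ARN below is a hypothetical placeholder).
job_id = iam.generate_service_last_accessed_details(
    Arn="arn:aws:iam::111122223333:role/s3-read"
)["JobId"]

# Poll until the job completes, then list services that were never accessed.
while True:
    details = iam.get_service_last_accessed_details(JobId=job_id)
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for service in details["ServicesLastAccessed"]:
    if service.get("TotalAuthenticatedEntities", 0) == 0:
        print("Never accessed:", service["ServiceName"])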

Other tools for credentials management

While least privilege access and temporary credentials are important, it’s equally important that your users are managing their credentials properly—from rotation to storage. Below is a set of services and features that can help to securely store, retrieve, and rotate credentials.

AWS Systems Manager Parameter Store

AWS Systems Manager offers a capability called Parameter Store that provides secure, centralized storage for configuration parameters and secrets across your AWS account. You can store plain text or encrypted data like configuration parameters, credentials, and license keys. Once stored, you can configure granular access to specify who can obtain these parameters in your application, adding another layer of security to protect your data.

Parameter Store is a good choice for use cases in which you need hierarchical storage for configuration data management across your account. For example, you can store database access credentials (username and password) in Parameter Store, encrypt them with an encryption key managed by AWS Key Management Service, and grant EC2 instances running your application permissions to read and decrypt those credentials.

For more information on using AWS Systems Manager Parameter Store, visit this link.
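
As a rough sketch of that read path from application code (the parameter names are hypothetical placeholders, created ahead of time as SecureString parameters):

import boto3

ssm = boto3.client("ssm")

# Hypothetical SecureString parameters, encrypted with a KMS key.
username = ssm.get_parameter(
    Name="/myapp/prod/db/username", WithDecryption=True
)["Parameter"]["Value"]
password = ssm.get_parameter(
    Name="/myapp/prod/db/password", WithDecryption=True
)["Parameter"]["Value"]

# Use the decrypted values to connect to the database instead of storing
# them in a configuration file.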

AWS Secrets Manager

AWS Secrets Manager is a service that allows you to centrally manage the lifecycle of secrets used in your organization, including rotation, audits, and access control. By enabling you to rotate secrets automatically, Secrets Manager can help you meet your security and compliance requirements. Secrets Manager also offers built-in integration for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS and can be extended to other services.

For more information about using AWS Secrets Manager to store and retrieve secrets, visit this link.

Amazon Cognito

Amazon Cognito lets you add user registration, sign-in, and access management features to your web and mobile applications.

Cognito can be used as an Identity Provider (IdP), where it stores and maintains users and credentials securely for your applications, or it can be integrated with OpenID Connect, SAML, and other popular web identity providers like Amazon.com.

Using Amazon Cognito, you can generate temporary access credentials for your clients to access AWS services, eliminating the need to store long-term credentials in client applications.

To learn more about using Amazon Cognito as an IdP, visit our developer guide to Amazon Cognito User Pools. If you’re interested in information about using Amazon Cognito with a third party IdP, review our guide to Amazon Cognito Identity Pools (Federated Identities).
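
As a rough sketch of that flow for an unauthenticated (guest) identity, assuming an identity pool configured to allow it (the identity pool ID is a hypothetical placeholder):

import boto3

cognito = boto3.client("cognito-identity", region_name="us-west-2")

# Hypothetical identity pool ID; the pool must allow unauthenticated identities
# for this simplified guest flow to work.
identity_id = cognito.get_id(
    IdentityPoolId="us-west-2:11111111-2222-3333-4444-555555555555"
)["IdentityId"]

creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

# Scoped-down, temporary credentials for the client application.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)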

AWS Trusted Advisor

AWS Trusted Advisor is a service that provides a real-time review of your AWS account and offers guidance on how to optimize your resources to reduce cost, increase performance, expand reliability, and improve security.

The “Security” section of AWS Trusted Advisor should be reviewed on a regular basis to evaluate the health of your AWS account. Currently, there are multiple security-specific checks, from IAM access keys that haven’t been rotated to insecure security groups. Trusted Advisor is a tool to help you more easily perform a daily or weekly review of your AWS account.

git-secrets

git-secrets, available from the AWS Labs GitHub account, helps you avoid committing passwords and other sensitive credentials to a git repository. It scans commits, commit messages, and --no-ff merges to prevent your users from inadvertently adding secrets to your repositories.

Conclusion

In this blog post, I’ve introduced some options to replace long-term credentials in your applications with temporary access credentials that can be generated using various tools and services on the AWS platform. Using temporary credentials can reduce the risk of falling victim to a compromised environment, further protecting your business.

I also discussed the concept of least privilege and provided some helpful services and procedures to maintain and audit the permissions given to various identities in your environment.

If you have questions or feedback about this blog post, submit comments in the Comments section below, or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our world-wide public sector Solutions Architects, helping higher education customers build innovative, secured, and highly available solutions using various AWS services.

Author

Joe Chapman

Joe is a Solutions Architect with Amazon Web Services. He primarily serves AWS EdTech customers, providing architectural guidance and best practice recommendations for new and existing workloads. Outside of work, he enjoys spending time with his wife and dog, and finding new adventures while traveling the world.

AWS achieves HDS certification

Post Syndicated from Stephan Hadinger original https://aws.amazon.com/blogs/security/aws-achieves-hds-certification/

At AWS, the security, privacy, and protection of customer data always come first, which is why I am pleased to share the news that AWS has achieved “Hébergeur de Données de Santé” (HDS) certification. With HDS certification, customers and partners who host French Personal Health Information (PHI) are now able to use AWS services to store and process personal health data. The HDS certificate for AWS can be found in AWS Artifact.

Introduced by the French governmental agency for health, “Agence Française de la Santé Numérique” (ASIP Santé), HDS certification aims to strengthen the security and protection of personal health data. Achieving this certification demonstrates that AWS provides a framework for technical and governance measures to secure and protect personal health data, governed by French law. The HDS certification validates that AWS ensures data confidentiality, integrity, and availability to its customers and partners. AWS worked with Bureau Veritas, an independent third-party auditor, to achieve the certification.

By adopting the AWS Cloud, hospitals, health insurance companies, researchers, and other organizations processing personal health data will be able to improve agility and collaboration, increase experimentation, and foster innovation in order to provide the best possible patient care. The HDS certification currently covers two AWS Regions in Europe (Ireland and Frankfurt), and coverage of the AWS Region in Paris is planned for the second quarter of 2019.

HDS certification adds to the list of internationally recognized certifications and attestations of compliance for AWS, which include ISO 27017 for cloud security, ISO 27018 for cloud privacy, SOC 1, SOC 2, SOC 3, and PCI DSS (Level 1). You can learn more about AWS HDS certification and other compliance certifications and accreditations here.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.