Tag Archives: Government

How US federal agencies can authenticate to AWS with multi-factor authentication

Post Syndicated from Kyle Hart original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-authenticate-to-aws-with-multi-factor-authentication/

This post is part of a series about how AWS can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. We recognize that government agencies have varying degrees of identity management and cloud maturity and that the requirement to implement multi-factor, risk-based authentication across an entire enterprise is a vast undertaking. This post specifically focuses on how you can use AWS information security practices to help meet the requirement to “establish multi-factor, risk-based authentication and conditional access across the enterprise” as it applies to your AWS environment.

This post focuses on best practices for enterprise authentication to AWS, specifically federated access through an existing enterprise identity provider (IdP).

Many federal customers use authentication factors on their Personal Identity Verification (PIV) card or Common Access Card (CAC) to authenticate to an existing enterprise identity service that supports Security Assertion Markup Language (SAML), which is then used to grant users access to AWS. SAML is an industry-standard protocol, and most IdPs support a range of authentication methods, so even if you’re not using a PIV or CAC, the concepts still apply to your organization’s multi-factor authentication (MFA) requirements.

Accessing AWS with MFA

There are two categories we want to look at for authentication to AWS services:

  1. AWS APIs, which include access through the AWS Management Console, the AWS Command Line Interface (AWS CLI), and AWS SDKs.
  2. Resources you launch that are running within your AWS VPC, which can include database engines or operating system environments such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon WorkSpaces, or Amazon AppStream 2.0.

There is also a third category of services where authentication occurs in AWS that is beyond the scope of this post: applications that you build on AWS that authenticate internal or external end users to those applications. For this category, multi-factor authentication is still important, but it will vary based on the specifics of the application architecture. Workloads that sit behind an AWS Application Load Balancer (ALB) can use the ALB to authenticate users with an OpenID Connect (OIDC) or SAML IdP that enforces MFA upstream.
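
The mechanics look roughly like the following boto3 sketch, which attaches an authenticate-oidc action to an ALB listener rule so the IdP (which enforces MFA) authenticates users before requests reach the application. All ARNs, endpoints, and client values here are hypothetical placeholders for your IdP’s actual OIDC configuration.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example-alb/abcdef/012345",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            # Authenticate against the upstream IdP before forwarding.
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.gov",
                "AuthorizationEndpoint": "https://idp.example.gov/authorize",
                "TokenEndpoint": "https://idp.example.gov/token",
                "UserInfoEndpoint": "https://idp.example.gov/userinfo",
                "ClientId": "example-client-id",
                "ClientSecret": "example-client-secret",
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example-tg/abcdef012345",
        },
    ],
)
```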

MFA for the AWS APIs

AWS recommends that you use SAML and an IdP that enforces MFA as your means of granting users access to AWS. Many government customers achieve AWS federated authentication with Active Directory Federation Services (AD FS). The IdP used by our federal government customers should enforce the use of CAC/PIV to achieve MFA, and should be the sole means of access to AWS.

Federated authentication uses SAML to assume an AWS Identity and Access Management (IAM) role for access to AWS resources. A role doesn’t have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.
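
As an illustrative sketch, the following boto3 snippet creates a role that can be assumed only through SAML federation; the account ID, provider name, and role name are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the SAML provider can assume this role, and only when
# the assertion's audience is the AWS sign-in endpoint.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::111122223333:saml-provider/ExampleIdP"
            },
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

iam.create_role(
    RoleName="ExampleFederatedRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    MaxSessionDuration=3600,  # temporary credentials last at most one hour
)
```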

AWS accounts in all AWS Regions, including the AWS GovCloud (US) Regions, have the same authentication options for IAM roles through identity federation with a SAML IdP. The AWS Single Sign-On (AWS SSO) service is another way to implement federated authentication to the AWS APIs in Regions where it is available.

MFA for AWS CLI access

In AWS Regions excluding AWS GovCloud (US), you can consider using the AWS CloudShell service, which is an interactive shell environment that runs in your web browser and uses the same authentication pipeline that you use to access the AWS Management Console—thus inheriting MFA enforcement from your SAML IdP.

If you need to use federated authentication with MFA for the CLI on your own workstation, you’ll need to retrieve and present the SAML assertion token. For information about how you can do this in Windows environments, see the blog post How to Set Up Federated API Access to AWS by Using Windows PowerShell. For information about how to do this with Python, see How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0.
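
Whichever method you use, the final step typically exchanges the SAML assertion for temporary credentials through AWS STS. A minimal boto3 sketch, assuming you’ve already captured the base64-encoded assertion from your IdP (the ARNs are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Placeholder: in practice, your script captures this SAMLResponse value
# from the IdP login flow, as described in the referenced posts.
saml_assertion = "<base64-encoded SAML assertion>"

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/ExampleFederatedRole",
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/ExampleIdP",
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)

# Temporary credentials to export as AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN for CLI use.
creds = response["Credentials"]
```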

Conditional access

IAM permissions policies support conditional access. Common use cases include allowing certain actions only from a specified, trusted range of IP addresses, granting access only in specified AWS Regions, and granting access only to resources with specific tags. You should create your IAM policies to provide least-privilege access across a number of attributes. For example, you can grant an administrator the ability to launch or terminate an EC2 instance only if the request originates from a certain IP address and the instance carries an appropriate tag.
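
A hedged sketch of such a statement follows; the Region, account ID, CIDR range, and tag values are illustrative.

```python
import json

# Allow stopping or terminating EC2 instances only when the request comes
# from a trusted CIDR range AND the instance carries the expected tag.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
                "StringEquals": {"aws:ResourceTag/Environment": "Sandbox"},
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```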

You can also implement conditional access controls by using SAML session tags, provided by your IdP and passed through the SAML assertion to be consumed by AWS. This means that two users from separate departments can assume the same IAM role but have tailored, dynamic permissions. As an example, the SAML IdP can provide each individual’s cost center as a session tag on the role assertion. IAM policy statements can then be written to allow a user from cost center A to administer resources from cost center A, but not resources from cost center B.
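
An illustrative statement for this attribute-based pattern, which compares the CostCenter session tag on the principal to the tag on the resource (the actions shown are examples):

```python
import json

# Permissions apply only where the caller's session tag matches the
# resource's CostCenter tag, so one shared role yields per-user scoping.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
                }
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```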

Many customers ask how to limit control plane access to certain IP addresses. AWS supports this, but with an important caveat: some AWS services, such as AWS CloudFormation, perform actions on behalf of an authorized user or role, and execute from within the AWS Cloud’s own IP address ranges. See this document for an example of a policy statement that uses the aws:ViaAWSService condition key to exclude AWS services from your IP address restrictions and avoid unexpected authorization failures.
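
The pattern generally looks like the following sketch (the CIDR range is a placeholder): deny requests that originate outside the trusted range unless an AWS service is making the call on the principal’s behalf.

```python
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # The request did not come from the trusted range...
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]},
                # ...and was not made by an AWS service on the user's behalf.
                "Bool": {"aws:ViaAWSService": "false"},
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```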

Multi-factor authentication to resources you launch

You can launch resources such as Amazon WorkSpaces, AppStream 2.0, Amazon Redshift, and EC2 instances that you configure to require MFA. The Amazon WorkSpaces Streaming Protocol (WSP) supports CAC/PIV authentication for pre-authentication and for in-session access to the smart card. For more information, see Use smart cards for authentication. To see a short video of it in action, see the blog post Amazon WorkSpaces supports CAC/PIV smart card authentication. Amazon Redshift and AppStream 2.0 support SAML 2.0 natively, so you can configure those services to work with your SAML IdP similarly to how you configure AWS Management Console access, and inherit the MFA enforced by the upstream IdP.

MFA access to EC2 instances can occur through the existing methods and enterprise directories used in your on-premises environments. You can also implement other systems that enforce MFA access to an operating system, such as RADIUS or other third-party directory or MFA token solutions.

Shell access with Systems Manager Session Manager

An alternative method of MFA for shell access to EC2 instances is the Session Manager feature of AWS Systems Manager. Session Manager uses the Systems Manager Agent (SSM Agent) to provide role-based access to a shell (PowerShell on Windows) on an instance. Users can access Session Manager from the AWS Management Console or from the command line with the Session Manager AWS CLI plugin. As with using CloudShell for CLI access, accessing EC2 hosts via Session Manager uses the same authentication pipeline that you use for accessing the AWS control plane. Your interactive session on that host can be configured for audit logging.
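
As a sketch, a session can also be opened programmatically with boto3; the instance ID below is hypothetical, and interactive terminal use still requires the Session Manager plugin.

```python
import boto3

ssm = boto3.client("ssm")

# Opens a session on the target instance. The caller's IAM permissions,
# inherited from the MFA-enforced federated session, gate this call.
response = ssm.start_session(Target="i-0123456789abcdef0")

# Session metadata; the Session Manager plugin handles the interactive stream.
print(response["SessionId"])
```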

Security best practices in IAM

The focus of this blog post is integrating an agency’s existing MFA-enabled enterprise authentication service; but to make it easier for you to view the entire security picture, you might also be interested in the IAM security best practices. You can enforce these best-practice security configurations with AWS Organizations service control policies (SCPs).
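
As one hedged example, an SCP like the following denies selected EC2 actions unless the caller authenticated with MFA; the actions are illustrative.

```python
import json

# BoolIfExists treats sessions where the MFA key is absent as non-MFA,
# so the deny also covers credentials that never performed MFA.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```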

Conclusion

This post covered methods your federal agency should consider in your efforts to apply the multi-factor authentication (MFA) requirements in the Executive Order on Improving the Nation’s Cybersecurity to your AWS environment. To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

Author

Kyle Hart

Kyle is a Principal Solutions Architect supporting US federal government customers in the Washington, D.C. area.

Cybersecurity in the Infrastructure Bill

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/08/31/cybersecurity-in-the-infrastructure-bill/

On August 10, 2021, the U.S. Senate passed the Infrastructure Investment and Jobs Act of 2021 (H.R.3684). The bill comes in at 2,700+ pages, provides for $1.2T in spending, and includes several cybersecurity items. We expect this legislation to become law around late September and do not expect significant changes to the content. This post provides highlights on cybersecurity from the legislation.

(Check out our joint letter calling for cybersecurity in infrastructure legislation here.)

Cybersecurity is a priority — that’s progress

Cybersecurity is essential to ensure modern infrastructure is safe, and Rapid7 commends Congress and the Administration for including cybersecurity in the Infrastructure Investment and Jobs Act. Rapid7 led industry calls to include cybersecurity in the bill, and we are encouraged that several priorities identified by industry are reflected in the text, such as cybersecurity-specific funding for state and local governments and the electrical grid.

On the other hand, cybersecurity will be competing with natural disasters and extreme weather for funding in many (though not all) of the grants created under the bill. In addition, not all critical infrastructure sectors receive cybersecurity resources through the legislation, with healthcare being a notable exclusion. Congress should address these gaps in the upcoming budget reconciliation package.

What’s in the bill for infrastructure cybersecurity

Below is a brief-ish summary of cybersecurity-related items in the bill. The infrastructure sectors with the most allocations appear to be energy, water, transportation, and state and local governments. Many of these funding opportunities take the form of federal grants for infrastructure resilience, which includes cybersecurity as well as natural hazards. Other funds are dedicated solely to cybersecurity.

Please note that this list aims to include major infrastructure cybersecurity funding items, but is not comprehensive. (For example, the bill also provides funding for the National Cyber Director.) Citations to the Senate-passed legislation are included.

  1. State and local governments: $1B over four years for the State, Local, Tribal, and Territorial (SLTT) Grant Program. This new grant program will help SLTT governments develop or implement cybersecurity plans. FEMA will administer the program. This provision is also known as the State and Local Cybersecurity Improvement Act. [Sec. 70611]

  2. Energy: $250M over five years for the Rural and Municipal Utility Advanced Cybersecurity Grant and Technological Assistance Program. The Department of Energy (DOE) must create a new program to provide grants and technical assistance to improve electric utilities’ ability to detect, respond to, and recover from cybersecurity threats. [Sec. 40124]

  3. Energy: Enhanced grid security. The DOE must create a program to develop advanced cybersecurity applications and technologies for the energy sector, among other things. Over a period of five years, this section authorizes $250M for the Cybersecurity for the Energy Sector RD&D program, $50M for the Energy Sector Operational Support for Cyberresilience Program, and $50M for Modeling and Assessing Energy Infrastructure Risk. [Sec. 40125]

  4. Energy: State energy security plans. This creates federal financial and technical assistance for states to develop or implement an energy security plan that secures state energy infrastructure against cybersecurity threats, among other things. [Sec. 40108]

  5. Water: $250M over five years for the Midsize and Large Drinking Water System Infrastructure Resilience and Sustainability Program. This creates a new grant program to assist midsize and large drinking water systems with increasing resilience to cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50107]

  6. Water: $175M over five years for technical assistance and grants for emergencies affecting public water systems. This extends an expired fund to help mitigate threats and emergencies to drinking water. This includes, among other things, emergency situations caused by a cybersecurity incident. [Sec. 50101]

  7. Water: $25M over five years for the Clean Water Infrastructure Resiliency and Sustainability Program. This creates a new program providing grants to owners/operators of publicly owned treatment works to increase the resiliency of water systems against cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50205]

  8. Transportation: Cybersecurity eligible for National Highway Performance Program (NHPP). This expands on the existing NHPP grant program to allow states to use funds for resiliency of the National Highway System. "Resiliency" includes cybersecurity, as well as natural hazards. [Sec. 11105]

  9. Transportation: Cybersecurity eligible for Surface Transportation Block Grant Program. This expands the existing grant program to allow funding measures to protect transportation facilities from cybersecurity threats, among other things. [Sec. 11109]

  10. General: $100M over five years for the Cyber Response and Recovery Fund. This creates a fund for CISA to provide direct support to public or private entities that respond to and recover from cyberattacks and breaches designated as a “significant incident.” The support can include technical assistance and response activities, such as vulnerability assessment, threat detection, network protection, and more. The program ends in 2028. [Sec. 70602, Div. J]

Other sectors next?

These cybersecurity items are significant down payments to safeguard the nation’s investment in infrastructure modernization. Combined with the recent Executive Order and memorandum on industrial control systems security, the Biden Administration is demonstrating that cybersecurity is a high priority.

However, more work must be done to address cybersecurity weaknesses in critical infrastructure. While the Infrastructure Investment and Jobs Act provides cybersecurity resources for some sectors, most of the 16 critical infrastructure sectors are excluded. Healthcare is an especially notable example, as the sector faces a serious ransomware problem in the middle of a deadly pandemic.

Congress is now preparing a larger budget reconciliation bill, to be advanced at roughly the same time as the infrastructure legislation. We encourage Congress and the Administration to take this opportunity to boost cybersecurity for other sectors, especially healthcare. As with the infrastructure bill, we suggest providing grants dedicated to cybersecurity, and requiring that grant funds be used to adopt or implement standards-based security safeguards and risk management practices.

Congress’ activity during the COVID-19 crisis continues to be punctuated by large, ambitious bills. To secure the modern economy and essential services, we hope the Infrastructure Investment and Jobs Act sets a precedent that sound cybersecurity policies will be integrated into transformative legislation to come.

How US federal agencies can use AWS to improve logging and log retention

Post Syndicated from Derek Doerr original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-use-aws-to-improve-logging-and-log-retention/

This post is part of a series about how Amazon Web Services (AWS) can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. You will learn how you can use AWS information security practices to help meet the requirement to improve logging and log retention practices in your AWS environment.

Improving the security and operational readiness of applications relies on improving the observability of the applications and the infrastructure on which they operate. For our customers, this translates to questions of how to gather the right telemetry data, how to securely store it over its lifecycle, and how to analyze the data in order to make it actionable. These questions take on more importance as our federal customers seek to improve their collection and management of log data in all their IT environments, including their AWS environments, as mandated by the executive order.

Given the interest in the technologies used to support logging and log retention, we’d like to share our perspective. This starts with an overview of logging concepts in AWS, including log storage and management, and then proceeds to how to gain actionable insights from that logging data. This post will address how to improve logging and log retention practices consistent with the Security and Operational Excellence pillars of the AWS Well-Architected Framework.

Log actions and activity within your AWS account

AWS provides you with extensive logging capabilities to provide visibility into actions and activity within your AWS account. A security best practice is to establish a wide range of detection mechanisms across all of your AWS accounts. Starting with services such as AWS CloudTrail, AWS Config, Amazon CloudWatch, Amazon GuardDuty, and AWS Security Hub provides a foundation upon which you can base detective controls, remediation actions, and forensics data to support incident response. Here is more detail on how these services can help you gain more security insights into your AWS workloads:

  • AWS CloudTrail provides event history for all of your AWS account activity, including API-level actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. You can use CloudTrail to identify who or what took which action, what resources were acted upon, when the event occurred, and other details. If your agency uses AWS Organizations, you can automate this process for all of the accounts in the organization.
  • CloudTrail logs can be delivered from all of your accounts into a centralized account. This places all logs in a tightly controlled, central location, making them easier to protect, store, and analyze. As with CloudTrail itself, you can automate this process for all of the accounts in the organization by using AWS Organizations (see the sketch after this list). CloudTrail can also be configured to emit metric data into the CloudWatch monitoring service, giving near real-time insight into the usage of various services.
  • CloudTrail log file integrity validation produces and cryptographically signs a digest file that contains references and hashes for every CloudTrail file that was delivered in that hour. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity.
  • AWS Config monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. For example, you can use AWS Config to verify that resources are encrypted, multi-factor authentication (MFA) is enabled, and logging is turned on, and you can use AWS Config rules to identify noncompliant resources. Additionally, you can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories, helping you to determine when compliance status changed and the reason for the change.
  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Amazon GuardDuty analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail Amazon Simple Storage Service (Amazon S3) data event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify potential threats within your AWS environment.
  • AWS Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services and optional third-party products to give you a comprehensive view of security alerts and compliance status.
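
As a sketch of the centralized setup described above, the following boto3 snippet creates a multi-Region organization trail with log file integrity validation enabled. The trail and bucket names are hypothetical, and the bucket policy must already grant CloudTrail write access.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail for every account in the organization, in every Region,
# delivering to a centrally controlled bucket with signed digest files.
cloudtrail.create_trail(
    Name="example-org-trail",
    S3BucketName="example-central-logs-bucket",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)

cloudtrail.start_logging(Name="example-org-trail")
```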

Be aware that most AWS services don’t charge you for enabling logging (AWS WAF, for example), but storing the logs will incur ongoing costs. Always consult the AWS service’s pricing page to understand cost impacts. Related services, such as Amazon Kinesis Data Firehose (used to stream data to storage services) and Amazon Simple Storage Service (Amazon S3, used to store log data), will incur charges.

Turn on service-specific logging as desired

After you have the foundational logging services enabled and configured, next turn your attention to service-specific logging. Many AWS services produce service-specific logs that include additional information. These services can be configured to record and send out information that is necessary to understand their internal state, including application, workload, user activity, dependency, and transaction telemetry. Here’s a sampling of key services with service-specific logging features:

  • Amazon CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can gain additional operational insights from your AWS compute instances (Amazon Elastic Compute Cloud, or EC2) as well as on-premises servers using the CloudWatch agent. Additionally, you can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
  • Amazon CloudWatch Logs is a component of Amazon CloudWatch which you can use to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
  • Traffic Mirroring allows you to achieve full packet capture (as well as filtered subsets) of network traffic from an elastic network interface of EC2 instances inside your VPC. You can then send the captured traffic to out-of-band security and monitoring appliances for content inspection, threat monitoring, and troubleshooting.
  • The Elastic Load Balancing service provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. The specific information logged varies by load balancer type; a sketch of enabling these logs follows the list.
  • Amazon S3 access logs record the S3 bucket and account that are being accessed, the API action, and requester information.
  • AWS Web Application Firewall (WAF) logs record web requests that are processed by AWS WAF, and indicate whether the requests matched AWS WAF rules and what actions, if any, were taken. These logs are delivered to Amazon S3 by using Amazon Kinesis Data Firehose.
  • Amazon Relational Database Service (Amazon RDS) log files can be downloaded or published to Amazon CloudWatch Logs. Log settings are specific to each database engine; agencies use these settings to apply their desired logging configurations and choose which events are logged. Amazon Aurora and Amazon RDS for Oracle also support a real-time logging feature called “database activity streams,” which provides even more detail and cannot be accessed or modified by database administrators.
  • Amazon Route 53 provides options for logging both public DNS queries and Route 53 Resolver DNS queries:
    • Route 53 Resolver DNS query logs record DNS queries and responses that originate from your VPC, that use an inbound Resolver endpoint, that use an outbound Resolver endpoint, or that use a Route 53 Resolver DNS Firewall.
    • Route 53 DNS public query logs record queries to Route 53 for domains you have hosted with AWS, including the domain or subdomain that was requested; the date and time of the request; the DNS record type; the Route 53 edge location that responded to the DNS query; and the DNS response code.
  • Amazon Elastic Compute Cloud (Amazon EC2) instances can use the unified CloudWatch agent to collect logs and metrics from Linux, macOS, and Windows EC2 instances and publish them to the Amazon CloudWatch service.
  • Elastic Beanstalk logs can be streamed to CloudWatch Logs. You can also use the AWS Management Console to request the last 100 log entries from the web and application servers, or request a bundle of all log files that is uploaded to Amazon S3 as a ZIP file.
  • Amazon CloudFront logs record user requests for content that is cached by CloudFront.
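
As an example of the Elastic Load Balancing item above, here is a hedged boto3 sketch that enables access logging on an Application Load Balancer; the load balancer ARN, bucket, and prefix are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Deliver ALB access logs to a central S3 bucket under the "alb/" prefix.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/example-alb/1234567890abcdef"
    ),
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-central-logs-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "alb"},
    ],
)
```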

Store and analyze log data

Now that you’ve enabled foundational and service-specific logging in your AWS accounts, that data needs to be persisted and managed throughout its lifecycle. AWS offers a variety of solutions and services to consolidate your log data and store it, secure access to it, and perform analytics.

Store log data

The primary service for storing all of this logging data is Amazon S3. Amazon S3 is ideal for this role, because it’s a highly scalable, highly resilient object storage service. AWS provides a rich set of multi-layered capabilities to secure log data that is stored in Amazon S3, including encrypting objects (log records), preventing deletion (the S3 Object Lock feature), and using lifecycle policies to transition data to lower-cost storage over time (for example, to S3 Glacier). Access to data in Amazon S3 can also be restricted through AWS Identity and Access Management (IAM) policies, AWS Organizations service control policies (SCPs), S3 bucket policies, Amazon S3 Access Points, and AWS PrivateLink interfaces. While S3 is particularly easy to use with other AWS services given its various integrations, many customers also centralize their storage and analysis of their on-premises log data, or log data from other cloud environments, on AWS using S3 and the analytic features described below.
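
For example, a lifecycle rule that tiers log data to lower-cost storage might look like the following sketch; the bucket name and retention periods are illustrative.

```python
import boto3

s3 = boto3.client("s3")

# Transition log objects to S3 Glacier after 90 days; expire after ~7 years.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-central-logs-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```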

If your AWS accounts are organized in a multi-account architecture, you can make use of the AWS Centralized Logging solution. This solution enables organizations to collect, analyze, and display CloudWatch Logs data in a single dashboard. AWS services generate log data, such as audit logs for access, configuration changes, and billing events. In addition, web servers, applications, and operating systems all generate log files in various formats. This solution uses the Amazon Elasticsearch Service (Amazon ES) and Kibana to deploy a centralized logging solution that provides a unified view of all the log events. In combination with other AWS-managed services, this solution provides you with a turnkey environment to begin logging and analyzing your AWS environment and applications.

You can also make use of services such as Amazon Kinesis Data Firehose, which you can use to transport log information to S3, Amazon ES, or any third-party service that is provided with an HTTP endpoint, such as Datadog, New Relic, or Splunk.

Finally, you can use Amazon EventBridge to route and integrate event data between AWS services and to third-party solutions such as software as a service (SaaS) providers or help desk ticketing systems. EventBridge is a serverless event bus service that allows you to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, SaaS applications, and AWS services, and then routes that data to targets such as AWS Lambda.
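
A minimal sketch of this routing, assuming a hypothetical Lambda function that triages Amazon GuardDuty findings:

```python
import json
import boto3

events = boto3.client("events")

# Match GuardDuty findings on the default event bus.
events.put_rule(
    Name="guardduty-findings",
    EventPattern=json.dumps(
        {"source": ["aws.guardduty"], "detail-type": ["GuardDuty Finding"]}
    ),
    State="ENABLED",
)

# Send matching events to the (assumed) triage function. The function must
# also grant events.amazonaws.com permission to invoke it.
events.put_targets(
    Rule="guardduty-findings",
    Targets=[
        {
            "Id": "triage-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:triage-findings",
        }
    ],
)
```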

Analyze log data and respond to incidents

As the final step in managing your log data, you can use AWS services such as Amazon Detective, Amazon ES, CloudWatch Logs Insights, and Amazon Athena to analyze your log data and gain operational insights.

  • Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to help you visualize and conduct faster and more efficient security investigations.
  • Incident Manager is a component of AWS Systems Manager that enables you to automatically take action when a critical issue is detected by an Amazon CloudWatch alarm or Amazon EventBridge event. Incident Manager executes pre-configured response plans to engage responders via SMS and phone calls, enable chat commands and notifications using AWS Chatbot, and execute AWS Systems Manager Automation runbooks. The Incident Manager console integrates with AWS Systems Manager OpsCenter to help you track incidents and post-incident action items from a central place that also synchronizes with popular third-party incident management tools such as Jira Service Desk and ServiceNow.
  • Amazon Elasticsearch Service (Amazon ES) is a fully managed service that collects, indexes, and unifies logs and metrics across your environment to give you unprecedented visibility into your applications and infrastructure. With Amazon ES, you get the scalability, flexibility, and security you need for the most demanding log analytics workloads. You can configure a CloudWatch Logs log group to stream data it receives to your Amazon ES cluster in near real time through a CloudWatch Logs subscription.
  • CloudWatch Logs Insights enables you to interactively search and analyze your log data in CloudWatch Logs; a sketch follows this list.
  • Amazon Athena is an interactive query service that you can use to analyze data in Amazon S3 by using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
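
As a sketch of the CloudWatch Logs Insights query noted above, the following boto3 snippet counts failed console sign-ins by source IP over the last hour; the log group name and query are illustrative.

```python
import time
import boto3

logs = boto3.client("logs")

end = int(time.time())

# Hypothetical log group receiving CloudTrail management events.
query = logs.start_query(
    logGroupName="/cloudtrail/management-events",
    startTime=end - 3600,
    endTime=end,
    queryString=(
        "fields @timestamp, sourceIPAddress"
        ' | filter eventName = "ConsoleLogin" and errorMessage like /Failed/'
        " | stats count(*) by sourceIPAddress"
    ),
)

# Logs Insights queries run asynchronously; poll until finished.
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
```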

Conclusion

As called out in the executive order, information from network and systems logs is invaluable for both investigation and remediation services. AWS provides a broad set of services to collect an unprecedented amount of data at very low cost, optionally store it for long periods of time in tiered storage, and analyze that telemetry information from your cloud-based workloads. These insights will help you improve your organization’s security posture and operational readiness and, as a result, improve your organization’s ability to deliver on its mission.

Next steps

To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

Author

Derek Doerr

Derek is a Senior Solutions Architect with the Public Sector team at AWS. He has been working with AWS technology for over four years. Specializing in enterprise management and governance, he is passionate about helping AWS customers navigate their journeys to the cloud. In his free time, he enjoys time with family and friends, as well as scuba diving.

How AWS can help your US federal agency meet the executive order on improving the nation’s cybersecurity

Post Syndicated from Michael Cotton original https://aws.amazon.com/blogs/security/how-aws-can-help-your-us-federal-agency-meet-the-executive-order-on-improving-the-nations-cybersecurity/

AWS can support your information security modernization program to meet the President’s Executive Order on Improving the Nation’s Cybersecurity (issued May 12th, 2021). When working with AWS, a US federal agency gains access to resources, expertise, technology, professional services, and our AWS Partner Network (APN), which can help the agency meet the security and compliance requirements of the executive order.

For federal agencies, the Executive Order on Improving the Nation’s Cybersecurity requires an update to agency plans to prioritize cloud adoption, identify the most sensitive data and update the protections for that data, encrypt data at rest and in transit, implement multi-factor authentication, and meet expanded logging requirements. It also introduces Zero Trust Architectures and, for the first time, requires an agency to develop plans implementing Zero Trust concepts.

This post focuses on how AWS can help you plan for and accelerate cloud adoption. In the rest of the series, you’ll learn how AWS offers guidance for building architectures with a Zero Trust security model, multi-factor authentication, encryption for data at rest and in transit, and the logging capabilities required to increase visibility for security and compliance purposes.

Prioritize the adoption and use of cloud technologies

AWS has developed multiple frameworks to help you plan your migration to AWS and establish a structured, programmatic approach to AWS adoption. We provide a variety of tools, including server, data, and database features, to rapidly migrate various types of applications from on-premises to AWS. The following lists include links and helpful information regarding the ways AWS can help accelerate your cloud adoption.

Planning tools

  • AWS Cloud Adoption Framework (AWS CAF) – We developed the AWS CAF to assist your organization in developing and implementing efficient and effective plans for cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization, and throughout the IT lifecycle. Using the AWS CAF will help you realize measurable business benefits from cloud adoption faster, and with less risk.
  • Migration Evaluator – You can build a data-driven business case for your cloud adoption on AWS by using our Migration Evaluator (formerly TSO Logic) to gain access to insights and help accelerate decision-making for migration to AWS.
  • AWS Migration Acceleration Program – This program assists your organization with migrating to the cloud by providing training, professional services, and service credits to streamline your migration, helping your agency more quickly decommission legacy hardware, software, and data centers.

AWS services and technologies for migration

  • AWS Application Migration Service (AWS MGN) – This service allows you to replicate entire servers to AWS using block-level replication, performs tests to verify the migration, and executes the cutover to AWS. This is the simplest and fastest method to migrate to AWS.
  • AWS CloudEndure Migration Factory Solution – This solution enables you to replicate entire servers to AWS using block-level replication and executes the cutover to AWS. This solution is designed to coordinate and automate manual processes for large-scale migrations involving a substantial number of servers.
  • AWS Server Migration Service – This is an agentless service that automates the migration of your on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to AWS. It replicates existing servers as Amazon Machine Images (AMIs), enabling you to transition more quickly and easily to AWS.
  • AWS Database Migration Service – This service automates replication of your on-premises databases to AWS, making it much easier for you to migrate large and complex applications to AWS with minimal downtime.
  • AWS DataSync – This is an online data transfer service that simplifies, automates, and accelerates moving your data between on-premises storage systems and AWS.
  • VMware Cloud on AWS – This service simplifies and speeds up your migration to AWS by enabling your agency to use the same VMware Cloud Foundation technologies across your on-premises environments and in the AWS Cloud. VMware workloads running on AWS have access to more than 200 AWS services, making it easier to move and modernize applications without having to purchase new hardware, rewrite applications, or modify your operations.
  • AWS Snow Family – These services provide devices that can physically transport exabytes of data into and out of AWS. These devices are fully encrypted and integrate with AWS security, monitoring, storage management, and computing capabilities to help accelerate your migration of large data sets to AWS.

AWS Professional Services

  • AWS Professional Services – This is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud, so you can more effectively reach your constituents and better achieve your core mission. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud.

AWS Partners

  • AWS Government Competency Partners – This page identifies partners who have demonstrated their ability to help government customers accelerate their migration of applications and legacy infrastructure to AWS.

AWS has solutions and partners to assist in your planning and accelerating your migration to the cloud. We can help you develop integrated, cost-effective solutions to help secure your environment and implement the executive order requirements. In short, AWS is ready to help you meet the accelerated timeline goals set in this executive order.

Next steps

For further reading, see the blog post Zero Trust architectures: An AWS perspective, and to learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

Author

Michael Cotton

Michael is a Senior Solutions Architect at AWS.

Hack Back Is Still Wack

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/10/hack-back-is-still-wack/

Every year or two, we see a policy proposal around authorizing private-sector hack back. The latest of these is legislation from two U.S. Senators, Daines and Whitehouse, and it would require the U.S. Department of Homeland Security (DHS) to “conduct a study on the potential benefits and risks of amending section 1030 of title 18, United States Code (commonly known as the ‘Computer Fraud and Abuse Act’), to allow private entities to take proportional actions in response to an unlawful network breach, subject to oversight and regulation by a designated Federal agency.”

While we believe the bill would be harmful and do not support it in any way, we do acknowledge that at least this legislation attempts to address how hack back could work in practice and to identify the potential risks. This gets at the heart of one of the main issues with policy proposals for hack back — they rarely address how it would actually work in reality, or how opportunities for abuse or unintended harms would be handled.

Rapid7 does not believe it’s possible to provide sufficient oversight or accountability to make private-sector hack back viable without negative consequences. Further, the very fact that we’re once again discussing private-sector hack back as a possibility is extremely troubling.

Here, we’ll outline why Rapid7 is against the authorization of private-sector hack back.

What is hack back?

When we say “hack back,” we’re referring to non-government organizations taking intrusive action against a cyber attacker on technical assets or systems not owned or leased by the person taking action or their client. This is generally illegal in countries that have anti-hacking laws.

The appeal of hack back is easy to understand. Organizations are subject to more frequent, varied, and costly attacks, often from cybercriminals who have no fear of reprisal or prosecution due to the existence of safe-haven nations that either can’t or won’t crack down on their activities. The scales feel firmly stacked in the favor of these cybercriminals, and it’s understandable that organizations want to shift that balance and give attackers reason to think again before targeting them.

Along these lines, arguments for hack back justify it in a number of ways, citing a desire to recapture lost data, better understand the nature of the attacks, neutralize threats, or use the method as a tit for tat. Hack back activities may be conflated with threat hunting, threat intelligence, or detection and response activities. Confusingly, some proponents for these activities are quick to decry hack back while simultaneously advocating for authority to take intrusive action on third-party assets without consent from their owners.

Hack back is also sometimes referred to as Active Defense or Active Cyber Defense. This can cause confusion, as these terms can also refer to other defensive measures that are not intrusive or conducted without consent from the technology owner. For example, active defense can also describe intrusion prevention systems or deception technologies designed to confuse attackers and gain greater intelligence on them, such as honeypots. Rapid7 encourages organizations to employ active defense techniques within their own environments.

Rapid7’s criticisms of hack back

While the reasons for advocating for private-sector hack back are easy to understand and empathize with, that doesn’t make the idea workable in practice. There’s a wealth of reasons why hack back is a bad idea.

Impracticalities of attribution and application

One of the most widely stated and agreed-upon tenets in security is that attribution is hard. In fact, in many cases, it’s essentially impossible to know for certain that we’ve accurately attributed an attack. Even when we find indications that point in a certain direction, it’s very difficult to ensure they’re not red herrings intentionally planted by the attacker, either to throw suspicion off themselves or specifically to incriminate another party.

We like to talk about digital fingerprints, but the reality is that there’s no such thing: In the digital world, pretty much anything can be spoofed or obfuscated with enough time, patience, skill, and resources. Attackers are constantly evolving their techniques to stay one step ahead of defenders and law enforcement, and the emergence of deception capabilities is just one example of this. So being certain we have the right actor before we take action is extremely difficult.

In addition, where do we draw the line in determining whether an actor or computing entity could be considered a viable target? For example, if someone is under attack from devices that are being controlled as part of a botnet, those devices – and their owners – are as much victims of the attacker as the target of the attack.

Rapid7’s Project Heisenberg observes exactly this phenomenon: The honeypots often pick up traffic from legitimate organizations whose systems have been compromised and leveraged in malicious activity. Should one of these compromised systems be used to attack an organization, and that organization then take action against those affected systems to neutralize the threat against themselves, that would mean the organization defending itself was revictimizing the entity whose systems were already compromised. Depending on the action taken, this could end up being catastrophic and costly for both organizations.  

We must also take motivations into account, even though they’re often unclear or easy to misunderstand. For example, research projects that scan ports on the public-facing internet do so in order to help others understand the attack surface and reduce exposure and opportunities for attackers. This activity is benign and often results in security disclosures that have helped security professionals reduce their organization’s risk. However, it’s not unusual for these scans to encounter a perimeter monitoring tool, throwing up an alert to the security team. If an organization saw the alerts and, in their urgency to defend themselves, took a “shoot first, ask questions later” approach, they could end up attacking the researcher.

Impracticalities of limiting reach and impact

Many people have likened hack back to homeowners defending their property against intruders. They evoke images of malicious, armed criminals breaking into your home to do you and your loved ones harm. They call to you to arm yourself and stand bravely in defense, refusing to be a victim in your own home.

It’s an appealing idea — however, the reality is more akin to standing by your fence and spraying bullets out into the street, hoping to get lucky and stop an attacker as they flee the scene of the crime. With such an approach, even if you do manage to reach your attacker, you’re risking terrible collateral damage, too.

This is because the internet doesn’t operate in neatly defined and clearly demarcated boundaries. If we take action targeted at a specific actor or group of actors, it would be extremely challenging to ensure that action won’t unintentionally negatively impact innocent others. Not only should this concern lawmakers, it should also disincentivize participation. The potential negative consequences of a hack back gone awry could be far-reaching. We frequently discuss damage to equipment or systems, or loss of data, but in the age of the Internet of Things, negative consequences could include physical harm to individuals. And let’s not forget that cyberattacks can be considered acts of war.

Organizations that believe they can avoid negative outcomes in the majority of cases need to understand that even one or two errors could be extremely costly. Imagine, for example, that a high-value target organization, such as a bank, undertakes 100 hack backs per year and makes a negatively impactful error on two occasions. A 2% failure rate may not seem that terrible — but if either or both of those errors resulted in the compromise of another company or harm to a group of individuals, the hack-backer could end up tied up in expensive legal proceedings, reputational damage, and loss of trust. Attempts to make organizations exempt from this kind of legal action are problematic, as they raise the question of how we can spot and stop abuses.

Impracticalities of providing appropriate oversight

To date, proposals to legalize hack back have been overly broad and non-specific about how such activities should be managed, and what oversight would be required to ensure there are no abuses of the system. The Daines/Whitehouse bill tries to address this and alludes to a framework for oversight that would determine “which entities would be allowed to take such actions and under what circumstances.”

This seems to refer to an approach commonly advocated by proponents of hack back whereby a license or special authorization to conduct hack back activities is granted to vetted and approved entities. Some advocates have pointed to the example of how privateers were issued Letters of Marque to capture enemy ships — and their associated spoils. Putting aside fundamental concerns about taking as our standard a 200-year-old law passed during a time of prolonged kinetic war and effectively legalizing piracy, there are a number of pragmatic issues with how this would work in practice.  

Indeed, creating a framework and system for such oversight is highly impractical and costly, raising many issues. The government would need to determine basic administrative issues, such as who would run it and how it would be funded. It would also need to identify a path to address far more complex issues around accountability and oversight to avoid abuses. For example, who will determine which activities are acceptable and where the line should be drawn? How would an authorizing agent ensure standards are met and maintained within approved organizations? Existing cybersecurity certification and accreditation schemes have long raised concerns, and these will only worsen when certification results in increased authorities for activities that can result in harm and escalation of aggressions on the internet.

When a government entity itself takes action against attackers, it does so with a high degree of oversight and accountability. They must meet evidentiary standards to prove the action is appropriate, and even then, there are parameters determining the types of targets they can pursue and the kinds of actions they can take. Applying the same level of oversight to the private sector is impractical. At the same time, authorizing the private sector to participate in these activities without this same level of oversight would undermine the checks and balances in place for the government and likely lead to unintended harms.

An authorizing agent cannot have eyes everywhere and at all times, so it would be highly impractical to create a system for oversight that would enable the governing authority to spot and stop accidental or intentional abuses of the system in real time. If the Daines/Whitehouse bill does pass (and we have no indication of that at present), I very much hope that DHS’s resulting report will reflect these issues or, if possible, provide adequate responses to address these concerns.

These issues of practical execution also raise questions around who will bear the responsibility and liability if something goes wrong. For example, if a company hacks back and accidentally harms another organization or individual, the entity that undertook the hacking may incur expensive legal proceedings, reputational damage, and loss of trust. They could become embroiled in complicated and expensive multi-jurisdiction legal action, even if the company has a license to hack back in its home jurisdiction. In scenarios where hack back activities are undertaken by an organization or individual on behalf of a third party, both the agent and their client may bear these negative consequences. There may also be an argument that any licensing authority could also bear some of the liability.  

Making organizations exempt from legal action around unintended consequences would be problematic and likely to result in more recklessness, as well as infringing on the rights of the victim organization. While the internet is a borderless space accessed from every country in the world, each of those countries has its own legal system and expects its citizens to abide by it. It would be very risky for companies and individuals who hack back to avoid running afoul of the laws of other countries or international bodies. When national governments take this kind of action, it tends to occur within existing international legal frameworks and under some regulatory oversight, but this may not apply in the private sector, again begging the question of where the liability rests.

It’s also worth noting that once one major power authorizes private-sector hack back, other governments will likely follow, and legal expectations or boundaries may vary. This raises questions of how governments will respond when their citizens are being attacked as part of a private-sector hack back gone wrong, and whether it will likely lead to escalation of political tensions.

Inequalities of applicability

Should a viable system be developed and hack back authorized, effective participation would likely be costly, as it would require specialist skills. Not every organization would be able to participate. If the authorization framework isn’t stringent, many organizations might try to participate with insufficient expertise, which would likely be ineffective, damaging, or both. At the same time, other organizations won’t have the maturity or budget to participate even in this way.

These are the same organizations that sit below the “cybersecurity poverty line” and can’t afford a great deal of in-house security expertise and technologies to protect themselves – in other words, these organizations are already highly vulnerable. As organizations that do have sufficient resources start to hack back, the cost of attacking these organizations will increase. Profit-motivated attackers will eventually shift toward targeting the less-resourced organizations that reside below the security poverty line. Rather than authorizing a measure as fraught with risk as hack back, we should instead be thinking about how to better protect these vulnerable organizations — for example, by subsidizing or incentivizing security hygiene.

The line between legitimate research and hack back

Those who follow Rapid7’s policy work will know that we’re big proponents of security research and have worked for many years to see greater recognition of its value and importance in public policy. It may come as a surprise to see us advocate so enthusiastically against hack back as, from a brief look, they have some things in common. In both cases, we’re talking about activity undertaken in the name of cybersecurity, which may be intrusive in nature and involve third-party assets without consent of the owner.

While independent, good-faith security research and threat intelligence investigations are both very valuable for security, they’re not the same thing, and we don’t believe we should view related legal restrictions in the same way for both.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so we can reduce opportunities for attackers and thus decrease the risk for technology users. This kind of research is generally about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only perform those actions on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

Research may bypass authorization to sidestep issues arising from manufacturers and operators prioritizing their reputation or profit above the security of their customers. In contrast, threat intel investigations or operations that involve interrogating or interacting with third-party assets prioritize the interests of the specific entity undertaking or commissioning the activity, rather than other potential victims whose compromised assets may have been leveraged in the attack.

While threat intelligence can help us understand attacker behavior and identify or prepare for attacks, data gathering and operations should be limited only to assessing risks and threats to assets that are owned or operated by the entity authorizing the work, or to non-invasive activities such as port scanning. Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight.

The path forward

It seems likely that the hack back debate will continue to come up as organizations strive to find new ways to repel attacks. I could make a snarky comment here about how organizations should perhaps focus instead on user awareness training, reducing their attack exposure, managing supply chain risk, proper segmentation, patching, Identity Access Management (IAM), and all the other things that make up a robust defense-in-depth program and that we frequently see fail, but I shall refrain. Cough cough.

We shall wait to see what happens with Senators Daines and Whitehouse’s “Study on Cyber-Attack Response Options Act” bill and hope that, if it passes, DHS will consider the concerns raised in this blog. The same is true for other policymakers: cybercrime is an international blight, and governments around the world are subject to lobbying from entities looking to take a more active role in their own defense. While we understand and sympathize with the desire to do more, take more control, and fight back, we urge policymakers to be mindful of the potential for catastrophe.


Approaches to meeting Australian Government gateway requirements on AWS

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/approaches-to-meeting-australian-government-gateway-requirements-on-aws/

Australian Commonwealth Government agencies are subject to specific requirements set by the Protective Security Policy Framework (PSPF) for securing connectivity between systems that are running sensitive workloads, and for accessing less trusted environments, such as the internet. These agencies have often met the requirements by using some form of approved gateway solution that provides network-based security controls.

This post examines the types of controls you need to provide a gateway that can meet the Australian Government requirements defined in the PSPF, and the challenges of using traditional deployment models to support cloud-based solutions. PSPF requirements are mandatory for non-corporate Commonwealth entities, and represent better practice for corporate Commonwealth entities, wholly-owned Commonwealth companies, and state and territory agencies. We discuss the ability to deploy gateway-style solutions entirely in the cloud, provide guidance on the AWS services that can support such deployments, and close with an illustrative AWS web architecture pattern that shows how Well-Architected use of standard services can meet the majority of gateway requirements.

Australian Government gateway requirements

The Australian Government Protective Security Policy Framework (PSPF) highlights the requirement to use secure internet gateways (SIGs) and references the Australian Information Security Manual (ISM) control framework to guide agencies. The ISM has a chapter on gateways, which includes the following recommendations for gateway architecture and operations:

  • Provide a central control point for traffic in and out of the system.
  • Inspect and filter traffic.
  • Log and monitor traffic and gateway operation to a secure location. Use appropriate security event alerting.
  • Use secure administration practices, including multi-factor authentication (MFA) access control, minimum privilege, separation of roles, and network segregation.
  • Perform appropriate authentication and authorization of users, traffic, and equipment. Use MFA when possible.
  • Use demilitarized zone (DMZ) patterns to limit access to internal networks.
  • Test security controls regularly.
  • Set up firewalls between security domains and public network infrastructure.

Since the PSPF references the ISM, the agency should apply the overall ISM framework to meet ISM requirements such as governance and security patching for the environment. The ISM is a risk-based framework, and the risk posture of the workload and organization should inform how to assess the controls. For example, requirements for authentication of users might be relaxed for a public-facing website.

In traditional on-premises environments, some Australian Government agencies have mandated centrally assessed and managed gateway capabilities in order to drive economies of scale across multiple government agencies. However, for a gateway used by only a single agency, the PSPF provides the option for that agency to undertake its own risk-based assessment of the gateway solution.

Government agencies in other countries have similar requirements for connecting to cloud providers. For example, the U.S. Government Office of Management and Budget (OMB) mandates that U.S. government users access the cloud through specific agency connections.

Connecting to the cloud through on-premises gateways

Given the existence of centrally managed off-cloud gateways, one approach by customers has been to continue to use these off-cloud gateways and then connect to AWS through the on-premises gateway environment by using AWS Direct Connect, as shown in Figure 1.
Figure 1: Connecting to the AWS Cloud through an agency gateway and then through AWS Direct Connect

Although this approach does work, and makes use of existing gateway capability, it has a number of downsides:

  • A potential single point of failure: If the on-premises gateway capability is unavailable, the agency can lose connectivity to the cloud-based solution.
  • Bandwidth limitations: The agency is limited by the capacity of the gateway, which might not have been developed with dynamically scalable and bandwidth-intensive cloud-based workloads in mind.
  • Latency issues: The requirement to traverse multiple network hops, in addition to the gateway, will introduce additional latency. This can be particularly problematic with architectures that involve API communications being sent back and forth across the gateway environment.
  • Castle-and-moat thinking: Relying only on the gateway as the security boundary can discourage agencies from using and recognizing the cloud-based security controls that are available.

Some of these challenges are discussed in the context of US Trusted Internet Connection (TIC) programs in this whitepaper.

Moving gateways to the cloud

In response to the limitations discussed in the last section, both customers and AWS Partners have built gateway solutions on AWS to meet gateway requirements while remaining fully within the cloud environment. See this type of solution in Figure 2.
Figure 2: Moving the gateway to the AWS Cloud

With this approach, you can fully leverage the scalable bandwidth that is available from the AWS environment, and you can also reduce latency issues, particularly when multiple hops to and from the gateway are required. This blog post describes a pilot program in the US that combines AWS services and AWS Marketplace technologies to provide a cloud-based gateway.

You can use AWS Transit Gateway (released after the referenced pilot program) to provide the option to centralize such a gateway capability within an organization. This makes it possible to utilize the gateway across multiple cloud solutions that are running in their own virtual private clouds (VPCs) and accounts. This approach also facilitates the principle of the gateway being the central control point for traffic flowing in and out. For more information on using AWS Transit Gateway with security appliances, see the Appliance VPC topic in the Amazon VPC documentation.
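To make this concrete, the following is a minimal sketch using the AWS SDK for Python (boto3) that creates a transit gateway and attaches a workload VPC to it. The Region, names, VPC ID, and subnet ID are placeholders, and the option values are illustrative choices rather than mandated settings.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Create the transit gateway that acts as the central control point.
tgw = ec2.create_transit_gateway(
    Description="Central agency gateway hub",
    Options={
        "DnsSupport": "enable",
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]

# In practice you would wait for the transit gateway to reach the
# "available" state before attaching VPCs (for example, by polling
# describe_transit_gateways).

# Attach a workload VPC so its traffic can flow through the hub.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",           # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
)
```

Routing between attachments is then controlled through transit gateway route tables, which is what lets you force traffic through an inspection path.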

More recently, AWS has released additional services and features that can assist with delivering government gateway requirements.

Elastic Load Balancing Gateway Load Balancer provides the capability to deploy third-party network appliances in a scalable fashion. With this capability, you can leverage existing investments in licensing, use familiar tooling, and reuse intellectual property (IP) such as rule sets, as well as existing skills, because staff are already trained in configuring and managing the chosen device. You have one gateway for distributing traffic across multiple virtual appliances, and the appliances scale up and down based on demand, which reduces the potential points of failure in your network and increases availability. Gateway Load Balancer makes third-party network appliances automatically scalable and easier to deploy, and you can find an AWS Partner with Gateway Load Balancer expertise on the AWS Marketplace. For more information on combining Transit Gateway and Gateway Load Balancer for centralized inspection and processing of East-West (VPC-to-VPC) and North-South (internet or on-premises bound) traffic, see this blog post.
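As a rough illustration, here is a boto3 sketch of the Gateway Load Balancer scaffolding; the names, VPC ID, and subnet ID are placeholders, and appliance registration and VPC routing are omitted.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-southeast-2")

# Gateway Load Balancers distribute traffic to appliances over GENEVE
# (port 6081); the health check uses a port the appliance answers on.
tg = elbv2.create_target_group(
    Name="appliance-fleet",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",  # appliance VPC (placeholder)
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckPort="80",
)["TargetGroups"][0]

glb = elbv2.create_load_balancer(
    Name="agency-gwlb",
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0"],  # placeholder
)["LoadBalancers"][0]

# A Gateway Load Balancer listener has no protocol or port; it simply
# forwards all traffic to the appliance target group.
elbv2.create_listener(
    LoadBalancerArn=glb["LoadBalancerArn"],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```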

To further simplify this area for customers, AWS has introduced the AWS Network Firewall service. Network Firewall is a managed service that you can use to deploy essential network protections for your VPCs. The service is simple to set up and scales automatically with your network traffic so you don’t have to worry about deploying and managing any infrastructure. You can combine Network Firewall with Transit Gateway to set up centralized inspection architecture models, such as those described in this blog post.
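The following is a minimal sketch of a Network Firewall deployment with boto3, assuming a simple domain denylist; all names, IDs, the example domain, and the capacity value are placeholders you would replace with your own design.

```python
import boto3

nfw = boto3.client("network-firewall", region_name="ap-southeast-2")

# A small stateful rule group that denies traffic to a listed domain
# (the domain and capacity are illustrative).
rule_group = nfw.create_rule_group(
    RuleGroupName="block-bad-domains",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".example-malware-domain.com"],
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "DENYLIST",
            }
        }
    },
)

policy = nfw.create_firewall_policy(
    FirewallPolicyName="agency-gateway-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": rule_group["RuleGroupResponse"]["RuleGroupArn"]}
        ],
    },
)

# Deploy the firewall into the inspection VPC (placeholder IDs).
nfw.create_firewall(
    FirewallName="agency-gateway-firewall",
    FirewallPolicyArn=policy["FirewallPolicyResponse"]["FirewallPolicyArn"],
    VpcId="vpc-0123456789abcdef0",
    SubnetMappings=[{"SubnetId": "subnet-0123456789abcdef0"}],
)
```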

Reviewing a typical web architecture in the cloud

In the last section, you saw that SIG patterns can be created in the cloud. Now we can put that in context with the layered security controls that are implemented in a typical web application deployment. Consider a web application hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, as shown in Figure 3, within the context of other services that will support the architecture.
Figure 3: Security controls in a web application hosted on EC2

Although this example doesn’t include a traditional SIG-type infrastructure that inspects and controls traffic before it’s sent to the AWS Cloud, the architecture has many of the technical controls that are called for in SIG solutions as a result of using the AWS Well-Architected Framework. We’ll now step through some of these services to highlight the relevant security functionality that each provides.

Network control services

Amazon Virtual Private Cloud (Amazon VPC) is a service you can use to launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Amazon VPC lets you use multiple layers of security, including security groups and network access control lists (network ACLs), to help control access to Amazon EC2 instances in each subnet. Security groups act as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. A network ACL is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups to add an additional layer of security to your VPC. Read about the specific differences between security groups and network ACLs.

Having this level of control throughout the application architecture has advantages over relying only on a central, border-style gateway pattern, because security groups for each tier of the application architecture can be locked down to only those ports and sources required for that layer. For example, in the architecture shown in Figure 3, only the application load balancer security group would allow web traffic (ports 80, 443) from the internet. The web-tier-layer security group would only accept traffic from the load-balancer layer, and the database-layer security group would only accept traffic from the web tier.
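As a sketch of this layered pattern, the following boto3 example creates a load-balancer-tier security group that accepts HTTPS from the internet, and a web-tier security group that accepts traffic only from the load balancer tier. The VPC ID is a placeholder, and a database tier would repeat the same pattern, referencing the web-tier group.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

# Load balancer tier: accepts HTTPS from the internet.
alb_sg = ec2.create_security_group(
    GroupName="alb-tier", Description="Load balancer tier", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=alb_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Web tier: accepts traffic only from the load balancer tier, expressed
# as a security-group reference instead of a CIDR range.
web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web tier", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": alb_sg}],
    }],
)
```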

If you need to provide a central point of control with this model, you can use AWS Firewall Manager, which simplifies the administration and maintenance of your VPC security groups across multiple accounts and resources. With Firewall Manager, you can configure and audit your security groups for your organization using a single, central administrator account. Firewall Manager automatically applies rules and protections across your accounts and resources, even as you add new resources. Firewall Manager is particularly useful when you want to protect your entire organization, or if you frequently add new resources that you want to protect via a central administrator account.

To separate management plane activities from data plane traffic in workloads, agencies can use multiple elastic network interfaces on EC2 instances to provide a separate management network path.

Edge protection services

In the example in Figure 3, several services are used to provide edge-based protections in front of the web application. AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards applications that are running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there’s no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced. When you use Shield Advanced, you can apply protections at the Amazon CloudFront, Amazon EC2, and Application Load Balancer layers. Shield Advanced also gives you 24/7 access to the AWS DDoS Response Team (DRT).

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns that you define. Again, you can apply this protection at both the Amazon CloudFront and application load balancer layers in our illustrated solution. Agencies can also use managed rules for WAF to benefit from rules developed and maintained by AWS Marketplace sellers.
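As an illustration, the following boto3 sketch creates a regional web ACL that applies the AWS-managed SQL injection rule group to whatever resource you later associate it with. The ACL name and metric names are placeholders, and for CloudFront you would instead use the CLOUDFRONT scope in the us-east-1 Region.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="ap-southeast-2")

# Web ACL that allows traffic by default and evaluates the managed
# SQL injection rule group on every request.
wafv2.create_web_acl(
    Name="agency-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "agency-web-acl",
    },
    Rules=[{
        "Name": "sqli-managed-rules",
        "Priority": 0,
        "OverrideAction": {"None": {}},
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesSQLiRuleSet",
            }
        },
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "sqli-managed-rules",
        },
    }],
)
```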

Amazon CloudFront is a fast content delivery network (CDN) service. CloudFront seamlessly integrates with AWS Shield, AWS WAF, and Amazon Route 53 to help protect against multiple types of unauthorized access, including network and application layer DDoS attacks.

Logging and monitoring services

The example application in Figure 3 shows several services that provide logging and monitoring of network traffic, application activity, infrastructure, and AWS API usage.

At the VPC level, the VPC Flow Logs feature provides you with the ability to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3). Traffic Mirroring is a feature that you can use in a VPC to capture traffic if needed for inspection. This allows agencies to implement full packet capture on a continuous basis, or in response to a specific event within the application.
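For example, a minimal boto3 call to enable flow logs for a VPC, publishing to S3, might look like the following; the VPC ID and bucket ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Capture all accepted and rejected traffic for the VPC and deliver it
# to an S3 bucket (CloudWatch Logs is the other supported destination).
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-log-bucket",
)
```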

Amazon CloudWatch provides a monitoring service with alarms and analytics. In the example application, AWS WAF can also be configured to log activity as described in the AWS WAF Developer Guide.

AWS Config provides a timeline view of the configuration of the environment. You can also define rules to provide alerts and remediation when the environment moves away from the desired configuration.

AWS CloudTrail is a service that you can use to handle governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity that is related to actions across your AWS infrastructure.
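A minimal boto3 sketch for creating and starting a multi-Region trail follows; the trail name and bucket are placeholders, and the bucket must already have a policy that permits CloudTrail delivery.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="ap-southeast-2")

# Create a multi-Region trail that delivers API activity logs to S3,
# then turn on logging for it.
cloudtrail.create_trail(
    Name="agency-audit-trail",
    S3BucketName="example-cloudtrail-bucket",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="agency-audit-trail")
```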

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. This blog post highlights a third-party assessment of GuardDuty that compares its performance to other intrusion detection systems (IDS).

Route 53 Resolver Query Logging lets you log the DNS queries that originate in your VPCs. With query logging turned on, you can see which domain names have been queried, the AWS resources from which the queries originated—including source IP and instance ID—and the responses that were received.
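As a sketch, the following boto3 calls create a query log configuration that writes to S3 and associate it with a VPC; the name, bucket ARN, and VPC ID are placeholders.

```python
import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="ap-southeast-2")

# Create the query log configuration, then associate it with the VPC
# whose DNS queries should be logged.
config = r53r.create_resolver_query_log_config(
    Name="agency-dns-query-logs",
    DestinationArn="arn:aws:s3:::example-dns-log-bucket",
    CreatorRequestId=str(uuid.uuid4()),
)
r53r.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-0123456789abcdef0",
)
```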

With Route 53 Resolver DNS Firewall, you can filter and regulate outbound DNS traffic for your VPCs. To do this, you create reusable collections of filtering rules in DNS Firewall rule groups, associate the rule groups to your VPC, and then monitor activity in DNS Firewall logs and metrics. Based on the activity, you can adjust the behavior of DNS Firewall accordingly.
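The following boto3 sketch shows that flow end to end for a simple block rule; the names, example domain, priorities, and VPC ID are all placeholders.

```python
import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="ap-southeast-2")

# Build a reusable domain list and add a domain to block.
domain_list = r53r.create_firewall_domain_list(
    CreatorRequestId=str(uuid.uuid4()),
    Name="blocked-domains",
)
r53r.update_firewall_domains(
    FirewallDomainListId=domain_list["FirewallDomainList"]["Id"],
    Operation="ADD",
    Domains=["malicious.example.com."],
)

# Create a rule group with a BLOCK rule that references the domain list.
rule_group = r53r.create_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    Name="agency-dns-firewall",
)
r53r.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["FirewallRuleGroup"]["Id"],
    FirewallDomainListId=domain_list["FirewallDomainList"]["Id"],
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
    Name="block-listed-domains",
)

# Associate the rule group with the VPC whose outbound DNS is filtered.
r53r.associate_firewall_rule_group(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId=rule_group["FirewallRuleGroup"]["Id"],
    VpcId="vpc-0123456789abcdef0",
    Priority=101,
    Name="agency-dns-firewall-assoc",
)
```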

Mapping services to control areas

Based on the services described above, we can summarize which services contribute to each control and recommendation area in the gateway chapter of the Australian ISM:

  • Inspect and filter traffic: AWS WAF, VPC Traffic Mirroring
  • Central control point: infrastructure as code, AWS Firewall Manager
  • Authentication and authorization (MFA): AWS Identity and Access Management (IAM), solution and application IAM, VPC security groups
  • Logging and monitoring: Amazon CloudWatch, AWS CloudTrail, AWS Config, Amazon VPC (flow logs and Traffic Mirroring), load balancer logs, Amazon CloudFront logs, Amazon GuardDuty, Route 53 Resolver Query Logging
  • Secure administration (MFA): IAM, directory federation (if used)
  • DMZ patterns: VPC subnet layout, security groups, network ACLs
  • Firewalls: VPC security groups, network ACLs, AWS WAF, Route 53 Resolver DNS Firewall
  • Web proxy; site and content filtering and scanning: AWS WAF, AWS Firewall Manager

Note that a listed AWS service might not provide all relevant controls in each area; it is part of the customer’s risk assessment and design to determine what additional controls need to be implemented.

As you can see, many of the recommended practices and controls from the Australian Government gateway requirements are already encompassed in a typical Well-Architected solution. The implementing agency then has two options: it can continue to place such a solution behind a gateway that runs either within or outside of AWS, treating the controls inherent in the application architecture as additional layers of defense; or it can conduct a risk assessment to determine which gateway controls are already supplied by the application architecture, and thereby reduce the control requirements for any gateway layer placed in front of the application.

Summary

In this blog post, we’ve discussed the requirements for Australian Government gateways, which provide network controls to secure workloads. We’ve outlined the downsides of traditional on-premises solutions and illustrated how services such as AWS Transit Gateway, Gateway Load Balancer, and AWS Network Firewall facilitate moving gateway solutions into the cloud; these are services you can evaluate against your network control requirements. Finally, we reviewed a typical web architecture running in the AWS Cloud, with its associated services, to illustrate how many of the typical gateway controls can be met by using a standard Well-Architected approach.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security or Networking forums or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


John Hildebrandt

John is a Principal Solutions Architect in the Australian National Security team at AWS in Canberra, Australia. He is passionate about facilitating cloud adoption for customers to enable innovation. John has been working with government customers at AWS for over 8 years, as the first employee for the ANZ Public Sector team.

AWS achieves FedRAMP P-ATO for 5 services in AWS US East/West and GovCloud (US) Regions

Post Syndicated from Amendaze Thomas original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-5-services-in-aws-us-east-west-and-govcloud-us-regions/

We’re pleased to announce that five additional AWS services have achieved a provisional authority to operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). These services provide the following capabilities for the federal government and customers with regulated workloads:

  • Enable your organization’s developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs with AWS Batch.
  • Aggregate, organize, and prioritize your security alerts or findings from multiple AWS services using AWS Security Hub.
  • Provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates using AWS Certificate Manager.
  • Enable customers to set up and govern a new, secure, multi-account AWS environment using AWS Control Tower.
  • Provide a fully managed Kubernetes service with Amazon Elastic Kubernetes Service.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

AWS US East/West Regions (FedRAMP Moderate Authorization)

AWS GovCloud (US) Regions (FedRAMP High Authorization)

AWS is continually expanding the scope of our compliance programs to help enable your organization to use our services for sensitive and regulated workloads. Today, AWS offers 90 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 76 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Amendaze Thomas

Amendaze is the manager of the AWS Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the federal government, and over 13 years of experience supporting CISO teams with risk management framework (RMF) activities.