Tag Archives: Government

Rapid7 CEO Corey E. Thomas Appointed To National Security Telecommunications Advisory Committee

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/02/16/rapid7-ceo-corey-e-thomas-appointed-to-national-security-telecommunications-advisory-committee/

Rapid7 CEO Corey E. Thomas Appointed To National Security Telecommunications Advisory Committee

President Biden has announced his intent to appoint a group of highly qualified and diverse industry leaders, including Rapid7 chairman & CEO Corey E. Thomas, to the President’s National Security Telecommunications Advisory Committee (NSTAC).

NSTAC’s mission is to provide the best possible technical information and policy advice to assist the President and other stakeholders responsible for critical national security and emergency preparedness (NS/EP) services. The committee advises the White House on the reliability, security, and preparedness of vital communications and information infrastructure. It focuses on five key themes:

  • Strengthening national security
  • Enhancing cybersecurity
  • Maintaining the global communications infrastructure
  • Assuring communications for disaster response
  • Addressing critical infrastructure interdependencies and dependencies

Thomas joins a talented group of telecommunications and security executives from companies such as AT&T, Microsoft, Cisco, Lockheed Martin, T-Mobile, and Verizon. These executives bring diverse perspectives backed by years of unique industry experience.

“It is an extreme honor and privilege to be named to the President’s National Security Telecommunications Advisory Committee,” said Thomas. “I look forward to the remarkable opportunity to provide cybersecurity guidance to the President’s administration and to work alongside and learn from this talented group of individuals, many of whom I’ve admired throughout my career.”

AWS now licensed by DESC to operate as a Tier 1 cloud service provider in the Middle East (UAE) Region

Post Syndicated from Ioana Mecu original https://aws.amazon.com/blogs/security/aws-now-licensed-by-desc-to-operate-as-a-tier-1-cloud-service-provider-in-the-middle-east-uae-region/

We continue to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that our Middle East (UAE) Region is now certified by the Dubai Electronic Security Centre (DESC) to operate as a Tier 1 cloud service provider (CSP). This alignment with DESC requirements demonstrates our continuous commitment to adhere to the heightened expectations for CSPs. AWS government customers can run their applications in certified AWS Cloud Regions with confidence.

AWS was evaluated by independent third-party auditor BSI on behalf of DESC on January 23, 2023. The Certificate of Compliance illustrating the AWS compliance status is available through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.
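
If you prefer to retrieve reports programmatically rather than through the console, AWS Artifact also exposes an API. The following is a minimal sketch using the boto3 artifact client; it assumes a recent boto3 version that includes this client, configured credentials with Artifact access, and that the response fields shown match the reports available to your account.

```python
# Minimal sketch: listing the compliance reports available to your account
# through the AWS Artifact API. Assumes boto3 includes the "artifact" client.
import boto3

artifact = boto3.client("artifact", region_name="us-east-1")

response = artifact.list_reports()
for report in response.get("reports", []):
    # Each entry summarizes a report such as a certificate of compliance.
    print(report.get("id"), report.get("name"))
```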

As of this writing, 62 services offered in the Middle East (UAE) Region are in scope of this certification. For up-to-date information, including when additional services are added, visit the AWS Services in Scope by Compliance Program webpage and choose DESC CSP.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about DESC compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Ioana Mecu

Ioana is a Security Audit Program Manager at AWS, based in Madrid, Spain. She leads security audits, attestations, and certification programs across Europe and the Middle East. Ioana previously spent 15 years working in risk management, security assurance, and technology audits in the financial services industry.

Rapid7 Added to Carahsoft GSA Schedule Contract

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/01/24/rapid7-added-to-carahsoft-gsa-schedule-contract/

Rapid7 Added to Carahsoft GSA Schedule Contract

We are happy to announce that Rapid7 has been added to Carahsoft’s GSA Schedule contract, making our suite of comprehensive security solutions widely available to Federal, State, and Local agencies through Carahsoft and its reseller partners.

“With the ever-evolving threat landscape, it is important that the public sector has the resources to defend against sophisticated cyber attacks and vulnerabilities,” said Alex Whitworth, Sales Director who leads the Rapid7 Team at Carahsoft.

“The addition of Rapid7’s cloud risk management and threat detection solutions to our GSA Schedule gives Government customers and our reseller partners expansive access to the tools necessary to protect their critical infrastructure.”

With the GSA contract award, Rapid7 significantly expands its availability to Federal, State, and Local Government markets. In addition to GSA, Rapid7 was recently added to the Department of Homeland Security (DHS) Continuous Diagnostics and Mitigation (CDM) Approved Products List.

“As the attack surface continues to increase in size and complexity, it’s imperative that all organizations have access to the tools and services they need to monitor risk across their environments,” said Damon Cabanillas, Vice President of Public Sector Sales at Rapid7.

“This contract award is a massive step forward for Rapid7 as we work to further serve the public sector.”

Rapid7 is available through Carahsoft’s GSA Schedule No. 47QSWA18D008F. For more information on Rapid7’s products and services, contact the Rapid7 team at Carahsoft at [email protected].

Rapid7 Now Available Through Carahsoft’s NASPO ValuePoint

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/01/24/rapid7-now-available-through-carahsofts-naspo-valuepoint/

Rapid7 Now Available Through Carahsoft’s NASPO ValuePoint

We are happy to announce that Rapid7’s solutions have been added to the NASPO ValuePoint Cloud Solutions contract held by Carahsoft Technology Corp. The addition of this contract enables Carahsoft and its reseller partners to provide Rapid7’s Insight platform to participating States, Local Governments, and Educational (SLED) institutions.

“Rapid7’s Insight platform goes beyond threat detection by enabling organizations to quickly respond to attacks with intelligent automation,” said Alex Whitworth, Sales Director who leads the Rapid7 Team at Carahsoft.

“We are thrilled to work with Rapid7 and our reseller partners to deliver these advanced cloud risk management and threat detection solutions to NASPO members to further protect IT environments across the SLED space.”

NASPO ValuePoint is a cooperative purchasing program facilitating public procurement solicitations and agreements using a lead-state model. The program provides the highest standard of excellence in public cooperative contracting. By leveraging the leadership and expertise of all states and the purchasing power of their public entities, NASPO ValuePoint delivers the highest valued, reliable, and competitively sourced contracts, offering public entities outstanding prices.

“In partnership with Carahsoft and their reseller partners, we look forward to providing broader availability of the Insight platform to help security teams better protect their organizations from an increasingly complex and volatile threat landscape,” said Damon Cabanillas, Vice President of Public Sector Sales at Rapid7.

The Rapid7 Insight platform is available through Carahsoft’s NASPO ValuePoint Master Agreement #AR2472. For more information, visit https://www.carahsoft.com/rapid7/contracts.

2022 Canadian Centre for Cyber Security Assessment Summary report available with 12 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2022-canadian-centre-for-cyber-security-assessment-summary-report-available-with-12-additional-services/

We are pleased to announce the availability of the 2022 Canadian Centre for Cyber Security (CCCS) assessment summary report for Amazon Web Services (AWS). This assessment covers 12 additional AWS services, bringing the total to 132 AWS services and features assessed in the Canada (Central) AWS Region. A copy of the summary assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment, including the 12 new services, is available on the AWS Services in Scope page.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decisions to use cloud services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s Protected B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach). As of November 2022, 132 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the CCCS Medium cloud security profile. Meeting the CCCS Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and reassesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 13 years of experience in risk management, security assurance, and technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, ecommerce, and utilities industries.

Incident Reporting Regulations Summary and Chart

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/08/26/incident-reporting-regulations-summary-and-chart/

Incident Reporting Regulations Summary and Chart

A growing number of regulations require organizations to report significant cybersecurity incidents. We’ve created a chart that summarizes 11 proposed and current cyber incident reporting regulations and breaks down their common elements, such as who must report, what cyber incidents must be reported, the deadline for reporting, and more.

Download the chart now

This chart is intended as an educational tool to enhance the security community’s awareness of upcoming public policy actions, and to provide a big-picture look at how the incident reporting regulatory environment is unfolding. Please note, this chart is not comprehensive (there are even more incident reporting regulations out there!) and is current only as of August 8, 2022. Many of the regulations are subject to change.

This summary is for educational purposes only and nothing in this summary is intended as, or constitutes, legal advice.

Peter Woolverton led the research and initial drafting of this chart.


AWS achieves the first OSCAL format system security plan submission to FedRAMP

Post Syndicated from Matthew Donkin original https://aws.amazon.com/blogs/security/aws-achieves-the-first-oscal-format-system-security-plan-submission-to-fedramp/

Amazon Web Services (AWS) is the first cloud service provider to produce an Open Security Controls Assessment Language (OSCAL)–formatted system security plan (SSP) for the FedRAMP Project Management Office (PMO). This submission is the first step in the AWS effort to automate security documentation to simplify our customers’ journey through cloud adoption and accelerate the authorization to operate (ATO) process.

AWS continues its commitment to innovation and customer obsession. Our incorporation of the OSCAL format will improve the customer experience of reviewing and assessing security documentation. It can take an estimated 4,200 workforce hours for companies to receive an ATO, with much of the effort due to manual review and transcription of documentation. Automating this process through a machine-translatable language gives our customers the ability to ingest security documentation into a governance, risk management, and compliance (GRC) tool to automate much of this time-consuming task. AWS worked with an AWS Partner to ingest the AWS SSP through their tool, Xacta.
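
As a rough illustration of why a machine-readable format matters, here is a minimal sketch that loads an OSCAL-formatted SSP in JSON and pulls out a few fields a GRC tool might ingest. The file name is hypothetical; the keys follow the published NIST OSCAL system-security-plan JSON schema.

```python
# Minimal sketch: reading a few fields from an OSCAL JSON SSP.
# "ssp.json" is a hypothetical local file; keys follow the NIST OSCAL
# system-security-plan JSON schema.
import json

with open("ssp.json") as f:
    ssp = json.load(f)["system-security-plan"]

print("System:", ssp["metadata"]["title"])

# Each implemented requirement maps back to a control ID (e.g., "ac-2"),
# which is what a GRC tool cross-references during an assessment.
reqs = ssp["control-implementation"]["implemented-requirements"]
print(f"{len(reqs)} implemented requirements, including:")
for req in reqs[:5]:
    print(" -", req["control-id"])
```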

This is a first step in several initiatives AWS has planned to automate the security assurance process across multiple compliance frameworks. We continue to look for ways to earn trust with our customers, and over the next year we will continue to release new solutions that customers can use to rapidly deploy secure and innovative services.

“Providing the SSP packages in OSCAL is a great milestone in security automation marking the beginning of a new era in cybersecurity. We appreciate the leadership in this area and look forward to working with all cyber professionals, in particular with the visionary cloud service providers, to help deliver secure innovation faster to the people they serve.”

– Dr. Michaela Iorga, OSCAL Strategic Outreach Director, NIST

To learn more about OSCAL, visit the NIST OSCAL website. To learn more about FedRAMP’s plans for OSCAL, visit the FedRAMP Blog.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits case studies and customer success stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS account team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Matthew Donkin

Matthew Donkin, AWS Security Compliance Lead, provides direction and guidance for security documentation automation and physical security compliance, and assists customers in navigating compliance in the cloud. He is leading the development of the industry’s first Open Security Controls Assessment Language (OSCAL) artifacts, enabling a faster and more reliable way to process resource-intensive documentation within the authorization process.

Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

Post Syndicated from Rob Samuel original https://aws.amazon.com/blogs/security/canadian-centre-for-cyber-security-assessment-summary-report-now-available-in-aws-artifact/

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium Cloud Security Profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the medium cloud security profile. Meeting the medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. If you have questions about this blog post, please start a new thread on the AWS Artifact forum or contact AWS Support.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Rob Samuel

Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry, and has previously held leadership roles, including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, various security leadership roles as a public servant, and served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani

Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft

Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.


New US Law to Require Cyber Incident Reports

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2022/03/10/new-us-laws-to-require-cyber-incident-reports/

New US Law to Require Cyber Incident Reports

On March 9, 2022, the US Congress passed the Cyber Incident Reporting for Critical Infrastructure Act of 2022. Once signed by the President, it will become law. The law will require critical infrastructure owners and operators to report cyber incidents and ransomware payments. The legislation was developed in the wake of the SolarWinds supply chain attack and recently gained additional momentum from the Russia-Ukraine conflict. This post will walk through highlights from the law.

Rapid7 supports efforts to increase transparency and information sharing in order to strengthen awareness of the cybersecurity threat landscape and prepare for cyberattacks. We applaud passage of the Cyber Incident Reporting for Critical Infrastructure Act.

What’s this law about?

The Cyber Incident Reporting for Critical Infrastructure Act will require critical infrastructure owners and operators — such as water and energy utilities, health care organizations, some IT providers, etc. — to submit reports to the Cybersecurity and Infrastructure Security Agency (CISA) for cybersecurity incidents and ransomware payments. The law will provide liability protections for submitting reports to encourage compliance, but noncompliance can result in a civil lawsuit. The law will also require the government to analyze, anonymize, and share information from the reports to provide agencies, Congress, companies, and the public with a better view of the cyber threat landscape.

An important note about the timeline: The requirements do not take effect until CISA issues a clarifying regulation. The law will require CISA to issue this regulation within 42 months (though CISA may take less time), so the requirements may not be imminent. In the meantime, the Cyber Incident Reporting for Critical Infrastructure Act provides information on what CISA’s future rule must address.

We detail these items from the law below.

Requiring reporting of cyber incidents and ransom payments

  • Report requirement. Critical infrastructure owners and operators must report substantial cybersecurity incidents to CISA, as well as any ransom payments. (However, as described below, this requirement does not come into effect until CISA issues a regulation.)
  • Type of incident. The types of cyber incidents that must be reported shall include actual breaches of sensitive information and attacks that disrupt business or operations. Mere threats or failed attacks do not need to be reported.
  • Report timeline. For a cyber incident, the report must be submitted within 72 hours after the affected organization determines the incident is substantial enough that it must be reported. For ransom payments, the report must be submitted within 24 hours after the payment is made.
  • Report contents. The reports must include a list of information, including attacker tactics and techniques. Information related to the incident must be preserved until the incident is fully resolved.
  • Enforcement. If an entity does not comply with reporting requirements, CISA may issue a subpoena to compel entities to produce the required information. The Justice Department may initiate a civil lawsuit to enforce the subpoena. Entities that do not comply with the subpoena may be found in contempt of court.

CISA rule to fill in details

  • Rule requirement. CISA is required to issue a regulation that will establish details on the reporting requirements. The reporting requirements do not take effect until this regulation is final.
  • Rule timeline. CISA has up to 42 months to finalize the rule (but the agency can choose to take less time).
  • Rule contents. The rule will establish the types of cyber incidents that must be reported, the types of critical infrastructure entities that must report, the content to be included in the reports, the mechanism for submitting the reports, and the details for preserving data related to the reports.

Protections for submitting reports

  • Not used for regulation. Reports submitted to CISA cannot be used to regulate the activities of the entity that submitted the report.
  • Privileges preserved. The covered entity may designate the reports as commercial and proprietary information. Submission of a report shall not be considered a waiver of any privilege or legal protection.
  • No liability for submitting. No court may maintain a cause of action against any person or entity on the sole basis of submitting a report in compliance with this law.
  • Cannot be used as evidence. Reports, and material used to prepare the reports, cannot be received as evidence or used in discovery proceedings in any federal or state court or regulatory body.

What the government will do with the report information

  • Authorized purposes. The federal government may use the information in the reports for cybersecurity purposes, responding to safety or serious economic threats, and preventing child exploitation.
  • Rapid response. For reports on ongoing threats, CISA must rapidly share cyber threat indicators and defensive measures with stakeholders.
  • Information sharing. CISA must analyze reports and share information with other federal agencies, Congress, private sector stakeholders, and the public. CISA’s information sharing must include assessment of the effectiveness of security controls, adversary tactics and techniques, and the national cyber threat landscape.

What’s Rapid7’s view of the law?

Rapid7 views the Cyber Incident Reporting for Critical Infrastructure Act as a positive step. Cybersecurity is essential to ensure critical infrastructure is safe, and this law will give federal agencies more insight into attack trends and could provide early warnings of major vulnerabilities or attacks in progress before they spread. The law carefully avoids requiring reports too early in the incident response process and provides protections to encourage companies to be open and transparent in their reports.

Still, the Cyber Incident Reporting for Critical Infrastructure Act does little to ensure critical infrastructure has safeguards that prevent cyber incidents from occurring in the first place. This law is unlikely to change the fact that many critical infrastructure entities are under-resourced and, in some cases, have security maturity that is not commensurate with the risks they face. The law’s enforcement mechanism (a potential contempt of court penalty) is not especially strong, and the final reporting rules may not be implemented for another 3.5 years. Ultimately, the law’s effect may be similar to state breach notification laws, which raised awareness but did not prompt widespread adoption of security safeguards for personal information until states implemented data security laws.

So, the Cyber Incident Reporting for Critical Infrastructure Act is a needed and helpful improvement — but, as always, there is more to be done.


AWS achieves FedRAMP P-ATO for 15 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-15-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

AWS is pleased to announce that 15 additional AWS services have achieved Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB).

AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. AWS now offers 111 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 91 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

Figure 1. Newly authorized services list

Descriptions of AWS Services now in FedRAMP P-ATO

These additional AWS services now provide the following capabilities for the U.S. federal government and customers with regulated workloads:

  • Amazon Detective simplifies analyzing, investigating, and quickly identifying the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources, and uses machine learning, statistical analysis, and graph theory to build a linked set of data enabling you to easily conduct faster and more efficient security investigations.
  • Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.
  • Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server, and delivers a wide range of data access, data management, and administrative capabilities.
  • Amazon Kendra is an intelligent search service powered by machine learning (ML).
  • Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service.
  • Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text.
  • Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
  • Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers on AWS.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that lets you generate and use your own encryption keys on the AWS Cloud.
  • AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and Cloud Map maintains the updated location of these dynamically changing resources.
  • AWS Glue DataBrew is a new visual data preparation tool that lets data analysts and data scientists quickly clean and normalize data to prepare it for analytics and machine learning.
  • AWS Outposts (hardware excluded) is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables you to build and run applications on premises using the same programming interfaces used in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
  • AWS Resource Groups grants you the ability to organize your AWS resources, and to manage and automate tasks for large numbers of resources at the same time.
  • AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck. After an initial assessment, a Snowmobile is transported to your data center, where AWS personnel configure it so it can be accessed as a network storage target. After you load your data, the Snowmobile is driven back to an AWS regional data center, where AWS imports the data into Amazon Simple Storage Service (Amazon S3).
  • AWS Transfer Family securely scales your recurring business-to-business file transfers to Amazon S3 and Amazon Elastic File System (Amazon EFS) using SFTP, FTPS, and FTP protocols.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Each of the services below is authorized at FedRAMP Moderate in the AWS US East/West Regions, at FedRAMP High in the AWS GovCloud (US) Regions, or both:

  • Amazon Detective
  • Amazon FSx for Lustre
  • Amazon FSx for Windows File Server
  • Amazon Kendra
  • Amazon Keyspaces (for Apache Cassandra)
  • Amazon Lex
  • Amazon Macie
  • Amazon MQ
  • AWS CloudHSM
  • AWS Cloud Map
  • AWS Glue DataBrew
  • AWS Outposts
  • AWS Resource Groups
  • AWS Snowmobile
  • AWS Transfer Family

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS Account Team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Author

Alexis Robinson

Alexis is the Head of the U.S. Government Security and Compliance Program for AWS. For over 10 years, she has served federal government clients, advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment, including cloud services applicable to AWS East/West and AWS GovCloud (US) Regions.

Update to GLBA Security Requirements for Financial Institutions

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/11/10/update-to-glba-security-requirements-for-financial-institutions/

Update to GLBA Security Requirements for Financial Institutions

Heads up, financial institutions: the Federal Trade Commission (FTC) announced the first cybersecurity updates to the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule since 2003. The new rule strengthens the required security safeguards for customer information. This includes formal risk assessments, access controls, regular penetration testing and vulnerability scanning, and incident response capabilities, among other things.

Several of these changes go into effect in November 2022, to provide organizations time to prepare for compliance. Below, we’ll detail the changes in comparison to the previous rule.

Background on the Safeguards Rule

GLBA requires, among other things, a wide range of “financial institutions” to protect customer information. Enforcement of GLBA is split among several federal agencies, with the FTC’s Safeguards Rule covering non-banking financial institutions. Previously, the Safeguards Rule left the implementation details of several aspects of the information security program up to the financial institution, based on its risk assessment.

The Safeguards Rule’s broad definition of “financial institutions” includes non-bank businesses that offer financial products or services — such as retailers, automobile dealers, mortgage brokers, non-bank lenders, property appraisers, tax preparers, and others. The definition of “customer information” is also broad, including any record containing non-public personally identifiable information about a customer that is handled or maintained by or on behalf of a financial institution.

Updates to the Safeguards Rule

Many of the updates concern strengthened requirements on how financial institutions must implement aspects of their security programs. Below is a short summary of the changes. Where applicable, we include citations to both the updated rule (starting at page 123) and the previous rule (at 16 CFR 314) for easy comparison.

Overall security program

  • Current rule: Financial institutions must maintain a comprehensive, written information security program with administrative, technical, and physical safeguards to ensure the security, confidentiality, and integrity of customer information. 16 CFR 314.3(a)-(b).
  • Updated rule: The updated rule now requires the information security program to include the processes and safeguards listed below (i.e., risk assessment, security safeguards, etc.). 16 CFR 314.3(a).
  • Approx. effective date: November 2022

Risk assessment

  • Current rule: Financial institutions are required to identify internal and external risks to the security, confidentiality, and integrity of customer information. The risk assessment must address employee training, risks to information systems, and detecting and responding to security incidents and events. 16 CFR 314.4(b).
  • Updated rule: The update includes more specific criteria for what the risk assessment must include, such as criteria for evaluating and categorizing security risks and threats, and criteria for assessing the adequacy of security safeguards. The risk assessment must describe how identified risks will be mitigated or accepted, and it must be in writing. 16 CFR 314.4(b).
  • Approx. effective date: November 2022

Security safeguards

  • Current rule: Financial institutions must implement safeguards to control the risks identified through the risk assessment. 16 CFR 314.4(c). Financial institutions must require service providers to maintain safeguards to protect customer information. 16 CFR 314.4(d).
  • Updated rule: The updated rule requires that the safeguards include:
    – Access controls, including providing the least privilege;
    – Inventory and classification of data, devices, and systems;
    – Encryption of customer information at rest and in transit over external networks;
    – Secure development practices for in-house software and applications;
    – Multi-factor authentication;
    – Secure data disposal;
    – Change management procedures; and
    – Monitoring activity of unauthorized users and detecting unauthorized access or use of customer information. 16 CFR 314.4(c)(1)-(8).
  • Approx. effective date: November 2022

Testing and evaluation

  • Current rule: Financial institutions must regularly test or monitor the effectiveness of the security safeguards, and make adjustments based on the testing. 16 CFR 314.4(c), (e).
  • Updated rule: Regular testing of safeguards must now include either continuous monitoring or periodic penetration testing (annually) and vulnerability assessments (semi-annually). 16 CFR 314.4(d).
  • Approx. effective date: November 2022

Incident response

  • Current rule: Financial institutions must include cybersecurity incident detection and response in their risk assessments, and have safeguards to address those risks. 16 CFR 314.4(b)(3)-(c).
  • Updated rule: Financial institutions are required to establish a written plan for responding to any security event materially affecting the confidentiality, integrity, or availability of customer information. 16 CFR 314.4(h).
  • Approx. effective date: November 2022

Workforce and personnel

  • Current rule: Financial institutions must designate an employee to coordinate the information security program. 16 CFR 314.4(a). Financial institutions must select service providers that can maintain security and require service providers to implement the safeguards. 16 CFR 314.4(d).
  • Updated rule: The rule now requires designation of a single “qualified individual” to be responsible for the security program; this can be a third-party contractor. 16 CFR 314.4(a). Financial institutions must now provide security awareness training and updates to personnel. 16 CFR 314.4(e). The rule also requires periodic reports to a board of directors or governing body regarding all material matters related to the information security program. 16 CFR 314.4(i).
  • Approx. effective date: November 2022

Scope of coverage

  • Updated rule: The FTC update expands the definition of “financial institution” to require “finders” — companies that bring together buyers and sellers — to follow the Safeguards Rule. 16 CFR 314.2(h)(1). However, financial institutions that maintain information on fewer than 5,000 consumers are exempt from the requirements of a written risk assessment, continuous monitoring or periodic pentesting and/or vulnerability scans, incident response plan, and annual reporting to the Board. 16 CFR 314.6.
  • Approx. effective date: November 2021 (unlike many of the other updates, this item is not delayed for a year)

Incident reporting next?

In addition to the above, the FTC is also considering requirements that financial institutions report cybersecurity incidents and events to the FTC. Similar requirements are in place under the Cybersecurity Regulation at the New York Department of Financial Services. If the FTC moves forward with these incident reporting requirements, financial institutions could expect the requirements to be implemented later in 2022 or early 2023.

Financial institutions with robust security programs will already be performing many of these practices. For them, the updated Safeguards Rule will not represent a sea change in internal security operations. However, by making these security practices a formal regulatory requirement, the updated Safeguards Rule will make accountability and compliance even more important.


AWS achieves FedRAMP P-ATO for 18 additional services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-18-additional-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

We’re pleased to announce that 18 additional AWS services have achieved Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB). The following are the 18 additional services with FedRAMP authorization for the US federal government and organizations with regulated workloads:

  • Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily.
  • Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning to extract health data from medical text; no machine learning experience is required.
  • Amazon Elastic Kubernetes Service (Amazon EKS) is a managed container service that gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises.
  • Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service.
  • Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud that lets you easily create and publish interactive BI dashboards that include machine learning-powered insights.
  • Amazon Simple Email Service (Amazon SES) is a cost-effective, flexible, and scalable email service that enables developers to send mail from within any application.
  • Amazon Textract is a machine learning service that automatically extracts text, handwriting, and other data from scanned documents, going beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables.
  • AWS Backup enables you to centralize and automate data protection across AWS services.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud.
  • AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.
  • AWS Ground Station is a fully managed service that lets you control satellite communications, process data, and scale your operations without having to worry about building or managing your own ground station infrastructure.
  • AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise. AWS OpsWorks for Chef Automate provides a fully managed Chef Automate server and suite of automation tools that give you workflow automation for continuous deployment, automated testing for compliance and security, and a user interface that gives you visibility into your nodes and node statuses. AWS OpsWorks for Puppet Enterprise is a fully managed configuration management service that hosts Puppet Enterprise, a set of automation tools from Puppet for infrastructure and application management.
  • AWS Personal Health Dashboard provides alerts and guidance for AWS events that might affect your environment, and provides proactive and transparent notifications about your specific AWS environment.
  • AWS Resource Groups grants you the ability to organize your AWS resources, and manage and automate tasks on large numbers of resources at one time.
  • AWS Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and enables automated remediation.
  • AWS Storage Gateway is a set of hybrid cloud storage services that gives you on-premises access to virtually unlimited cloud storage.
  • AWS Systems Manager provides a unified user interface so you can track and resolve operational issues across your AWS applications and resources from a central place.
  • AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Each of the services below is authorized at FedRAMP Moderate in the AWS US East/West Regions, at FedRAMP High in the AWS GovCloud (US) Regions, or both:

  • Amazon Cognito
  • Amazon Comprehend Medical
  • Amazon Elastic Kubernetes Service (Amazon EKS)
  • Amazon Pinpoint
  • Amazon QuickSight
  • Amazon Simple Email Service (Amazon SES)
  • Amazon Textract
  • AWS Backup
  • AWS CloudHSM
  • AWS CodePipeline
  • AWS Ground Station
  • AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise
  • AWS Personal Health Dashboard
  • AWS Resource Groups
  • AWS Security Hub
  • AWS Storage Gateway
  • AWS Systems Manager
  • AWS X-Ray

AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. Today, AWS offers 100 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 90 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

To learn what other public sector customers are doing on AWS, see our Customer Success Stories page. For up-to-date information when new services are added, see our AWS Services in Scope by Compliance Program page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Alexis Robinson

Alexis is the Head of the U.S. Government Security & Compliance Program for AWS. For over 10 years, she has served federal government clients, advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment, including cloud services applicable to AWS East/West and AWS GovCloud (US) Regions.

How US federal agencies can use AWS to encrypt data at rest and in transit

Post Syndicated from Robert George original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-use-aws-to-encrypt-data-at-rest-and-in-transit/

This post is part of a series about how Amazon Web Services (AWS) can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. You will learn how you can use AWS information security practices to meet the requirement to encrypt your data at rest and in transit, to the maximum extent possible.

Encrypt your data at rest in AWS

Data at rest represents any data that you persist in non-volatile storage for any duration in your workload. This includes block storage, object storage, databases, archives, IoT devices, and any other storage medium on which data is persisted. Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented.

AWS KMS provides a streamlined way to manage keys used for at-rest encryption. It integrates with AWS services to simplify using your keys to encrypt data across your AWS workloads. It uses hardware security modules that have been validated under FIPS 140-2 to protect your keys. You choose the level of access control that you need, including the ability to share encrypted resources between accounts and services. AWS KMS logs key usage to AWS CloudTrail to provide an independent view of who accessed encrypted data, including AWS services that are using keys on your behalf. As of this writing, AWS KMS integrates with 81 different AWS services.
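
As a minimal illustration of using AWS KMS directly, the following boto3 sketch encrypts and decrypts a small payload. The key alias is a placeholder for a key you have already created in your account.

```python
# Minimal sketch: encrypting a small payload directly with AWS KMS.
# "alias/my-app-key" is a hypothetical alias for an existing KMS key.
# Note: kms.encrypt() accepts at most 4 KB of plaintext; for larger data,
# AWS services use envelope encryption with data keys on your behalf.
import boto3

kms = boto3.client("kms")

ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",
    Plaintext=b"small secret payload",
)["CiphertextBlob"]

# For symmetric keys, decrypt() finds the right key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"small secret payload"
```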

You can use AWS KMS to encrypt other data types including application data with client-side encryption. A client-side application or JavaScript encrypts data before uploading it to S3 or other storage resources. As a result, uploaded data is protected in transit and at rest. Customer options for client-side encryption include the AWS SDK for KMS, the AWS Encryption SDK, and use of third-party encryption tools.
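
For client-side encryption specifically, a sketch with the AWS Encryption SDK for Python might look like the following; the key ARN is a placeholder, and the API shown matches version 2.x of the aws-encryption-sdk package.

```python
# Minimal sketch: client-side encryption with the AWS Encryption SDK (v2.x).
# The KMS key ARN below is a placeholder for one of your keys.
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)
provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/example-key-id"]
)

# Data is encrypted locally, so it is protected before it leaves the client.
ciphertext, _header = client.encrypt(source=b"agency records", key_provider=provider)
plaintext, _header = client.decrypt(source=ciphertext, key_provider=provider)
assert plaintext == b"agency records"
```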

You can also use AWS Secrets Manager to encrypt application passwords, connection strings, and other secrets. Database credentials, resource names, and other sensitive data used in AWS Lambda functions can be encrypted and accessed at run time. This increases the security of these secrets and allows for easier credential rotation.
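
In a Lambda function, retrieving such a secret at run time might look like this sketch; the secret name and its JSON fields are placeholders for a secret you have stored.

```python
# Minimal sketch: reading a database credential from AWS Secrets Manager
# inside a Lambda handler. "prod/db-credentials" is a hypothetical secret
# name, and its username/password fields are assumed from how it was stored.
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    secret = secrets.get_secret_value(SecretId="prod/db-credentials")
    creds = json.loads(secret["SecretString"])
    # Use creds["username"] / creds["password"] to open a DB connection here;
    # rotating the secret requires no code change.
    return {"statusCode": 200}
```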

KMS HSMs are FIPS 140-2 validated and accessible using FIPS validated endpoints. Agencies with additional requirements for a FIPS 140-2 Level 3 validated hardware security module (HSM) (for example, for securing third-party secrets managers) can use AWS CloudHSM.

For more information about AWS KMS and key management best practices, see the AWS Key Management Service documentation.

Encrypt your data in transit in AWS

In addition to encrypting data at rest, agencies must also encrypt data in transit. AWS provides a variety of solutions to help agencies encrypt data in transit and enforce this requirement.

First, all network traffic between AWS data centers is transparently encrypted at the physical layer. This data-link layer encryption includes traffic within an AWS Region as well as between Regions. Additionally, all traffic within a virtual private cloud (VPC) and between peered VPCs is transparently encrypted at the network layer when you are using supported Amazon EC2 instance types. Customers can choose to enable Transport Layer Security (TLS) for the applications they build on AWS using a variety of services. All AWS service endpoints support TLS to create a secure HTTPS connection to make API requests.
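
Beyond choosing TLS-capable endpoints, you can also require encrypted transport. For example, a common documented pattern is an S3 bucket policy that denies any request not made over TLS; the sketch below applies one with boto3, using a placeholder bucket name.

```python
# Minimal sketch: deny non-TLS access to an S3 bucket via bucket policy.
# "my-agency-bucket" is a hypothetical bucket name.
import json
import boto3

bucket = "my-agency-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        # aws:SecureTransport is "false" when the request is made over plain HTTP.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```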

AWS offers several options for agency-managed infrastructure within the AWS Cloud that needs to terminate TLS. These options include load balancing services (for example, Elastic Load Balancing, Network Load Balancer, and Application Load Balancer), Amazon CloudFront (a content delivery network), and Amazon API Gateway. Each of these endpoint services enables customers to upload their digital certificates for the TLS connection. Digital certificates then need to be managed appropriately to account for expiration and rotation requirements. AWS Certificate Manager (ACM) simplifies generating, distributing, and rotating digital certificates. ACM offers publicly trusted certificates that can be used in AWS services that require certificates to terminate TLS connections to the internet. ACM also provides the ability to create a private certificate authority (CA) hierarchy that can integrate with existing on-premises CAs to automatically generate, distribute, and rotate certificates to secure internal communication among customer-managed infrastructure.
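
As a sketch of the ACM workflow described above, the following requests a public certificate with DNS validation through boto3. The domain is a placeholder, and you would still need to create the returned validation records in your DNS zone.

```python
# Minimal sketch: requesting a DNS-validated public certificate from ACM.
# "app.example.gov" is a placeholder domain you control.
import boto3

acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="app.example.gov",
    ValidationMethod="DNS",
)
print("Certificate ARN:", response["CertificateArn"])

# describe_certificate returns the CNAME records to add for DNS validation
# (the ResourceRecord field may take a few moments to populate).
details = acm.describe_certificate(CertificateArn=response["CertificateArn"])
for option in details["Certificate"]["DomainValidationOptions"]:
    print(option.get("ResourceRecord"))
```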

Finally, you can encrypt communications between your EC2 instances and other AWS resources that are connected to your VPC, such as Amazon Relational Database Service (Amazon RDS) databases, Amazon Elastic File System (Amazon EFS) file systems, Amazon S3, Amazon DynamoDB, Amazon Redshift, Amazon EMR, Amazon OpenSearch Service, Amazon ElastiCache for Redis, Amazon FSx for Windows File Server, AWS Direct Connect (DX) with MACsec, and more.

Conclusion

This post reviewed the services that you can use to encrypt data at rest and in transit, following the Executive Order on Improving the Nation’s Cybersecurity. I discussed using AWS KMS to manage keys for at-rest encryption, and using ACM to manage the certificates that protect data in transit.

Next steps

To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

Subscribe to the AWS Public Sector Blog newsletter to get the latest in AWS tools, solutions, and innovations from the public sector delivered to your inbox, or contact us.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Robert George

Robert is a Solutions Architect on the Worldwide Public Sector (WWPS) team who works with customers to design secure, high-performing, and cost-effective systems in the AWS Cloud. He has previously worked in cybersecurity roles focused on designing security architectures, securing enterprise systems, and leading incident response teams for highly regulated environments.

17 additional AWS services authorized for DoD workloads in the AWS GovCloud Regions

Post Syndicated from Tyler Harding original https://aws.amazon.com/blogs/security/17-additional-aws-services-authorized-for-dod-workloads-in-the-aws-govcloud-regions/

I’m pleased to announce that the Defense Information Systems Agency (DISA) has authorized 17 additional Amazon Web Services (AWS) services and features in the AWS GovCloud (US) Regions, bringing the total to 105 services and major features that are authorized for use by the U.S. Department of Defense (DoD). AWS now offers additional services to DoD mission owners in these categories: business applications; compute; containers; cost management; developer tools; management and governance; media services; security, identity, and compliance; and storage.

Why does authorization matter?

DISA authorization of 17 new cloud services enables mission owners to build secure, innovative solutions, including systems that process unclassified national security data (for example, Impact Level 5). DISA’s authorization demonstrates that AWS effectively implemented more than 421 security controls by using applicable criteria from NIST SP 800-53 Revision 4, the US General Services Administration’s FedRAMP High baseline, and the DoD Cloud Computing Security Requirements Guide.

Recently authorized AWS services at DoD Impact Levels (IL) 4 and 5 include the following:


Cost Management

  • AWS Budgets – Set custom budgets to track your cost and usage, from the simplest to the most complex use cases
  • AWS Cost Explorer – An interface that lets you visualize, understand, and manage your AWS costs and usage over time
  • AWS Cost & Usage Report – Itemize usage at the account or organization level by product code, usage type, and operation

Developer Tools

  • AWS CodePipeline – Automate continuous delivery pipelines for fast and reliable updates
  • AWS X-Ray – Analyze and debug production and distributed applications, such as those built using a microservices architecture

Media Services

  • Amazon Textract – Extract printed text, handwriting, and data from virtually any document

Security, Identity & Compliance

  • Amazon Cognito – Secure user sign-up, sign-in, and access control
  • AWS Security Hub – Centrally view and manage security alerts and automate security checks

Storage

  • AWS Backup – Centrally manage and automate backups across AWS services

Figure 1 shows the IL 4 and IL 5 AWS services that are now authorized for DoD workloads, broken out into functional categories.

Figure 1: The AWS services newly authorized by DISA

To learn more about AWS solutions for the DoD, see our AWS solution offerings. Follow the AWS Security Blog for updates on our Services in Scope by Compliance Program. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tyler Harding

Tyler is the DoD Compliance Program Manager for AWS Security Assurance. He has over 20 years of experience providing information security solutions to the federal civilian, DoD, and intelligence agencies.

How US federal agencies can authenticate to AWS with multi-factor authentication

Post Syndicated from Kyle Hart original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-authenticate-to-aws-with-multi-factor-authentication/

This post is part of a series about how AWS can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. We recognize that government agencies have varying degrees of identity management and cloud maturity and that the requirement to implement multi-factor, risk-based authentication across an entire enterprise is a vast undertaking. This post specifically focuses on how you can use AWS information security practices to help meet the requirement to “establish multi-factor, risk-based authentication and conditional access across the enterprise” as it applies to your AWS environment.

This post focuses on best practices for enterprise authentication to AWS, specifically federated access through an existing enterprise identity provider (IdP).

Many federal customers use authentication factors on their Personal Identity Verification (PIV) or Common Access Card (CAC) to authenticate to an existing enterprise identity service that supports Security Assertion Markup Language (SAML), which in turn grants user access to AWS. SAML is an industry-standard protocol, and most IdPs support a range of authentication methods, so even if you’re not using a PIV or CAC, the concepts will still work for your organization’s multi-factor authentication (MFA) requirements.

Accessing AWS with MFA

There are two categories we want to look at for authentication to AWS services:

  1. AWS APIs, which include access through the AWS Management Console, AWS command line tools, and the AWS SDKs.
  2. Resources you launch that are running within your AWS VPC, which can include database engines or operating system environments such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon WorkSpaces, or Amazon AppStream 2.0.

There is also a third category of services where authentication occurs in AWS that is beyond the scope of this post: applications that you build on AWS that authenticate internal or external end users to those applications. For this category, multi-factor authentication is still important, but it will vary based on the specifics of the application architecture. Workloads that sit behind an AWS Application Load Balancer can use the ALB to authenticate users through either OpenID Connect or a SAML IdP that enforces MFA upstream.

MFA for the AWS APIs

AWS recommends that you use SAML and an IdP that enforces MFA as your means of granting users access to AWS. Many government customers achieve AWS federated authentication with Active Directory Federation Services (AD FS). The IdP used by our federal government customers should enforce usage of CAC/PIV to achieve MFA and be the sole means of access to AWS.

Federated authentication uses SAML to assume an AWS Identity and Access Management (IAM) role for access to AWS resources. A role doesn’t have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.

AWS accounts in all AWS Regions, including the AWS GovCloud (US) Regions, have the same authentication options for IAM roles through identity federation with a SAML IdP. AWS Single Sign-On (AWS SSO) is another way to implement federated authentication to the AWS APIs in Regions where it is available.
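Under the hood, federation exchanges the IdP’s SAML assertion for temporary role credentials through AWS STS. Here is a minimal sketch in Python, assuming you have already captured a base64-encoded assertion from your MFA-enforcing IdP; the ARNs are placeholders.

```python
import boto3

sts = boto3.client("sts")
saml_assertion = "..."  # base64-encoded SAML response from your IdP (placeholder)

response = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/FederatedAdmins",          # placeholder
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/AgencyIdP",  # placeholder
    SAMLAssertion=saml_assertion,
    DurationSeconds=3600,
)
# Temporary credentials only; no long-term access keys are involved.
credentials = response["Credentials"]
```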

MFA for AWS CLI access

In AWS Regions other than AWS GovCloud (US), you can consider using the AWS CloudShell service, an interactive shell environment that runs in your web browser and uses the same authentication pipeline that you use to access the AWS Management Console, thus inheriting MFA enforcement from your SAML IdP.

If you need to use federated authentication with MFA for the CLI on your own workstation, you’ll need to retrieve and present the SAML assertion token. For information about how you can do this in Windows environments, see the blog post How to Set Up Federated API Access to AWS by Using Windows PowerShell. For information about how to do this with Python, see How to Implement a General Solution for Federated API/CLI Access Using SAML 2.0.

Conditional access

IAM permissions policies support conditional access. Common use cases include allowing certain actions only from a specified, trusted range of IP addresses; granting access only to specified AWS Regions; and granting access only to resources with specific tags. You should create your IAM policies to provide least-privilege access across a number of attributes. For example, you can grant an administrator access to launch or terminate an EC2 instance only if the request originates from a certain IP address and is tagged with an appropriate tag.
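A sketch of such a condition-based policy, created with the AWS SDK for Python; the CIDR block, tag key, and policy name are illustrative, not prescribed values.

```python
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:TerminateInstances"],
        "Resource": "*",
        "Condition": {
            # Allow only from a trusted agency IP range...
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
            # ...and only against instances carrying the expected tag.
            "StringEquals": {"aws:ResourceTag/environment": "sandbox"},
        },
    }],
}
boto3.client("iam").create_policy(
    PolicyName="ConditionalEc2Operators",
    PolicyDocument=json.dumps(policy_document),
)
```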

You can also implement conditional access controls using SAML session tags provided by your IdP and passed through the SAML assertion to be consumed by AWS. This means two separate users from separate departments can assume the same IAM role but have tailored, dynamic permissions. As an example, the SAML IdP can provide each individual’s cost center as a session tag on the role assertion. IAM policy statements can be written to allow the user from cost center A to administer resources from cost center A, but not resources from cost center B.
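A hypothetical policy statement expressing that cost center rule with the aws:PrincipalTag policy variable, so one role definition serves both departments:

```python
# A single shared statement: access is allowed only when the resource's
# CostCenter tag matches the CostCenter session tag asserted by the IdP.
cost_center_statement = {
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
        }
    },
}
```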

Many customers ask how to limit control plane access to certain IP addresses. AWS supports this, but there is an important caveat to highlight. Some AWS services, such as AWS CloudFormation, perform actions on behalf of an authorized user or role and execute from within the AWS Cloud’s own IP address ranges. See the AWS documentation for an example of a policy statement that uses the aws:ViaAWSService condition key to exclude AWS services from your IP address restrictions and avoid unexpected authorization failures.

Multi-factor authentication to resources you launch

You can launch resources such as Amazon WorkSpaces, Amazon AppStream 2.0, Amazon Redshift, and Amazon EC2 instances that you configure to require MFA. The Amazon WorkSpaces Streaming Protocol (WSP) supports CAC/PIV authentication for pre-authentication and in-session access to the smart card. For more information, see Use smart cards for authentication. To see a short video of it in action, see the blog post Amazon WorkSpaces supports CAC/PIV smart card authentication. Amazon Redshift and AppStream 2.0 support SAML 2.0 natively, so you can configure those services to work with your SAML IdP similarly to how you configure AWS Management Console access and inherit the MFA enforced by the upstream IdP.

MFA access to EC2 instances can use the same methods and enterprise directories as your on-premises environments. You can also implement other systems that enforce MFA access to an operating system, such as RADIUS or other third-party directory or MFA token solutions.

Shell access with Systems Manager Session Manager

An alternative method of MFA for shell access to EC2 instances is to use the Session Manager feature of AWS Systems Manager. Session Manager uses the Systems Manager agent (SSM Agent) to provide role-based access to a shell (PowerShell on Windows) on an instance. Users can access Session Manager from the AWS Management Console or from the command line with the Session Manager AWS CLI plugin. Similar to using CloudShell for CLI access, accessing EC2 hosts through Session Manager uses the same authentication pipeline that you use for accessing the AWS control plane. Your interactive session on that host can be configured for audit logging.
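For scripted use, the SDK exposes the same StartSession API that the console and CLI plugin call. A minimal sketch with a hypothetical instance ID follows; fully interactive terminal sessions additionally require the Session Manager plugin.

```python
import boto3

ssm = boto3.client("ssm")

# IAM, and therefore your upstream IdP's MFA, gates this call like any other
# AWS API request; no SSH key or open inbound port is needed on the instance.
session = ssm.start_session(Target="i-0123456789abcdef0")  # hypothetical instance ID
print(session["SessionId"])
```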

Security best practices in IAM

The focus of this post is integrating an agency’s existing MFA-enabled enterprise authentication service, but to make it easier for you to view the entire security picture, you might be interested in IAM security best practices. You can enforce these best-practice security configurations with AWS Organizations service control policies (SCPs), as shown in the sketch below.
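As one sketch of that enforcement, the following SCP denies most actions when a request carries an explicit non-MFA context. The policy content and names are illustrative, the BoolIfExists form avoids breaking SAML-federated sessions (which omit the key), and SCPs require an organization with all features enabled.

```python
import json
import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyWhenNoMFA",
        "Effect": "Deny",
        "NotAction": ["iam:*", "sts:*"],  # leave room to repair a broken sign-in
        "Resource": "*",
        # Deny only when the MFA key is present and false.
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
boto3.client("organizations").create_policy(
    Name="RequireMFA",
    Description="Deny requests that explicitly lack MFA (illustrative)",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```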

Conclusion

This post covered methods your federal agency should consider when applying the multi-factor authentication (MFA) requirements in the Executive Order on Improving the Nation’s Cybersecurity to your AWS environment. To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Kyle Hart

Kyle is a Principal Solutions Architect supporting US federal government customers in the Washington, D.C. area.

Cybersecurity in the Infrastructure Bill

Post Syndicated from Harley Geiger original https://blog.rapid7.com/2021/08/31/cybersecurity-in-the-infrastructure-bill/

Cybersecurity in the Infrastructure Bill

On August 10, 2021, the U.S. Senate passed the Infrastructure Investment and Jobs Act of 2021 (H.R.3684). The bill comes in at 2,700+ pages, provides for $1.2T in spending, and includes several cybersecurity items. We expect this legislation to become law around late September and do not expect significant changes to the content. This post provides highlights on cybersecurity from the legislation.

(Check out our joint letter calling for cybersecurity in infrastructure legislation here.)

Cybersecurity is a priority — that’s progress

Cybersecurity is essential to ensure modern infrastructure is safe, and Rapid7 commends Congress and the Administration for including cybersecurity in the Infrastructure Investment and Jobs Act. Rapid7 led industry calls to include cybersecurity in the bill, and we are encouraged that several priorities identified by industry are reflected in the text, such as cybersecurity-specific funding for state and local governments and the electrical grid.

On the other hand, cybersecurity will be competing with natural disasters and extreme weather for funding in many (not all) grants created under the bill. In addition, not all critical infrastructure sectors receive cybersecurity resources through the legislation, with healthcare being a notable exclusion. Congress should address these gaps in the upcoming budget reconciliation package.

What’s in the bill for infrastructure cybersecurity

Below is a brief-ish summary of cybersecurity-related items in the bill. The infrastructure sectors with the most allocations appear to be energy, water, transportation, and state and local governments. Many of these funding opportunities take the form of federal grants for infrastructure resilience, which includes cybersecurity as well as natural hazards. Other funds are dedicated solely to cybersecurity.

Please note that this list aims to include major infrastructure cybersecurity funding items, but is not comprehensive. (For example, the bill also provides funding for the National Cyber Director.) Citations to the Senate-passed legislation are included.

  1. State and local governments: $1B over four years for the State, Local, Tribal, and Territorial (SLTT) Grant Program. This new grant program will help SLTT governments to develop or implement cybersecurity plans. FEMA will administer the program. This is also known as The State and Local Cybersecurity Improvement Act. [Sec. 70611]

  2. Energy: $250M over five years for the Rural and Municipal Utility Advanced Cybersecurity Grant and Technological Assistance Program. The Department of Energy (DOE) must create a new program to provide grants and technical assistance to improve electric utilities’ ability to detect, respond to, and recover from cybersecurity threats. [Sec. 40124]

  3. Energy: Enhanced grid security. The DOE must create a program to develop advanced cybersecurity applications and technologies for the energy sector, among other things. Over a period of five years, this section authorizes $250M for the Cybersecurity for the Energy Sector RD&D program, $50M for the Energy Sector Operational Support for Cyberresilience Program, and $50M for Modeling and Assessing Energy Infrastructure Risk. [Sec. 40125]

  4. Energy: State energy security plans. This creates federal financial and technical assistance for states to develop or implement an energy security plan that secures state energy infrastructure against cybersecurity threats, among other things. [Sec. 40108]

  5. Water: $250M over five years for the Midsize and Large Drinking Water System Infrastructure Resilience and Sustainability Program. This creates a new grant program to assist midsize and large drinking water systems with increasing resilience to cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50107]

  6. Water: $175M over five years for technical assistance and grants for emergencies affecting public water systems. This extends an expired fund to help mitigate threats and emergencies to drinking water. This includes, among other things, emergency situations caused by a cybersecurity incident. [Sec. 50101]

  7. Water: $25M over five years for the Clean Water Infrastructure Resiliency and Sustainability Program. This creates a new program providing grants to owners/operators of publicly owned treatment works to increase the resiliency of water systems against cybersecurity vulnerabilities, as well as natural hazards. [Sec. 50205]

  8. Transportation: Cybersecurity eligible for National Highway Performance Program (NHPP). This expands on the existing NHPP grant program to allow states to use funds for resiliency of the National Highway System. "Resiliency" includes cybersecurity, as well as natural hazards. [Sec. 11105]

  9. Transportation: Cybersecurity eligible for Surface Transportation Block Grant Program. This expands the existing grant program to allow funding measures to protect transportation facilities from cybersecurity threats, among other things. [Sec. 11109]

  10. General: $100M over five years for the Cyber Response and Recovery Fund. This creates a fund for CISA to provide direct support to public or private entities that respond and recover from cyberattacks and breaches designated as a “significant incident.” The support can include technical assistance and response activities, such as vulnerability assessment, threat detection, network protection, and more. The program ends in 2028. [Sec. 70602, Div. J]

Other sectors next?

These cybersecurity items are significant down payments to safeguard the nation’s investment in infrastructure modernization. Combined with the recent Executive Order and memorandum on industrial control systems security, the Biden Administration is demonstrating that cybersecurity is a high priority.

However, more work must be done to address cybersecurity weaknesses in critical infrastructure. While the Infrastructure Investment and Jobs Act provides cybersecurity resources for some sectors, most of the 16 critical infrastructure sectors are excluded. Healthcare is an especially notable example, as the sector faces a serious ransomware problem in the middle of a deadly pandemic.

Congress is now preparing a larger budget reconciliation bill, to be advanced at roughly the same time as the infrastructure legislation. We encourage Congress and the Administration to take this opportunity to boost cybersecurity for other sectors, especially healthcare. As with the infrastructure bill, we suggest providing grants dedicated to cybersecurity, and requiring that grant funds be used to adopt or implement standards-based security safeguards and risk management practices.

Congress’ activity during the COVID-19 crisis continues to be punctuated by large, ambitious bills. To secure the modern economy and essential services, we hope the Infrastructure Investment and Jobs Act sets a precedent that sound cybersecurity policies will be integrated into transformative legislation to come.

How US federal agencies can use AWS to improve logging and log retention

Post Syndicated from Derek Doerr original https://aws.amazon.com/blogs/security/how-us-federal-agencies-can-use-aws-to-improve-logging-and-log-retention/

This post is part of a series about how Amazon Web Services (AWS) can help your US federal agency meet the requirements of the President’s Executive Order on Improving the Nation’s Cybersecurity. You will learn how you can use AWS information security practices to help meet the requirement to improve logging and log retention practices in your AWS environment.

Improving the security and operational readiness of applications relies on improving the observability of the applications and the infrastructure on which they operate. For our customers, this translates to questions of how to gather the right telemetry data, how to securely store it over its lifecycle, and how to analyze the data in order to make it actionable. These questions take on more importance as our federal customers seek to improve their collection and management of log data in all their IT environments, including their AWS environments, as mandated by the executive order.

Given the interest in the technologies used to support logging and log retention, we’d like to share our perspective. This starts with an overview of logging concepts in AWS, including log storage and management, and then proceeds to how to gain actionable insights from that logging data. This post will address how to improve logging and log retention practices consistent with the Security and Operational Excellence pillars of the AWS Well-Architected Framework.

Log actions and activity within your AWS account

AWS provides you with extensive logging capabilities to provide visibility into actions and activity within your AWS account. A security best practice is to establish a wide range of detection mechanisms across all of your AWS accounts. Starting with services such as AWS CloudTrail, AWS Config, Amazon CloudWatch, Amazon GuardDuty, and AWS Security Hub provides a foundation upon which you can base detective controls, remediation actions, and forensics data to support incident response. Here is more detail on how these services can help you gain more security insights into your AWS workloads:

  • AWS CloudTrail provides event history for all of your AWS account activity, including API-level actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. You can use CloudTrail to identify who or what took which action, what resources were acted upon, when the event occurred, and other details. If your agency uses AWS Organizations, you can automate this process for all of the accounts in the organization.
  • CloudTrail logs can be delivered from all of your accounts into a centralized account. This places all logs in a tightly controlled, central location, making it easier to protect them as well as to store and analyze them. As with enabling CloudTrail, you can automate this process for all of the accounts in the organization by using AWS Organizations. CloudTrail can also be configured to emit metric data into the CloudWatch monitoring service, giving near real-time insight into the usage of various services.
  • CloudTrail log file integrity validation produces and cryptographically signs a digest file that contains references and hashes for every CloudTrail log file that was delivered in that hour. This makes it computationally infeasible to modify, delete, or forge CloudTrail log files without detection. Validated log files are invaluable in security and forensic investigations; for example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. (See the configuration sketch after this list.)
  • AWS Config monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. For example, you can use AWS Config to verify that resources are encrypted, multi-factor authentication (MFA) is enabled, and logging is turned on, and you can use AWS Config rules to identify noncompliant resources. Additionally, you can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories, helping you to determine when compliance status changed and the reason for the change.
  • Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Amazon GuardDuty analyzes and processes the following data sources: VPC Flow Logs, AWS CloudTrail management event logs, CloudTrail Amazon Simple Storage Service (Amazon S3) data event logs, and DNS logs. It uses threat intelligence feeds, such as lists of malicious IP addresses and domains, and machine learning to identify potential threats within your AWS environment.
  • AWS Security Hub provides a single place that aggregates, organizes, and prioritizes your security alerts, or findings, from multiple AWS services and optional third-party products to give you a comprehensive view of security alerts and compliance status.
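As referenced in the CloudTrail items above, here is a minimal sketch that creates a centralized, organization-wide, multi-Region trail with log file validation enabled; the trail and bucket names are placeholders, and the destination bucket’s policy must already grant CloudTrail write access.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# One trail covering every account in the organization, delivered to a central
# bucket, with signed digest files so log integrity can be validated later.
cloudtrail.create_trail(
    Name="org-central-trail",                       # placeholder
    S3BucketName="example-central-logging-bucket",  # placeholder
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-central-trail")
```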

You should be aware that most AWS services do not charge you for enabling logging (for example, AWS WAF), but the storage of logs will incur ongoing costs. Always consult the AWS service’s pricing page to understand cost impacts. Related services, such as Amazon Kinesis Data Firehose (used to stream data to storage services) and Amazon Simple Storage Service (Amazon S3) (used to store log data), will incur charges.

Turn on service-specific logging as desired

After you have the foundational logging services enabled and configured, next turn your attention to service-specific logging. Many AWS services produce service-specific logs that include additional information. These services can be configured to record and send out information that is necessary to understand their internal state, including application, workload, user activity, dependency, and transaction telemetry. Here’s a sampling of key services with service-specific logging features:

  • Amazon CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. You can gain additional operational insights from your AWS compute instances (Amazon Elastic Compute Cloud, or EC2) as well as on-premises servers using the CloudWatch agent. Additionally, you can use CloudWatch to detect anomalous behavior in your environments, set alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to keep your applications running smoothly.
  • Amazon CloudWatch Logs is a component of Amazon CloudWatch which you can use to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service. You can then easily view them, search them for specific error codes or patterns, filter them based on specific fields, or archive them securely for future analysis. CloudWatch Logs enables you to see all of your logs, regardless of their source, as a single and consistent flow of events ordered by time, and you can query them and sort them based on other dimensions, group them by specific fields, create custom computations with a powerful query language, and visualize log data in dashboards.
  • Traffic Mirroring allows you to achieve full packet capture (as well as filtered subsets) of network traffic from an elastic network interface of EC2 instances inside your VPC. You can then send the captured traffic to out-of-band security and monitoring appliances for content inspection, threat monitoring, and troubleshooting.
  • The Elastic Load Balancing service provides access logs that capture detailed information about requests that are sent to your load balancer. Each log contains information such as the time the request was received, the client’s IP address, latencies, request paths, and server responses. The specific information logged varies by load balancer type.
  • Amazon S3 access logs record the S3 bucket and account that are being accessed, the API action, and requester information.
  • AWS Web Application Firewall (WAF) logs record web requests that are processed by AWS WAF, and indicate whether the requests matched AWS WAF rules and what actions, if any, were taken. These logs are delivered to Amazon S3 by using Amazon Kinesis Data Firehose.
  • Amazon Relational Database Service (Amazon RDS) log files can be downloaded or published to Amazon CloudWatch Logs. Log settings are specific to each database engine. Agencies use these settings to apply their desired logging configurations and choose which events are logged. Amazon Aurora and Amazon RDS for Oracle also support a real-time logging feature called “database activity streams,” which provides even more detail and cannot be accessed or modified by database administrators.
  • Amazon Route 53 provides options for logging both public DNS query requests and Route 53 Resolver DNS queries:
    • Route 53 Resolver DNS query logs record DNS queries and responses that originate from your VPC, that use an inbound Resolver endpoint, that use an outbound Resolver endpoint, or that use a Route 53 Resolver DNS Firewall.
    • Route 53 DNS public query logs record queries to Route 53 for domains you have hosted with AWS, including the domain or subdomain that was requested; the date and time of the request; the DNS record type; the Route 53 edge location that responded to the DNS query; and the DNS response code.
  • Amazon Elastic Compute Cloud (Amazon EC2) instances can use the unified CloudWatch agent to collect logs and metrics from Linux, macOS, and Windows EC2 instances and publish them to the Amazon CloudWatch service.
  • Elastic Beanstalk logs can be streamed to CloudWatch Logs. You can also use the AWS Management Console to request the last 100 log entries from the web and application servers, or request a bundle of all log files that is uploaded to Amazon S3 as a ZIP file.
  • Amazon CloudFront logs record user requests for content that is cached by CloudFront.

Store and analyze log data

Now that you’ve enabled foundational and service-specific logging in your AWS accounts, that data needs to be persisted and managed throughout its lifecycle. AWS offers a variety of solutions and services to consolidate your log data and store it, secure access to it, and perform analytics.

Store log data

The primary service for storing all of this logging data is Amazon S3. Amazon S3 is ideal for this role, because it’s a highly scalable, highly resilient object storage service. AWS provides a rich set of multi-layered capabilities to secure log data that is stored in Amazon S3, including encrypting objects (log records), preventing deletion (the S3 Object Lock feature), and using lifecycle policies to transition data to lower-cost storage over time (for example, to S3 Glacier). Access to data in Amazon S3 can also be restricted through AWS Identity and Access Management (IAM) policies, AWS Organizations service control policies (SCPs), S3 bucket policies, Amazon S3 Access Points, and AWS PrivateLink interfaces. While S3 is particularly easy to use with other AWS services given its various integrations, many customers also centralize their storage and analysis of their on-premises log data, or log data from other cloud environments, on AWS using S3 and the analytic features described below.
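As a sketch of lifecycle-based tiering on a hypothetical log bucket, the following transitions objects to S3 Glacier after 90 days and expires them at the end of a roughly seven-year retention period; adjust both values to your records schedule.

```python
import boto3

s3 = boto3.client("s3")

# Tier aging log data to lower-cost storage, then expire it after retention.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-central-logging-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "log-retention",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```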

If your AWS accounts are organized in a multi-account architecture, you can make use of the AWS Centralized Logging solution. This solution enables organizations to collect, analyze, and display CloudWatch Logs data in a single dashboard. AWS services generate log data, such as audit logs for access, configuration changes, and billing events. In addition, web servers, applications, and operating systems all generate log files in various formats. This solution uses the Amazon Elasticsearch Service (Amazon ES) and Kibana to deploy a centralized logging solution that provides a unified view of all the log events. In combination with other AWS-managed services, this solution provides you with a turnkey environment to begin logging and analyzing your AWS environment and applications.

You can also make use of services such as Amazon Kinesis Data Firehose, which you can use to transport log information to S3, Amazon ES, or any third-party service that is provided with an HTTP endpoint, such as Datadog, New Relic, or Splunk.

Finally, you can use Amazon EventBridge to route and integrate event data between AWS services and to third-party solutions such as software as a service (SaaS) providers or help desk ticketing systems. EventBridge is a serverless event bus service that allows you to connect your applications with data from a variety of sources. EventBridge delivers a stream of real-time data from your own applications, SaaS applications, and AWS services, and then routes that data to targets such as AWS Lambda.
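A sketch of that routing: the rule below matches GuardDuty findings on the default event bus and sends them to a hypothetical Lambda function (which would also need a resource-based permission for EventBridge, omitted here).

```python
import json
import boto3

events = boto3.client("events")

# Match GuardDuty findings and forward them to a triage target.
events.put_rule(
    Name="guardduty-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="guardduty-findings",
    Targets=[{
        "Id": "1",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:triage",  # placeholder
    }],
)
```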

Analyze log data and respond to incidents

As the final step in managing your log data, you can use AWS services such as Amazon Detective, Amazon ES, CloudWatch Logs Insights, and Amazon Athena to analyze your log data and gain operational insights.

  • Amazon Detective makes it easy to analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically collects log data from your AWS resources. It then uses machine learning, statistical analysis, and graph theory to help you visualize and conduct faster and more efficient security investigations.
  • Incident Manager is a component of AWS Systems Manager that enables you to automatically take action when a critical issue is detected by an Amazon CloudWatch alarm or Amazon EventBridge event. Incident Manager executes pre-configured response plans to engage responders via SMS and phone calls, enable chat commands and notifications using AWS Chatbot, and execute AWS Systems Manager Automation runbooks. The Incident Manager console integrates with AWS Systems Manager OpsCenter to help you track incidents and post-incident action items from a central place that also synchronizes with popular third-party incident management tools such as Jira Service Desk and ServiceNow.
  • Amazon Elasticsearch Service (Amazon ES) is a fully managed service that collects, indexes, and unifies logs and metrics across your environment to give you unprecedented visibility into your applications and infrastructure. With Amazon ES, you get the scalability, flexibility, and security you need for the most demanding log analytics workloads. You can configure a CloudWatch Logs log group to stream data it receives to your Amazon ES cluster in near real time through a CloudWatch Logs subscription.
  • CloudWatch Logs Insights enables you to interactively search and analyze your log data in CloudWatch Logs (see the query sketch after this list).
  • Amazon Athena is an interactive query service that you can use to analyze data in Amazon S3 by using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
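As referenced in the CloudWatch Logs Insights item above, a minimal query sketch; the log group name and query string are illustrative.

```python
import time
import boto3

logs = boto3.client("logs")

# Count recent error lines in a hypothetical application log group.
query = logs.start_query(
    logGroupName="/example/app",
    startTime=int(time.time()) - 3600,  # epoch seconds, last hour
    endTime=int(time.time()),
    queryString="filter @message like /ERROR/ | stats count() as errors",
)
while True:
    results = logs.get_query_results(queryId=query["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)
print(results.get("results", []))
```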

Conclusion

As called out in the executive order, information from network and systems logs is invaluable for both investigation and remediation services. AWS provides a broad set of services to collect an unprecedented amount of data at very low cost, optionally store it for long periods of time in tiered storage, and analyze that telemetry information from your cloud-based workloads. These insights will help you improve your organization’s security posture and operational readiness and, as a result, improve your organization’s ability to deliver on its mission.

Next steps

To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Derek Doerr

Derek is a Senior Solutions Architect with the Public Sector team at AWS. He has been working with AWS technology for over four years. Specializing in enterprise management and governance, he is passionate about helping AWS customers navigate their journeys to the cloud. In his free time, he enjoys time with family and friends, as well as scuba diving.

How AWS can help your US federal agency meet the executive order on improving the nation’s cybersecurity

Post Syndicated from Michael Cotton original https://aws.amazon.com/blogs/security/how-aws-can-help-your-us-federal-agency-meet-the-executive-order-on-improving-the-nations-cybersecurity/

AWS can support your information security modernization program to meet the President’s Executive Order on Improving the Nation’s Cybersecurity (issued May 12th, 2021). When working with AWS, a US federal agency gains access to resources, expertise, technology, professional services, and our AWS Partner Network (APN), which can help the agency meet the security and compliance requirements of the executive order.

For federal agencies, the Executive Order on Improving the Nation’s Cybersecurity requires an update to agency plans to prioritize cloud adoption, identify the most sensitive data and update the protections for that data, encrypt data at rest and in transit, implement multi-factor authentication, and meet expanded logging requirements. It also introduces Zero Trust Architectures and, for the first time, requires an agency to develop plans implementing Zero Trust concepts.

This post focuses on how AWS can help you plan for and accelerate cloud adoption. In the rest of the series you’ll learn how AWS offers guidance for building architectures with a Zero Trust security model, multi-factor authentication, encryption for data at-rest and in-transit, and logging capabilities required to increase visibility for security and compliance purposes.

Prioritize the adoption and use of cloud technologies

AWS has developed multiple frameworks to help you plan your migration to AWS and establish a structured, programmatic approach to AWS adoption. We provide a variety of tools, including server, data, and database features, to rapidly migrate various types of applications from on-premises to AWS. The following lists include links and helpful information regarding the ways AWS can help accelerate your cloud adoption.

Planning tools

  • AWS Cloud Adoption Framework (AWS CAF) – We developed the AWS CAF to assist your organization in developing and implementing efficient and effective plans for cloud adoption. The guidance and best practices provided by the framework help you build a comprehensive approach to cloud computing across your organization, and throughout the IT lifecycle. Using the AWS CAF will help you realize measurable business benefits from cloud adoption faster, and with less risk.
  • Migration Evaluator – You can build a data-driven business case for your cloud adoption on AWS by using our Migration Evaluator (formerly TSO Logic) to gain access to insights and help accelerate decision-making for migration to AWS.
  • AWS Migration Acceleration Program – This program assists your organization with migrating to the cloud by providing you training, professional services, and service credits to streamline your migration, helping your agency more quickly decommission legacy hardware, software, and data centers.

AWS services and technologies for migration

  • AWS Application Migration Service (AWS MGN) – This service allows you to replicate entire servers to AWS using block-level replication, performs tests to verify the migration, and executes the cutover to AWS. This is the simplest and fastest method to migrate to AWS.
  • AWS CloudEndure Migration Factory Solution – This solution enables you to replicate entire servers to AWS using block-level replication and executes the cutover to AWS. This solution is designed to coordinate and automate manual processes for large-scale migrations involving a substantial number of servers.
  • AWS Server Migration Service – This is an agentless service that automates the migration of your on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to AWS. It replicates existing servers as Amazon Machine Images (AMIs), enabling you to transition more quickly and easily to AWS.
  • AWS Database Migration Service – This service automates replication of your on-premises databases to AWS, making it much easier for you to migrate large and complex applications to AWS with minimal downtime.
  • AWS DataSync – This is an online data transfer service that simplifies, automates, and accelerates moving your data between on-premises storage systems and AWS.
  • VMware Cloud on AWS – This service simplifies and speeds up your migration to AWS by enabling your agency to use the same VMware Cloud Foundation technologies across your on-premises environments and in the AWS Cloud. VMware workloads running on AWS have access to more than 200 AWS services, making it easier to move and modernize applications without having to purchase new hardware, rewrite applications, or modify your operations.
  • AWS Snow Family – These services provide devices that can physically transport exabytes of data into and out of AWS. These devices are fully encrypted and integrate with AWS security, monitoring, storage management, and computing capabilities to help accelerate your migration of large data sets to AWS.

AWS Professional Services

  • AWS Professional Services – This is a global team of experts that can help you realize your desired business outcomes when using the AWS Cloud, so that you can more effectively reach your constituents and better achieve your core mission. Each offering delivers a set of activities, best practices, and documentation reflecting our experience supporting hundreds of customers in their journey to the AWS Cloud.

AWS Partners

  • AWS Government Competency Partners – This page identifies partners who have demonstrated their ability to help government customers accelerate their migration of applications and legacy infrastructure to AWS.

AWS has solutions and partners to assist in your planning and accelerating your migration to the cloud. We can help you develop integrated, cost-effective solutions to help secure your environment and implement the executive order requirements. In short, AWS is ready to help you meet the accelerated timeline goals set in this executive order.

Next steps

For further reading, see the blog post Zero Trust architectures: An AWS perspective. To learn more about how AWS can help you meet the requirements of the executive order, see the other posts in this series.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Cotton

Michael is a Senior Solutions Architect at AWS.

Hack Back Is Still Wack

Post Syndicated from Jen Ellis original https://blog.rapid7.com/2021/08/10/hack-back-is-still-wack/

Hack Back Is Still Wack

Every year or two, we see a policy proposal around authorizing private-sector hack back. The latest of these is legislation from two U.S. Senators, Daines and Whitehouse, and it would require the U.S. Department of Homeland Security (DHS) to “conduct a study on the potential benefits and risks of amending section 1030 of title 18, United States Code (commonly known as the ‘Computer Fraud and Abuse Act’), to allow private entities to take proportional actions in response to an unlawful network breach, subject to oversight and regulation by a designated Federal agency.”

While we believe the bill would be harmful and do not support it in any way, we acknowledge that this legislation at least attempts to address how hack back could work in practice and to identify the potential risks. This gets at the heart of one of the main issues with policy proposals for hack back: they rarely address how it would actually work in reality, or how opportunities for abuse or unintended harms would be handled.

Rapid7 does not believe it’s possible to provide sufficient oversight or accountability to make private-sector hack back viable without negative consequences. Further, the very fact that we’re once again discussing private-sector hack back as a possibility is extremely troubling.

Here, we’ll outline why Rapid7 is against the authorization of private-sector hack back.

What is hack back?

When we say “hack back,” we’re referring to non-government organizations taking intrusive action against a cyber attacker on technical assets or systems not owned or leased by the person taking action or their client. This is generally illegal in countries that have anti-hacking laws.

The appeal of hack back is easy to understand. Organizations are subject to more frequent, varied, and costly attacks, often from cybercriminals who have no fear of reprisal or prosecution due to the existence of safe-haven nations that either can’t or won’t crack down on their activities. The scales feel firmly stacked in favor of these cybercriminals, and it’s understandable that organizations want to shift that balance and give attackers reason to think again before targeting them.

Along these lines, arguments for hack back justify it in a number of ways, citing a desire to recapture lost data, better understand the nature of the attacks, neutralize threats, or use the method as a tit for tat. Hack back activities may be conflated with threat hunting, threat intelligence, or detection and response activities. Confusingly, some proponents for these activities are quick to decry hack back while simultaneously advocating for authority to take intrusive action on third-party assets without consent from their owners.

Hack back is also sometimes referred to as Active Defense or Active Cyber Defense. This can cause confusion, as these terms can also refer to other defensive measures that are not intrusive or conducted without consent from the technology owner. For example, active defense can also describe intrusion prevention systems or deception technologies designed to confuse attackers and gain greater intelligence on them, such as honeypots. Rapid7 encourages organizations to employ active defense techniques within their own environments.

Rapid7’s criticisms of hack back

While the reasons for advocating for private-sector hack back are easy to understand and empathize with, that doesn’t make the idea workable in practice. There’s a wealth of reasons why hack back is a bad idea.

Impracticalities of attribution and application

One of the most widely stated and agreed-upon tenets in security is that attribution is hard. In fact, in many cases, it’s essentially impossible to know for certain that we’ve accurately attributed an attack. Even when we find indications that point in a certain direction, it’s very difficult to ensure they’re not red herrings intentionally planted by the attacker, either to throw suspicion off themselves or specifically to incriminate another party.

We like to talk about digital fingerprints, but the reality is that there’s no such thing: In the digital world, pretty much anything can be spoofed or obfuscated with enough time, patience, skill, and resources. Attackers are constantly evolving their techniques to stay one step ahead of defenders and law enforcement, and the emergence of deception capabilities is just one example of this. So being certain we have the right actor before we take action is extremely difficult.

In addition, where do we draw the line in determining whether an actor or computing entity could be considered a viable target? For example, if someone is under attack from devices that are being controlled as part of a botnet, those devices – and their owners – are as much victims of the attacker as the target of the attack.

Rapid7’s Project Heisenberg observes exactly this phenomenon: The honeypots often pick up traffic from legitimate organizations whose systems have been compromised and leveraged in malicious activity. Should one of these compromised systems be used to attack an organization, and that organization then take action against those affected systems to neutralize the threat against themselves, that would mean the organization defending itself was revictimizing the entity whose systems were already compromised. Depending on the action taken, this could end up being catastrophic and costly for both organizations.  

We must also take motivations into account, even though they’re often unclear or easy to misunderstand. For example, research projects that scan ports on the public-facing internet do so in order to help others understand the attack surface and reduce exposure and opportunities for attackers. This activity is benign and often results in security disclosures that have helped security professionals reduce their organization’s risk. However, it’s not unusual for these scans to encounter a perimeter monitoring tool, throwing up an alert to the security team. If an organization saw the alerts and, in their urgency to defend themselves, took a “shoot first, ask questions later” approach, they could end up attacking the researcher.

Impracticalities of limiting reach and impact

Many people have likened hack back to homeowners defending their property against intruders. They evoke images of malicious, armed criminals breaking into your home to do you and your loved ones harm. They call to you to arm yourself and stand bravely in defense, refusing to be a victim in your own home.

It’s an appealing idea — however, the reality is more akin to standing by your fence and spraying bullets out into the street, hoping to get lucky and stop an attacker as they flee the scene of the crime. With such an approach, even if you do manage to reach your attacker, you’re risking terrible collateral damage, too.

This is because the internet doesn’t operate in neatly defined and clearly demarcated boundaries. If we take action targeted at a specific actor or group of actors, it would be extremely challenging to ensure that action won’t unintentionally negatively impact innocent others. Not only should this concern lawmakers, it should also disincentivize participation. The potential negative consequences of a hack back gone awry could be far-reaching. We frequently discuss damage to equipment or systems, or loss of data, but in the age of the Internet of Things, negative consequences could include physical harm to individuals. And let’s not forget that cyberattacks can be considered acts of war.

Organizations that believe they can avoid negative outcomes in the majority of cases need to understand that even one or two errors could be extremely costly. Imagine, for example, that a high-value target organization, such as a bank, undertakes 100 hack backs per year and makes a negatively impactful error on two occasions. A 2% failure rate may not seem that terrible, but if either or both of those errors resulted in the compromise of another company or harm to a group of individuals, the hack-backer could end up tied up in expensive legal proceedings, reputational damage, and loss of trust. Attempts to make organizations exempt from this kind of legal action are problematic, as they raise the question of how we can spot and stop abuses.

Impracticalities of providing appropriate oversight

To date, proposals to legalize hack back have been overly broad and non-specific about how such activities should be managed, and what oversight would be required to ensure there are no abuses of the system. The Daines/Whitehouse bill tries to address this and alludes to a framework for oversight that would determine “which entities would be allowed to take such actions and under what circumstances.”

This seems to refer to an approach commonly advocated by proponents of hack back whereby a license or special authorization to conduct hack back activities is granted to vetted and approved entities. Some advocates have pointed to the example of how privateers were issued Letters of Marque to capture enemy ships — and their associated spoils. Putting aside fundamental concerns about taking as our standard a 200-year-old law passed during a time of prolonged kinetic war and effectively legalizing piracy, there are a number of pragmatic issues with how this would work in practice.  

Indeed, creating a framework and system for such oversight is highly impractical and costly, raising many issues. The government would need to determine basic administrative issues, such as who would run it and how it would be funded. It would also need to identify a path to address far more complex issues around accountability and oversight to avoid abuses. For example, who will determine which activities are acceptable and where the line should be drawn? How would an authorizing agent ensure standards are met and maintained within approved organizations? Existing cybersecurity certification and accreditation schemes have long raised concerns, and these will only worsen when certification results in increased authorities for activities that can result in harm and escalation of aggressions on the internet.

When a government entity itself takes action against attackers, it does so with a high degree of oversight and accountability. They must meet evidentiary standards to prove the action is appropriate, and even then, there are parameters determining the types of targets they can pursue and the kinds of actions they can take. Applying the same level of oversight to the private sector is impractical. At the same time, authorizing the private sector to participate in these activities without this same level of oversight would undermine the checks and balances in place for the government and likely lead to unintended harms.

An authorizing agent cannot have eyes everywhere and at all times, so it would be highly impractical to create a system for oversight that would enable the governing authority to spot and stop accidental or intentional abuses of the system in real time. If the Daines/Whitehouse bill does pass (and we have no indication of that at present), I very much hope that DHS’s resulting report will reflect these issues or, if possible, provide adequate responses to address these concerns.

These issues of practical execution also raise questions around who will bear the responsibility and liability if something goes wrong. For example, if a company hacks back and accidentally harms another organization or individual, the entity that undertook the hacking may incur expensive legal proceedings, reputational damage, and loss of trust. They could become embroiled in complicated and expensive multi-jurisdiction legal action, even if the company has a license to hack back in its home jurisdiction. In scenarios where hack back activities are undertaken by an organization or individual on behalf of a third party, both the agent and their client may bear these negative consequences. There may also be an argument that any licensing authority could also bear some of the liability.  

Making organizations exempt from legal action around unintended consequences would be problematic and likely to result in more recklessness, as well as infringing on the rights of the victim organization. While the internet is a borderless space accessed from every country in the world, each of those countries has its own legal system and expects its citizens to abide by it. It would be very risky for companies and individuals who hack back to avoid running afoul of the laws of other countries or international bodies. When national governments take this kind of action, it tends to occur within existing international legal frameworks and under some regulatory oversight, but this may not apply in the private sector, again begging the question of where the liability rests.

It’s also worth noting that once one major power authorizes private-sector hack back, other governments will likely follow, and legal expectations and boundaries may vary between jurisdictions. This raises questions of how governments will respond when their citizens are attacked as part of a private-sector hack back gone wrong, and whether that will lead to an escalation of political tensions.

Inequalities of applicability

Should a viable system be developed and hack back authorized, effective participation would likely be costly, as it would require specialist skills. Not every organization would be able to participate. If the authorization framework isn’t stringent, many organizations might try to participate with insufficient expertise, which would likely be ineffective, damaging, or both. At the same time, other organizations won’t have the maturity or budget to participate even in this way.

These are the same organizations that sit below the “cybersecurity poverty line” and can’t afford a great deal of in-house security expertise and technologies to protect themselves; in other words, these organizations are already highly vulnerable. As organizations that do have sufficient resources start to hack back, the cost of attacking them will increase, and profit-motivated attackers will eventually shift toward targeting the less-resourced organizations that reside below the cybersecurity poverty line. Rather than authorizing a measure as fraught with risk as hack back, we should instead be thinking about how to better protect these vulnerable organizations, for example by subsidizing or incentivizing security hygiene.

The line between legitimate research and hack back

Those who follow Rapid7’s policy work will know that we’re big proponents of security research and have worked for many years to see greater recognition of its value and importance in public policy. It may come as a surprise to see us advocate so enthusiastically against hack back since, at first glance, the two have some things in common. In both cases, we’re talking about activity undertaken in the name of cybersecurity that may be intrusive in nature and involve third-party assets without the consent of the owner.

While independent, good-faith security research and threat intelligence investigations are both very valuable for security, they’re not the same thing, and we don’t believe we should view related legal restrictions in the same way for both.

Good-faith security research is typically performed independently of manufacturers and operators in order to identify flaws or exposures in systems that provide opportunities for attackers. The goal is to remediate or mitigate these issues so we can reduce opportunities for attackers and thus decrease the risk for technology users. This kind of research is generally about protecting the safety and privacy of the many, and while researchers may take actions without authorization, they only perform those actions on the technology of those ultimately responsible for both creating and mitigating the exposure. Without becoming aware of the issue, the technology provider and their users would continue to be exposed to risk.

Research may bypass authorization to sidestep issues arising from manufacturers and operators prioritizing their reputation or profit above the security of their customers. In contrast, threat intel investigations or operations that involve interrogating or interacting with third-party assets prioritize the interests of the specific entity undertaking or commissioning the activity, rather than other potential victims whose compromised assets may have been leveraged in the attack.

While threat intelligence can help us understand attacker behavior and identify or prepare for attacks, data gathering and operations should be limited only to assessing risks and threats to assets that are owned or operated by the entity authorizing the work, or to non-invasive activities such as port scanning. Because cyber attacks are criminal activity, if more investigation is needed, it should be undertaken with appropriate law enforcement involvement and oversight.

The path forward

It seems likely that the hack back debate will continue to come up as organizations strive to find new ways to repel attacks. I could make a snarky comment here about how organizations should perhaps focus instead on user awareness training, reducing their attack exposure, managing supply chain risk, proper segmentation, patching, identity and access management (IAM), and all the other things that make up a robust defense-in-depth program and that we frequently see fail, but I shall refrain. Cough cough.

We shall wait to see what happens with Senators Daines’ and Whitehouse’s “Study on Cyber-Attack Response Options Act” bill and hope that, if it passes, DHS will consider the concerns raised in this blog. The same is true for other policymakers, as cybercrime is an international blight and governments around the world are subject to lobbying from entities looking to take a more active role in their defense. While we understand and sympathize with the desire to do more, take more control, and fight back, we urge policymakers to be mindful of the potential for catastrophe.


Approaches to meeting Australian Government gateway requirements on AWS

Post Syndicated from John Hildebrandt original https://aws.amazon.com/blogs/security/approaches-to-meeting-australian-government-gateway-requirements-on-aws/

Australian Commonwealth Government agencies are subject to specific requirements set by the Protective Security Policy Framework (PSPF) for securing connectivity between systems that are running sensitive workloads, and for accessing less trusted environments, such as the internet. These agencies have often met the requirements by using some form of approved gateway solution that provides network-based security controls.

This post examines the types of controls you need to provide a gateway that can meet the requirements defined in the PSPF, and the challenges of using traditional deployment models to support cloud-based solutions. PSPF requirements are mandatory for non-corporate Commonwealth entities, and represent better practice for corporate Commonwealth entities, wholly-owned Commonwealth companies, and state and territory agencies. We discuss the ability to deploy gateway-style solutions in the cloud, and show how you can meet the majority of gateway requirements by using standard cloud architectures and services. We provide guidance on deploying gateway solutions in the AWS Cloud, highlight services that can support such deployments, and close with an illustrative AWS web architecture pattern that shows how to meet the majority of gateway requirements through Well-Architected use of services.

Australian Government gateway requirements

The Australian Government Protective Security Policy Framework (PSPF) highlights the requirement to use secure internet gateways (SIGs) and references the Australian Information Security Manual (ISM) control framework to guide agencies. The ISM has a chapter on gateways, which includes the following recommendations for gateway architecture and operations:

  • Provide a central control point for traffic in and out of the system.
  • Inspect and filter traffic.
  • Log and monitor traffic and gateway operation to a secure location. Use appropriate security event alerting.
  • Use secure administration practices, including multi-factor authentication (MFA) access control, least privilege, separation of roles, and network segregation.
  • Perform appropriate authentication and authorization of users, traffic, and equipment. Use MFA when possible.
  • Use demilitarized zone (DMZ) patterns to limit access to internal networks.
  • Test security controls regularly.
  • Set up firewalls between security domains and public network infrastructure.

Since the PSPF references the ISM, the agency should apply the overall ISM framework to meet ISM requirements such as governance and security patching for the environment. The ISM is a risk-based framework, and the risk posture of the workload and organization should inform how to assess the controls. For example, requirements for authentication of users might be relaxed for a public-facing website.

In traditional on-premises environments, some Australian Government agencies have mandated centrally assessed and managed gateway capabilities in order to drive economies of scale across multiple government agencies. However, the PSPF does give an agency that operates a gateway solely for its own use the option to undertake its own risk-based assessment of that gateway solution.

Other government agencies also have specific requirements to connect with cloud providers. For example, the U.S. Government Office of Management and Budget (OMB) mandates that U.S. government users access the cloud through a specific agency connection.

Connecting to the cloud through on-premises gateways

Given the existence of centrally managed off-cloud gateways, one approach by customers has been to continue to use these off-cloud gateways and then connect to AWS through the on-premises gateway environment by using AWS Direct Connect, as shown in Figure 1.
 

Figure 1: Connecting to the AWS Cloud through an agency gateway and then through AWS Direct Connect

Although this approach does work, and makes use of existing gateway capability, it has a number of downsides:

  • A potential single point of failure: If the on-premises gateway capability is unavailable, the agency can lose connectivity to the cloud-based solution.
  • Bandwidth limitations: The agency is limited by the capacity of the gateway, which might not have been developed with dynamically scalable and bandwidth-intensive cloud-based workloads in mind.
  • Latency issues: The requirement to traverse multiple network hops, in addition to the gateway, will introduce additional latency. This can be particularly problematic with architectures that involve API communications being sent back and forth across the gateway environment.
  • Castle-and-moat thinking: Relying only on the gateway as the security boundary can discourage agencies from using and recognizing the cloud-based security controls that are available.

Some of these challenges are discussed in the context of US Trusted Internet Connection (TIC) programs in this whitepaper.

Moving gateways to the cloud

In response to the limitations discussed in the last section, both customers and AWS Partners have built gateway solutions on AWS to meet gateway requirements while remaining fully within the cloud environment. See this type of solution in Figure 2.
 

Figure 2: Moving the gateway to the AWS Cloud

With this approach, you can fully leverage the scalable bandwidth that is available from the AWS environment, and you can also reduce latency issues, particularly when multiple hops to and from the gateway are required. This blog post describes a pilot program in the US that combines AWS services and AWS Marketplace technologies to provide a cloud-based gateway.

You can use AWS Transit Gateway (released after the referenced pilot program) to provide the option to centralize such a gateway capability within an organization. This makes it possible to utilize the gateway across multiple cloud solutions that are running in their own virtual private clouds (VPCs) and accounts. This approach also facilitates the principle of the gateway being the central control point for traffic flowing in and out. For more information on using AWS Transit Gateway with security appliances, see the Appliance VPC topic in the Amazon VPC documentation.
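
As an illustrative sketch (the resource IDs below are placeholder assumptions, not real values), the following boto3 calls show how a central transit gateway hub might be created and a workload VPC attached to it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Create the central transit gateway that acts as the control point
# for traffic flowing in and out of the organization's VPCs.
tgw = ec2.create_transit_gateway(
    Description="Central agency gateway hub",
    Options={"DefaultRouteTableAssociation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a workload VPC (placeholder IDs) so its traffic can be
# routed through the shared inspection path.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
```

Transit gateway route tables would then steer traffic from the workload attachments through the inspection (appliance) VPC.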

More recently, AWS has released additional services and features that can assist with delivering government gateway requirements.

Elastic Load Balancing Gateway Load Balancer provides the capability to deploy third-party network appliances in a scalable fashion. With this capability, you can leverage existing investment in licensing, use familiar tooling, and reuse intellectual property (IP) such as rule sets, as well as staff skills, because staff are already trained in configuring and managing the chosen device. You get a single gateway for distributing traffic across multiple virtual appliances, while scaling the appliances up and down based on demand, which reduces the potential points of failure in your network and increases availability. You can find an AWS Partner with Gateway Load Balancer expertise on the AWS Marketplace. For more information on combining Transit Gateway and Gateway Load Balancer for a centralized inspection architecture, see this blog post, which shows centralized architectures for East-West (VPC-to-VPC) and North-South (internet or on-premises bound) traffic inspection and processing.
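
To make this concrete, here is a minimal, hedged sketch of provisioning a Gateway Load Balancer and the GENEVE target group that appliances register behind; the names, subnet IDs, and VPC ID are illustrative assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="ap-southeast-2")

# Gateway Load Balancers are created through the same API as ALBs and
# NLBs, distinguished by Type="gateway". Subnet IDs are placeholders.
gwlb = elbv2.create_load_balancer(
    Name="appliance-gwlb",
    Type="gateway",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Appliances receive mirrored flows over GENEVE encapsulation on
# port 6081, so the target group uses that protocol and port.
elbv2.create_target_group(
    Name="appliance-targets",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
```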

To further simplify this area for customers, AWS has introduced the AWS Network Firewall service. Network Firewall is a managed service that you can use to deploy essential network protections for your VPCs. The service is simple to set up and scales automatically with your network traffic so you don’t have to worry about deploying and managing any infrastructure. You can combine Network Firewall with Transit Gateway to set up centralized inspection architecture models, such as those described in this blog post.
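
The following sketch shows one possible way to stand up a minimal Network Firewall deployment with boto3. The policy here simply forwards all stateless traffic to the stateful engine; the names and IDs are placeholders, and a real deployment would attach rule groups chosen to match the agency's risk assessment:

```python
import boto3

nfw = boto3.client("network-firewall", region_name="ap-southeast-2")

# A firewall policy collects stateless and stateful rule groups.
# This minimal policy forwards everything to the stateful engine.
policy = nfw.create_firewall_policy(
    FirewallPolicyName="agency-inspection-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
    },
)

# Deploy firewall endpoints into dedicated inspection subnets
# (placeholder IDs); routing then directs traffic through them.
nfw.create_firewall(
    FirewallName="agency-network-firewall",
    FirewallPolicyArn=policy["FirewallPolicyResponse"]["FirewallPolicyArn"],
    VpcId="vpc-0123456789abcdef0",
    SubnetMappings=[{"SubnetId": "subnet-0123456789abcdef0"}],
)
```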

Reviewing a typical web architecture in the cloud

In the last section, you saw that SIG patterns can be created in the cloud. Now we can put that in context with the layered security controls that are implemented in a typical web application deployment. Consider a web application hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, as shown in Figure 3, within the context of other services that will support the architecture.
 

Figure 3: Security controls in a web application hosted on EC2

Although this example doesn’t include a traditional SIG-type infrastructure that inspects and controls traffic before it’s sent to the AWS Cloud, the architecture has many of the technical controls that are called for in SIG solutions as a result of using the AWS Well-Architected Framework. We’ll now step through some of these services to highlight the relevant security functionality that each provides.

Network control services

Amazon Virtual Private Cloud (Amazon VPC) is a service you can use to launch AWS resources in a logically isolated virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. Amazon VPC lets you use multiple layers of security, including security groups and network access control lists (network ACLs), to help control access to Amazon EC2 instances in each subnet. Security groups act as a firewall for associated EC2 instances, controlling both inbound and outbound traffic at the instance level. A network ACL is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups to add an additional layer of security to your VPC. Read about the specific differences between security groups and network ACLs.

Having this level of control throughout the application architecture has advantages over relying only on a central, border-style gateway pattern, because security groups for each tier of the application architecture can be locked down to only those ports and sources required for that layer. For example, in the architecture shown in Figure 3, only the application load balancer security group would allow web traffic (ports 80, 443) from the internet. The web-tier-layer security group would only accept traffic from the load-balancer layer, and the database-layer security group would only accept traffic from the web tier.
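
A hedged sketch of how those tier-to-tier relationships might be expressed with boto3 follows; the security group IDs are placeholder assumptions. Note that the web tier rule references the load balancer's security group rather than an IP range, so the rule stays correct as instances come and go:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

alb_sg_id = "sg-0aaa1111bbbb2222c"  # load balancer tier (placeholder)
web_sg_id = "sg-0ddd3333eeee4444f"  # web tier (placeholder)

# Only the load balancer security group accepts HTTPS from the internet.
ec2.authorize_security_group_ingress(
    GroupId=alb_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# The web tier accepts traffic only from the load balancer's
# security group, not from arbitrary sources.
ec2.authorize_security_group_ingress(
    GroupId=web_sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": alb_sg_id}],
    }],
)
```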

If you need to provide a central point of control with this model, you can use AWS Firewall Manager, which simplifies the administration and maintenance of your VPC security groups across multiple accounts and resources. With Firewall Manager, you can configure and audit your security groups for your organization using a single, central administrator account. Firewall Manager automatically applies rules and protections across your accounts and resources, even as you add new resources. Firewall Manager is particularly useful when you want to protect your entire organization, or if you frequently add new resources that you want to protect via a central administrator account.

To support separation of management plane activities from data plane traffic in workloads, agencies can use multiple elastic network interfaces on EC2 instances to provide a separate management network path.

Edge protection services

In the example in Figure 3, several services are used to provide edge-based protections in front of the web application. AWS Shield is a managed distributed denial of service (DDoS) protection service that safeguards applications that are running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there’s no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield: Standard and Advanced. When you use Shield Advanced, you can apply protections at the Amazon CloudFront, Amazon EC2, and Application Load Balancer layers. Shield Advanced also gives you 24/7 access to the AWS DDoS Response Team (DRT).
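
As a brief, hedged illustration (the ARN is a placeholder, and the call assumes the account already holds an active Shield Advanced subscription), adding a resource to Shield Advanced protection looks roughly like this:

```python
import boto3

shield = boto3.client("shield")  # Shield is a global service

# Protect a specific resource, here a placeholder ALB ARN; Shield
# Advanced must already be subscribed on the account for this call.
shield.create_protection(
    Name="web-alb-protection",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:ap-southeast-2:123456789012:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"
    ),
)
```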

AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns that you define. Again, you can apply this protection at both the Amazon CloudFront and application load balancer layers in our illustrated solution. Agencies can also use managed rules for WAF to benefit from rules developed and maintained by AWS Marketplace sellers.
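
For illustration, the sketch below creates a regional web ACL that attaches one AWS managed rule group; the ACL and metric names are assumptions, and a real deployment would select rule groups to match its own risk assessment:

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="ap-southeast-2")

# A web ACL that applies the AWS-managed common rule set (covering
# patterns such as cross-site scripting) in front of a regional ALB.
wafv2.create_web_acl(
    Name="agency-web-acl",
    Scope="REGIONAL",  # use Scope="CLOUDFRONT" in us-east-1 for CloudFront
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "agency-web-acl",
    },
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        # Managed rule groups take an override action, not an action.
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "aws-common-rules",
        },
    }],
)
```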

Amazon CloudFront is a fast content delivery network (CDN) service. CloudFront seamlessly integrates with AWS Shield, AWS WAF, and Amazon Route 53 to help protect against multiple types of unauthorized access, including network and application layer DDoS attacks.

Logging and monitoring services

The example application in Figure 3 shows several services that provide logging and monitoring of network traffic, application activity, infrastructure, and AWS API usage.

At the VPC level, the VPC Flow Logs feature provides you with the ability to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3). Traffic Mirroring is a feature that you can use in a VPC to capture traffic if needed for inspection. This allows agencies to implement full packet capture on a continuous basis, or in response to a specific event within the application.
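
A minimal sketch of enabling VPC Flow Logs with boto3, assuming a placeholder VPC ID and an existing S3 bucket whose policy allows flow log delivery:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Publish records for all accepted and rejected traffic in the VPC
# to S3 for retention and analysis. IDs and ARN are placeholders.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::my-flow-log-bucket",
)
```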

Amazon CloudWatch provides a monitoring service with alarms and analytics. In the example application, AWS WAF can also be configured to log activity as described in the AWS WAF Developer Guide.

AWS Config provides a timeline view of the configuration of the environment. You can also define rules to provide alerts and remediation when the environment moves away from the desired configuration.

AWS CloudTrail is a service that you can use to handle governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity that is related to actions across your AWS infrastructure.

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts. GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs. This blog post highlights a third-party assessment of GuardDuty that compares its performance to other intrusion detection systems (IDS).
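
To show how little setup some of these services need, here is a hedged sketch that enables GuardDuty and starts a CloudTrail trail; the trail name and bucket are assumptions, and the bucket must already grant CloudTrail write access through its bucket policy:

```python
import boto3

# Turn on GuardDuty threat detection for this account and region.
guardduty = boto3.client("guardduty", region_name="ap-southeast-2")
guardduty.create_detector(Enable=True)

# Record account activity with CloudTrail; the S3 bucket (placeholder)
# must already exist with a policy permitting CloudTrail delivery.
cloudtrail = boto3.client("cloudtrail", region_name="ap-southeast-2")
cloudtrail.create_trail(Name="agency-trail", S3BucketName="my-trail-bucket")
cloudtrail.start_logging(Name="agency-trail")
```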

Route 53 Resolver Query Logging lets you log the DNS queries that originate in your VPCs. With query logging turned on, you can see which domain names have been queried, the AWS resources from which the queries originated—including source IP and instance ID—and the responses that were received.

With Route 53 Resolver DNS Firewall, you can filter and regulate outbound DNS traffic for your VPCs. To do this, you create reusable collections of filtering rules in DNS Firewall rule groups, associate the rule groups to your VPC, and then monitor activity in DNS Firewall logs and metrics. Based on the activity, you can adjust the behavior of DNS Firewall accordingly.
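
The following sketch outlines how a DNS Firewall rule group might be assembled and associated with a VPC using boto3; the domain, names, priorities, and VPC ID are illustrative assumptions rather than recommended values:

```python
import boto3

r53r = boto3.client("route53resolver", region_name="ap-southeast-2")

# Build a reusable list of disallowed destinations (placeholder domain).
domain_list = r53r.create_firewall_domain_list(Name="blocked-domains")
list_id = domain_list["FirewallDomainList"]["Id"]
r53r.update_firewall_domains(
    FirewallDomainListId=list_id,
    Operation="ADD",
    Domains=["malicious.example.com."],
)

# Group a BLOCK rule, then associate the group with a VPC so its
# outbound DNS queries are filtered.
group = r53r.create_firewall_rule_group(Name="agency-dns-firewall")
group_id = group["FirewallRuleGroup"]["Id"]
r53r.create_firewall_rule(
    FirewallRuleGroupId=group_id,
    FirewallDomainListId=list_id,
    Name="block-known-bad",
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
)
r53r.associate_firewall_rule_group(
    FirewallRuleGroupId=group_id,
    VpcId="vpc-0123456789abcdef0",
    Priority=101,
    Name="agency-dns-firewall-assoc",
)
```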

Mapping services to control areas

Based on the above description of the use of additional services, we can summarize which services contribute to the control and recommendation areas in the gateway chapter in the Australian ISM framework.

Control and recommendation areas and their contributing services:

  • Inspect and filter traffic: AWS WAF, VPC Traffic Mirroring
  • Central control point: infrastructure as code, AWS Firewall Manager
  • Authentication and authorization (MFA): AWS Identity and Access Management (IAM), solution and application IAM, VPC security groups
  • Logging and monitoring: Amazon CloudWatch, AWS CloudTrail, AWS Config, Amazon VPC (flow logs and mirroring), load balancer logs, Amazon CloudFront logs, Amazon GuardDuty, Route 53 Resolver Query Logging
  • Secure administration (MFA): IAM, directory federation (if used)
  • DMZ patterns: VPC subnet layout, security groups, network ACLs
  • Firewalls: VPC security groups, network ACLs, AWS WAF, Route 53 Resolver DNS Firewall
  • Web proxy; site and content filtering and scanning: AWS WAF, Firewall Manager

Note that the listed AWS services might not provide all relevant controls in each area; it is part of the customer’s risk assessment and design to determine what additional controls need to be implemented.

As you can see, many of the recommended practices and controls from the Australian Government gateway requirements are already encompassed in a typical Well-Architected solution. The implementing agency then has two options. It can continue to place such a solution behind a gateway that runs either within or outside of AWS, treating the controls inherent in the application architecture as additional layers of defense. Alternatively, it can conduct a risk assessment to determine which gateway controls can be supplied by the application architecture itself, reducing the control requirements on any gateway layer in front of the application.

Summary

In this blog post, we’ve discussed the requirements for Australian Government gateways, which provide network controls to secure workloads. We’ve outlined the downsides of traditional on-premises solutions and illustrated how services such as AWS Transit Gateway, Elastic Load Balancing Gateway Load Balancer, and AWS Network Firewall facilitate moving gateway solutions into the cloud. These are services you can evaluate against your own network control requirements. Finally, we reviewed a typical web architecture running in the AWS Cloud, with associated services, to illustrate how many of the typical gateway controls can be met by using a standard Well-Architected approach.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on one of the AWS Security or Networking forums or contact AWS Support.



John Hildebrandt

John is a Principal Solutions Architect in the Australian National Security team at AWS in Canberra, Australia. He is passionate about facilitating cloud adoption for customers to enable innovation. John has been working with government customers at AWS for over 8 years, as the first employee for the ANZ Public Sector team.