Tag Archives: Foundational (100)

AWS Security Profile: Ely Kahn, Principal Product Manager for AWS Security Hub

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-ely-kahn-principal-product-manager-for-aws-security-hub/

In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services Security and help keep our customers safe and secure. This interview is with Ely Kahn, principal product manager for AWS Security Hub. Security Hub is a cloud security posture management service that performs security best practice checks, aggregates alerts, and facilitates automated remediation.

How long have you been at AWS and what do you do in your current role?

I’ve been with AWS just over 4 years. I came to AWS through the acquisition of a company I co-founded called Sqrrl, which then became Amazon Detective. Shortly after the acquisition, I moved from the Sqrrl/Detective team and helped launch AWS Security Hub. In my current role, I’m the head of product for Security Hub, which means I lead our product roadmap and our product strategy, and I translate customer requirements into technical specifications.

How did you get started in the world of security?

My career started inside the U.S. federal government, first inside the Department of Homeland Security and, specifically, inside the Transportation Security Administration (TSA). At the time, the TSA had uncovered a vulnerability concerning boarding passes and the terrorist no-fly list. I was tasked with figuring out how to close that vulnerability, and I came up with a new way to embed a digital signature inside the barcode to help ensure the authenticity of the boarding pass. After that, people thought I was a cybersecurity expert, and I began working on a lot of cybersecurity strategy and policy at the Department of Homeland Security and then at the White House.
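
The general technique described here, signing the data encoded in a barcode so a scanner can verify it has not been forged or altered, is easy to sketch. The Python example below is purely illustrative and is not the actual TSA scheme; the field names and keys are made up, and it uses an Ed25519 signature from the cryptography library.

    # Illustrative only: sign barcode data so its authenticity can be verified by a scanner.
    # This is the general pattern, not the actual TSA implementation.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the issuing system
    verify_key = issuer_key.public_key()        # distributed to scanners

    # Hypothetical boarding pass fields encoded into the barcode payload.
    pass_data = json.dumps(
        {"name": "DOE/JANE", "flight": "XY123", "date": "2022-05-01", "seat": "14C"},
        sort_keys=True,
    ).encode()

    signature = issuer_key.sign(pass_data)      # stored in the barcode next to the data

    def is_authentic(data: bytes, sig: bytes) -> bool:
        """Return True if the payload was signed by the issuer and has not been altered."""
        try:
            verify_key.verify(sig, data)
            return True
        except InvalidSignature:
            return False

    print(is_authentic(pass_data, signature))                           # True
    print(is_authentic(pass_data.replace(b"14C", b"01A"), signature))   # False (tampered)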

How do you explain your job to your non-tech friends?

I actually explain it the same way to technical and non-technical friends. I head up a service called Security Hub, which is designed to help you do a couple of different things. It helps you understand your security posture on AWS—what sort of risks you face and the most urgent security issues that you need to address across your AWS accounts. It also gives you the tools to improve your security posture and help you fix as many of those security issues as possible. We do that through three primary functions. First, we aggregate all of your security alerts into a standardized data format that’s available in one place. Second, we do our own automated security checks. We look at all the resources you’ve enabled on AWS and help check that those resources are configured in accordance with best practices that we define, and in alignment with various regulatory frameworks. Third, we help you auto-remediate and auto-respond to as many of those issues as possible.

What are you currently working on that you’re excited about?

Our number one priority with Security Hub is to expand coverage of the automated security checks that we provide. We have almost 200 automated security checks today, covering several dozen AWS services. Over the next few years, we plan to expand this to more AWS services, which will add a large number of additional security checks. This is important because customers don’t want to have to write these security checks themselves. They want a one-click capability to turn on the checks (or controls, as we call them in Security Hub) and have them automatically on in all of their accounts. The checks should only run if you’re using resources that are actually in scope for them, and they should produce a security score to help you quickly understand the security posture of different accounts and of your organization as a whole.

What would you say is the coolest feature of Security Hub?

The coolest feature is probably the one that gets the least attention. It’s what we call our AWS Security Finding Format (ASFF). The ASFF is really just a data standard—it consists of over 1,000 JSON fields and objects, and it’s how you normalize all of your different security alerts. We’ve integrated 75 different services and partner products. The real advantage of Security Hub is that we automatically take all of those different alerts from all of those different integration partners and normalize them into this standardized data format, so that when you’re searching the findings you have a common set of fields to search against if you’re trying to do correlations. For example, you can imagine a situation where Amazon GuardDuty detects unusual activity in an Amazon Simple Storage Service (Amazon S3) bucket, one of our Security Hub checks detects that the bucket is open, and Amazon Macie determines that the bucket contains sensitive information. It’s much easier to do correlations for situations like this when the alerts from those different tools are in the same format. Similarly, building auto-response, auto-remediation workflows is much easier when all of your alerts are in the same format. One of our biggest customers at AWS called the ASFF the gold standard for how to normalize security alerts, which is something we’re super proud of.
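
Because every integrated product emits findings in the same ASFF fields, correlation largely comes down to filtering on shared attributes. The sketch below is a rough illustration using boto3, assuming Security Hub is already enabled in the account; the bucket ARN is a placeholder, and the same filter would surface findings from GuardDuty, Macie, and Security Hub's own checks that all reference that one resource.

    # Rough sketch: pull every active finding that references one S3 bucket,
    # regardless of which integrated product produced it. The ARN is a placeholder.
    import boto3

    securityhub = boto3.client("securityhub")
    bucket_arn = "arn:aws:s3:::example-bucket"

    pages = securityhub.get_paginator("get_findings").paginate(
        Filters={
            "ResourceId": [{"Value": bucket_arn, "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        }
    )

    for page in pages:
        for finding in page["Findings"]:
            # The same normalized ASFF fields are present no matter which product raised it.
            print(
                finding.get("ProductName", finding["ProductArn"]),
                finding["Severity"]["Label"],
                finding["Title"],
            )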

As you mentioned, Security Hub integrates with a lot of other AWS services, like GuardDuty and Macie. How do you work with other service teams?

We work across AWS in a couple of different ways. We build out these integrations with other AWS services to either send or receive findings from those services. So, we receive findings from services like GuardDuty and Macie, and we send our findings to other services like AWS Trusted Advisor to give them the same view of security that we see in Security Hub. In general, we try to make it as simple and as low impact as possible because every service team is extremely busy. Wherever possible, we do the integration work and don’t put the onus of effort on the other service team.

The other way we work with other service teams is to formally define the best practices for that service. We have a security engineering team on Security Hub, and we partner with AWS Professional Services and their security consultants. Together, we have been working through the list of the most popular AWS services using a standard taxonomy of control categories to define security controls and best practices for that service. We then work with product managers and engineers on those service teams to review the controls we’re proposing, get their feedback, and then finally code them up as AWS Config rules before deploying them in Security Hub. We have a very well-honed process now to partner with the service teams to integrate with and define the security controls for each service.
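
For a sense of what coding a control as an AWS Config rule can look like, here is a minimal sketch of a custom rule's Lambda handler. The control itself is hypothetical (S3 buckets should have versioning enabled), and the real controls behind Security Hub are more involved; the sketch only shows the general shape of evaluating a configuration item and reporting compliance back to AWS Config.

    # Minimal sketch of a custom AWS Config rule handler (configuration-change trigger).
    # Hypothetical control: S3 buckets should have versioning enabled.
    import json
    import boto3

    config = boto3.client("config")

    def lambda_handler(event, context):
        invoking_event = json.loads(event["invokingEvent"])
        item = invoking_event["configurationItem"]

        compliance = "NOT_APPLICABLE"
        if item["resourceType"] == "AWS::S3::Bucket":
            versioning = item.get("supplementaryConfiguration", {}).get(
                "BucketVersioningConfiguration", {}
            )
            compliance = "COMPLIANT" if versioning.get("status") == "Enabled" else "NON_COMPLIANT"

        # Report the evaluation result back to AWS Config.
        config.put_evaluations(
            Evaluations=[
                {
                    "ComplianceResourceType": item["resourceType"],
                    "ComplianceResourceId": item["resourceId"],
                    "ComplianceType": compliance,
                    "OrderingTimestamp": item["configurationItemCaptureTime"],
                }
            ],
            ResultToken=event["resultToken"],
        )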

Where do you suggest customers start with Security Hub if they are newer in their cloud journey?

The first step with Security Hub is just to turn it on across all of your accounts and AWS Regions. When you do, you’re likely going to see a lot of alerts. Don’t get overwhelmed with the number of alerts you see. Focus initially on the critical and high-severity alerts and work them as campaigns. Identify the owners for all open critical and high-severity alerts and start tracking burndown on a weekly basis. Coordinate with the leadership in your organization so you can identify which teams are keeping up with the alerts and which ones aren’t.
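
To make the campaign-style burndown concrete, the rough sketch below (an illustration, not an official tool) counts the open critical- and high-severity findings per account so that you can track the numbers week over week.

    # Rough sketch: count open CRITICAL and HIGH findings per account for weekly burndown.
    from collections import Counter
    import boto3

    securityhub = boto3.client("securityhub")

    filters = {
        "SeverityLabel": [
            {"Value": "CRITICAL", "Comparison": "EQUALS"},
            {"Value": "HIGH", "Comparison": "EQUALS"},
        ],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }

    open_by_account = Counter()
    for page in securityhub.get_paginator("get_findings").paginate(Filters=filters):
        for finding in page["Findings"]:
            open_by_account[finding["AwsAccountId"]] += 1

    for account_id, count in open_by_account.most_common():
        print(f"{account_id}: {count} open critical/high findings")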

What’s your favorite Leadership Principle and why?

My favorite is one that I initially discounted: frugality. When I first joined AWS, what came to mind was Jeff Bezos using doors as desks. Although that’s certainly a component of frugality, I’ve found that for me, this principle means that we need to be frugal with each other’s time. There are so many competing demands on everyone’s time, and it’s extremely important in a place like AWS to be mindful of that. Make sure you’ve done your due diligence on something before you broadly ask the question or escalate.

What’s the thing you’re most proud of in your career?

There are two things. First is the acquisition of Sqrrl by AWS. I couldn’t have picked a better landing spot for Sqrrl and the team. I feel really lucky that I joined AWS through this acquisition. I’ve really learned a lot here in a short amount of time.

The other thing I’m especially proud of is having been selected for a stint on the White House National Security Council staff as the Department of Homeland Security representative to the Council. I sat in the cybersecurity directorate from 2009–2010 as part of that detail to the White House and got a chance to work in the West Wing and attend meetings in the Situation Room, which was just such a special experience.

If you had to pick an industry outside of security, what would you want to do?

This is pretty similar to security, but I got very close to going into the military. Out of high school, I was being recruited for lacrosse at the U.S. Air Force Academy. I had convinced myself that I wanted to go fly jets. I have the utmost respect for our military community, and I certainly could’ve seen myself taking that path.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Ely Kahn

Ely Kahn is the Principal Product Manager for AWS Security Hub. Before his time at AWS, Ely was a co-founder for Sqrrl, a security analytics startup that AWS acquired and is now Amazon Detective. Earlier, Ely served in a variety of positions in the federal government, including Director of Cybersecurity at the National Security Council in the White House.

New IDC whitepaper released – Trusted Cloud: Overcoming the Tension Between Data Sovereignty and Accelerated Digital Transformation

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/new-idc-whitepaper-released-trusted-cloud-overcoming-the-tension-between-data-sovereignty-and-accelerated-digital-transformation/

A new International Data Corporation (IDC) whitepaper sponsored by AWS, Trusted Cloud: Overcoming the Tension Between Data Sovereignty and Accelerated Digital Transformation, examines the importance of the cloud in building the future of digital EU organizations. IDC predicts that 70% of CEOs of large European organizations will be incentivized to generate at least 40% of their revenues from digital by 2025, which means they have to accelerate their digital transformation. In a 2022 IDC survey of CEOs across Europe, 46% said that accelerating the shift to cloud is their most strategic IT initiative for 2022.

In the whitepaper, IDC offers perspectives on how operational effectiveness, digital investment, and ultimately business growth need to be balanced with data sovereignty requirements. IDC defines data sovereignty as “a subset of digital sovereignty. It is the concept of data being subject to the laws and governance structures within the country it is collected or pertains to.”

IDC provides a perspective on some of the current discourse on cloud data sovereignty, including extraterritorial reach of foreign intelligence under national security laws, and the level of protection for individuals’ privacy in-country or with cross-border data transfer. The Schrems II decision and its implications with respect to personal data transfers between the EU and US have left many organizations grappling with how to comply with their legal requirements when transferring data outside the EU.

IDC provides the following background on controls in the cloud:

  • Cloud providers do not have unrestricted access to customer data in the cloud. Organizations retain all ownership and control of their data. Through credential and permission settings, the customer is the controller of who has access to their data.
  • Cloud providers use a rigorous set of organizational and technical controls based on least privilege to protect data from unauthorized access and inappropriate use.
  • Most cloud service operations, including maintenance and trouble-shooting, are fully automated. Should human access to customer data be required, it is temporary and limited to what is necessary to provide the contracted service to the customer. All access should be strictly logged, monitored, and audited to verify that activity is valid and compliant.
  • Technical controls such as encryption and key management assume greater importance. Encryption is considered fundamental to data protection best practices and is highly recommended by regulators. Encrypted data processed in memory within hardware-based trusted execution environments (TEEs), also known as enclaves, can alleviate these regulatory concerns by rendering sensitive information invisible to host operating systems and cloud providers. The AWS Nitro System, the underlying platform that runs Amazon EC2 instances, is an industry example that provides this kind of protection (see the encryption sketch after this list).
  • Independent accreditation against official standards is a recognized basis for assessing adherence to privacy and security practices. Approved by the European Data Protection Board, the EU Cloud Code of Conduct and CISPE’s Code of Conduct for Cloud Infrastructure Service Providers provide an accountability framework to help demonstrate compliance with processor obligations under GDPR Article 28. While not required for GDPR compliance, CISPE requires accredited cloud providers to offer customers the option to retain all personal data in their customer content in the European Economic Area (EEA).
  • Greater data control and security is often cited as a driver for hosting data in-country. However, IDC notes that the physical location of the data has no bearing on mitigating the risk that cyber threats pose to data. Data residency can even run counter to an organization’s objectives for security and resilience. More and more European organizations now trust the cloud for their security needs, because many organizations simply do not have the resources and expertise to provide the same security benefits that large cloud providers can.
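
To illustrate the encryption and key management point in the list above, the sketch below shows one common client-side pattern, envelope encryption with AWS KMS: a data key is generated under a KMS key, the plaintext copy encrypts the data locally and is then discarded, and only the encrypted copy of the key is stored with the ciphertext, so later decryption remains governed by the KMS key policy. The key alias is a placeholder, and this is a minimal sketch rather than production guidance.

    # Minimal envelope-encryption sketch with AWS KMS; the key alias is a placeholder.
    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")
    KEY_ID = "alias/example-data-key"   # placeholder for a customer managed KMS key

    # 1. Ask KMS for a data key: a plaintext copy for local use, an encrypted copy for storage.
    data_key = kms.generate_data_key(KeyId=KEY_ID, KeySpec="AES_256")

    # 2. Encrypt locally with the plaintext data key, then discard that copy.
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key["Plaintext"]).encrypt(nonce, b"sensitive record", None)
    del data_key["Plaintext"]

    # 3. Store ciphertext, nonce, and the encrypted data key together. Decrypting later
    #    requires kms.decrypt() on the stored blob, so access remains governed by the key policy.
    stored = {
        "ciphertext": ciphertext,
        "nonce": nonce,
        "encrypted_data_key": data_key["CiphertextBlob"],
    }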

For more information about how to translate your data sovereignty requirements into an actionable business and IT strategy, read the full IDC whitepaper Trusted Cloud: Overcoming the Tension Between Data Sovereignty and Accelerated Digital Transformation. You can also read more about AWS commitments to protect EU customers’ data on our EU data protection webpage.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

Orlando Scott-Cowley

Orlando is Amazon Web Services’ Worldwide Public Sector Lead for Security & Compliance in EMEA. Orlando helps customers with their security and compliance needs as they adopt AWS. Orlando specialises in Cyber Security, with a background in security consultancy, penetration testing and compliance; he holds a CISSP, CCSP and CCSK.

AWS welcomes new Trans-Atlantic Data Privacy Framework

Post Syndicated from Michael Punke original https://aws.amazon.com/blogs/security/aws-welcomes-new-trans-atlantic-data-privacy-framework/

Amazon Web Services (AWS) welcomes the new Trans-Atlantic Data Privacy Framework (Data Privacy Framework) that was agreed to, in principle, between the European Union (EU) and the United States (US) last month. This announcement demonstrates the common will between the US and EU to strengthen privacy protections in trans-Atlantic data flows, and will supplement the safeguards AWS and other companies already offer today. AWS commits to undertaking certification in accordance with the Data Privacy Framework as it is adopted, and we look forward to our customers and their end users benefiting from the new safeguards.

The Data Privacy Framework, once finalized, will re-establish a mechanism for certified businesses to conduct trans-Atlantic data transfers between the US and EU. According to the announcement, the new Data Privacy Framework will address the concerns raised by the Court of Justice of the European Union (CJEU) when it invalidated the EU-US Privacy Shield in its Schrems II decision in July 2020. The Data Privacy Framework will adopt new safeguards to ensure that US intelligence activities are limited to what is necessary and proportionate to protect national security, and also create a new redress system to address the complaints of EU citizens.

As one of the architects of the Trusted Cloud Principles (a cloud-industry initiative to help safeguard the interests of organizations and the basic rights of individuals using cloud), AWS fully supports improved rules and regulations that advance privacy and security protections for any organization that wants to use cloud technologies and maintain control of their data.

While organizations using AWS technology have been able to conduct trans-Atlantic data transfers even after Schrems II, the new Data Privacy Framework will ensure further clarity and agility for our customers in their data transfer assessments. This will help our customers unlock value in terms of growth, digital transformation, and global competitive advantage.

Organizations that want to trade with speed and agility to and from the European Economic Area (EEA) need certainty that their goals to innovate and invest in the best technology for growth are supported by international frameworks promoting privacy across borders. Once finalized, the new Data Privacy Framework, coupled with our continued commitment to privacy at AWS, will provide even more simplicity and confidence for customers who choose to transfer data to and from Europe when using AWS services.

More than ever, our collective security requires mutual trust across both sides of the Atlantic and beyond. We therefore look forward to participating in, and remain committed to, the finalization of the Data Privacy Framework. We also support efforts to build broad consensus around the appropriate balance between privacy and security in forums such as the OECD’s workstream on trusted government access to data held by the private sector.

About AWS privacy and security

AWS is committed to protecting customer data. We continue to help customers successfully meet evolving European laws and standards, and achieve the highest levels of security, privacy, and resilience. AWS already offers comprehensive technical, operational, and contractual measures to protect and transfer customer content outside of Europe in compliance with the General Data Protection Regulation (GDPR) and the Schrems II ruling. Customers can also choose to store their content in the European Union by selecting any one or more of our Regions in France, Germany, Ireland, Italy, Sweden, and later in 2022, Spain, with the confidence that their data stays in the AWS Region they select. In addition, customers can use an advanced set of access, encryption, and logging features to maintain full control of their content.

Today, AWS customers can also transfer their data outside of the European Economic Area (EEA) by relying on the new Standard Contractual Clauses (SCCs) included in the AWS Data Processing Addendum (DPA), which is supplemented by our strengthened contractual commitments to protect customer data, such as challenging law enforcement requests that conflict with EU law.

We also have a wide variety of tools available to enhance the security of cross-border data transfers for customers with global services. For example, AWS CloudHSM and AWS Key Management Service (AWS KMS) allow customers to encrypt data in transit and at rest, and to securely generate, manage, and control encryption keys. By building on the AWS Nitro System, our answer to confidential computing, which uses specialized hardware and associated firmware to protect customer code and data from outside access during processing, customers can further secure data during processing and thereby enhance confidentiality and privacy.

AWS has achieved internationally recognized certifications and attestations that demonstrate compliance with rigorous international privacy and security standards, including the Cloud Infrastructure Services Providers in Europe (CISPE) Data Protection Code of Conduct, the Cloud Computing Compliance Controls Catalog (C5), ISO 27018, and the Esquema Nacional de Seguridad (ENS, Spain).

In addition to these existing measures, customers can use our extensive online resources to more easily complete data-transfer assessments and fulfill their GDPR compliance requirements, in accordance with the European Data Protection Board (EDPB) recommendations. These resources include regular Information Request Reports showing requests for data from governments and our responses.

Further information

Our technical paper Navigating Compliance with EU Data Transfer Requirements and our Privacy Features of AWS Services documentation provide further information to help customers assess the right services for their individual needs.

If you have questions or need more information, visit our EU Data Protection page.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Michael Punke

Michael Punke is Vice President for Global Public Policy, Amazon Web Services, and lives with his family in Montana. He has more than 25 years of experience in international trade and regulatory issues. Punke served from 2010 to 2017 as Deputy US Trade Representative and US Ambassador to the World Trade Organization (WTO) in Geneva.

Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

Post Syndicated from Rob Samuel original https://aws.amazon.com/blogs/security/canadian-centre-for-cyber-security-assessment-summary-report-now-available-in-aws-artifact/

French version

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium Cloud Security Profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. If you have questions about this blog post, please start a new thread on the AWS Artifact forum or contact AWS Support.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Rob Samuel

Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry, and has previously held leadership roles, including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, various security leadership roles as a public servant, and served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani

Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft

Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.


Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

By Rob Samuel, Naranjan Goklani, and Brian Mycroft
Amazon Web Services (AWS) is committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is the Canadian authority that provides cyber security expertise to the Canadian government, the private sector, and the general public. Public and commercial sector organizations across Canada rely on the CCCS’s rigorous IT security assessment of cloud service providers when deciding whether to use cloud services. In addition, the CCCS’s IT security assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider IT Security Assessment Process determines whether the Government of Canada (GC) IT security requirements for the CCCS Medium cloud security profile (previously referred to as Protected B/Medium Integrity/Medium Availability) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. The CCCS periodically assesses new or previously unassessed services and re-assesses previously assessed AWS services to verify that they continue to meet the GC’s requirements. The CCCS prioritizes the assessment of new AWS services based on their availability in Canada and on customer demand for those services. The full list of AWS services assessed by the CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can contact the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Author biographies:

Rob Samuel: Rob Samuel is a principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry and has previously held leadership roles, including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, various security leadership roles as a public servant, and service as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani: Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. He previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft: Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.

AWS Security Profile: Philip Winstanley, Security Engineering

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-philip-winstanley-security-engineering/

In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services (AWS) Security and help keep our customers safe and secure. This interview is with Philip Winstanley, a security engineer and AWS Guardian. The Guardians program identifies and develops security experts within engineering teams across AWS, enabling these teams to use Amazon Security more effectively. Through the empowerment of these security-minded Amazonians called “Guardians,” we foster a culture of informed security ownership throughout the development lifecycle.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for just over three years now. I joined in Dublin, Ireland, and I’ve since transferred back to the UK, back to my home city of Manchester. I’m a security engineer on the service team for AWS Managed Services (AMS). We support customer workloads in the cloud and help customers manage them, optimize them, and keep them safe and secure.

How did you get started in the world of security?

I was a software developer for many years, and in building software I discovered that security is an integral part of delivering safe and secure solutions to my customers. That really sparked my interest in the security space, and I started researching and learning about all the different types of attacks that were out there, and learning about organized crime. That led me to work with the UK’s National Crime Agency, where I became a special officer, and with the United Kingdom’s Royal Air Force, where I worked in the cyber defense team. I managed to merge my technical knowledge with my law enforcement and military knowledge, and then bring them all together as the security engineer that I am today.

What are you currently working on that you’re excited about?

I have the joy of working with full-spectrum security, which is everything from protecting our environments to detecting risks within our environments to responding to those risks. But the bulk of my work is in helping our service teams build safe and secure software. Sometimes we call that AppSec (application security), sometimes we call it secure development. As part of that, I work with a group of volunteers and specialists within engineering teams that we call Guardians. They are our security specialists embedded within AWS service teams. These are people who champion security and make sure that everything we build meets a high security bar, which often goes beyond what we’re asked to do by compliance or regulation. We take it that extra mile. As Guardians, we push our development teams to continually raise the bar on security, privacy, compliance, and the confidentiality of customer data.

What are the most important aspects of being a Guardian?

A Guardian is there to help teams do the right thing when it comes to security—to contextualize knowledge of their team’s business and technology and help them identify areas and opportunities to improve security. Guardians will often think outside the box. They will come at things from a security point of view, not just a development point of view. But they do it within the context of what our customers need. Guardians are always looking around corners; they’re looking at what’s coming next. They’re looking at the risks that are out there, looking at the way environments are evolving, and trying to build in protections now for issues that will come down the line. Guardians are there to help our service teams anticipate and protect against future risks.

How have you as a Guardian improved the quality of security outcomes for customers?

Many of our customers are moving to the cloud, some for the first time, and they have high standards around data sovereignty, around the privacy of the data they manage. In addition to helping service teams meet the security bar, Guardians seek to understand our customers’ security and privacy requirements. As a result, our teams’ Guardians inform the development of features that not only meet our security bar, but also help our customers meet their security, privacy, and compliance requirements.

How have you helped develop security experts within your team?

I have the joy of working with security experts from many different fields. Inside Amazon, we have a huge community of security expertise, touching every single domain of security. What we try to do is cross-pollinate; we teach each other about our own areas of expertise. I focus on application security and work very closely with my colleagues who work in threat intelligence and incident response. We all work together and collaborate to raise the bar for each of us, sharing our knowledge, our skills, our expertise. We do this through training that we build, we do it through knowledge-sharing sessions where we get together and talk about security issues, we do it through being jointly introspective about the work that we’ve done. We will even do reviews of each other’s work and bar raise, adding our own specialist knowledge and expertise to that of our colleagues.

What advice would you give to customers who are considering their own Guardians program?

Security culture is something that comes from within an organization. It’s also something that’s best when it’s done from the ground up. You can’t just tell people to be secure, you have to find people who are passionate about security and empower them. Give them permission to put that passion into their work and give them the opportunity to learn from security training and experts. What you’ll see, if you have people with that passion for security, is that they’ll bring that enthusiasm into the work from the start. They’ll already care about security and want to do more of it.

You’re a self-described “disruptive anti-CISO.” What does that mean?

I wrote a piece on LinkedIn about what it really is, but I’ll give a shorter answer. The world of information security is not new—it’s been around for 20, 30 years, so all the thinking around security comes from a world of on-premises infrastructure. It’s from a time before the cloud even existed and unfortunately, a lot of the security thinking out there is still borne of that age. When we’re in a world of hyper-scaled environments, where we’re dealing with millions of resources, millions of endpoints, we can’t use that traditional thinking anymore. We can’t just lock everything in a box and make sure no one’s got access to it. Quite the opposite, we need to enable innovations, we need to let the business drive that creativity and produce solutions, which means security needs to be an enabler of creativity, not a blocker. I have a firm belief that security plays a part in delivering solutions, in helping solutions land, and making sure that they succeed. Security is not and should never be a gatekeeper to success. More often than not in industries, that was the position that security took. I believe in the opposite—security should enable business. I take that thinking and use it to help AWS customers succeed, through sharing our experience and knowledge with them to keep them safe and secure in the cloud.

What’s the thing you’re most proud of in your career?

When I was at the National Crime Agency, I worked in the dark web threat intelligence unit, and some of my work was to combat child exploitation and human trafficking. The work I did there was some of the most rewarding I’ve ever done, and I’m incredibly proud of what we achieved. But it wasn’t just within that agency; it was partnering with other organizations, police forces around the world, and cloud providers such as AWS that combat exploitation and help move vulnerable children to safety. Working to protect victims of crime, especially the most vulnerable, helped me build a customer-centric view of security, ensuring we always think about our end customers and their customers. It’s all about people; we are here to protect and defend families and real lives, not just 1s and 0s.

If you had to pick an industry outside of security, what would you want to do?

I have always loved space and would adore working in the space sector. I’m fascinated by all of the renewed space exploration that’s happening at the moment, be it through Blue Origin or SpaceX or any of the other people out there doing it. If I could have my time again, or even if I could pivot now in my career, I would go and be a space man. I don’t need to be an astronaut, but I would want to contribute to the success of these missions and see humanity go out into the stars.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

Philip Winstanley

Philip works in Security Engineering to help people, teams, and organizations succeed in the cloud. Philip brings his law enforcement and military experience, combined with technical expertise, to deliver innovative pragmatic security solutions.

ISO/IEC 27001 certificates now available in French and Spanish

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/iso-iec-27001-certificates-now-available-in-french-and-spanish/

French version
Spanish version

We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at Amazon Web Services (AWS). We are pleased to announce that ISO/IEC 27001 certificates for AWS are now available in French and Spanish on AWS Artifact. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements across Latin America, Canada, and EMEA.

Current translated (French and Spanish) ISO/IEC 27001 certificates are available through AWS Artifact. Future ISO certificates will be published on an annual basis in accordance with the audit period.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


ISO/IEC 27001 certificates now available in French and Spanish

We continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs at Amazon Web Services (AWS). We are pleased to announce that ISO/IEC 27001 certificates for AWS are now available in French and Spanish on AWS Artifact. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements in Latin America, Canada, and EMEA.

The currently translated (French and Spanish) ISO/IEC 27001 certificates are available through AWS Artifact. Future ISO certificates will be published on an annual basis, in accordance with the audit period.

Your feedback and questions are important to us. Feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.


ISO/IEC 27001 certificates now available in French and Spanish

We continue to listen to our customers and regulators to understand their needs regarding assurance programs at Amazon Web Services (AWS), and we are pleased to announce that ISO/IEC 27001 certificates are now available in French and Spanish. These translated certificates will help address local customer and regulatory requirements in the LATAM, Canada, and EMEA regions.

The currently translated ISO/IEC 27001 certificates (French and Spanish) are available in AWS Artifact. Future ISO certificates will be published annually, according to the audit period.

We value your feedback and questions; do not hesitate to contact our team or send us your feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Rodrigo Fiuza

Rodrigo is a security audit manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo previously worked in risk management, security assurance, and technology audits for 12 years.

Naranjan Goklani

Naranjan is a security audit manager at AWS, based in Toronto. He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan previously worked in risk management, security assurance, and technology audits for 12 years.

Author

Sonali Vaidya

Sonali is a compliance program manager at AWS, where she leads multiple global compliance programs including HITRUST, ISO 27001, ISO 27017, ISO 27018, ISO 27701, ISO 9001, ISO 22301, and CSA STAR. Sonali has over 20 years of experience in information security and privacy management and holds multiple certifications such as CISSP, CCSK, CEH, CISA, and ISO 22301 LA.

Ransomware mitigation: Using Amazon WorkDocs to protect end-user data

Post Syndicated from James Perry original https://aws.amazon.com/blogs/security/ransomware-mitigation-using-amazon-workdocs-to-protect-end-user-data/

Amazon Web Services (AWS) has published whitepapers, blog articles, and videos with prescriptive guidance to help you develop an enterprise strategy to mitigate the risks associated with ransomware and other destructive events. We also announced a strategic partnership with CrowdStrike and Presidio, through which we developed a Ransomware Risk Mitigation Kit and a Quick Start engagement to assist with deployment, giving you tools to deal with security events before and after they occur.

Developing a ransomware mitigation strategy often uses a risk-based approach, where priority is given to protecting mission-critical applications and data. Managing identified risks associated with individual end users is often deemed a lower priority. However, in many organizations, such as research universities, the work performed by individual researchers is the organizational mission.

End users are increasingly mobile. They’re working remotely, on the go, and frequently moving from one project to the next. They’re also collaborating across borders, time zones, and organizations. You need options for your employees to work securely from any location.

This post covers how you can help prevent, back up, and recover your critical end-user data from ransomware by using Amazon WorkDocs.

Introduction to Amazon WorkDocs

Amazon WorkDocs is a fully managed, secure content creation, storage, and collaboration service. With Amazon WorkDocs, you can create, edit, and share content, and because content is stored centrally on AWS, you can access it from anywhere, on any device. Amazon WorkDocs makes it easier to collaborate with others, and lets you share content, provide rich feedback, and collaboratively edit documents.

You can access Amazon WorkDocs on the web, or install apps for Windows, MacOS, Android, and iOS devices. In addition, the Amazon WorkDocs Companion lets you open and edit a file from the web client in a single step. When you edit a file, Companion saves your changes to Amazon WorkDocs as a new file version. Amazon WorkDocs Drive enables you to open and work with Amazon WorkDocs files on your computer’s desktop. And the Amazon WorkDocs SDK includes APIs that allow you to build new applications or create integrations with existing Amazon WorkDocs solutions and applications.
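
As a small taste of the Amazon WorkDocs SDK, the boto3 sketch below lists the documents in a folder. It assumes credentials with the appropriate WorkDocs permissions, and the folder ID is a placeholder you would replace with one from your own site.

    # Sketch: list the documents in an Amazon WorkDocs folder. The folder ID is a placeholder,
    # and the call assumes credentials with the appropriate WorkDocs permissions.
    import boto3

    workdocs = boto3.client("workdocs")
    FOLDER_ID = "example-folder-id"

    response = workdocs.describe_folder_contents(FolderId=FOLDER_ID, Type="DOCUMENT")
    for doc in response.get("Documents", []):
        latest = doc["LatestVersionMetadata"]
        print(doc["Id"], latest["Name"], latest["ModifiedTimestamp"])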

As illustrated in Figure 1, these features combine to enable end-user and team file storage, team content and collaboration workflows, secure and auditable content sharing, cloud-based file sharing, and mobile workforce enablement, with support for automation and extensibility.

Figure 1: Common use cases enabled by Amazon WorkDocs

Amazon WorkDocs security

Amazon WorkDocs is built with security in mind. Amazon WorkDocs files are stored using the highly durable AWS storage infrastructure, and are encrypted both in transit and at rest. The service supports multi-factor authentication (MFA), IP-based allow lists, and the ability to specify which AWS Region will be used to meet data residency requirements. Your organization can set security policies that prevent your employees from sharing documents externally. Third-party auditors assess the security and compliance of Amazon WorkDocs as part of multiple AWS compliance programs, including SOC, PCI DSS, FedRAMP, HIPAA, ISO 9001, ISO 27001, ISO 27017, and ISO 27018.

Auto activation and authentication

Amazon WorkDocs uses a directory to store and manage organization information for your users and their documents. You can choose from three supported options: Simple Active Directory (Simple AD), Active Directory (AD) Connector, or AWS Managed Microsoft AD.

Simple AD

You can use Simple AD as a standalone directory in the cloud to support Windows workloads that need basic AD features and compatible AWS applications, or to support Linux workloads that need LDAP service. However, Simple AD does not support MFA. For more information, see Simple Active Directory.

AD Connector

AD Connector is a proxy service that provides an easy way to connect compatible AWS applications, such as Amazon WorkDocs, to your existing on-premises Microsoft Active Directory. With AD Connector, you can simply add one service account to your Active Directory. AD Connector also eliminates the need for directory synchronization, as well as the cost and complexity of hosting a federation infrastructure.

AWS Managed Microsoft AD

AWS Managed Microsoft AD is powered by Microsoft Windows Server Active Directory (AD), managed by AWS in the AWS Cloud. It enables you to migrate a broad range of Active Directory–aware applications to the AWS Cloud. AWS Managed Microsoft AD works with Microsoft SharePoint, Microsoft SQL Server Always-On Availability Groups, and many .NET applications. It also supports AWS managed applications and services, including Amazon WorkDocs.

You can attach a supported directory to a WorkDocs site during provisioning. When you do, an Amazon WorkDocs feature called Auto activation adds the users in the directory to the site as managed users, meaning they don’t need separate credentials to log in to your site. You can also create user groups, enable MFA, and configure single sign-on (SSO) for your Amazon WorkDocs site.

Ransomware risk mitigation with Amazon WorkDocs

Amazon WorkDocs also includes built-in security features that enable you to selectively prevent file downloads and changes, revert files to a previous version, and recover deleted files, all of which can mitigate impact and support recovery from a ransomware event.

File versioning

You can keep track of prior versions in Amazon WorkDocs with unlimited versioning. A new version of a file is created every time you save it. With Amazon WorkDocs, all feedback is associated with a specific file version, so you can refer back to comments in earlier iterations. Previous versions can be retrieved, as shown in Figure 2, when you access Amazon WorkDocs with a web browser.

Figure 2: File versioning in Amazon WorkDocs via web browser

With the file versioning feature, you can restore an unlocked file that has been altered by ransomware to a previous version.
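
If you want to script this rather than use the web client, the rough boto3 sketch below lists a document's stored versions so you can locate one from before an incident; the document ID is a placeholder.

    # Rough sketch: list the stored versions of a WorkDocs document (for example, to find
    # a clean version from before a ransomware event). The document ID is a placeholder.
    import boto3

    workdocs = boto3.client("workdocs")
    DOCUMENT_ID = "example-document-id"

    versions = workdocs.describe_document_versions(DocumentId=DOCUMENT_ID)
    for version in versions["DocumentVersions"]:
        print(version["Id"], version["Name"], version["CreatedTimestamp"], version["Status"])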

File recovery

When files or folders are deleted, they are stored in an end-user managed recycle bin, as shown in Figure 3, where they can be recovered by the end user if needed.

Figure 3: End-user file recovery from recycle bin in Amazon WorkDocs via web browser

After 30 days, files and folders are moved to a recovery bin managed by the Amazon WorkDocs site administrator, where they are retained for an additional 60 days by default before being permanently deleted. Site administrators can adjust this retention period to any value from 0 to 365 days; files are retained for the specified period and are permanently deleted when the retention limit is reached.

In addition, customers can sync files from Amazon WorkDocs to Amazon Simple Storage Service (Amazon S3) for additional resiliency.
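
One rough way to implement that kind of sync is to copy document content out through the WorkDocs API into an S3 bucket you control. The sketch below is illustrative only; the document ID and bucket name are placeholders, and it assumes credentials that are allowed to read the document and write to the bucket.

    # Illustrative sketch: copy the latest version of a WorkDocs document into an S3 bucket.
    # The document ID and bucket name are placeholders.
    import urllib.request
    import boto3

    workdocs = boto3.client("workdocs")
    s3 = boto3.client("s3")

    DOCUMENT_ID = "example-document-id"
    BACKUP_BUCKET = "example-backup-bucket"

    latest = workdocs.get_document(DocumentId=DOCUMENT_ID)["Metadata"]["LatestVersionMetadata"]

    # Request a temporary download URL for the original source content of that version.
    version = workdocs.get_document_version(
        DocumentId=DOCUMENT_ID, VersionId=latest["Id"], Fields="SOURCE"
    )
    source_url = version["Metadata"]["Source"]["ORIGINAL"]

    with urllib.request.urlopen(source_url) as download:
        s3.put_object(Bucket=BACKUP_BUCKET, Key=latest["Name"], Body=download.read())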

The file recovery features let you restore individual files and folders that were deleted, whether by ransomware or simply by accident. Note that, as of this writing, file recovery works on a per-file or per-folder basis.

File control

Amazon WorkDocs lets you control who can access, comment on, and download or print your files. And, because the Amazon WorkDocs web client performs remote file rendering via HTML (see supported file types), users gain protection they would not otherwise be afforded when viewing potentially infected files locally. This, combined with the ability to prevent a file from being downloaded as illustrated in Figure 4, can help to mitigate the risk of malware spreading.

You can also lock files while making changes, and enable settings that prevent edits from being overwritten by other contributors, eliminating the need to coordinate changes. You can also disable feedback when you’ve completed a file. When you lock a file, as illustrated in Figure 4, a new version of that file cannot be uploaded until you unlock the file. If someone else needs access to the file, they can request that you unlock it, and you’ll be notified of the request.

Figure 4: End-user file lock settings in Amazon WorkDocs via web browser

Using the file locking feature can prevent ransomware from making unauthorized changes (such as encrypting) to a locked file.

Conclusion

In this blog post, I showed how AWS customers can help prevent, back up, and recover critical end-user data from ransomware incidents by using the file versioning, recovery, and control features of Amazon WorkDocs.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

James Perry

James is the Solutions Architecture Security Leader for the Amazon Web Services Worldwide Public Sector Education and State & Local Government team.

Customers can now request the AWS CyberGRX report for their third-party supplier due diligence

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/customers-can-now-request-the-aws-cybergrx-report-for-their-third-party-supplier-due-diligence/

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). We are continuously expanding our compliance programs to provide customers with more tools and resources to be able to perform effective due diligence on AWS. We are excited to announce the availability of the AWS CyberGRX report for our customers.

With the increase in adoption of cloud platforms and services across multiple sectors and industries, AWS has become one of the most critical components of customers’ third-party ecosystems. Regulated customers, such as those in the financial services sector, are held to higher standards by their regulators and auditors when it comes to exercising effective due diligence on their third parties. Customers are using third-party cyber risk management (TPCRM) platforms such as CyberGRX to better manage risks from their evolving third-party ecosystems and drive operational efficiencies. To help customers in these efforts, AWS has completed a CyberGRX assessment of its security posture. The assessment is performed annually and is validated by independent CyberGRX partners.

The CyberGRX assessment applies a dynamic approach to third-party risk assessment, one that is updated as the risk level of cloud service providers changes or as AWS updates its security posture and controls. This approach eliminates outdated static spreadsheets for third-party risk assessments, in which the risk matrices are not updated in near real time. The CyberGRX assessment provides advanced capabilities by integrating AWS responses with analytics, threat intelligence, and sophisticated risk models to provide an in-depth view of the AWS security posture. In addition, AWS customers can use CyberGRX’s Framework Mapper feature to map AWS assessment controls and responses to well-known industry standards and frameworks (such as NIST 800-53, the NIST Cybersecurity Framework (CSF), ISO 27001, PCI DSS, and HIPAA), which can significantly reduce customers’ third-party supplier due-diligence burden.

The AWS CyberGRX report is available to all customers at no cost. Customers can request access to the report by completing an access request form, available on the AWS CyberGRX page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page, or if you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Niyaz Noor

Niyaz is the Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across Europe and other regions. During his professional career, he has helped multiple cloud service providers in obtaining global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto. He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has previously worked in risk management, security assurance, and technology audits for the past 12 years.

SOC reports now available in Spanish

Post Syndicated from Rodrigo Fiuza original https://aws.amazon.com/blogs/security/soc-reports-now-available-in-spanish/

At Amazon Web Services (AWS), we continue to listen to our customers, regulators, and stakeholders to understand their needs regarding audit, assurance, certification, and attestation programs. We are pleased to announce that Fall 2021 AWS SOC 1, SOC 2 and SOC 3 reports are now available in Spanish. These translated reports will help drive greater engagement and alignment with customer and regulatory requirements across Latin America and Spain.

The Spanish reports are a complement to the English-language versions, which remain the authoritative reference for the independent opinion issued by the auditors and the control test results.

The translated SOC reports in Spanish are available through AWS Artifact, and will be published twice a year in alignment with the Fall and Spring reporting cycles.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Rodrigo Fiuza

Rodrigo is a Security Audit Manager at AWS, based in São Paulo. He leads audits, attestations, certifications, and assessments across Latin America, the Caribbean, and Europe. Rodrigo has previously worked in risk management, security assurance, and technology audits for the past 12 years.

Nimesh Ravasa

Nimesh is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nimesh has 14 years of experience in information security and holds CISSP, CISA, PMP, CSX, AWS Solution Architect – Associate, and AWS Security Specialty certifications.

Emma Zhang

Emma is a Compliance Program Manager at Amazon Web Services. She leads multiple process improvement projects across multiple compliance programs within AWS. Emma has 8 years of experience in risk management, IT risk assurance, and technology risk advisory.

What is cryptographic computing? A conversation with two AWS experts

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/a-conversation-about-cryptographic-computing-at-aws/

Joan Feigenbaum
Amazon Scholar, AWS Cryptography
Bill Horne
Principal Product Manager, AWS Cryptography

AWS Cryptography tools and services use a wide range of encryption and storage technologies that can help customers protect their data both at rest and in transit. In some instances, customers also require protection of their data even while it is in use. To address these needs, Amazon Web Services (AWS) is developing new techniques for cryptographic computing, a set of technologies that allow computations to be performed on encrypted data, so that sensitive data is never exposed. This foundation is used to help protect the privacy and intellectual property of data owners, data users, and other parties involved in machine learning activities.

We recently spoke to Bill Horne, Principal Product Manager in AWS Cryptography, and Joan Feigenbaum, Amazon Scholar in AWS Cryptography, about their experiences with cryptographic computing, why it’s such an important topic, and how AWS is addressing it.

Tell me about yourselves: what made you decide to work in cryptographic computing? And, why did you come to AWS to do cryptographic computing?

Joan: I’m a computer science professor at Yale and an Amazon Scholar. I started graduate school at Stanford in Computer Science in the fall of 1981. Before that, I was an undergraduate math major at Harvard. Almost from the beginning, I have been interested in what has now come to be called cryptographic computing. During the fall of 1982, Andrew Yao, who was my PhD advisor, published a paper entitled “Protocols for Secure Computation,” which introduced the millionaire’s problem: Two millionaires want to run a protocol at the end of which they will know which one of them has more millions, but not know exactly how many millions the other one has. If you dig deeper, you’ll find a few antecedents, but that’s the paper that’s usually credited with launching the field of cryptographic computing. Over the course of my 40 years as a computer scientist, I’ve worked in many different areas of computer science research, but I’ve always come back to cryptographic computing, because it’s absolutely fascinating and has many practical applications.

Bill: I originally got my PhD in Machine Learning in 1993, but I switched over to security in the late 1990s. I’ve spent most of my career in industrial research laboratories, where I was always interested in how to bring technology out of the lab and get it into real products. There’s a lot of interest from customers right now around cryptographic computing, and so I think that we’re at a really interesting point in time, where this could take off in the next few years. Being a part of something like this is really exciting.

What exactly is cryptographic computing?

Bill: Cryptographic computing is not a single thing. Rather, it is a methodology for protecting data in use—a set of techniques for doing computation over sensitive data without revealing that data to other parties. For example, if you are a financial services company, you might want to work with other financial services companies to develop machine learning models for credit card fraud detection. You might need to use sensitive data about your customers as training data for your models, but you don’t want to share your customer data in plaintext form with the other companies, and vice versa. Cryptographic computing gives organizations a way to train models collaboratively without exposing plaintext data about their customers to each other, or even to an intermediate third party such as a cloud provider like AWS.

Why is it challenging to protect data in use? How does cryptographic computing help with this challenge?

Bill: Protecting data-at-rest and data-in-transit using cryptography is very well understood.

Protecting data-in-use is a little trickier. When we say we are protecting data-in-use, we mean protecting it while we are doing computation on it. One way to do that is with other types of security mechanisms besides encryption. Specifically, we can use isolation and access control mechanisms to tightly control who or what can gain access to those computations. The level of control can vary greatly from standard virtual machine isolation, all the way down to isolated, hardened, and constrained enclaves backed by a combination of software and specialized hardware. The data is decrypted and processed within the enclave, and is inaccessible to any external code and processes. AWS offers Nitro Enclaves, which is a very tightly controlled environment that uses this kind of approach.

Cryptographic computing offers a completely different approach to protecting data-in-use. Instead of using isolation and access control, data is always cryptographically protected, and the processing happens directly on the protected data. The hardware doing the computation doesn’t even have access to the cryptographic keys used to encrypt the data, so it is computationally intractable for that hardware, any software running on that hardware, or any person who has access to that hardware to learn anything about your data. In fact, you arguably don’t even need isolation and access control if you are using cryptographic computing, since nothing can be learned by viewing the computation.

What are some cryptographic computing techniques and how do they work?

Bill: Two applicable fundamental cryptographic computing techniques are homomorphic encryption and secure multi-party computation. Homomorphic encryption allows for computation on encrypted data. Basically, the idea is that there are special cryptosystems that support basic mathematical operations, such as addition and multiplication, directly on encrypted data. From those simple operations, you can form complex circuits to implement any function you want.
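
To make the idea concrete, here is a minimal sketch of additively homomorphic encryption using the open-source python-paillier (phe) package. The package choice and the example values are assumptions for illustration; this is a toy demonstration of the general technique Bill describes, not an AWS library or service.

    from phe import paillier  # open-source Paillier implementation (pip install phe)

    # The data owner generates a keypair and encrypts two sensitive values.
    public_key, private_key = paillier.generate_paillier_keypair()
    enc_a = public_key.encrypt(120)
    enc_b = public_key.encrypt(35)

    # An untrusted party can add ciphertexts, and multiply them by plaintext
    # constants, without ever holding the private key or seeing the values.
    enc_sum = enc_a + enc_b    # ciphertext + ciphertext
    enc_scaled = enc_a * 3     # ciphertext * plaintext constant

    # Only the holder of the private key can decrypt the results.
    assert private_key.decrypt(enc_sum) == 155
    assert private_key.decrypt(enc_scaled) == 360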

Secure multi-party computation is a very different paradigm. In secure multi-party computation, you have two or more parties who want to jointly compute some function, but they don’t want to reveal their data to each other. An example might be that you have a list of customers and I have a list of customers, and we want to find out what customers we have in common without revealing anything else about our data to each other, in order to protect customer privacy. That’s a special kind of multi-party computation called private set intersection (PSI).

Joan: To add some detail to what Bill said, homomorphic encryption was heavily influenced by a 2009 breakthrough by Craig Gentry, who is now a Research Fellow at the Algorand Foundation. If a customer has dataset X, needs f(X), and is willing to reveal X to the server, he uploads X and has the cloud service compute Y= f(X) and return Y. If he wants (or is required by law or policy) to hide X from the cloud provider, he homomorphically encrypts X on the client side to get X’, uploads it, receives an encrypted result Y’, and homomorphically decrypts Y’ (again on the client side) to get Y. The confidential data, the result, and the cryptographic keys all remain on the client side.

In secure multi-party computation, there are n ≥ 2 parties that have datasets X1, X2, …, Xn, and they wish to compute Y=f(X1, X2, …, Xn). No party wants to reveal to the others anything about his own data that isn’t implied by the result Y. They execute an n-party protocol in which they exchange messages and perform local computations; at the end, all parties know the result, but none has obtained additional information about the others’ inputs or the intermediate results of the (often multi-round) distributed computation. Multi-party computation might use encryption, but often it uses other data-hiding techniques such as secret sharing.
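
As a rough illustration of the secret-sharing approach Joan mentions, the following sketch splits each party’s input into additive shares modulo a prime, so the parties can learn a joint sum without any one of them seeing another party’s input. The inputs and party count are made-up values, and a real protocol would add authenticated channels and protections against malicious behavior.

    import secrets

    PRIME = 2**61 - 1  # working modulus for the shares

    def share(value, n_parties):
        """Split value into n additive shares that sum to value mod PRIME."""
        shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    # Three parties with private inputs they do not want to reveal.
    inputs = [42, 1000, 7]
    all_shares = [share(x, 3) for x in inputs]

    # Party i receives one share of every input and adds its shares locally.
    partial_sums = [sum(all_shares[owner][i] for owner in range(3)) % PRIME
                    for i in range(3)]

    # Combining the partial sums reveals only the total, not the individual inputs.
    assert sum(partial_sums) % PRIME == sum(inputs)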

Cryptographic computing seems to be appearing in the popular technical press a lot right now and AWS is leading work in this area. Why is this a hot topic right now?

Joan: There’s strong motivation to deploy this stuff now, because cloud computing has become a big part of our tech economy and a big part of our information infrastructure. Parties that might have previously managed compute environments on-premises, where data privacy is easier to reason about, are now choosing third-party cloud providers to provide this compute environment. Data privacy is harder to reason about in the cloud, so they’re looking for techniques where they don’t have to completely rely on their cloud provider for data privacy. There’s a tremendous amount of confidential data—in health care, medical research, finance, government, education, and so on—data that organizations want to use in the cloud to take advantage of state-of-the-art computational techniques that are hard to implement in-house. That’s exactly what cryptographic computing is intended for: using data without revealing it.

Bill: Data privacy has become one of the most important issues in security. There is clearly a lot of regulatory pressure right now to protect the privacy of individuals. But progressive companies are actually trying to go above and beyond what they are legally required to do. Cryptographic computing offers customers a compelling set of new tools for being able to protect data throughout its lifecycle without exposing it to unauthorized parties.

Also, there’s a lot of hype right now about homomorphic encryption that’s driving a lot of interest in the popular tech press. But I don’t think people fully understand its power, applicability, or limitations. We’re starting to see homomorphic encryption being used in practice for some small-scale applications, but we are just at the beginning of what homomorphic encryption can offer. AWS is actively exploring ideas and finding new opportunities to solve customer problems with this technology.

Can you talk about the research that’s been done at AWS in cryptographic computing?

Joan: We researched and published on a novel use of homomorphic encryption applied to a popular machine learning algorithm called XGBoost. You have an XGBoost model that has been trained in the standard way, and a large set of users that want to query that model. We developed PPXGBoost inference (where the “PP” stands for privacy preserving). Each user stores a personalized, encrypted version of the model on a remote server, and then submits encrypted queries to that server. The user receives encrypted inferences, which are decrypted and stored on a personal device. For example, imagine a healthcare application, where over time the device uses these inferences to build up a health profile that is stored locally. Note that the user never reveals any personal health data to the server, because the submitted queries are all encrypted.

There’s another application our colleague Eric Crockett, Sr. Applied Scientist, published a paper about. It deals with a standard machine-learning technique called logistic regression. Crockett developed HELR, an application that trains logistic-regression models on homomorphically encrypted data.

Both papers are available on the AWS Cryptographic Computing webpage. The HELR code and PPXGBoost code are available there as well. You can download that code, experiment with it, and use it in your applications.

What are you working on right now that you’re excited about?

Bill: We’ve been talking with a lot of internal and external customers about their data protection problems, and have identified a number of areas where cryptographic computing offers solutions. We see a lot of interest in collaborative data analysis using secure multi-party computation. Customers want to jointly compute all sorts of functions and perform analytics without revealing their data to each other. We see interest in everything from simple comparisons of data sets through jointly training machine learning models.

Joan: To add to what Bill said: We’re exploring two use cases in which cryptographic computing (in particular, secure multi-party computation and homomorphic encryption) can be applied to help solve customers’ security and privacy challenges at scale. The first use case is privacy-preserving federated learning, and the second is private set intersection (PSI).

Federated learning makes it possible to take advantage of machine learning while minimizing the need to collect user data. Imagine you have a server and a large set of clients. The server has constructed a model and pushed it out to the clients for use on local devices; one typical use case is voice recognition. As clients use the model, they make personalized updates that improve it. Some of the improvements made locally in my environment could also be relevant in millions of other users’ environments. The server gathers up all these local improvements and aggregates them into one improvement to the global model; then the next time it pushes out a new model to existing and new clients, it has an improved model to push out. To accomplish privacy-preserving federated learning, one uses cryptographic computing techniques to ensure that individual users’ local improvements are never revealed to the server or to other users in the process of computing a global improvement.
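
One way to build intuition for the aggregation step Joan describes is the pairwise-masking trick sketched below: every pair of clients agrees on a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server only ever sees the aggregate. This is a simplified illustration with made-up update values, not the protocol AWS uses, and it omits the key agreement and dropout handling a production secure-aggregation scheme requires.

    import random

    MODULUS = 2**31 - 1

    def masked_updates(updates):
        """Apply cancelling pairwise masks so individual updates stay hidden."""
        masked = list(updates)
        for i in range(len(updates)):
            for j in range(i + 1, len(updates)):
                mask = random.randrange(MODULUS)
                masked[i] = (masked[i] + mask) % MODULUS
                masked[j] = (masked[j] - mask) % MODULUS
        return masked

    local_updates = [3, 14, 15, 9]               # per-client model improvements (toy values)
    server_view = masked_updates(local_updates)  # what the server receives
    assert sum(server_view) % MODULUS == sum(local_updates) % MODULUS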

Using PSI, two or more AWS customers who have related datasets can compute the intersection of their datasets—that is, the data elements that they all have in common—while hiding crucial information about the data elements that are not common to all of them. PSI is a key enabler in several business use cases that we have heard about from customers, including data enrichment, advertising, and healthcare.
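
The classic Diffie-Hellman-style construction gives a feel for how PSI can work: each party hashes its elements into a group and blinds them with a private exponent, and because exponentiation commutes, only the common elements collide after both parties have applied their exponents. The sketch below is a toy illustration with a small prime and example customer lists; it is not a production protocol and not an AWS implementation.

    import hashlib
    import secrets

    P = 2**127 - 1  # toy prime; a real deployment would use a proper cryptographic group

    def blinded(elements, exponent):
        """Hash each element into the group and raise it to a private exponent."""
        out = set()
        for e in elements:
            h = int.from_bytes(hashlib.sha256(e.encode()).digest(), "big") % P
            out.add(pow(h, exponent, P))
        return out

    a_key = secrets.randbelow(P - 2) + 1
    b_key = secrets.randbelow(P - 2) + 1
    a_customers = {"alice@example.com", "carol@example.com", "dave@example.com"}
    b_customers = {"carol@example.com", "erin@example.com", "dave@example.com"}

    # Each party shares only blinded values; the other party blinds them again.
    # Because (H(x)^a)^b == (H(x)^b)^a, only common elements match.
    a_then_b = {pow(v, b_key, P) for v in blinded(a_customers, a_key)}
    b_then_a = {pow(v, a_key, P) for v in blinded(b_customers, b_key)}
    print(len(a_then_b & b_then_a))  # 2 elements in common, nothing else revealed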

This post is meant to introduce some of the cryptographic computing and novel use cases AWS is exploring. If you are serious about exploring this approach, we encourage you to reach out to us and discuss what problems you are trying to solve and whether cryptographic computing can help you. Learn more and get in touch with us at our Cryptographic Computing webpage or send us an email at [email protected]

Want more AWS Security news? Follow us on Twitter.

Supriya Anand

Supriya is a Senior Digital Strategist at AWS, focused on marketing, encryption, and emerging areas of cybersecurity. She has worked to drive large scale marketing and content initiatives forward in a variety of regulated industries. She is passionate about helping customers learn best practices to secure their AWS cloud environment so they can innovate faster on behalf of their business.

Maddie Bacon

Maddie (she/her) is a technical writer for AWS Security with a passion for creating meaningful content. She previously worked as a security reporter and editor at TechTarget and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and all things Harry Potter.

AWS achieves FedRAMP P-ATO for 15 services in the AWS US East/West and AWS GovCloud (US) Regions

Post Syndicated from Alexis Robinson original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-p-ato-for-15-services-in-the-aws-us-east-west-and-aws-govcloud-us-regions/

AWS is pleased to announce that 15 additional AWS services have achieved Provisional Authority to Operate (P-ATO) from the Federal Risk and Authorization Management Program (FedRAMP) Joint Authorization Board (JAB).

AWS is continually expanding the scope of our compliance programs to help customers use authorized services for sensitive and regulated workloads. AWS now offers 111 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate Authorization, and 91 services authorized in the AWS GovCloud (US) Regions under FedRAMP High Authorization.

Figure 1. Newly authorized services list

Descriptions of AWS Services now in FedRAMP P-ATO

These additional AWS services now provide the following capabilities for the U.S. federal government and customers with regulated workloads:

  • Amazon Detective simplifies analyzing, investigating, and quickly identifying the root cause of potential security issues or suspicious activities. Amazon Detective automatically collects log data from your AWS resources, and uses machine learning, statistical analysis, and graph theory to build a linked set of data enabling you to easily conduct faster and more efficient security investigations.
  • Amazon FSx for Lustre provides fully managed shared storage with the scalability and performance of the popular Lustre file system.
  • Amazon FSx for Windows File Server provides fully managed shared storage built on Windows Server, and delivers a wide range of data access, data management, and administrative capabilities.
  • Amazon Kendra is an intelligent search service powered by machine learning (ML).
  • Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service.
  • Amazon Lex is an AWS service for building conversational interfaces into applications using voice and text.
  • Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS.
  • Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that simplifies setting up and operating message brokers on AWS.
  • AWS CloudHSM is a cloud-based hardware security module (HSM) that lets you generate and use your own encryption keys on the AWS Cloud.
  • AWS Cloud Map is a cloud resource discovery service. With Cloud Map, you can define custom names for your application resources, and Cloud Map maintains the updated location of these dynamically changing resources.
  • AWS Glue DataBrew is a new visual data preparation tool that lets data analysts and data scientists quickly clean and normalize data to prepare it for analytics and machine learning.
  • AWS Outposts (hardware excluded) is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables you to build and run applications on premises using the same programming interfaces used in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs.
  • AWS Resource Groups lets you organize your AWS resources and manage and automate tasks for large numbers of resources at the same time.
  • AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100 PB per Snowmobile, a 45-foot-long ruggedized shipping container pulled by a semi-trailer truck. After an initial assessment, a Snowmobile will be transported to your data center, and AWS personnel will configure it so it can be accessed as a network storage target. After you load your data, the Snowmobile is driven back to an AWS regional data center, where AWS imports the data into Amazon Simple Storage Service (Amazon S3).
  • AWS Transfer Family securely scales your recurring business-to-business file transfers to Amazon S3 and Amazon Elastic File System (Amazon EFS) using SFTP, FTPS, and FTP protocols.

The following services are now listed on the FedRAMP Marketplace and the AWS Services in Scope by Compliance Program page.

Service authorizations by Region

Service | FedRAMP Moderate in AWS US East/West | FedRAMP High in AWS GovCloud (US)
Amazon Detective
Amazon FSx for Lustre
Amazon FSx for Windows File Server
Amazon Kendra
Amazon Keyspaces (for Apache Cassandra)
Amazon Lex
Amazon Macie
Amazon MQ
AWS CloudHSM
AWS Cloud Map
AWS Glue DataBrew
AWS Outposts
AWS Resource Groups
AWS Snowmobile
AWS Transfer Family

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. Let us know how this post will help your mission by reaching out to your AWS Account Team. Lastly, if you have feedback about this blog post, let us know in the Comments section.

Want more AWS Security news? Follow us on Twitter.

Alexis Robinson

Alexis is the Head of the U.S. Government Security and Compliance Program for AWS. For over 10 years, she has served federal government clients advising on security best practices and conducting cyber and financial assessments. She currently supports the security of the AWS internal environment including cloud services applicable to AWS East/West and AWS GovCloud (US) Regions.

AWS User Guide to Financial Services Regulations and Guidelines in Switzerland and FINMA workbooks publications

Post Syndicated from Margo Cronin original https://aws.amazon.com/blogs/security/aws-user-guide-to-financial-services-regulations-and-guidelines-in-switzerland-and-finma/

AWS is pleased to announce the publication of the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper and workbooks.

This guide refers to certain rules applicable to financial institutions in Switzerland, including banks, insurance companies, stock exchanges, securities dealers, portfolio managers, trustees, and other financial entities that are overseen (directly or indirectly) by the Swiss Financial Market Supervisory Authority (FINMA).

Amongst other topics, this guide covers requirements created by the following regulations and publications of interest to Swiss financial institutions:

  • Federal Laws – including Article 47 of the Swiss Banking Act (BA). Banks and Savings Banks are overseen by FINMA and governed by the BA (Bundesgesetz über die Banken und Sparkassen, Bankengesetz, BankG). Article 47 BA holds relevance in the context of outsourcing.
  • Response on Cloud Guidelines for Swiss Financial Institutions, produced by the Swiss Bankers Association (Schweizerische Bankiervereinigung, SBVg).
  • Controls outlined by FINMA, Switzerland’s independent regulator of financial markets, that may be applicable to Swiss banks and insurers in the context of outsourcing arrangements to the cloud.

In combination with the AWS User Guide to Financial Services Regulations and Guidelines in Switzerland whitepaper, customers can use the detailed AWS FINMA workbooks and ISAE 3000 report available from AWS Artifact.

The five core FINMA circulars are intended to assist Swiss-regulated financial institutions in understanding approaches to due diligence, third-party management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The AWS FINMA workbooks and ISAE 3000 report scope covers, in detail, requirements of the following FINMA circulars:

  • 2018/03 Outsourcing – banks and insurers (04.11.2020)
  • 2008/21 Operational Risks – Banks – Principle 4 Technology Infrastructure (31.10.2019)
  • 2008/21 Operational Risks – Banks – Appendix 3 Handling of electronic Client Identifying Data (31.10.2019)
  • 2013/03 Auditing – Information Technology (04.11.2020)
  • 2008/10 Self-regulation as a minimum standard – Minimum Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013)

Customers can use the FINMA workbooks, which include detailed control mappings for each FINMA control, covering both the AWS control activities and the Customer User Entity Controls. Where applicable under the AWS Shared Responsibility Model, these workbooks provide industry standard practices, incorporating AWS Well-Architected guidance, to assist Swiss customers in their own preparation for FINMA circular alignment.

This whitepaper follows the issuance of the second Swiss Financial Market Supervisory Authority (FINMA) ISAE 3000 Type 2 attestation report. The latest report covers the period from October 1, 2020 to September 30, 2021, with a total of 141 AWS services and 23 global AWS Regions included in the scope. Customers can download the report from AWS Artifact. A full list of certified services and Regions is presented within the published FINMA report.

As always, AWS is committed to bringing new services into the scope of our FINMA program in the future based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have any questions or feedback. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Margo Cronin

Margo is a Principal Security Specialist at Amazon Web Services based in Zurich, Switzerland. She spends her days working with customers, from startups to the largest of enterprises, helping them build new capabilities and accelerating their cloud journey. She has a strong focus on security, helping customers improve their security, risk, and compliance in the cloud.

Top 2021 AWS Security service launches security professionals should review – Part 1

Post Syndicated from Ryan Holland original https://aws.amazon.com/blogs/security/top-2021-aws-security-service-launches-part-1/

Given the speed of Amazon Web Services (AWS) innovation, it can sometimes be challenging to keep up with AWS Security service and feature launches. To help you stay current, here’s an overview of some of the most important 2021 AWS Security launches that security professionals should be aware of. This is the first of two related posts; Part 2 will highlight some of the important 2021 launches that security professionals should be aware of across all AWS services.

Amazon GuardDuty

In 2021, the threat detection service Amazon GuardDuty expanded the internal AWS security intelligence it consumes to use more of the intel that AWS internal threat detection teams collect, including additional nation-state threat intelligence. Sharing more of the important intel that internal AWS teams collect lets you quickly improve your protection. GuardDuty also launched domain reputation modeling. These machine learning models take all the domain requests from across all of AWS, and feed them into a model that allows AWS to categorize previously unseen domains as highly likely to be malicious or benign based on their behavioral characteristics. In practice, AWS is seeing that these models often deliver high-fidelity threat detections, identifying malicious domains 7–14 days before they are identified and available on commercial threat feeds.

AWS also launched second generation anomaly detection for GuardDuty. Shortly after the original GuardDuty launch in 2017, AWS added additional anomaly detection for user behavior analytics and monitoring for unusual activity of AWS Identity and Access Management (IAM) users. After receiving customer feedback that the original feature was a little too noisy, and that it was difficult to understand why some findings were generated, the GuardDuty analytics team rebuilt this functionality on an entirely new machine learning model, considerably reducing the number of detections and generating a more accurate positive-detection rate. The new model also added additional context that security professionals (such as analysts) can use to understand why the model shows findings as suspicious or unusual.

Since its introduction, GuardDuty has detected when AWS EC2 Role credentials are used to call AWS APIs from IP addresses outside of AWS. Beginning in early 2022, GuardDuty now supports detection when credentials are used from other AWS accounts, inside the AWS network. This is a complex problem for customers to solve on their own, which is why the GuardDuty team added this enhancement. The solution considers that there are legitimate reasons why a source IP address that is communicating with AWS services APIs might be different than the Amazon Elastic Compute Cloud (Amazon EC2) instance IP address, or a NAT gateway associated with the instance’s VPC. The enhancement also considers complex network topologies that route traffic to one or multiple VPCs—for example, AWS Transit Gateway or AWS Direct Connect.

Our customers are increasingly running container workloads in production; helping to raise the security posture of these workloads became an AWS development priority in 2021. GuardDuty for EKS Protection is one recent feature that has resulted from this investment. This new GuardDuty feature monitors Amazon Elastic Kubernetes Service (Amazon EKS) cluster control plane activity by analyzing Kubernetes audit logs. GuardDuty is integrated with Amazon EKS, giving it direct access to the Kubernetes audit logs without requiring you to turn on or store these logs. Once a threat is detected, GuardDuty generates a security finding that includes container details such as pod ID, container image ID, and associated tags. See below for details on how the new Amazon Inspector is also helping to protect containers.
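
For readers who want to try this, the following sketch shows one way to turn on Kubernetes audit log monitoring for an existing detector using the AWS SDK for Python (boto3). The Region is a placeholder, and the data source structure reflects my reading of the GuardDuty API, so verify it against the current documentation before relying on it.

    import boto3

    guardduty = boto3.client("guardduty", region_name="us-east-1")

    # GuardDuty allows one detector per account per Region; look it up.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # Enable EKS audit log monitoring as a GuardDuty data source.
    guardduty.update_detector(
        DetectorId=detector_id,
        DataSources={"Kubernetes": {"AuditLogs": {"Enable": True}}},
    )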

Amazon Inspector

At AWS re:Invent 2021, we launched the new Amazon Inspector, a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. The original Amazon Inspector was completely re-architected in this release to automate vulnerability management and to deliver near real-time findings to minimize the time needed to discover new vulnerabilities. This new Amazon Inspector has simple one-click enablement and multi-account support using AWS Organizations, similar to our other AWS Security services. This launch also introduces a more accurate vulnerability risk score, called the Inspector score. The Inspector score is a highly contextualized risk score that is generated for each finding by correlating Common Vulnerability and Exposures (CVE) metadata with environmental factors for resources such as network accessibility. This makes it easier for you to identify and prioritize your most critical vulnerabilities for immediate remediation. One of the most important new capabilities is that Amazon Inspector automatically discovers running EC2 instances and container images residing in Amazon Elastic Container Registry (Amazon ECR), at any scale, and immediately starts assessing them for known vulnerabilities. Now you can consolidate your vulnerability management solutions for both Amazon EC2 and Amazon ECR into one fully managed service.
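
As a sketch of the one-call enablement model, the boto3 example below activates the new Amazon Inspector for EC2 and ECR scanning in the calling account. The Region is a placeholder, and the parameter shapes are my reading of the Inspector2 API, so confirm them in the service documentation.

    import boto3

    inspector = boto3.client("inspector2", region_name="us-east-1")
    account_id = boto3.client("sts").get_caller_identity()["Account"]

    # Turn on continual EC2 instance and ECR container image scanning.
    response = inspector.enable(
        accountIds=[account_id],
        resourceTypes=["EC2", "ECR"],
    )
    print(response["accounts"])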

AWS Security Hub

In addition to a significant number of smaller enhancements throughout 2021, in October AWS Security Hub, an AWS cloud security posture management service, addressed a top customer enhancement request by adding support for cross-Region finding aggregation. You can now view all your findings from all accounts and all selected Regions in a single console view, and act on them from an Amazon EventBridge feed in a single account and Region. Looking back at 2021, Security Hub added 72 additional best practice checks, four new AWS service integrations, and 13 new external partner integrations. A few of these integrations are Atlassian Jira Service Management, Forcepoint Cloud Security Gateway (CSG), and Amazon Macie. Security Hub also achieved FedRAMP High authorization to enable security posture management for high-impact workloads.
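
If you want to set up cross-Region aggregation programmatically, a minimal boto3 sketch looks like the following; the Regions are examples, and the call is made from the Region you want to use as the aggregation Region.

    import boto3

    # Run this in the Region that should act as the aggregation Region.
    securityhub = boto3.client("securityhub", region_name="us-east-1")

    # Link specific Regions so their findings replicate into this one.
    aggregator = securityhub.create_finding_aggregator(
        RegionLinkingMode="SPECIFIED_REGIONS",
        Regions=["us-west-2", "eu-west-1"],
    )
    print(aggregator["FindingAggregatorArn"])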

Amazon Macie

Based on customer feedback, data discovery tool Amazon Macie launched a number of enhancements in 2021. One new feature, which made it easier to manage Amazon Simple Storage Service (Amazon S3) buckets for sensitive data, was criteria-based bucket selection. This Macie feature allows you to define runtime criteria to determine which S3 buckets should be included in a sensitive data-discovery job. When a job runs, Macie identifies the S3 buckets that match your criteria, and automatically adds or removes them from the job’s scope. Before this feature, once a job was configured, it was immutable. Now, for example, you can create a policy where if a bucket becomes public in the future, it’s automatically added to the scan, and similarly, if a bucket is no longer public, it will no longer be included in the daily scan.

Originally, Macie included all available managed data identifiers in every scan. However, customers wanted more surgical search criteria. For example, they didn’t want to be alerted about data types that they didn’t consider sensitive in a particular environment. In September 2021, Macie launched the ability to enable or disable managed data identifiers. This allows you to customize the data types you deem sensitive and would like Macie to alert on, in accordance with your organization’s data governance and privacy needs.
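
The sketch below shows how these two options might come together in a single classification job using boto3. The criterion key, managed data identifier ID, and schedule are illustrative assumptions; check the Amazon Macie API reference for the exact enumerations before using them.

    import boto3
    import uuid

    macie = boto3.client("macie2", region_name="us-east-1")

    # Daily job that targets buckets whose effective permission is PUBLIC,
    # while excluding one managed data identifier the team does not need alerts on.
    job = macie.create_classification_job(
        clientToken=str(uuid.uuid4()),
        name="public-bucket-sensitive-data-scan",
        jobType="SCHEDULED",
        scheduleFrequency={"dailySchedule": {}},
        managedDataIdentifierSelector="EXCLUDE",
        managedDataIdentifierIds=["NAME"],  # example identifier ID
        s3JobDefinition={
            "bucketCriteria": {
                "includes": {
                    "and": [
                        {
                            "simpleCriterion": {
                                "comparator": "EQ",
                                "key": "S3_BUCKET_EFFECTIVE_PERMISSION",
                                "values": ["PUBLIC"],
                            }
                        }
                    ]
                }
            }
        },
    )
    print(job["jobArn"])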

Amazon Detective

Amazon Detective is a service to analyze and visualize security findings and related data to rapidly get to the root cause of potential security issues. In January 2021, Amazon Detective added a convenient, time-saving integration that allows you to start security incident investigation workflows directly from the GuardDuty console. This new hyperlink pivot in the GuardDuty console takes findings directly from the GuardDuty console into the Detective console. Another time-saving capability added was the IP address drill down functionality. This new capability can be useful to security forensic teams performing incident investigations, because it helps quickly determine the communications that took place from an EC2 instance under investigation before, during, and after an event.

In December 2021, Detective added support for AWS Organizations to simplify management for security operations and investigations across all existing and future accounts in an organization. This launch allows new and existing Detective customers to onboard and centrally manage the Detective graph database for up to 1,200 AWS accounts.

AWS Key Management Service

In June 2021, AWS Key Management Service (AWS KMS) introduced multi-Region keys, a capability that lets you replicate keys from one AWS Region into another. With multi-Region keys, you can more easily move encrypted data between Regions without having to decrypt and re-encrypt with different keys for each Region. Multi-Region keys are supported for client-side encryption using direct AWS KMS API calls, or in a simplified manner with the AWS Encryption SDK and Amazon DynamoDB Encryption Client.
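
Here is a brief boto3 sketch of creating a multi-Region primary key and replicating it into a second Region; the Regions and description are placeholder values.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Create a multi-Region primary key in us-east-1.
    primary = kms.create_key(
        Description="example multi-Region key",
        MultiRegion=True,
    )
    key_id = primary["KeyMetadata"]["KeyId"]

    # Replicate the key into us-west-2; data encrypted under the primary key
    # can be decrypted with the replica without re-encryption.
    replica = kms.replicate_key(
        KeyId=key_id,
        ReplicaRegion="us-west-2",
    )
    print(replica["ReplicaKeyMetadata"]["Arn"])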

AWS Secrets Manager

Last year was a busy year for AWS Secrets Manager, with four feature launches to make it easier to manage secrets at scale, not just for client applications, but also for platforms. In March 2021, Secrets Manager launched multi-Region secrets to automatically replicate secrets for multi-Region workloads. Also in March, Secrets Manager added three new rules to AWS Config, to help administrators verify that secrets in Secrets Manager are configured according to organizational requirements. Then in April 2021, Secrets Manager added a CSI driver plug-in, to make it easy to consume secrets from Amazon EKS by using Kubernetes’s standard Secrets Store interface. In November, Secrets Manager introduced a higher secret limit of 500,000 per account to simplify secrets management for independent software vendors (ISVs) that rely on unique secrets for a large number of end customers. Although launched in January 2022, it’s also worth mentioning Secrets Manager’s release of rotation windows to align automatic rotation of secrets with application maintenance windows.
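
To illustrate two of these launches together, the sketch below creates a secret that is replicated to a second Region and then attaches a rotation schedule with a rotation window. The secret name, Regions, Lambda function ARN, and cron expression are placeholder assumptions, not values from the post.

    import boto3
    import json

    secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

    # Multi-Region secret: the secret value is replicated to us-west-2.
    secret = secretsmanager.create_secret(
        Name="example/app/db-credentials",
        SecretString=json.dumps({"username": "app_user", "password": "example-only"}),
        AddReplicaRegions=[{"Region": "us-west-2"}],
    )

    # Rotation window: rotate on the 1st and 15th of each month, within a 2-hour window.
    secretsmanager.rotate_secret(
        SecretId=secret["ARN"],
        RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:example-rotation-fn",
        RotationRules={
            "ScheduleExpression": "cron(0 16 1,15 * ? *)",
            "Duration": "2h",
        },
    )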

Amazon CodeGuru and Secrets Manager

In November 2021, AWS announced a new secrets detector feature in Amazon CodeGuru that searches your codebase for hardcoded secrets. Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations to detect security vulnerabilities, improve code quality, and identify an application’s most expensive lines of code.

This new feature can pinpoint locations in your code with usernames and passwords; database connection strings, tokens, and API keys from AWS; and other service providers. When a secret is found in your code, CodeGuru Reviewer provides an actionable recommendation that links to AWS Secrets Manager, where developers can secure the secret with a point-and-click experience.

Looking ahead for 2022

AWS will continue to deliver experiences in 2022 that meet administrators where they govern, developers where they code, and applications where they run. A lot of customers are moving to container and serverless workloads; you can expect to see more work on this in 2022. You can also expect to see more work around integrations, like CodeGuru Secrets Detector identifying plaintext secrets in code (as noted previously).

To stay up-to-date in the year ahead on the latest product and feature launches and security use cases, be sure to read the Security service launch announcements. Additionally, stay tuned to the AWS Security Blog for Part 2 of this blog series, which will provide an overview of some of the important 2021 launches that security professionals should be aware of across all AWS services.

If you’re looking for more opportunities to learn about AWS security services, check out AWS re:Inforce, the AWS conference focused on cloud security, identity, privacy, and compliance, which will take place June 28-29 in Houston, Texas.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Ryan Holland

Ryan is a Senior Manager with GuardDuty Security Response. His team is responsible for ensuring GuardDuty provides the best security value to customers, including threat intelligence, behavioral analytics, and finding quality.

Marta Taggart

Marta is a Seattle-native and Senior Product Marketing Manager in AWS Security Product Marketing, where she focuses on data protection services. Outside of work you’ll find her trying to convince Jack, her rescue dog, not to chase squirrels and crows (with limited success).

C5 Type 2 attestation report now available with 141 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-now-available-with-141-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the new Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added 18 additional services and service features to the scope of the 2021 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. The C5 Type 2 report covers the time period from October 1, 2020, through September 30, 2021. It was issued by an independent third-party attestation organization, and assesses both the design and the operational effectiveness of AWS’s controls against the basic and additional criteria of the new C5:2020 version.

Customers in Germany and other European countries can use AWS’s attestation report to confirm that AWS meets the security requirements of the C5:2020 framework, and to review the details of the tested controls. This attestation demonstrates our commitment to meet and exceed the security expectations for cloud service providers set by the BSI.

AWS has added the following 18 services and service features to the new C5 scope:

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about the C5 report.

The C5 report and Continuing Operations Letter are available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS based in Manchester, UK. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

Karthik Amrutesh

Karthik is a Senior Manager, Security Assurance at AWS based in New York, U.S. His team is responsible for audits, attestations, certifications, and assessments globally. Karthik has previously worked in risk management, security assurance, and technology audits for the past 18 years.

AWS cloud services adhere to CISPE Data Protection Code of Conduct for added GDPR assurance

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-cloud-services-adhere-to-cispe-data-protection-code-of-conduct/


I’m happy to announce that AWS has declared 52 services under the Cloud Infrastructure Service Providers Europe Data Protection Code of Conduct (CISPE Code). This provides an independent verification and an added level of assurance to our customers that our cloud services can be used in compliance with the General Data Protection Regulation (GDPR).

Validated by the European Data Protection Board (EDPB) and approved by the French Data Protection Authority (CNIL), the CISPE Code assures organizations that their cloud infrastructure service provider meets the requirements applicable to personal data processed on their behalf (customer data) under the GDPR. The CISPE Code also raises the bar on data protection and privacy for cloud services in Europe, going beyond current GDPR requirements. For example:

  • Data in Europe: The CISPE Code goes beyond GDPR compliance by requiring cloud infrastructure service providers to give customers the choice to use services to store and process customer data exclusively in the European Economic Area (EEA).
  • Data privacy: The CISPE Code prohibits cloud infrastructure service providers from using customer data for data mining, profiling, or direct marketing.
  • Cloud infrastructure focused: The CISPE Code addresses the specific roles and responsibilities of cloud infrastructure service providers (not represented in more general codes).

These 52 AWS services have now been independently verified as complying with the CISPE Code. The verification process was conducted by Ernst & Young CertifyPoint (EY CertifyPoint), an independent, globally recognized monitoring body accredited by CNIL. AWS is bound by the CISPE Code’s requirements for the 52 declared services, and we are committed to bringing additional services into the scope of the CISPE compliance program.

About the CISPE Data Protection Code of Conduct

The CISPE Code is the first pan-European data protection code of conduct for cloud infrastructure service providers. In May 2021, the CISPE Code was approved by the EDPB, acting on behalf of the 27 data protection authorities across Europe; and in June 2021, the Code was formally adopted by the CNIL, acting as the lead supervisory authority.

EY CertifyPoint is accredited as an independent monitoring body for the CISPE Code by CNIL, based on criteria approved by the EDPB. EY CertifyPoint is responsible for supervising AWS’s ongoing compliance with the CISPE Code for all declared services.

AWS and the GDPR

To earn and maintain customer trust, AWS is committed to providing customers and partners an environment to deploy AWS services in compliance with the GDPR, and to build their own GDPR-compliant products, services, and solutions.

For more information, see the AWS General Data Protection Regulation (GDPR) Center.

Further information

A list of the 52 AWS services that are verified as compliant with the CISPE Code is available on the CISPE Public Register site.

AWS helps customers accelerate cloud-driven innovation and succeed at home and globally. You can read more about our ongoing commitments to protect EU customers’ data on our EU data protection section of the AWS Cloud Security site.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad's work also includes enabling public sector and regulated industry adoption of the AWS Cloud, and he leads the AWS trade and product compliance team.

 

Fall 2021 PCI DSS report now available with 7 services added to compliance scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2021-pci-dss-report-now-available-with-7-services-added-to-compliance-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that seven new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. These new services provide our customers with more options to process and store their payment card data and to architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services on our Services in Scope by Compliance program page. The seven new services are:

The Asia-Pacific (Jakarta) Region was newly added to scope, and assessed as PCI compliant as part of the Fall 2021 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) that shows AWS PCI compliance status is available through AWS Artifact.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

New IRAP full assessment report is now available on AWS Artifact for Australian customers

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/new-irap-full-assessment-report-is-now-available-on-aws-artifact-for-australian-customers/

We are excited to announce that a new Information Security Registered Assessors Program (IRAP) report is now available on AWS Artifact, after a successful full assessment completed in December 2021 by an independent ASD (Australian Signals Directorate) certified IRAP assessor.

The new IRAP report includes a reassessment of the 111 services already in scope for IRAP, as well as the 14 additional services listed below, and the new Melbourne Region. For the full list of in-scope services, see the AWS Services in Scope page on the IRAP tab. All services in scope are available in the Asia Pacific (Sydney) Region.

The IRAP assessment report is developed in accordance with the Australian Cyber Security Centre (ACSC) Cloud Security Guidance and their Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Australian Government Information Security Manual (ISM), the Attorney-General’s Department Protective Security Policy Framework (PSPF), and the Digital Transformation Agency (DTA) Secure Cloud Strategy.

We have created the IRAP documentation pack on AWS Artifact, which includes the AWS Consumer Guide and the whitepaper Reference Architectures for ISM PROTECTED Workloads in the AWS Cloud, which was created to help Australian government agencies and their partners plan, architect, and risk assess workloads based on AWS Cloud services.

Please reach out to your AWS representatives to let us know which additional services you would like to see in scope for coming IRAP assessments. We strive to bring more services into the scope of the IRAP PROTECTED level, based on your requirements.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the APJ-Lead Strategist supporting the compliance programs for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

How to configure an incoming email security gateway with Amazon WorkMail

Post Syndicated from Jesse Thompson original https://aws.amazon.com/blogs/security/how-to-configure-an-incoming-email-security-gateway-with-amazon-workmail/

This blog post will walk you through the steps needed to integrate Amazon WorkMail with an email security gateway. Configuring WorkMail this way can provide a versatile defense strategy for inbound email threats.

Amazon WorkMail is a secure, managed business email and calendar service. WorkMail leverages the email receiving capabilities of Amazon Simple Email Service (Amazon SES) to scan all incoming and outgoing email for spam, malware, and viruses to help protect your users from harmful email. AWS Lambda for Amazon WorkMail functions can tap into the capabilities of other AWS services to accomplish additional business objectives, such as controlling message delivery or message modification.

For many organizations, the existing features and integrations with Amazon SES are sufficient for spam, malware, and virus detection. Other organizations may require a dedicated on-premises security solution, or have other reasons to add an inspection point to the overall mail flow. A number of commercial and community-supported tools offer features such as specialized encryption capabilities, data loss prevention (DLP) content inspection engines, and embedded-hyperlink transformation to help protect end-user mailboxes.

Prerequisites

To implement this solution, you need:

  • A domain name and permission to alter domain name system (DNS) records in Amazon Route 53 or your existing DNS provider. This could be your organization’s existing domain (such as example.org), a new domain (such as example.net), or a subdomain (such as sub.example.org).
  • Access to an AWS account so you can configure WorkMail and Amazon SES. Optionally, you may also need the ability to create AWS Lambda functions to integrate with WorkMail.
  • Access to configure the email security gateway of your choosing.

How email flows with an email security gateway

Email security gateways function by handling the initial ingress of email via the Simple Mail Transfer Protocol (SMTP). When email servers send messages to your domain's email addresses, they look up your domain's mail exchange (MX) record in DNS. After processing an email message, the email security gateway delivers it over SMTP to the downstream mailbox hosting service; for WorkMail, Amazon SES handles this inbound delivery. You can also optionally configure an AWS Lambda for Amazon WorkMail function to synchronously deliver messages into end-user junk email folders, or to take other actions.

Figure 1. Interaction points while architecting an email security gateway

The interaction points are as follows:

  1. The email sender looks up the mail exchange (MX) record for the domain hosted by WorkMail. The domain name system (DNS) domain may be hosted in Route 53, or by another DNS hosting provider. The value of the MX record contains the host name of the email security gateway.
  2. The email sender connects to the email security gateway and sends the message using SMTP.
  3. The email security gateway accepts, processes, and then delivers the message to the ingress SMTP endpoint for WorkMail. Amazon Simple Email Service (Amazon SES) handles inbound email receiving for WorkMail.
  4. Optionally, an AWS Lambda for Amazon WorkMail function can synchronously process messages before delivery to WorkMail.
  5. WorkMail receives the message for final delivery to the end-user.

The gateway assumes responsibility for inspecting incoming email, because the initial point of ingress is an important component of a multi-layer defense strategy against email-borne threats. The gateway could refuse or quarantine risky messages, modify the email subject and body to add warnings visible to recipients, or append metadata to the email headers for downstream processing by an AWS Lambda function.

Why point of ingress email authentication is important

What is email authentication

SMTP was built at a time when networking was less reliable than it is today, so it was designed to allow any domain to store and later forward messages on behalf of other domains to mitigate connection problems. While that helped at the time, it presents a real problem today in authenticating who truly sent a message: the owner of the domain, or just someone claiming to be the owner? To overcome this issue, the messaging industry has adopted three protocols to help verify the authenticity of a message: SPF, DKIM, and DMARC. These protocols aren't perfect, but understanding how to use them is important when you add new steps to your message processing workflow, because they can affect how you receive inbound mail.

Sender Policy Framework

Sender Policy Framework (SPF) permits domain owners to declare which SMTP servers are allowed to send email messages claiming to be from their domain. This establishes an identity relationship between the owner of the domain and the authorized party that controls the SMTP server. When SPF is evaluated, a message passes only if it was handed off directly by an authorized SMTP server; relaying it through an additional, unauthorized server causes the SPF check to fail unless the originating address is changed.

DomainKeys Identified Mail (DKIM)

DomainKeys Identified Mail (DKIM) permits domain owners to advertise a public key that a mail recipient's system can use to verify the sender's digital signature. This allows SMTP servers and other downstream applications to check the digital signature against the public key of the domain whose private key created it. DKIM signatures attached to messages can remain intact through intermediary SMTP servers, but if an intermediary server modifies the message contents (email body or headers), the final destination will find that the signature is no longer valid.

Domain-based Message Authentication, Reporting and Conformance (DMARC)

Domain-based Message Authentication, Reporting and Conformance (DMARC) permits domain owners to publish a policy telling receiving servers what to do when SPF or DKIM are not valid, such as if a message originated from an unauthorized server, or if it was tampered with after being sent. DMARC checks if a message matches what it knows about the sender via SPF and DKIM, a process known as alignment. The recipient’s server can then enforce the DMARC policy, for example by rejecting or quarantining non-aligned messages.
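
As a quick way to see these three protocols in practice, the following is a minimal sketch that queries a domain's published SPF, DKIM, and DMARC records with the dnspython library (pip install dnspython). The domain and DKIM selector are placeholders; DKIM selectors vary by sender, and not every record begins with the version tag shown, so treat this as illustrative rather than a validation tool.

```python
import dns.resolver

domain = "example.org"        # placeholder sending domain
dkim_selector = "selector1"   # placeholder DKIM selector; real selectors vary by sender

# Record name to query and the version tag its value usually starts with.
lookups = {
    "SPF":   (domain, "v=spf1"),
    "DKIM":  (f"{dkim_selector}._domainkey.{domain}", "v=DKIM1"),
    "DMARC": (f"_dmarc.{domain}", "v=DMARC1"),
}

for label, (name, prefix) in lookups.items():
    try:
        answers = dns.resolver.resolve(name, "TXT")
        records = [b"".join(rdata.strings).decode() for rdata in answers]
        matches = [r for r in records if r.lower().startswith(prefix.lower())]
        print(f"{label:5} {name}: {matches or 'no matching record'}")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{label:5} {name}: no record published")
```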

Tying it all together

Amazon WorkMail normally performs DMARC enforcement for inbound messages, based on their alignment when they were received by Amazon SES. But when an email security gateway acts as an intermediary SMTP server between the original sender and WorkMail, that breaks the relationship with the SMTP servers authorized by SPF, and if the gateway modifies the message, that invalidates the DKIM signature. This is why it’s necessary for the SMTP server at the point of ingress to perform the evaluation of SPF, DKIM, and DMARC. The email security gateway at the border should be made responsible for enforcing DMARC on messages that don’t align, and WorkMail DMARC enforcement should be disabled.

Figure 2. Diagram of SPF policy enforcement process. The full details of the interaction points are outlined below.

The interaction points for SPF policy enforcement are as follows:

  1. The email sender delivers the message to the email security gateway with a MAIL FROM address within a domain the sender owns.
  2. The email security gateway looks up the sender's domain's Sender Policy Framework (SPF) policy in DNS to determine whether the sending mail server's IP address is authorized.
  3. The email security gateway delivers the message to Amazon SES with the same MAIL FROM address. The email security gateway has a different IP address than the original sending email server.
  4. When Amazon SES looks up the MAIL FROM domain's SPF policy, it will not find the email security gateway's IP address among the authorized senders. From the perspective of Amazon SES, and the resulting logs in Amazon CloudWatch, the message will appear to be unauthorized by the SPF policy. This result is ignored by disabling DMARC checks in the WorkMail organization configuration.
  5. The message continues delivery to WorkMail with an optional integration with an AWS Lambda for Amazon WorkMail synchronous run configuration, which can analyze message headers to get a more complete picture of the message's authenticity (see the sketch after this list).
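
To illustrate interaction point 5, here is a minimal sketch of how a function might read the authentication results that the gateway recorded at the point of ingress. It uses only the Python standard library; the Authentication-Results header value shown is illustrative, and the exact format depends on your gateway.

```python
import re

def parse_authentication_results(header_value: str) -> dict:
    """Extract the SPF, DKIM, and DMARC verdicts from an Authentication-Results
    header recorded by the email security gateway at the point of ingress."""
    results = {}
    for method in ("spf", "dkim", "dmarc"):
        match = re.search(rf"\b{method}=(\w+)", header_value, re.IGNORECASE)
        results[method] = match.group(1).lower() if match else "none"
    return results

# Illustrative header value only; the real format depends on your gateway.
example = (
    "gateway.example.org; spf=pass smtp.mailfrom=sender.example; "
    "dkim=pass header.d=sender.example; dmarc=pass header.from=sender.example"
)
print(parse_authentication_results(example))
# {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
```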

Choosing an email security gateway

Many email security vendors offer software as a service (SaaS) solutions, which offload all management responsibilities to the vendor's platform. These solutions work as long as they support the email gateway features required for this architecture, as depicted in Figure 2 and described in the Why point of ingress email authentication is important section above.

If you wish to build and maintain your own email security gateway, you can deploy one from the AWS Marketplace or add an open source application into your Amazon Virtual Private Cloud (Amazon VPC). You will need to remove the port 25 restriction from the Amazon EC2 instance that runs the email security gateway in your Amazon VPC so that it can send email to Amazon WorkMail.

How to configure Amazon WorkMail

Follow this procedure to configure your WorkMail organization and Amazon SES IP address filters to allow the email security gateway to process inbound email receiving.

To configure Amazon WorkMail

  1. From the WorkMail console, select your organization, navigate to Organization settings, and select Advanced. Edit the Inbound DMARC Settings and set Enforcement enabled to Off. This ensures that WorkMail does not re-evaluate DMARC.

    Figure 3. Picture of the AWS WorkMail console showing the advanced configuration depicting Inbound DMARC enforcement disabled

  2. From the Amazon SES console, navigate to Email receiving and create IP address filters to allow the IP address or IP address range of the gateway(s).
  3. Add another rule to block 0.0.0.0/0. This prevents malicious actors from bypassing your email security gateway. (A scripted sketch of steps 2 through 4 appears after this procedure.)

    Figure 4. Picture of the Amazon SES console showing example IP address filters that allow the email security gateway IP addresses and block every other IP address

  4. From the Route 53 console, navigate to Hosted zones, select the domain, and edit the MX record to point to the hostname of the gateway. This causes email senders to deliver to your gateway instead of delivering directly to WorkMail.
    Follow the instructions of your DNS provider if the domain’s DNS is hosted elsewhere.

    Figure 5. Picture of the Amazon Route 53 console showing that the DNS MX record for the domain needs to be configured to point to the email security gateway

  5. From the WorkMail console, navigate to Domains, select your domain to show the Domain verification status page.
  6. Copy the host name from the value of the MX record type as depicted in Figure 6.

    Figure 6. Picture showing the section of the MX record value to copy from the WorkMail console.

  7. Configure your email security gateway with the value that you just copied (for example, inbound-smtp.us-east-1.amazonaws.com) to send inbound messages to your WorkMail organization. Instructions for doing this will vary depending on which email security gateway you are using.

Some specifics of this configuration depend on which gateway you are using, how it is designed for high availability, and the type of features configured. Test your WorkMail integration with a non-production domain before changing your production domain.

It is normal for the Amazon CloudWatch logs for WorkMail, as well as the individual message headers, to show that SPF fails for all messages that traverse the gateway. Configure the email security gateway to record its SPF evaluation in the message headers so that it remains available for troubleshooting and further processing.
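
If you prefer to script steps 2 through 4, the following is a minimal sketch using boto3. The gateway CIDR, hosted zone ID, domain, and gateway hostname are placeholders; substitute your own values, and run the Amazon SES calls in the Region that hosts your WorkMail organization.

```python
import boto3

# Placeholder values; replace with your own.
GATEWAY_CIDR = "203.0.113.0/24"          # egress addresses of the email security gateway
HOSTED_ZONE_ID = "Z0123456789EXAMPLE"    # Route 53 hosted zone for your domain
DOMAIN = "example.org."
GATEWAY_HOSTNAME = "gateway.example.org"

# Steps 2 and 3: allow the gateway's IP addresses and block everything else,
# so senders cannot bypass the gateway and reach Amazon SES directly.
ses = boto3.client("ses", region_name="us-east-1")  # Region of your WorkMail organization
ses.create_receipt_filter(
    Filter={"Name": "allow-gateway", "IpFilter": {"Policy": "Allow", "Cidr": GATEWAY_CIDR}}
)
ses.create_receipt_filter(
    Filter={"Name": "block-everything-else", "IpFilter": {"Policy": "Block", "Cidr": "0.0.0.0/0"}}
)

# Step 4: point the domain's MX record at the gateway so senders deliver there first.
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Route inbound mail through the email security gateway",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "MX",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": f"10 {GATEWAY_HOSTNAME}"}],
                },
            }
        ],
    },
)
```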

Junk E-Mail folder integration

WorkMail normally moves spam messages into the Junk E-mail folder within each recipient’s mailbox. To replicate this behavior for spam messages identified by the email security gateway, use AWS Lambda for Amazon WorkMail with a synchronous run configuration to run a function for every inbound message to a group of recipients.

To configure an AWS Lambda function for every inbound message (optional)

  1. Configure the email security gateway to include a spam verdict in the message headers for all incoming mail.
  2. Create a synchronous run configuration using AWS Lambda for Amazon WorkMail to interpret the message headers and return a response to WorkMail with type BYPASS_SPAM_CHECK or MOVE_TO_JUNK, as in the sketch below. A sample Amazon WorkMail Upstream Gateway Filter solution is available in the AWS Serverless Application Repository.
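
The following is a minimal sketch of such a synchronous Run Lambda function. It assumes the gateway records its verdict in a header named X-Gateway-Spam-Verdict, which is a hypothetical name; use whatever header your gateway actually writes. The function reads the full message through the WorkMail Message Flow API and returns either a MOVE_TO_JUNK or a BYPASS_SPAM_CHECK action.

```python
import email
import boto3

workmail = boto3.client("workmailmessageflow")

def lambda_handler(event, context):
    """Synchronous Run Lambda for inbound Amazon WorkMail message flow."""
    message_id = event["messageId"]

    # Fetch the raw message so we can read the headers the gateway added.
    raw = workmail.get_raw_message_content(messageId=message_id)["messageContent"].read()
    headers = email.message_from_bytes(raw)

    # X-Gateway-Spam-Verdict is a hypothetical header name; substitute the
    # header that your email security gateway actually writes.
    verdict = (headers.get("X-Gateway-Spam-Verdict") or "").strip().lower()

    action_type = "MOVE_TO_JUNK" if verdict == "spam" else "BYPASS_SPAM_CHECK"

    return {
        "actions": [
            {
                "allRecipients": True,            # apply the action to every recipient
                "action": {"type": action_type},
            }
        ]
    }
```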

Conclusion

By integrating an email security gateway and leveraging AWS Lambda for Amazon WorkMail, you will gain additional security and management control over your organization’s inbound email. To learn more, read the Amazon WorkMail FAQs and Amazon WorkMail documentation.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Jesse Thompson

Jesse is a worldwide public sector senior solutions architect at AWS. His background is in public sector enterprise IT development and operations, with a focus on abuse mitigation and encouragement of authenticity practices with open standard protocols.

AWS publishes PiTuKri ISAE3000 Type II Attestation Report for Finnish customers

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/aws-publishes-pitukri-isae3000-type-ii-attestation-report-for-finnish-customers/

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). Our customers’ industry security requirements drive the scope and portfolio of compliance reports, attestations, and certifications we pursue. AWS is pleased to announce the issuance of the Criteria to Assess the Information Security of Cloud Services (PiTuKri) ISAE 3000 Type 2 attestation report. The scope of this report includes 141 AWS services and associated AWS global infrastructure, such as Regions and Edge Locations supporting these services.

The Criteria to Assess the Information Security of Cloud Services (PiTuKri) is a guidance document published by the Finnish Transport and Communications Agency (Traficom) Cyber Security Centre for assessing the security of cloud computing services such as AWS. The PiTuKri criteria cover a total of 11 subdivisions, such as security management, personnel security, and physical security, that cloud service providers are expected to implement. AWS engaged an independent third-party audit firm to examine whether the AWS control environment is appropriately designed and implemented to align with PiTuKri requirements. Additionally, the report provides customers with important guidance on complementary user entity controls (CUECs), which customers should consider implementing as part of the shared responsibility model to help them comply with PiTuKri requirements. The report covers the period from October 1, 2020 to September 30, 2021. A full list of certified services and Regions is presented within the published PiTuKri ISAE 3000 report.

The alignment of AWS with PiTuKri requirements demonstrates our continuous commitment to meeting the heightened expectations for cloud service providers set by Traficom. Customers can use the AWS PiTuKri ISAE 3000 report as a tool to conduct their due diligence on AWS, which may reduce the effort and cost required for their own compliance verification. The report is now available free of charge to AWS customers from AWS Artifact. More information on how to download the report is available here.

As always, AWS is committed to bringing new services into the scope of our PiTuKri program in the future, based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the PiTuKri report. You can also download this blog post translated into Finnish.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is the Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across Europe and other regions. During his professional career, he has helped multiple cloud service providers in obtaining global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

2021 FINMA ISAE 3000 Type 2 attestation report for Switzerland now available on AWS Artifact

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/2021-finma-isae-3000-type-2-attestation-report-for-switzerland-now-available-on-aws-artifact/

AWS is pleased to announce the issuance of a second Swiss Financial Market Supervisory Authority (FINMA) ISAE 3000 Type 2 attestation report. The latest report covers the period from October 1, 2020 to September 30, 2021, with a total of 141 AWS services and 23 global AWS Regions included in the scope.

A full list of certified services and Regions is presented within the published FINMA report; customers can download the latest report from AWS Artifact.

The FINMA ISAE 3000 Type 2 report, conducted by an independent third-party audit firm, provides Swiss financial industry customers with the assurance that the AWS control environment is appropriately designed and implemented to address key operational risks, as well as risks related to outsourcing and business continuity management.

FINMA circulars

The report covers the five core FINMA circulars applicable to Swiss banks and insurers in the context of outsourcing arrangements to the cloud. These FINMA circulars are intended to assist Swiss-regulated financial institutions in understanding approaches to due diligence, third-party management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads.

The report’s scope covers, in detail, the requirements of the following FINMA circulars:

  • 2018/03 Outsourcing – banks, insurance companies and selected financial institutions under FinIA;
  • 2008/21 Operational Risks – Banks – Principle 4 Technology Infrastructure (31.10.2019);
  • 2008/21 Operational Risks – Banks – Appendix 3 Handling of electronic Client Identifying Data (31.10.2019);
  • 2013/03 Auditing – Information Technology (04.11.2020);
  • 2008/10 Self-regulation as a minimum standard – Minimum Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013).

Customers can continue to use the FINMA workbooks, which include detailed control mappings for each FINMA circular covered under this audit report; these workbooks are available on AWS Artifact. Where applicable under the AWS shared responsibility model, these workbooks provide best practices guidance using AWS Well-Architected to assist Swiss customers in their own preparation for alignment with FINMA circulars.

As always, AWS is committed to bringing new services into the future scope of our FINMA program based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about the FINMA report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is the Security Audit Program Manager at AWS. Niyaz leads multiple security certification programs across Europe and other regions. During his professional career, he has helped multiple cloud service providers in obtaining global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.