
Get the best out of Amazon Verified Permissions by using fine-grained authorization methods

Post Syndicated from Jeff Lombardo original https://aws.amazon.com/blogs/security/get-the-best-out-of-amazon-verified-permissions-by-using-fine-grained-authorization-methods/

With the release of Amazon Verified Permissions, developers of custom applications can implement access control logic based on caller and resource information; group membership, hierarchy, and relationship; and session context, such as device posture, location, time, or method of authentication. With Amazon Verified Permissions, you can focus on building simple authorization policies and your applications—instead of, for example, building an authorization engine for your multi-tenant consumer applications.

Amazon Verified Permissions uses the Cedar policy language, which simplifies the implementation, review, and maintenance of large and complex access control strategies.

Amazon Verified Permissions includes schema definitions, policy statement grammar, and automated reasoning that scales across millions of permissions, which enables you to enforce the principles of default deny and of least privilege. These features facilitate the deployment of an in-depth fine-grained authorization model to support your Zero-Trust objectives.

In this blog post, we’ll discuss how you can use Amazon Verified Permissions to create authorization policies that improve on traditional access control models, and we’ll share some best practices for using it.

What is fine-grained authorization? Is it a role-based or an attribute-based access control mechanism?

Traditionally, customers deploy access control strategies based on roles or attributes.

Role-based access control (RBAC) is an approach of granting access to resources through group memberships instead of individual users. Although this approach simplifies the definition of entitlements, it can become very complex as group memberships, hierarchies, and nesting scale out.

Consider a photo sharing application that allows users to upload photos and share those photos with friends. We have a user Alice who uploads their vacation photos to a folder named Austin2022. Alice decides to share these photos with friends.

Alice provides a link to their vacation photos to a friend named Bob. Using the link, Bob is able to view photos in the folder Austin2022, because Bob is in the user group Alice/Friends. That is, Bob has the role of Alice/Friends. If Bob were removed as Alice’s friend, Bob would not be able to view Alice’s photos. This is an example of how role-based access control works.

Attribute-based access control (ABAC) deviates from the static nature of RBAC by introducing access rules based on the characteristics of the following: the requestor identity; the attributes of the resources targeted; or contextual elements such as the request time, where the request originated, or the device used to make the request.

Let’s consider who can delete photos in the example photo sharing application. We want to make sure that only Alice can delete their photos. That is, we make an authorization decision based on the attribute owner of the resource photo.

Fine-grained authorization (FGA) is a model that combines the advantages of both RBAC and ABAC, so that customers can find the right balance between each approach for their individual use case. Understanding the FGA approach is key to writing policy statements in Amazon Verified Permissions.

How does the Amazon Verified Permissions policy statement language work?

To define a policy statement, Amazon Verified Permissions uses a policy language based on the PARC model, as AWS Identity and Access Management (IAM) does for IAM policies. PARC refers to the four objects in the policy language: principal, action, resource, and condition, and these are defined as follows:

  1. The principal is the entity taking the action. Often this will be a human user, but it could also be another service or a device.
  2. The action is the operation being performed, for which permission must be granted. Often the action will map to an API call.
  3. The resource is the target of the call.
  4. The condition limits when or where the principal can perform the action on the resource.

Using this language, you can create a policy that allows user Alice (the principal) to call deletePhoto (the action) on VacationPhoto_1.jpg (the resource) when Alice is logged in by using multi-factor authentication (the condition). After the Amazon Verified Permissions policy is authored, you will store it in your Amazon Verified Permissions policy store instance.
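
For illustration, such a policy could look like the following minimal Cedar sketch; the entity type names (User, Photo) and the MFA context attribute are assumptions made for this example rather than a prescribed schema.

// Hypothetical sketch of the policy described above
permit(
    principal == User::"Alice",
    action == Action::"deletePhoto",
    resource == Photo::"VacationPhoto_1.jpg"
)
when {
    context.MFA == true
};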

Policy statements are divided into two sections:

  1. The policy head, which defines the targets of the policy (principal, action, resource) and whether the policy permits or forbids the action.
  2. The conditions section, which allows you to place conditions that authorize API actions only when specified criteria are met.

You can use the structure of the policy statements to tell at a glance whether a policy follows an RBAC, an ABAC, or an FGA approach, as shown in the following three examples.

// This style of policy can be used to implement a RBAC approach
permit(
  principal in UserGroup::"Alice/Friends",
  action in [
    Action::"readFile", 
    Action::"writeFile"
  ],
  resource in Folder::"Playa del Sol 2021"
);
// This style of policy can be used to implement an ABAC approach
permit(principal, action, resource)
when {
  principal.permitted_access_level >= resource.access_level
};
// This style of policy can be used to implement a hybrid approach
permit(
  principal in UserGroup::"Alice/Friends",
  action in [
    Action::"readFile", 
    Action::"writeFile"
  ],
  resource in Folder::"Playa del Sol 2021"
)
when {
  principal.permitted_access_level >= resource.access_level
};

Let’s go back to our example of Alice and Bob. Now, Alice can define a policy that allows their friends to view photos in their folder Austin2022, as follows.

permit(
    principal in UserGroup::"Alice/Friends",
    action == Action::"viewPhoto",
    resource in Folder::"Austin2022"
);

The policy head says to permit the viewPhoto action to be performed on resources in the folder Austin2022 for principals in user group Alice/Friends. There is no condition section for this policy. With the preceding policy, Bob can access the photos in Alice’s Austin2022 album as long as Bob is a member of the group Alice/Friends.

We can go back to the photo deletion workflow for a more complex scenario. To delete photos, you want to ensure that the requestor owns the photo. Additionally, you might require the user to be logged in via multi-factor authentication (MFA). This policy can be written as follows.

permit(
    principal,
    action == Action::"deletePhoto",
    resource == File::"photo"
)
when {
    resource.owner == principal.name
    && context.MFA == true
};

The policy head permits a user to call the action deletePhoto on photos. The condition section limits the policy to permit photo deletion only when the resource’s owner attribute is the same as the principal’s name attribute and the context object’s MFA attribute equals true.

Designing well-architected policy statements

In this section, we cover six best practices that help customers scale out efficiently.

Use immutable identifiers to reduce risk of collision

The policy statements in this blog post and in Amazon Verified Permissions documentation intentionally use human-readable values such as Bob for a Principal entity, or Alice/Friends for a Group entity. This is useful when discussing general concepts, but in production systems, customers should utilize unique and immutable values for entities. As an example, what would happen if Alice wants to change their user name?

Instead of creating a user named Alice, you should use an autogenerated and unique identifier such as a Universally Unique Identifier (UUID). Those are generally available from your user directory, JSON Web Token, or file system. That way, you can create a user object with the ID a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 and the name attribute Alice. This would allow you to update Alice’s user name without needing to recreate the user object.
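
As an illustration, such an entity could be stored as the following object; this is a hypothetical sketch that reuses the entity format shown later in this post.

{
    "EntityId": {
        "EntityType": "User",
        "EntityId": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
    },
    "Parents": [],
    "Attributes": {
        "name": {
            "String": "Alice"
        }
    }
}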

Reduce the number of policies by using entity grouping

Policy statements can only contain a single principal entity and a single resource entity. If you want the same policy to apply to multiple principals or resources, you can group common entities and use an in statement.

In this example, Bob’s user account could be stored as the following object.

{
    "EntityId": {
        "EntityType": "User",
        "EntityId": "Bob"
    },
    "Parents": [
        {
            "EntityType": "UserGroup",
            "EntityId": "Alice/Friends"
        }
    ],
    "Attributes": {
        "username": {
            "String": "Bob"
        },
        "email": {
            "String": "[email protected]"
        }
    }
}

And user group Alice/Friends could be stored as the following object.

{
    "EntityId": {
        "EntityType": "UserGroup",
        "EntityId": "Alice/Friends"
    }
}

The parent relationship defined in Bob’s user account object is what makes Bob a member of the group Alice/Friends.

Now you can define a policy that allows Bob to gain access to Alice’s vacation photos because he is in the group Alice/Friends, as follows.

permit(
    principal in UserGroup::"Alice/Friends",
    action == Action::"viewPhoto",
    resource in Folder::"Austin2022"
);

Use namespaces to remove ambiguity

You can use namespaces to remove ambiguity. Returning to our application, let’s say that you want to give users the ability to delete their photos. But your moderators also need the ability to delete inappropriate photos. How can you distinguish between the user action deletePhoto and the administrator action deletePhoto? Namespaces give you this flexibility.

When creating your entities, you can add namespaces in the EntityType field, as in the following example.

{
    "EntityId": {
        "EntityType": "Admin::Action",
        "EntityId": "\"deletePhoto\""
    },
    "Parents": [],
    "Attributes": {
        "readOnly": {
            "String": "false"
        },
        "appliesTo": {
            "String": "\"Photo\""
        }
    }
}

You then use the namespace in your permit policy, as follows.

permit(
  principal,
  action == Admin::Action::"deletePhoto",
  resource == File::"Photo"
)
when {
  principal.role == "Moderator"
};

This policy requires a user to have the role Moderator to successfully use the administrator deletePhoto action.
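
For contrast, the end-user action can stay in the default namespace. The following hypothetical sketch reuses the owner check from the earlier deletePhoto example, so regular users can delete only their own photos while the Admin::Action::"deletePhoto" policy above remains reserved for moderators.

// Hypothetical sketch: the user-facing deletePhoto action in the default namespace
permit(
    principal,
    action == Action::"deletePhoto",
    resource == File::"Photo"
)
when {
    resource.owner == principal.name
};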

Set permission guardrails with forbid statements

The Amazon Verified Permissions policy engine denies any action that is not explicitly allowed by a permit policy. But you might want to establish permission guardrails to ensure that an action will never be allowed. You can create forbid policies for this purpose.

Returning to our photo sharing application, suppose that you want to ensure that no user can delete a photo unless the user has been authenticated with MFA. You could use the following policy.

forbid(
  principal,
  action == Action::"deletePhoto",
  resource == File::"Photo"
)
unless {
  context.MFA == true
};

This permission guardrail will help prevent the accidental grant of overly permissive deletePhoto permissions.
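
Because a matching forbid policy overrides any permit policy in Cedar, the guardrail holds even if a broader permit exists elsewhere in your policy store. For example, the following hypothetical permit would still not allow deletion without MFA, because the forbid policy above takes precedence.

// Hypothetical broad permit; the forbid guardrail above still blocks
// deletePhoto requests whose context lacks MFA
permit(
    principal in UserGroup::"Alice/Friends",
    action == Action::"deletePhoto",
    resource == File::"Photo"
);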

Simplify statements with unless conditions

When you define complex conditions for a policy statement, you might face situations where a policy needs multiple negative conditions. Amazon Verified Permissions provides an alternative keyword for the conditional expression: unless. For example, to keep the policy statement simple, you might deny moderators the ability to delete a photo unless they have flagged it as inappropriate, are authenticated using MFA, and are on the company’s network.

Unless behaves the same as when, except that the policy applies only when the expression in the unless block evaluates to false. With this additional expression, you can create statements that are less complex to review and maintain. The following example shows a condition with multiple parameters expressed by using when with negation.

// Allow access unless a resource was deleted more than 7 days ago
permit(
  principal in Group::"Alice/Friends",
  action == Action::"readPhoto",
  resource in Folder::"Playa del Sol 2021"
)
when {
  !(resource.status == "deleted"
   && resource.deletion_date < (context.time.now - 604800)) //7 days ago
};

The following example shows how you can simplify the previous policy by using an unless expression.

// Allow access unless a resource was deleted more than 7 days ago
permit(
  principal in Group::"Alice/Friends",
  action == Action::"readPhoto",
  resource in Folder::"Playa del Sol 2021"
)
unless {
  (resource.status == "deleted"
   && resource.deletion_date < (context.time.now - 604800)) //7 days ago
};

Rationalize policies with a template

You might face a situation where you are repeatedly creating the same rule for different contexts. In the following example, we demonstrate a policy that permits Alice to describe the folder Alice’s Org. Then we replicate the same policy for Bob and the folder Bob’s Org.

permit(
    principal == User::"Alice",
    action == Action::"describeFolder",
    resource == Folder::"Alice's Org"
)
when {
    resource.owner == principal.username
};

permit(
    principal == User::"Bob",
    action == Action::"describeFolder",
    resource == Folder::"Bob's Org"
)
when {
    resource.owner == principal.username
};

In this case, we recommend that you use a policy template to simplify the evaluation, as in the following example.

permit(
    principal == ?principal,
    action == Action::"describeFolder",
    resource == ?resource
)
when {
    resource.owner == principal.username
};

With a policy template, the statement contains placeholders (in this example, ?principal and ?resource) that are filled in when your application links the template to a specific principal and resource, so a single template can stand in for many otherwise identical policies.
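
To make the relationship concrete, linking the template with ?principal bound to User::"Alice" and ?resource bound to Folder::"Alice's Org" behaves like the first static policy shown earlier, and a second link for Bob covers the other; the following is an illustrative sketch, not additional policy text you need to author.

// Hypothetical equivalent of one template-linked policy:
//   ?principal = User::"Alice", ?resource = Folder::"Alice's Org"
permit(
    principal == User::"Alice",
    action == Action::"describeFolder",
    resource == Folder::"Alice's Org"
)
when {
    resource.owner == principal.username
};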

Conclusion: Start authorizing with Amazon Verified Permissions

With Amazon Verified Permissions, you can create permission policies with expressiveness, performance, and readability in mind.

Using the best practices described in this post, you are ready to author policies with Amazon Verified Permissions. When combined with services like Amazon Cognito, Amazon API Gateway, an AWS Lambda authorizer, or AWS AppSync, Amazon Verified Permissions allows you to unlock in-depth and explicit access control logic securely using native AWS services.

Over the coming months, AWS will release more resources to support our customers in their implementation of Amazon Verified Permissions. Learn more about Amazon Verified Permissions. Stay tuned and happy building.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Jeff Lombardo

Jeff is a Solutions Architect expert in IAM, Application Security, and Data Protection. Through 17 years as a security consultant for enterprises of all sizes and business verticals, he delivered innovative solutions with respect to standards and governance frameworks. Today at AWS, he helps organizations enforce best practices and defense in depth for secure cloud adoption.

Brad Burnett

Brad is a Security Specialist Solutions Architect focused on Identity. Before AWS, he worked as a Linux Systems Administrator and Incident Responder. When he isn’t helping customers design robust and secure Identity solutions, Brad can be found sharpening his offensive security skills or playing card games.

AWS Digital Sovereignty Pledge: Control without compromise

Post Syndicated from Matt Garman original https://aws.amazon.com/blogs/security/aws-digital-sovereignty-pledge-control-without-compromise/


We’ve always believed that for the cloud to realize its full potential it would be essential that customers have control over their data. Giving customers this sovereignty has been a priority for AWS since the very beginning when we were the only major cloud provider to allow customers to control the location and movement of their data. The importance of this foundation has only grown over the last 16 years as the cloud has become mainstream, and governments and standards bodies continue to develop security, data protection, and privacy regulations.

Today, having control over digital assets, or digital sovereignty, is more important than ever.

As we’ve innovated and expanded to offer the world’s most capable, scalable, and reliable cloud, we’ve continued to prioritize making sure customers are in control and able to meet regulatory requirements anywhere they operate. What this looks like varies greatly across industries and countries. In many places around the world, like in Europe, digital sovereignty policies are evolving rapidly. Customers are facing an incredible amount of complexity, and over the last 18 months, many have told us they are concerned that they will have to choose between the full power of AWS and a feature-limited sovereign cloud solution that could hamper their ability to innovate, transform, and grow. We firmly believe that customers shouldn’t have to make this choice.

This is why today we’re introducing the AWS Digital Sovereignty Pledge—our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud.

AWS already offers a range of data protection features, accreditations, and contractual commitments that give customers control over where they locate their data, who can access it, and how it is used. We pledge to expand on these capabilities to allow customers around the world to meet their digital sovereignty requirements without compromising on the capabilities, performance, innovation, and scale of the AWS Cloud. At the same time, we will continue to work to deeply understand the evolving needs and requirements of both customers and regulators, and rapidly adapt and innovate to meet them.

Sovereign-by-design

Our approach to delivering on this pledge is to continue to make the AWS Cloud sovereign-by-design—as it has been from day one. Early in our history, we received a lot of input from customers in industries like financial services and healthcare—customers who are among the most security- and data privacy-conscious organizations in the world—about what data protection features and controls they would need to use the cloud. We developed AWS encryption and key management capabilities, achieved compliance accreditations, and made contractual commitments to satisfy the needs of our customers. As customers’ requirements evolve, we evolve and expand the AWS Cloud. A couple of recent examples include the data residency guardrails we added to AWS Control Tower (a service for governing AWS environments) late last year, which give customers even more control over the physical location of where customer data is stored and processed. In February 2022, we announced AWS services that adhere to the Cloud Infrastructure Service Providers in Europe (CISPE) Data Protection Code of Conduct, giving customers an independent verification and an added level of assurance that our services can be used in compliance with the General Data Protection Regulation (GDPR). These capabilities and assurances are available to all AWS customers.

We pledge to continue to invest in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience:

1. Control over the location of your data

Customers have always controlled the location of their data with AWS. For example, currently in Europe, customers have the choice to deploy their data into any of eight existing Regions. We commit to deliver even more services and features to protect our customers’ data. We further commit to expanding on our existing capabilities to provide even more fine-grained data residency controls and transparency. We will also expand data residency controls for operational data, such as identity and billing information.

2. Verifiable control over data access

We have designed and delivered first-of-a-kind innovation to restrict access to customer data. The AWS Nitro System, which is the foundation of AWS computing services, uses specialized hardware and software to protect data from outside access during processing on Amazon Elastic Compute Cloud (Amazon EC2). By providing a strong physical and logical security boundary, Nitro is designed to enforce restrictions so that nobody, including anyone in AWS, can access customer workloads on EC2. We commit to continue to build additional access restrictions that limit all access to customer data unless requested by the customer or a partner they trust.

3. The ability to encrypt everything everywhere

Currently, we give customers features and controls to encrypt data, whether in transit, at rest, or in memory. All AWS services already support encryption, with most also supporting encryption with customer managed keys that are inaccessible to AWS. We commit to continue to innovate and invest in additional controls for sovereignty and encryption features so that our customers can encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud.

4. Resilience of the cloud

It is not possible to achieve digital sovereignty without resiliency and survivability. Control over workloads and high availability are essential in the case of events like supply chain disruption, network interruption, and natural disaster. Currently, AWS delivers the highest network availability of any cloud provider. Each AWS Region is comprised of multiple Availability Zones (AZs), which are fully isolated infrastructure partitions. To better isolate issues and achieve high availability, customers can partition applications across multiple AZs in the same AWS Region. For customers that are running workloads on premises or in intermittently connected or remote use cases, we offer services that provide specific capabilities for offline data and remote compute and storage. We commit to continue to enhance our range of sovereign and resilient options, allowing customers to sustain operations through disruption or disconnection.

Earning trust through transparency and assurances

At AWS, earning customer trust is the foundation of our business. We understand that protecting customer data is key to achieving this. We also know that trust must continue to be earned through transparency. We are transparent about how our services process and transfer data. We will continue to challenge requests for customer data from law enforcement and government agencies. We provide guidance, compliance evidence, and contractual commitments so that our customers can use AWS services to meet compliance and regulatory requirements. We commit to continuing to provide the transparency and business flexibility needed to meet evolving privacy and sovereignty laws.

Navigating changes as a team

Helping customers protect their data in a world with changing regulations, technology, and risks takes teamwork. We would never expect our customers to go it alone. Our trusted partners play a prominent role in bringing solutions to customers. For example, in Germany, T-Systems (part of Deutsche Telekom) offers Data Protection as a Managed Service on AWS. It provides guidance to ensure data residency controls are properly configured, offering services for the configuration and management of encryption keys and expertise to help guide their customers in addressing their digital sovereignty requirements in the AWS Cloud. We are doubling down with local partners that our customers trust to help address digital sovereignty requirements.

We are committed to helping our customers meet digital sovereignty requirements. We will continue to innovate sovereignty features, controls, and assurances within the global AWS Cloud and deliver them without compromise to the full power of AWS.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Matt Garman

Matt is currently the Senior Vice President of AWS Sales, Marketing and Global Services at AWS, and also sits on Amazon’s executive leadership S-Team. Matt joined Amazon in 2006, and has held several leadership positions in AWS over that time. Matt previously served as Vice President of the Amazon EC2 and Compute Services businesses for AWS for over 10 years. Matt was responsible for P&L, product management, and engineering and operations for all compute and storage services in AWS. He started at Amazon when AWS first launched in 2006 and served as one of the first product managers, helping to launch the initial set of AWS services. Prior to Amazon, he spent time in product management roles at early stage Internet startups. Matt earned a BS and MS in Industrial Engineering from Stanford University, and an MBA from the Kellogg School of Management at Northwestern University.

 



이 약속을 이행하기 위한 우리의 접근 방식은 처음부터 그러했듯이 AWS 클라우드를 애초부터 디지털 주권을 강화하는 방식으로 설계하는 것입니다. AWS는 사업 초기부터 금융 서비스 및 의료 업계와 같이 세계 최고 수준의 보안 및 데이터 정보 보호를 요구하는 조직의 고객으로부터 클라우드를 사용하는 데 필요한 데이터 보호 기능 및 제어에 관한 많은 의견을 수렴했습니다. AWS는 고객의 요구 사항을 충족하기 위하여 암호화 및 핵심적 관리 기능을 개발하고, 규정 준수 인증을 획득했으며, 계약상 의무를 제공하였습니다. 고객의 요구 사항이 변화함에 따라 AWS 클라우드도 진화하고 확장해 왔습니다. 최근의 몇 가지 예를 들면, 작년 말 AWS Control Tower(AWS 환경 관리 서비스)에 추가한 데이터 레지던시 가드레일이 있습니다. 이는 고객 데이터가 저장되고 처리되는 물리적 위치에 대한 제어 권한을 고객에게 훨씬 더 많이 부여하는 것을 목표로 합니다. 2022년 2월에는 유럽 클라우드 인프라 서비스(CISPE) 데이터 보호 행동 강령을 준수하는 AWS 서비스를 발표한 바 있습니다. 이것은 유럽연합의 개인정보보호규정(GDPR)에 따라 서비스를 사용할 수 있다는 독립적인 검증과 추가적 보증을 고객에게 제공하는 것입니다. 이러한 기능과 보증은 AWS의 모든 고객에게 적용됩니다.

AWS는 데이터 레지던시, 세분화된 액세스 제한, 암호화 및 복원력 기능의 증대를 위한 야심 찬 로드맵에 다음과 같이 계속 투자할 것을 약속합니다.

1. 데이터 위치에 대한 제어

고객은 항상 AWS를 통해 데이터 위치를 제어해 왔습니다. 예를 들어 현재 유럽에서는 고객이 기존 8개 리전 중 원하는 위치에 데이터를 배치할 수 있습니다. AWS는 고객의 데이터를 보호하기 위한 더 많은 서비스와 기능을 제공할 것을 약속합니다. 또한, 기존 기능을 확장하여 더욱 세분화된 데이터 레지던시 제어 및 투명성을 제공할 것을 약속합니다. 뿐만 아니라, ID 및 결제 정보와 같은 운영 데이터에 대한 데이터 레지던시 제어를 확대할 예정입니다.

2. 데이터 액세스에 대한 검증 가능한 제어

AWS는 고객 데이터에 대한 액세스를 제한하는 최초의 혁신적 설계를 제공했습니다. AWS 컴퓨팅 서비스의 기반인 AWS Nitro System은 특수 하드웨어 및 소프트웨어를 사용하여 EC2에서 처리하는 동안 외부 액세스로부터 데이터를 보호합니다. 강력한 물리적 보안 및 논리적 보안의 경계를 제공하는 Nitro는 AWS의 모든 사용자를 포함하여 누구도 EC2의 고객 워크로드에 액세스할 수 없도록 설계되었습니다. AWS는 고객 또는 고객이 신뢰하는 파트너의 요청이 있는 경우를 제외하고, 고객 데이터에 대한 모든 액세스를 제한하는 추가 액세스 제한을 계속 구축할 것을 약속합니다.

3. 모든 것을 어디서나 암호화하는 능력

현재 AWS는 전송 중인 데이터, 저장된 데이터 또는 메모리에 있는 데이터를 암호화할 수 있는 기능과 제어권을 고객에게 제공합니다. 모든 AWS 서비스는 이미 암호화를 지원하고 있으며, 대부분 AWS에서 액세스할 수 없는 고객 관리형 키를 사용한 암호화도 지원합니다. 또한 고객이 AWS 클라우드 내부 또는 외부에서 관리되는 암호화 키로 어디서든 어떤 것이든 암호화할 수 있도록 디지털 주권 및 암호화 기능에 대한 추가 제어를 지속적으로 혁신하고 투자할 것을 약속합니다.

4. 클라우드의 복원력

복원력과 생존성 없이는 디지털 주권을 달성할 수 없습니다. 공급망 장애, 네트워크 중단 및 자연 재해와 같은 이벤트가 발생할 경우 워크로드 제어 및 고가용성은 매우 중요합니다. 현재 AWS는 모든 클라우드 공급업체 중 가장 높은 네트워크 가용성을 제공합니다. 각 AWS 리전은 완전히 격리된 인프라 파티션인 여러 가용 영역(AZ)으로 구성됩니다. 문제를 보다 효과적으로 격리하고 고가용성을 달성하기 위해 고객은 동일한 AWS 리전의 여러 AZ로 애플리케이션을 분할할 수 있습니다. 온프레미스 또는 간헐적으로 연결되거나 원격 사용 사례에서 워크로드를 실행하는 고객을 위해 오프라인 데이터와 원격 컴퓨팅 및 스토리지에 대한 특정 기능을 지원하는 서비스를 제공합니다. AWS는 고객이 중단되거나 연결이 끊기는 상황에도 운영을 지속할 수 있도록 자주적이고 탄력적인 옵션의 범위를 지속적으로 강화할 것을 약속합니다.

투명성과 보장을 통한 신뢰 확보

고객의 신뢰를 얻는 것이 AWS 비즈니스의 토대입니다. 이를 위해서는 고객 데이터를 보호하는 것이 핵심이라는 점을 잘 알고 있습니다. 투명성을 통해 계속 신뢰를 쌓아야 한다는 점 또한 잘 알고 있습니다. AWS는 서비스가 데이터를 처리하고 전송하는 방식을 투명하게 공개합니다. 법 집행 기관 및 정부 기관의 고객 데이터 제공 요청에 대하여 계속해서 이의를 제기할 것입니다. AWS는 고객이 AWS 서비스를 사용하여 규정 준수 및 규제 요건을 충족할 수 있도록 지침, 규정 준수 증거 및 계약상의 의무 이행을 제공합니다. AWS는 진화하는 개인 정보 보호 및 각 지역의 디지털 주권 규정을 준수하는 데 필요한 투명성과 비즈니스 유연성을 계속 제공할 것을 약속합니다.

팀 차원의 변화 모색

변화하는 규정, 기술 및 위험이 있는 환경에서 고객이 데이터를 보호할 수 있도록 하려면 팀워크가 필요합니다. 고객 혼자서는 절대 할 수 없는 일입니다. 신뢰할 수 있는 AWS의 파트너는 고객이 문제를 해결하는데 중요한 역할을 합니다. 예를 들어, 독일에서는 T-Systems(Deutsche Telekom의 일부)를 통해 AWS의 관리형 서비스로서의 데이터 보호를 제공합니다. 데이터 레지던시 제어가 올바르게 구성되도록 하기 위한 지침을 제공하고, 암호화 키의 구성 및 관리를 위한 서비스를 통해 고객이 AWS 클라우드에서 디지털 주권 요구 사항을 해결하는 데 도움이 되는 전문 지식을 제공합니다. AWS는 고객이 디지털 주권 요구 사항을 해결하는 데 도움이 될 수 있도록 신뢰할 수 있는 현지 파트너와 협력하고 있습니다.

AWS는 고객이 디지털 주권 요구 사항을 충족할 수 있도록 최선의 지원을 다하고있습니다. AWS는 글로벌 AWS 클라우드 내에서 주권 기능, 제어 및 보안을 지속적으로 혁신하고, AWS의 모든 기능에 대한 어떠한 타협 없이 이를 제공할 것입니다.


Matt Garman

Matt is currently the Senior Vice President of AWS Sales, Marketing and Global Services at AWS, and also sits on Amazon’s executive leadership S-Team. Matt joined Amazon in 2006, and has held several leadership positions in AWS over that time. Matt previously served as Vice President of the Amazon EC2 and Compute Services businesses for AWS for over 10 years. Matt was responsible for P&L, product management, and engineering and operations for all compute and storage services in AWS. He started at Amazon when AWS first launched in 2006 and served as one of the first product managers, helping to launch the initial set of AWS services. Prior to Amazon, he spent time in product management roles at early stage Internet startups. Matt earned a BS and MS in Industrial Engineering from Stanford University, and an MBA from the Kellogg School of Management at Northwestern University.

Lower your Amazon OpenSearch Service storage cost with gp3 Amazon EBS volumes

Post Syndicated from Siddhant Gupta original https://aws.amazon.com/blogs/big-data/lower-your-amazon-opensearch-service-storage-cost-with-gp3-amazon-ebs-volumes/

Amazon OpenSearch Service makes it easy for you to perform interactive log analytics, real-time application monitoring, website search, and more. OpenSearch is an open-source, distributed search and analytics suite comprising OpenSearch, a distributed search and analytics engine, and OpenSearch Dashboards, a UI and visualization tool. When you use Amazon OpenSearch Service, you configure a set of data nodes to store indexes and serve queries. The service supports instance types for data nodes with different storage options. Some supported Amazon Elastic Compute Cloud (Amazon EC2) instance types, like the R6gd or I3, have local NVMe disks. Others use Amazon Elastic Block Store (Amazon EBS) storage.

In July 2022, OpenSearch Service launched support for the next-generation, general purpose SSD (gp3) EBS volumes. OpenSearch Service data nodes require low-latency, high-throughput storage to provide fast indexing and query performance. With gp3 EBS volumes, you get higher baseline performance (IOPS and throughput) at a 9.6% lower cost than with the previously offered gp2 EBS volume type. You can provision additional IOPS and throughput independent of volume size using gp3. gp3 volumes are also more stable because they don’t use burst credits. OpenSearch Service support for gp3 volumes includes doubling the limit on per-data-node volume sizes. With these larger volumes, you can reduce the cost of passive data by increasing the amount of storage per node.

We recommend that you consider gp3 as the best Amazon EBS option for price/performance and flexibility. In this post, I discuss the basics of gp3 and various cost-saving use cases. Migrating from previous-generation volumes (gp2, PIOPS io1, and magnetic) to the latest-generation gp3 volumes allows you to reduce monthly storage costs and optimize instance utilization.

Comparing gp2 and gp3

gp3 is the successor to the general purpose SSD gp2 volume type. The key benefits of gp3 include higher baseline performance, a 9.6% lower cost, and the ability to provision higher performance regardless of volume size. The following table summarizes the key differences between gp2 and gp3.

Volume type: gp3 vs. gp2

Volume size
  • gp3: Depends on instance type. OpenSearch Service supports a maximum of 24 TiB for R6g.12Xlarge. For the latest instance limits, see Amazon OpenSearch Service quotas.
  • gp2: Depends on instance type. OpenSearch Service supports a maximum of 12 TiB for R6g.12Xlarge.

Baseline IOPS
  • gp3: 3,000 IOPS for volume sizes up to 1,024 GiB. For volumes above 1,024 GiB, you get 3 IOPS/GiB, without burst credit complexity.
  • gp2: 3 IOPS/GiB (minimum 100 IOPS), up to a maximum of 16,000 IOPS. Volumes smaller than 1 TiB can also burst up to 3,000 IOPS.

Max IOPS per volume
  • gp3: 16,000
  • gp2: 16,000

Baseline throughput
  • gp3: 125 MiB/s free for volume sizes up to 170 GiB, or 250 MiB/s free for volumes above 170 GiB.
  • gp2: Between 125 MiB/s and 250 MiB/s, depending on the volume size.

Max throughput per volume
  • gp3: 1,000 MiB/s
  • gp2: 250 MiB/s

Price for the us-east-1 Region
  • gp3: Storage – $0.122/GB-month. IOPS – 3,000 IOPS free for volumes up to 1,024 GiB, or 3 IOPS/GiB free for volumes above 1,024 GiB; $0.008 per provisioned IOPS-month over the free limits. Throughput – 125 MiB/s free for volumes up to 170 GiB, or an additional 250 MiB/s free for every 3 TiB for volumes above 170 GiB; $0.064 per provisioned MiB/s-month over the free limits.
  • gp2: Storage – $0.135/GB-month. IOPS and throughput provisioning not allowed.

Instances supported
  • gp3: T3, C5, M5, R5, C6g, M6g, and R6g
  • gp2: T2, C4, M4, R4, T3, C5, M5, R5, C6g, M6g, and R6g
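To make the storage price difference concrete, here is a minimal sketch of the monthly storage cost math using the us-east-1 per-GB prices quoted above. The prices and the 9.6% figure come from this post; the helper function and the example volume size are illustrative only.

```python
# Minimal sketch: compare monthly gp2 vs. gp3 storage cost for one data node.
# Prices are the us-east-1 figures quoted in this post; adjust for your Region.
GP2_PRICE_PER_GB_MONTH = 0.135
GP3_PRICE_PER_GB_MONTH = 0.122

def monthly_storage_cost(volume_gib: float, price_per_gb_month: float) -> float:
    """Return the monthly storage cost in USD for a single EBS volume."""
    gb = volume_gib * 1.073741824  # 1 GiB = 1.073741824 GB, as used in this post (6 TiB = 6,597 GB)
    return gb * price_per_gb_month

volume_gib = 1024  # example volume size, illustrative only
gp2_cost = monthly_storage_cost(volume_gib, GP2_PRICE_PER_GB_MONTH)
gp3_cost = monthly_storage_cost(volume_gib, GP3_PRICE_PER_GB_MONTH)
savings_pct = (gp2_cost - gp3_cost) / gp2_cost * 100

print(f"gp2: ${gp2_cost:,.2f}/month, gp3: ${gp3_cost:,.2f}/month, savings: {savings_pct:.1f}%")
# savings_pct is ~9.6%, matching the per-GB price difference quoted above.
```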

Lower your monthly bills with gp3

The ability to provision IOPS and throughput independent of volume size and support for denser (twice as large) volume sizes are two significant advantages of gp3 adoption. Together, these benefits enable multiple use cases to lower your monthly bills. In this section, we present a few examples of pricing comparisons for OpenSearch domains.

gp2 vs. gp3

This is the most common scenario, in which existing gp2 customers switch to gp3 and immediately begin saving 9.6% due to the lower monthly price per GB for gp3 storage. You can also benefit from the fact that gp3 supports volume sizes two times larger for the R5, R6g, M5, and M6g instance families. This means that you don’t need to spin up new instances for denser storage requirements and can achieve higher storage on the same instance. OpenSearch Service currently supports a maximum of 24 TiB of gp3 storage on R6g.12Xlarge instances.

PIOPS (io1) vs. gp3

OpenSearch Service supports the PIOPS SSD (io1) EBS volume type. You can switch to gp3 and provision additional IOPS and throughput to meet your specific performance requirements. The following table compares the monthly cost of PIOPS (io1) storage on R5.large.search instances with gp3 storage on R6g.large.search instances, for a workload that requires 6 TiB of storage and 16,000 IOPS. In this example, you would save 65% by adopting gp3.

Instance cost
  • PIOPS (io1): 6 instances * $0.186/hr = $830/month. (r5.large.search supports up to 1 TiB of storage with io1; to support 6 TiB, we require six instances.)
  • gp3: 3 instances * $0.167/hr = $372/month. (r6g.large.search supports up to 2 TiB of storage with gp3; to support 6 TiB, we require three instances.)

Storage cost (6 TiB)
  • PIOPS (io1): 6,597 GB * $0.169/GB-month = $1,115/month. (io1 storage is priced at $0.169 per GB-month; 6 TiB = 6,597 GB.)
  • gp3: 6,597 GB * $0.122/GB-month = $805/month. (gp3 storage is priced at $0.122 per GB-month; 6 TiB = 6,597 GB.)

PIOPS cost (16,000 PIOPS)
  • PIOPS (io1): 16,000 IOPS * $0.088/IOPS-month = $1,408/month. (The io1 PIOPS rate is $0.088 per IOPS-month.)
  • gp3: No additional charge. A 6 TiB gp3 volume already includes 18,000 IOPS (3 IOPS/GiB) in its price.

Total monthly bill
  • PIOPS (io1): $3,353/month
  • gp3: $1,177/month

I3 vs. gp3

I3 instances include Non-Volatile Memory Express (NVMe) SSD-based instance storage optimized for low latency, very high random I/O performance, and high sequential read throughput, and they deliver high IOPS. However, I3 instances use older third-generation CPUs, and the largest supported storage size is 15 TiB, with the i3.16xlarge.search instance. You should consider using the latest-generation instances, such as R6g with gp3 storage, to get lower cost and better performance than I3 instances.

To comprehend the cost advantage, let’s compare I3 and gp3 for 12 TiB of data storage needs. By switching to gp3 along with the current generation of instances, you can reduce your monthly bills by 56%, according to the calculations in the following table.

On-demand instance cost (us-east-1 Region)
  • I3.4xlarge: 4 instances * $1.99/hr = $5,922/month. (i3.4xlarge.search supports up to 3.8 TiB, so we require four instances to manage 12 TiB of storage. The instance cost is $1.99/hr.)
  • gp3 with R6g.xlarge: 4 instances * $0.335/hr = $996/month. (r6g.xlarge.search supports up to 3 TiB with gp3, so we require four instances to manage 12 TiB. The instance cost is $0.335/hr.)

Storage cost (12 TiB)
  • I3.4xlarge: N/A (included in the instance price)
  • gp3 with R6g.xlarge: 13,194 GB * $0.122/GB-month = $1,610/month. (12 TiB = 13,194 GB; storage is priced at $0.122 per GB-month.)

Total monthly bill
  • I3.4xlarge: $5,922/month
  • gp3 with R6g.xlarge: $2,606/month

UltraWarm vs. gp3

UltraWarm is designed to provide inexpensive access to infrequently accessed data, such as logs older than 30 days. Warm storage is useful for indexes that aren’t actively being written to, are queried less frequently, and don’t require high performance. If you have large and query-intensive workloads and are attempting to use UltraWarm to optimize costs but encountering higher query volumes than it can handle, you should consider moving some of the data volume to hot nodes with gp3 storage. UltraWarm will remain the least expensive option for your warm data (less-frequently accessed) type use cases, but you shouldn’t use it for hot data use cases. A combination of low-cost gp3 storage and denser instances can help you achieve cost-optimized higher performance for hot data.

The following table shows the monthly costs associated with running a 30 TiB UltraWarm workload, along with a comparison to the potential monthly costs of gp2 and gp3. With gp3, you can save up to 36% compared to gp2. Please note that UltraWarm setup does require hot data nodes; however, we excluded them in the UltraWarm column to focus on UltraWarm replacement costs with hot data nodes using gp2 and gp3.

Instance cost (On-Demand)
  • UltraWarm: 2 ultrawarm1.large instances * $2.68/hr = $3,987/month. (ultrawarm1.large.search supports a maximum of 20 TiB, so we need two instances.)
  • All hot (gp2 with R6g.8xlarge): 4 instances * $2.677/hr = $7,966/month. (r6g.8xlarge.search supports a maximum of 8 TiB with gp2, so we require four instances.)
  • All hot (gp3 with R6g.8xlarge): 2 instances * $2.677/hr = $3,984/month. (r6g.8xlarge.search supports a maximum of 16 TiB with gp3, so we only require two instances.)

Storage cost (30 TiB)
  • UltraWarm: 32,985 GB * $0.024/GB-month = $792/month. (UltraWarm storage is priced at $0.024 per GB-month; 30 TiB = 32,985 GB.)
  • All hot (gp2 with R6g.8xlarge): 32,985 GB * $0.135/GB-month = $4,453/month. (gp2 storage is priced at $0.135 per GB-month.)
  • All hot (gp3 with R6g.8xlarge): 32,985 GB * $0.122/GB-month = $4,024/month. (gp3 storage is priced at $0.122 per GB-month.)

Total monthly bill
  • UltraWarm: $4,779/month
  • All hot (gp2 with R6g.8xlarge): $12,419/month
  • All hot (gp3 with R6g.8xlarge): $8,008/month

All the preceding use cases are from a cost perspective. Before making any changes to the production environment, we recommend validating performance in a test environment for your unique workload and ensuring that configuration changes don’t result in performance degradation.

Optimize instance cost with gp3’s denser storage

OpenSearch Service increased the maximum volume size supported per instance for gp3 by 100% compared to gp2 for the R5, R6g, M5, and M6g instance families, due to gp3’s improved baseline performance. You can optimize your instance needs by taking advantage of the increased storage per instance volume. For example, R6g.large supports up to 2 TiB with gp3, but only 1 TiB with gp2. If you require support for 12 TiB of data storage, you can reconfigure your domain from 12 data nodes to six R6g.large data nodes in order to reduce your instance costs. For OpenSearch Service EBS instance-specific volume limits, refer to EBS volume size quotas.

Upgrade from gp2 to gp3

To use the EBS gp3 volume type, you must first upgrade your domain’s instances to instance types that support gp3, if they don’t already. For a list of OpenSearch Service supported instances, see EBS volume size quotas. The transition from gp2 to gp3 is seamless. You can update domain configurations from existing EBS volume types such as gp2, magnetic, and PIOPS (io1) to gp3 through the OpenSearch Service console or the UpdateDomainConfig API. The configuration change initiates a blue/green deployment, which runs in the background without interrupting your online traffic or risking data loss and, depending on the data size, completes in a few hours.
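As a sketch of what that configuration change can look like programmatically, the following example uses the AWS SDK for Python (Boto3) to switch a domain’s EBS volumes to gp3. The domain name and the volume size, IOPS, and throughput values are illustrative placeholders, and you should confirm the exact parameters against the current UpdateDomainConfig documentation before using them.

```python
# Minimal sketch: move an OpenSearch Service domain's EBS storage to gp3 with Boto3.
# The domain name and the size/IOPS/throughput values below are placeholders.
import boto3

client = boto3.client("opensearch")

response = client.update_domain_config(
    DomainName="my-domain",          # hypothetical domain name
    EBSOptions={
        "EBSEnabled": True,
        "VolumeType": "gp3",
        "VolumeSize": 1024,          # GiB per data node
        "Iops": 3000,                # baseline; raise only if your workload needs it
        "Throughput": 250,           # MiB/s; raise only if your workload needs it
    },
)

# The change triggers a blue/green deployment; DomainConfig reflects the pending state.
print(response["DomainConfig"]["EBSOptions"]["Status"]["State"])
```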

gp3 baseline performance and additional provisioning limits

One of gp3’s key features is the ability to scale IOPS and throughput independently of volume size. When your application requires more performance, you can scale up to 16,000 IOPS and 1,000 MiB/s of throughput for an additional fee. OpenSearch Service EBS gp3 delivers a baseline performance of 3,000 IOPS and 125 MiB/s of throughput at any volume size. In addition, OpenSearch Service provisions additional IOPS and throughput for larger volumes to ensure optimal performance: for volumes above 1,024 GiB, you receive 3 IOPS/GiB, and for volumes above 170 GiB, you get an incremental 250 MiB/s for every 3 TiB of storage. (The short sketch after the following table illustrates these rules.)

The following table outlines OpenSearch Service baseline IOPS and throughput, as well as the maximum amount you can provision. Note that your instance type may have additional limitations regarding how much and for how long it can support these performance baselines in a 24-hour period. For more information about instances and their limits, refer to Amazon EBS-optimized instances.

Baseline performance (included in the storage price) and additional performance you can provision, by volume size:

  • 170 GiB – baseline: 3,000 IOPS, 125 MiB/s; additional provisionable: 13,000 IOPS, 875 MiB/s
  • 172 GiB – baseline: 3,000 IOPS, 250 MiB/s; additional provisionable: 13,000 IOPS, 750 MiB/s
  • 1,024 GiB – baseline: 3,000 IOPS, 250 MiB/s; additional provisionable: 13,000 IOPS, 750 MiB/s
  • 1,025 GiB – baseline: 3,075 IOPS, 250 MiB/s; additional provisionable: 12,925 IOPS, 750 MiB/s
  • 3,000 GiB – baseline: 9,000 IOPS, 250 MiB/s; additional provisionable: 7,000 IOPS, 750 MiB/s
  • 3,001 GiB – baseline: 9,003 IOPS, 500 MiB/s; additional provisionable: no additional IOPS, 500 MiB/s
  • 6,000 GiB – baseline: 18,000 IOPS, 500 MiB/s; additional provisionable: no additional IOPS, 500 MiB/s
  • 6,001 GiB – baseline: 18,003 IOPS, 750 MiB/s; additional provisionable: no additional IOPS, 250 MiB/s
  • 9,001 GiB – baseline: 27,003 IOPS, 1,000 MiB/s; additional provisionable: none
  • 24,000 GiB – baseline: 72,000 IOPS, 2,000 MiB/s; additional provisionable: none
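The baseline values above follow a simple pattern, and the following sketch captures it in code. This is my reading of the table in this post (3,000 IOPS up to 1,024 GiB and 3 IOPS/GiB above that; 125 MiB/s up to 170 GiB and 250 MiB/s per started 3 TiB above that), not an official pricing or performance calculator, so treat it as illustrative.

```python
# Minimal sketch of the gp3 baseline performance rules as described in this post.
# Illustrative only; not an official OpenSearch Service or EBS calculator.
import math

def gp3_baseline_iops(volume_gib: int) -> int:
    """3,000 IOPS up to 1,024 GiB, then 3 IOPS per GiB."""
    return 3000 if volume_gib <= 1024 else 3 * volume_gib

def gp3_baseline_throughput_mibps(volume_gib: int) -> int:
    """125 MiB/s up to 170 GiB, then 250 MiB/s for each started 3 TiB (3,000 GiB)."""
    if volume_gib <= 170:
        return 125
    return 250 * math.ceil(volume_gib / 3000)

# Reproduces the rows of the table above, for example:
assert gp3_baseline_iops(1025) == 3075
assert gp3_baseline_iops(24000) == 72000
assert gp3_baseline_throughput_mibps(3001) == 500
assert gp3_baseline_throughput_mibps(9001) == 1000
```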

Do you need additional performance?

In the majority of use cases, you don’t need to provision additional IOPS and throughput; the gp3 baseline performance should suffice. You can use Amazon CloudWatch metrics to understand your usage patterns, and if you observe that the current IOPS or throughput limits are bottlenecking your indexing and query performance, you should provision additional performance. For more information, refer to EBS volume metrics.

Conclusion

This post explains how OpenSearch Service general purpose SSD gp3 volumes can significantly reduce monthly storage and instance costs, making them more cost-effective than gp2 volumes. Migrating to gp3 volumes with the same size and performance configuration as gp2 is the quickest and simplest way to reduce costs. Additionally, consider reducing instance costs by taking advantage of gp3’s support for denser storage per data node.

For more details, check out Amazon OpenSearch Service pricing and Configuration API reference for Amazon OpenSearch Service.


About the author

Siddhant Gupta is a Sr. Technical Product Manager at Amazon Web Services based in Hyderabad, India. Siddhant has been with Amazon for over five years and is currently working with the OpenSearch Service team, helping with new Region launches, pricing strategy, and bringing EC2 and EBS innovations to OpenSearch Service customers. He is passionate about analytics and machine learning. In his free time, he loves traveling, fitness activities, spending time with his family, and reading non-fiction books.

2022 Canadian Centre for Cyber Security Assessment Summary report available with 12 additional services

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/2022-canadian-centre-for-cyber-security-assessment-summary-report-available-with-12-additional-services/

We are pleased to announce the availability of the 2022 Canadian Centre for Cyber Security (CCCS) assessment summary report for Amazon Web Services (AWS). With the addition of 12 AWS services, this assessment brings the total to 132 AWS services and features assessed in the Canada (Central) AWS Region. A copy of the summary assessment report is available for review and download on demand through AWS Artifact.

The full list of services in scope for the CCCS assessment is available on the AWS Services in Scope page. The 12 new services are:

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decisions to use cloud services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium cloud security profile (previously referred to as GC’s Protected B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT security risk management: A lifecycle approach). As of November 2022, 132 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the CCCS Medium cloud security profile. Meeting the CCCS Medium cloud security profile is required to host workloads that are classified up to and including the medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and reassesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada, and on customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope for CCCS Assessment page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. As always, we value your feedback and questions; you can reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.


Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, ecommerce, and utilities industries.

AWS Security Profile: Sarah Currey, Delivery Practice Manager

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-sarah-currey-delivery-practice-manager/

In the weeks leading up to AWS re:Invent 2022, I’ll share conversations I’ve had with some of the humans who work in AWS Security who will be presenting at the conference, and get a sneak peek at their work and sessions. In this profile, I interviewed Sarah Currey, Delivery Practice Manager in World Wide Professional Services (ProServe).

How long have you been at AWS and what do you do in your current role?

I’ve been at AWS since 2019, and I’m a Security Practice Manager who leads a Security Transformation practice dedicated to helping customers build on AWS. I’m responsible for leading enterprise customers through a variety of transformative projects that involve adopting AWS services to help achieve and accelerate secure business outcomes.

In this capacity, I lead a team of awesome security builders, work directly with the security leadership of our customers, and—one of my favorite aspects of the job—collaborate with internal security teams to create enterprise security solutions.

How did you get started in security?

I come from a non-traditional background, but I’ve always had an affinity for security and technology. I started off learning HTML back in 2006 for my Myspace page (blast from the past, I know) and in college, I learned about offensive security by dabbling in penetration testing. I took an Information Systems class my senior year, but otherwise I wasn’t exposed to security as a career option. I’m from Nashville, TN, so the majority of people I knew were in the music or healthcare industries, and I took the healthcare industry path.

I started my career working at a government affairs firm in Washington, D.C. and then moved on to a healthcare practice at a law firm. I researched federal regulations and collaborated closely with staffers on Capitol Hill to educate them about controls to protect personal health information (PHI), and helped them determine strategies to adhere to security, risk, and compliance frameworks such as HIPAA and NIST SP 800-53. Government regulations can lag behind technology, which creates interesting problems to solve. But in 2015, I was assigned to a project that was planned to last 20 years, and I decided I wanted to move into an industry that operated at a faster pace—and there was no better place than tech.

From there, I moved to a startup where I worked as a Project Manager responsible for securely migrating customers’ data to the software as a service (SaaS) environment they used and accelerating internal adoption of the environment. I often worked with software engineers and asked, “why is this breaking?” so they started teaching me about different aspects of the service. I interacted regularly with a female software engineer who inspired me to start teaching myself to code. After two years of self-directed learning, I took the leap and quit my job to do a software engineering bootcamp. After the course, I worked as a software engineer where I transformed my security assurance skills into the ability to automate security. The cloud kept coming up in conversations around migrations, so I was curious and achieved software engineering and AWS certifications, eventually moving to AWS. Here, I work closely with highly regulated customers, such as those in healthcare, to advise them on using AWS to operate securely in the cloud, and work on implementing security controls to help them meet frameworks like NIST and HIPAA, so I’ve come full circle.

How do you explain your job to non-technical friends and family?

The general public isn’t sure how to define the cloud, and that’s no different with my friends and family. I get questions all the time like “what exactly is the cloud?” Since I love storytelling, I use real-world examples to relate it to their profession or hobbies. I might talk about the predictive analytics used by the NFL or, for my friends in healthcare, I talk about securing PHI.

However, my favorite general example is describing the AWS Shared Responsibility Model as a house. Imagine a house—AWS is responsible for security of the house. We’re responsible for the physical security of the house, and we build a fence, we make sure there is a strong foundation and secure infrastructure. The customer is the tenant—they can pay as they go, leave when they need to—and they’re responsible for running the house and managing the items, or data, in the house. So it’s my job to help the customer implement new ideas or technologies in the house to help them live more efficiently and securely. I advise them on how to best lock the doors, where to store their keys, how to keep track of who is coming in and out of the house with access to certain rooms, and how to protect their items in the house from other risks.

And for my friends that love Harry Potter, I just say that I work in the Defense Against the Dark Arts.

What are you currently working on that you’re excited about?

There are a lot of things in different spaces that I’m excited about.

One is that I’m part of a ransomware working group to provide an offering that customers can use to prepare for a ransomware event. Many customers want to know what AWS services and features they can use to help them protect their environments from ransomware, and we take real solutions that we’ve used with customers and scale them out. Something that’s really cool about Professional Services is that we’re on the frontlines with customers, and we get to see the different challenges and how we can relate those back to AWS service teams and implement them in our products. These efforts are exciting because they give customers tangible ways to secure their environments and workloads. I’m also excited because we’re focusing not just on the technology but also on the people and processes, which sometimes get forgotten in the technology space.

I’m a huge fan of cross-functional collaboration, and I love working with all the different security teams that we have within AWS and in our customer security teams. I work closely with the Amazon Managed Services (AMS) security team, and we have some very interesting initiatives with them to help our customers operate more securely in the cloud, but more to come on that.

Another exciting project that’s close to my heart is the Inclusion, Diversity, and Equity (ID&E) workstream for the U.S. It’s really important to me to not only have diversity but also inclusion, and I’m leading a team that is helping to amplify diverse voices. I created an Amplification Flywheel to help our employees understand how they can better amplify diverse voices in different settings, such as meetings or brainstorming sessions. The flywheel helps illustrate a process in which 1) an idea is voiced by an underrepresented individual, 2) an ally then amplifies the idea by repeating it and giving credit to the author, 3) others acknowledge the contribution, 4) this creates a more equitable workplace, and 5) the flywheel continues where individuals feel more comfortable sharing ideas in the future.

Within this workstream, I’m also thrilled about helping underrepresented people who already have experience speaking but who may be having a hard time getting started with speaking engagements at conferences. I do mentorship sessions with them so they can get their foot in the door and amplify their own voice and ideas at conferences.

You’re presenting at re:Invent this year. Can you give us a sneak peek of your session?

I’m partnering with Johnny Ray, who is an AMS Senior Security Engineer, to present a session called SEC203: Revitalize your security with the AWS Security Reference Architecture. We’ll be discussing how the AWS Security Reference Architecture (AWS SRA) can be used as a holistic guide for deploying the full complement of AWS security services in a multi-account environment. The AWS SRA is a living document that we continuously update to help customers revitalize their security best practices as they grow, scale, and innovate.

What do you hope attendees take away from your session?

Technology is constantly evolving, and the security space is no exception. As organizations adopt AWS services and features, it’s important to understand how AWS security services work together to improve your security posture. Attendees will be able to take away tangible ways to:

  • Define the target state of your security architecture
  • Review the capabilities that you’ve already designed and revitalize them with the latest services and features
  • Bootstrap the implementation of your security architecture
  • Start a discussion about organizational governance and responsibilities for security

Johnny and I will also provide attendees with a roadmap at the end of the session that gives customers a plan for the first week after the session, one to three months after the session, and six months after the session, so they have different action items to implement within their organization.

You’ve written about the importance of ID&E in the workplace. In your opinion, what’s the most effective way leaders can foster an inclusive work environment?

I’m super passionate about ID&E, because it’s really important and it makes businesses more effective and a better place to work as a whole. My favorite Amazon Leadership Principle is Earn Trust. It doesn’t matter if you Deliver Results or Insist on the Highest Standards if no one is willing to listen to you because you don’t have trust built up. When it comes to building an inclusive work environment, a lot of earning trust comes from the ability to have empathy, vulnerability, and humility—being able to admit when you made a mistake—with your teammates as well as with your customers. I think we have a unique opportunity at AWS to work closely with customers and learn about what they’re doing and their best practices with ID&E, and share our best practices.

We all make mistakes, we’re all learning, and that’s okay, but having the ability to admit when you’ve made a mistake, apologize, and learn from it makes a much better place to work. When it comes to intent versus impact, I love to give the example—going back to storytelling—of walking down the street and accidentally bumping into someone, causing them to drop their coffee. You didn’t intend to hurt them or spill their coffee; your intent was to keep walking down the street. However, the impact that you had was maybe they’re burnt now, maybe their coffee is all down their clothes, and you had a negative impact on them. Now, you want to apologize and maybe look up more while you’re walking and be more observant of your surroundings. I think this is a good example because sometimes when it comes to ID&E, it can become a culture of blame and that’s not what we want to do—we want to call people in instead of calling them out. I think that’s a great way to build an inclusive team.

You can have a diverse workforce, but if you don’t have inclusion and you’re not listening to people who are underrepresented, that’s not going to help. You need to make sure you’re practicing transformative leadership and truly wanting to change how people behave and think when it comes to ID&E. You want to make sure people are more kind to each other, rather than only checking the box on arbitrary diversity goals. It’s important to be authentic and curious about how you learn from others and their experiences, and to respect them and implement that into different ideas and processes. This is important to make a more equitable workplace.

I love learning from different ID&E leaders like Camille Leak, Aiko Bethea, and Brené Brown. They are inspirational to me because they all approach ID&E with vulnerability and tackle the uncomfortable.

What’s the thing you’re most proud of in your career?

I have two different things—one from a technology standpoint and one from a personal impact perspective.

On the technology side, one of the coolest projects I’ve been on is Change Healthcare, which is an independent healthcare technology company that connects payers, providers, and patients across the United States. They have an important job of protecting a lot of PHI and personally identifiable information (PII) for American citizens. Change Healthcare needed to quickly migrate its ClaimsXten claims processing application to the cloud to meet the needs of a large customer, and it sought to move an internal demo and training application environment to the cloud to enable self-service and agility for developers. During this process, they reached out to AWS, and I took the lead role in advising Change Healthcare on security and how they were implementing their different security controls and technical documentation. I led information security meetings on AWS services, because the processes were new to a lot of the employees who were previously working in data centers. Through working with them, I was able to cut down their migration hours by 58% by using security automation and reduce the cost of resources, as well. I oversaw security for 94 migration cutovers where no security events occurred. It was amazing to see that process and build a great relationship with the company. I still meet with Change Healthcare employees for lunch even though I’m no longer on their projects. For this work, I was awarded the “Above and Beyond the Call of Duty” award, which only three Amazonians get a year, so that was an honor.

From a personal impact perspective, it was terrifying to quit my job and completely change careers, and I dealt with a lot of imposter syndrome—which I still have every day, but I work through it. Something impactful that resulted from this move was that it inspired a lot of people in my network from non-technical backgrounds, especially underrepresented individuals, to dive into coding and pursue a career in tech. Since completing my bootcamp, I’ve had more than 100 people reach out to me to ask about my experience, and about 30 of them quit their job to do a bootcamp and are now software engineers in various fields. So, it’s really amazing to see the life-changing impact of mentoring others.

You do a lot of volunteer work. Can you tell us about the work you do and why you’re so passionate about it?

Absolutely! The importance of giving back to the community cannot be understated.

Over the last 13 years, I have fundraised, volunteered, and advocated in building over 40 different homes throughout the country with Habitat for Humanity. One of my most impactful volunteer experiences was in 2013. I volunteered with a nonprofit called Bike & Build, where we cycled across the United States to raise awareness and money for affordable housing efforts. From Charleston, South Carolina to Santa Cruz, California, the team raised over $158,000, volunteered 3,584 hours, and biked 4,256 miles over the course of three months. This was such an incredible experience to meet hundreds of people across the country and help empower them to learn about affordable housing and improve their lives. It also tested me so much emotionally, mentally, and physically that I learned a lot about myself in the process. Additionally, I was selected by Gap, Inc. to participate in an international Habitat build in Antigua, Guatemala in October of 2014.

I’m currently on the Associate Board of Gilda’s Club, which provides free cancer support to anyone in need. Corporate social responsibility is a passion of mine, and so I helped organize AWS Birthday Boxes and Back to School Bags volunteer events with Gilda’s Club of Middle Tennessee. We purchased and assembled birthday and back-to-school boxes for children whose caregiver was experiencing cancer, so their caregiver would have one less thing to worry about and make sure the child feels special during this tough time. During other AWS team offsites, I’ve organized volunteering through Nashville Second Harvest food bank and created 60 shower and winter kits for individuals experiencing homelessness through ShowerUp.

I also mentor young adult women and non-binary individuals with BuiltByGirls to help them navigate potential career paths in STEM, and I recently joined the Cyversity organization, so I’m excited to give back to the security community.

If you had to pick an industry outside of security, what would you want to do?

History is one of my favorite topics, and I’ve always gotten to know people by having an inquisitive mind. I love listening and asking curious questions to learn more about people’s experiences and ideas. Since I’m drawn to the art of storytelling, I would pick a career as a podcast host where I bring on different guests to ask compelling questions and feature different, rarely heard stories throughout history.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.


Sarah Currey

Sarah (she/her) is a Security Practice Manager with AWS Professional Services, who is focused on accelerating customers’ business outcomes through security. She leads a team of expert security builders who deliver a variety of transformative projects that involve adopting AWS services and implementing security solutions. Sarah is an advocate of mentorship and passionate about building an inclusive, equitable workplace for all.

How Etleap and Amazon Redshift Serverless optimize costs for ETL

Post Syndicated from Caius Brindescu original https://aws.amazon.com/blogs/big-data/how-etleap-and-amazon-redshift-serverless-optimize-costs-for-etl/

Amazon Redshift Serverless lets you avoid managing infrastructure while only paying for what you use. Etleap provides data integration software that is natively built on AWS. It’s an AWS Advanced Technology Partner with the AWS Data & Analytics Competency and Amazon Redshift Service Ready designation.

In this post, we share how you can minimize the usage of resources for some workload patterns and maximize savings while seamlessly managing data pipelines. We illustrate an example of how Redshift Serverless and Etleap’s load synchronization feature can reduce active Redshift Serverless time, further optimizing extract, transform, and load (ETL) costs.

Introduction to Redshift Serverless

Redshift Serverless makes it easy to run and scale analytics in seconds without the need to set up and manage data warehouse clusters. With Redshift Serverless, you pay for the compute only when the data warehouse is in use. This is ideal when it’s difficult to predict compute needs, such as with variable workloads, periodic workloads with idle time, and steady-state workloads with spikes. As your demand evolves with new workloads and more concurrent users, Redshift Serverless automatically provisions the right compute resources, and your data warehouse scales seamlessly and automatically.

You can create a Redshift Serverless data warehouse either using the default settings or custom settings. Redshift Serverless creates a default workgroup and associates that to the default namespace. You can also create multiple Redshift Serverless endpoints per AWS account and Region using namespaces and workgroups.

A namespace is a collection of database objects and users, with properties such as database name and password, permissions, and encryption and security. The following screenshot shows an example of a namespace configuration on the Redshift Serverless console.

Namespace-Amazon Redshift Serverless

A workgroup is a collection of compute resources, which includes network and security settings. Workgroup configuration allows you to create a private or public serverless endpoint that you can use to connect with your applications. The following screenshot shows an example workgroup on the Redshift Serverless console.

Workgroup - Amazon Redshift Serverless

When the Redshift Serverless endpoint is available, choose Query data to launch the Amazon Redshift Query Editor v2 to create database objects, load data, and analyze and visualize data. You can also connect to Redshift Serverless endpoints using your preferred SQL client tools via Amazon Redshift JDBC/ODBC drivers.

With Redshift Serverless, you pay separately for the compute and storage you use. Compute capacity is measured in Redshift Processing Units (RPUs), and you pay for the workloads in RPU-hours with a minimum charge of 60 seconds, metered on a per-second basis. Data lake queries are also part of the same RPU-hours, and Redshift Serverless doesn’t charge separately for the per-TB based pricing of Amazon Redshift Spectrum. The default base capacity is 128 RPUs, but you can adjust it from 32 RPUs to 512 RPUs in units of 8 using the Redshift Serverless console. For storage, you pay for data stored in Amazon Redshift-managed storage and storage used for manual snapshots, similar to what you would pay with Amazon Redshift provisioned RA3 instances.

To control your costs, you can specify usage limits and define actions that Amazon Redshift automatically takes if those limits are reached. You can specify usage limits in RPU-hours, associated with a daily, weekly, or monthly duration. Setting higher usage limits can improve the overall throughput of the system, especially for workloads that need to handle high concurrency while maintaining consistently high performance.
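As a sketch of how base capacity and usage limits can be managed programmatically, the following example uses Boto3 with the Redshift Serverless API. The workgroup name and the RPU and RPU-hour values are illustrative placeholders, and you should verify parameter names and allowed values against the current API reference before relying on them.

```python
# Minimal sketch: adjust base capacity and add a daily RPU-hour usage limit
# for a Redshift Serverless workgroup. Names and values are placeholders.
import boto3

client = boto3.client("redshift-serverless")

# Lower the base capacity from the 128 RPU default to 32 RPUs (adjustable in units of 8).
client.update_workgroup(
    workgroupName="default",   # hypothetical workgroup name
    baseCapacity=32,
)

# Cap daily serverless compute usage and log when the limit is breached.
workgroup_arn = client.get_workgroup(workgroupName="default")["workgroup"]["workgroupArn"]
client.create_usage_limit(
    resourceArn=workgroup_arn,
    usageType="serverless-compute",
    amount=100,                # RPU-hours per period; illustrative value
    period="daily",
    breachAction="log",
)
```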

Why Etleap customers need Redshift Serverless

Etleap gives customers robust and flexible pipelines without the hassle of coding and managing infrastructure. Redshift Serverless has a similar benefit, letting you run Amazon Redshift without worrying about provisioning and maintaining a data warehouse.

With the close Etleap-AWS integration, you can get started working with multiple data sources in Redshift Serverless in minutes.

Redshift Serverless can also reduce users’ costs because it automatically scales data warehouse capacity up and down to match usage and only charges when the serverless instance is active. ETL workloads are often batch-based and characterized by spikes, so the dynamic scaling of Redshift Serverless reduces unnecessary costs.

The following diagram illustrates this solution architecture.

Etleap Integration with Amazon Redshift Serverless

Etleap uses AWS Database Migration Service (AWS DMS), Amazon EMR, and Amazon Simple Storage Service (Amazon S3) to process data from databases, files, applications, and streams into Redshift Serverless.

Optimize costs for Redshift Serverless

One of the main sources of cost savings when using Redshift Serverless comes from its auto-pausing feature. When a Redshift Serverless instance is idle, it will auto-pause and you aren’t charged during this period of inactivity.

However, high-frequency ETL pipelines (such as those from streams or CDC sources) can constantly resume the Redshift Serverless instance, negating the cost benefit. To maximize the advantages of the auto-pausing feature of Redshift Serverless, Etleap provides the option of load synchronization. As shown in the following figure, this reduces the number of load batches, thereby lowering active Redshift Serverless instance time and cost.

Etleap Load Synchronization

It sometimes makes sense to maximize the frequency of data ingestion, but not all use cases justify the higher cost of an always-on Amazon Redshift instance. Etleap users can set their load frequency at a cost-efficient once-per-hour or as frequently as every 5 minutes.

Amazon Redshift users typically run some SQL transformations after data is loaded in the warehouse. Etleap’s models feature lets you define the SQL transformations and their dependencies and control when these transformations are run. As with data loading, however, if these aren’t designed thoughtfully, there is a risk that models will trigger updates that unnecessarily wake up an idle Redshift Serverless instance, negating the cost savings of the Redshift Serverless auto-pausing feature.

To avoid this, Etleap schedules the models to update immediately after all the dependent tables have been updated. This maximizes the instance usage while it’s awake and allows it to pause when the loads and updates have completed.

Cost savings example

Let’s illustrate the cost savings benefits of Redshift Serverless by means of an example. A customer has set a 1-hour load synchronization schedule and has 100 pipelines and 10 models. Although by default Redshift Serverless has a provisioned base capacity of 128 RPUs, a provisioned base capacity of 32 RPUs is sufficient for the load requirements of this example. A typical average load time for Etleap customers into Amazon Redshift is 6 seconds. In Etleap, we perform a maximum of five loads at a time to avoid overloading the Redshift Serverless instance.

Here is an example of how the sequence would work for the pipelines:

  1. When the hourly schedule triggers, Etleap begins the extraction and transformation of source data for all pipelines with new data to process.
  2. After all the pipelines have finished extraction and transformation, Etleap begins to load the data into Amazon Redshift. This resumes the serverless instance. At an average of 6 seconds per load and five loads running in parallel, it takes 120 seconds to load all the pipelines (100 / 5 pipeline cycles * 6 seconds each).
  3. When the load is complete, Etleap triggers the model updates. A typical model in Etleap takes about 130 seconds to update. As with loads, Etleap limits models to five simultaneous updates to reduce the load on the Redshift Serverless instance. Therefore, updating all 10 models takes 260 seconds of total instance run time (130 seconds * 10/5 model cycles).
  4. At this point, you’re being charged for 380 seconds of active workload, and Redshift Serverless will become idle after some time.

Additionally, Etleap runs daily vacuum operations on applicable tables to minimize storage and improve query efficiency. The length of this process depends on the tables and the number of updates and deletes. For a customer with this amount of pipeline volume, 20 minutes is a typical length of time to vacuum the tables, adding that much daily runtime for the instance.

This results in a total daily runtime of 172 minutes ((380 seconds * 24 daily cycles / 60) + 20 minutes), which translates into a cost of $34.40 per day for a 32 RPU serverless instance. This is 88% lower cost than a comparable Amazon Redshift provisioned environment without the benefits of Etleap and Redshift Serverless: an always-on provisioned Amazon Redshift cluster with similar performance (1 year reserved instance pricing for 16 ra3.xlplus nodes running 24 hours/day).
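To make the arithmetic easy to rerun with your own pipeline counts, here is a minimal sketch of the calculation above. The per-RPU-hour rate is an assumption back-calculated from the post’s $34.40/day figure for 32 RPUs, so substitute the current Redshift Serverless price for your Region.

```python
# Minimal sketch of the daily-cost estimate described above.
# The RPU-hour rate is an assumption (back-calculated from $34.40/day at 32 RPUs);
# check current Redshift Serverless pricing for your Region.
PIPELINES, MODELS = 100, 10
PARALLELISM = 5                    # Etleap runs at most five loads/updates at a time
LOAD_SECONDS, MODEL_SECONDS = 6, 130
CYCLES_PER_DAY = 24                # 1-hour load synchronization schedule
VACUUM_MINUTES_PER_DAY = 20
BASE_CAPACITY_RPUS = 32
USD_PER_RPU_HOUR = 0.375           # assumed rate, see note above

load_time = (PIPELINES / PARALLELISM) * LOAD_SECONDS      # 120 seconds per cycle
model_time = (MODELS / PARALLELISM) * MODEL_SECONDS       # 260 seconds per cycle
active_minutes = (load_time + model_time) * CYCLES_PER_DAY / 60 + VACUUM_MINUTES_PER_DAY

daily_cost = (active_minutes / 60) * BASE_CAPACITY_RPUS * USD_PER_RPU_HOUR
print(f"{active_minutes:.0f} active minutes/day, ~${daily_cost:.2f}/day")  # ~172 min, ~$34.40
```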

Other ETL optimizations on Etleap using Redshift Serverless

Etleap has updated its ETL solution to natively support Redshift Serverless, so you can continue to seamlessly ingest diverse data sources.

Redshift Serverless offers new system views for tracking and managing ingestion, and Etleap uses these views to track ingestion loads and vacuuming operations natively in its platform. For example, Etleap uses sys_query_history to determine which loads are in progress or complete, which helps avoid double-loading a batch.

Redshift Serverless automatically initiates optimizations such as sort and vacuum in the background and doesn’t charge for these automatic optimizations. As a best practice, after load synchronization, Etleap periodically runs the vacuum function on applicable tables, which reduces storage and improves query performance. Etleap uses the vacuum_sort_benefit column in svv_table_info, which provides statistics for each table and indicates which tables would benefit from vacuuming.
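As an illustration of how such a check can be run against a Redshift Serverless workgroup, the following sketch uses the Amazon Redshift Data API via Boto3 to list tables with a high vacuum_sort_benefit. The workgroup and database names and the threshold are placeholders, and this is a generic example rather than a description of Etleap’s internal implementation.

```python
# Minimal sketch: find tables that would benefit from VACUUM on a Redshift Serverless
# workgroup, using the Redshift Data API. Workgroup/database names and the threshold
# are placeholders; this is not Etleap's actual implementation.
import time
import boto3

client = boto3.client("redshift-data")

sql = """
    SELECT "table", vacuum_sort_benefit
    FROM svv_table_info
    WHERE vacuum_sort_benefit > 5
    ORDER BY vacuum_sort_benefit DESC;
"""

stmt = client.execute_statement(
    WorkgroupName="default",   # hypothetical Redshift Serverless workgroup
    Database="dev",            # hypothetical database
    Sql=sql,
)

# Poll until the statement finishes, then print the candidate tables.
while client.describe_statement(Id=stmt["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

for row in client.get_statement_result(Id=stmt["Id"])["Records"]:
    print(row)
```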

Summary

In this post, we described how Redshift Serverless frees you from managing data warehouse infrastructure and reduces costs. In particular, we illustrated a data integration pattern in which Etleap can ensure further cost savings through its load synchronization feature by choosing a cost-efficient once-per-hour load frequency. Although this is an optimal solution for use cases where you prefer cost efficiency over real-time data insights, Etleap also allows you to set the load frequency as low as 5 minutes for use cases where near-real-time data insights are important.

Start using Redshift Serverless to run and scale analytics without having to manage data warehouse infrastructure, and take advantage of further cost savings through Etleap’s load synchronization feature. To get started with Etleap, start a free trial or request a tailored demo.


About the Authors

Caius Brindescu is an engineer at Etleap with over 4 years of experience in developing ETL software. In addition to development work, he helps customers make the most out of Etleap and Amazon Redshift. He holds a PhD from Oregon State University and one AWS certification (Big Data – Specialty).

Maneesh Sharma is a Senior Database Engineer at AWS with more than a decade of experience designing and implementing large-scale data warehouse and analytics solutions. He collaborates with various Amazon Redshift Partners and customers to drive better integration.

Sathisan Vannadil is a Senior Partner Solutions Architect at Amazon Web Services (AWS). His primary focus is on helping independent software vendor (ISV) partners design and build solutions at scale on AWS. Prior to AWS, Sathisan held diverse technical positions and has over 20 years of experience in the field of data and analytics.

Fall 2022 SOC reports now available with 154 services in scope

Post Syndicated from Andrew Najjar original https://aws.amazon.com/blogs/security/fall-2022-soc-reports-now-available-with-154-services-in-scope/

At Amazon Web Services (AWS), we’re committed to providing customers with continued assurance over the security, availability, and confidentiality of the AWS control environment. We’re proud to deliver the Fall 2022 System and Organizational Controls (SOC) 1, 2, and 3 reports, which cover April 1–September 30, 2022, to support our customers’ confidence in AWS services.

AWS has also updated the associated infrastructure supporting our in-scope products and services to reflect new edge locations, AWS Wavelength zones, and AWS Local Zones.

The Fall 2022 SOC reports include an additional seven services in scope, for a new total of 154 services. See the full list on our Services in Scope by Compliance Program page.

The following are the additional seven services now in scope for the Fall 2022 SOC reports:

Customers can download the Fall 2022 SOC reports through AWS Artifact in the AWS Management Console. You can also download the SOC 3 report as a PDF file from AWS.

AWS strives to bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. If there are additional AWS services that you would like to see added to the scope of our SOC reports (or other compliance programs), reach out to your AWS representatives.

As always, we value your feedback and questions. Feel free to reach out to the team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Andrew Najjar

Andrew is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS and has 8 years of experience in security assurance. Andrew holds a master’s degree in information systems and a bachelor’s degree in accounting from Indiana University. He is a CPA and an AWS Certified Solutions Architect – Associate.

Ryan Wilks

Ryan is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Ryan has 11 years of experience in information security and holds ITIL, CISM and CISA certifications.

Nathan Samuel

Nathan is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nathan has a Bachelor of Commerce degree from the University of the Witwatersrand, South Africa, has 17 years of experience in security assurance, and holds the CISA, CRISC, CGEIT, CISM, CDPSE, and Certified Internal Auditor certifications.

Fall 2022 SOC 2 Type 2 Privacy report now available

Post Syndicated from Nimesh Ravasa original https://aws.amazon.com/blogs/security/fall-2022-soc-2-type-2-privacy-report-now-available/

Your privacy considerations are at the core of our compliance work at Amazon Web Services (AWS), and we are focused on the protection of your content while using AWS services.

We are happy to announce that our Fall 2022 SOC 2 Type 2 Privacy report is now available. The report provides a third-party attestation of our system and the suitability of the design of our privacy controls. The SOC 2 Privacy Trust Service Criteria (TSC), developed by the American Institute of Certified Public Accountants (AICPA), establishes the criteria for evaluating controls that relate to how personal information is collected, used, retained, disclosed, and disposed of. For more information about our privacy commitments supporting the SOC 2 Type 2 report, see the AWS Customer Agreement.

The scope of the Fall 2022 SOC 2 Type 2 Privacy report includes information about how we handle the content that you upload to AWS, and how that content is protected across the services and locations that are in scope for the latest AWS SOC reports. AWS customers can download the SOC 2 Type 2 Privacy report through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Feel free to reach out to the compliance team through the AWS Compliance Contact Us page. If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Nimesh Ravasa

Nimesh is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Nimesh has 14 years of experience in information security and holds CISSP, CISA, PMP, CSX, AWS Solution Architect – Associate, and AWS Security Specialty certifications.

Emma Zhang

Emma is a Compliance Program Manager at Amazon Web Services. She leads multiple process improvement projects across multiple compliance programs within AWS. Emma has 8 years of experience in risk management, IT risk assurance, and technology risk advisory.

Brownell Combs

Brownell is a Compliance Program Manager at Amazon Web Services. He leads multiple security and privacy initiatives within AWS. Brownell holds a Master of Science in Computer Science from the University of Virginia and a Bachelor of Science in Computer Science from Centre College. He has over 20 years of experience in information technology risk management and holds the CISSP, CISA, and CRISC certifications.

You can now assign multiple MFA devices in IAM

Post Syndicated from Liam Wadman original https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam/

At Amazon Web Services (AWS), security is our top priority, and configuring multi-factor authentication (MFA) on accounts is an important step in securing your organization.

Now, you can add multiple MFA devices to AWS account root users and AWS Identity and Access Management (IAM) users in your AWS accounts. This helps you to raise the security bar in your accounts and limit access management to highly privileged principals, such as root users. Previously, you could only have one MFA device associated with root users or IAM users, but now you can associate up to eight MFA devices of the currently supported types with root users and IAM users.

In this blog post, we review the current MFA features for IAM, share use cases for multiple MFA devices, and show you how to manage and sign in with the additional MFA devices for better resiliency and flexibility.

Overview of MFA for IAM

First, let’s recap some of the benefits and available MFA configurations for IAM.

The use of MFA is an important security best practice on AWS. With MFA, you have an additional layer of protection to help prevent unauthorized individuals from gaining access to your systems and data. MFA can help protect your AWS environments if a password associated with your root user or IAM user becomes compromised.

As a security best practice, AWS recommends that you avoid using root users or IAM users to manage access to your accounts. Instead, you should use AWS IAM Identity Center (successor to AWS Single Sign-On) to manage access to your accounts. Use root users only for the tasks that require them.

To help meet different customer needs, AWS supports three types of MFA devices for IAM: FIDO security keys, virtual authenticator applications, and time-based one-time password (TOTP) hardware tokens. You should select the device type that aligns with your security and operational requirements. You can associate different types of MFA devices with an IAM principal.

Use cases for multiple MFA devices

There are several use cases in which associating multiple MFA devices with an IAM principal is beneficial to the security and operational efficiency of your organization, such as the following:

  • In the event of a lost, stolen, or inaccessible MFA device, you can use one of the remaining MFA devices to access the account without performing the AWS account recovery procedure. If an MFA device is lost or stolen, it’s best practice to disassociate the lost or stolen device from the root users or IAM users that it’s associated with.
  • Geographically dispersed teams, or teams working remotely, can use hardware-based MFA to access AWS, without shipping a single hardware device or coordinating a physical exchange of a single hardware device between team members.
  • If the holder of an MFA device isn’t available, you can maintain access to your root users and IAM users by using a different MFA device associated with an IAM principal.
  • You can store additional MFA devices in a secure physical location, such as a vault or safe, while retaining physical access to another MFA device for redundancy.

How to manage multiple MFA devices in IAM

You can register up to eight MFA devices, in any combination of the currently supported MFA types, with your root users and IAM users.

To register an MFA device

  1. Sign in to the AWS Management Console and do the following:
    • For a root user, choose My Security Credentials.
    • For an IAM user, choose Security credentials.
  2. For Multi-factor authentication (MFA), choose Assign MFA device.
  3. Select the type of MFA device that you want to use and then choose Next.

With multiple MFA devices, you only need one MFA device to sign in to the console or to create a session through the AWS Command Line Interface (AWS CLI) as that principal.

You don’t need to make permissions changes in order for your organization to start taking advantage of multiple MFA devices. The root users and IAM users in your accounts that manage MFA devices today can use their existing IAM permissions to enable additional MFA devices.
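If you script IAM user administration with the AWS SDK, the following is a minimal boto3 sketch that registers an additional virtual MFA device for an IAM user and then lists the devices associated with that user. The user name, device name, and the two one-time codes are placeholders, and the sketch applies to IAM users (root user MFA is typically managed through the console).

import boto3

iam = boto3.client("iam")

# Create a new virtual MFA device; the response contains the Base32 seed and
# QR code PNG that you load into your authenticator app.
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="alice-backup-device")
serial_number = device["VirtualMFADevice"]["SerialNumber"]

# Associate the device with the user by supplying two consecutive codes
# generated by the authenticator app (placeholder values shown here).
iam.enable_mfa_device(
    UserName="alice",
    SerialNumber=serial_number,
    AuthenticationCode1="123456",
    AuthenticationCode2="789012",
)

# An IAM user can now have up to eight devices; list what is registered.
for mfa in iam.list_mfa_devices(UserName="alice")["MFADevices"]:
    print(mfa["SerialNumber"], mfa["EnableDate"])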

Changes to CloudTrail log entries

In support of this new feature, the identifier of the MFA device used is now added to the console sign-in events of root users and IAM users that use MFA. With these changes to AWS CloudTrail log entries, you can now view both the user and the MFA device used to authenticate to AWS. This provides better traceability and auditability for your accounts.

You can find this information in the MFAIdentifier field in CloudTrail, within additionalEventData. You don’t need to take action for this information to be logged. The following is a sample log from CloudTrail that includes the MFAIdentifier.

"additionalEventData": {
"LoginTo": "https://console.aws.amazon.com/console/home?state=hashArgs%23&isauthcode=true",
"MobileVersion": "No",
"MFAIdentifier": "arn:aws:iam::111122223333:mfa/root-account-mfa-device",
"MFAUsed": "YES"
}

The identifier of the MFA device used for AWS CLI sessions created with the sts:GetSessionToken action is logged in the requestParameters field.

    "requestParameters": {
"serialNumber": "arn:aws:iam::111122223333:mfa/root-account-mfa-device"
    }
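For reference, here is a hedged boto3 sketch of how such a session is requested; the MFA device ARN and token code are placeholders, and the call assumes you are signed in with an IAM user’s long-term credentials.

import boto3

sts = boto3.client("sts")

# Request temporary credentials, supplying the serial number (ARN) of one of
# the MFA devices registered for the user and a current one-time code.
response = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::111122223333:mfa/alice-backup-device",
    TokenCode="123456",
)

credentials = response["Credentials"]
print(credentials["AccessKeyId"], credentials["Expiration"])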

Sign-in experience with multiple MFA devices

In this section, we’ll show you how to sign in to the console as an IAM principal with multiple MFA devices associated with it.

To authenticate as an IAM principal with multiple MFA devices

  1. Sign in to the IAM console as an IAM principal.
  2. Authenticate with the principal’s password.
  3. For Additional verification required, select the type of MFA device that you want to use to continue authenticating, and then choose Next:
    Figure 1: MFA device selection when authenticating to the console as an IAM user or root user with different types of MFA devices available

  4. You will then be prompted to authenticate with the type of device that you selected.
    Figure 2: Prompt to authenticate with a FIDO security key

Conclusion

In this blog post, you learned about the new multiple MFA devices feature in IAM, and how to set up and manage multiple MFA devices in IAM. Associating multiple MFA devices with your root users and IAM users can make it simpler for you to manage access to them. This feature is available now for AWS customers, except for customers operating in AWS GovCloud (US) Regions or in the AWS China Regions. For more information about how to configure multiple MFA devices on your root users and IAM users, see the documentation on MFA in IAM. There is no extra charge to use MFA devices in IAM.

AWS offers a free MFA security key to eligible AWS account owners in the United States. To determine eligibility and order a key, see the ordering portal.

If you have questions, post them in the AWS Identity and Access Management re:Post topic or reach out to AWS Support.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Liam Wadman

Liam is a Solutions Architect with the Identity Solutions team. When he’s not building exciting solutions on AWS or helping customers, he’s often found in the hills of British Columbia on his mountain bike. Liam points out that you cannot spell LIAM without IAM.

Khaled Zaky

Khaled is a Sr. Product Manager – Technical at Amazon Web Services. He is responsible for AWS Identity products related to user authentication, such as sign-in security and multi-factor authentication products. Khaled has deep industry experience in cloud computing and product management. He is passionate about building customer-centric products that make it easier and more secure for customers to use the cloud. Outside of work, his interests include teaching product management, road cycling, Taekwondo, and DIY home renovations.

New ebook: CJ Moses’ Security Predictions in 2023 and Beyond

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/new-ebook-cj-moses-security-predictions-in-2023-and-beyond/

As we head into 2023, it’s time to think about lessons from this year and incorporate them into planning for the next year and beyond. At AWS, we continually learn from our customers, who influence the best practices that we share and the security services that we offer.

We heard that you’re looking for more prescriptive guidance, patterns, and trends that AWS Security is seeing in the industry, so I’m happy to share an ebook that I recently authored called Security Predictions in 2023 and Beyond. In this ebook, you’ll learn about what we think is next for the security industry and some high-level pointers on how you can stay ahead.

The last few years have brought a rapid acceleration of digital transformation and have forced organizations to manage disruptions to their business, such as the impact of remote work. As security and risk management leaders handle the recovery and renewal phases from the past two years, they must consider forward-looking strategic planning assumptions when allocating budget, selecting services, and prioritizing employee effort. We are now at an interesting point in time where it will take the right mix of technology and humans to shape the future of cybersecurity.

I encourage you to read through the ebook and consider how these predictions could influence strategic planning for your security program. Drop us your feedback in the comments or reach out to your account team with questions. You can also follow @AWSSecurityInfo for the latest from AWS Security, and you can find me at @mosescj58.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

CJ Moses

CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

Detect and block advanced bot traffic

Post Syndicated from Etienne Munnich original https://aws.amazon.com/blogs/security/detect-and-block-advanced-bot-traffic/

Automated scripts, known as bots, can generate significant volumes of traffic to your mobile applications, websites, and APIs. Targeted bots go a step further by focusing on specific website content, such as product availability or pricing.

Traffic from targeted bots can result in a poor user experience by competing against legitimate user traffic for website access to high-demand inventory, increasing business risk through chargebacks from fraudulent transactions, and increasing infrastructure costs.

In 2021, AWS released AWS WAF Bot Control for Common Bots to help you detect and control common bots. In October 2022, AWS released a new feature—AWS WAF Bot Control for Targeted Bots—that can help you detect and protect against bots that use advanced techniques to actively avoid detection.

In this post, I provide an overview of Bot Control for Targeted Bots and show you how to enable Bot Control to detect and block both common and targeted bots.

Overview of Bot Control for Targeted Bots

Bot Control for Targeted Bots provides sophisticated bot detection and mitigation by creating an intelligent baseline of traffic patterns. Bot Control for Targeted Bots uses browser fingerprinting techniques and client-side JavaScript interrogation methods to help protect your application from advanced bots that mimic human traffic patterns and actively try to evade detection.

Bot Control detects anomalies in usage patterns and provides new flexible mitigation options to isolate bad bots. These options include dynamic rate-limiting, challenge actions, and the ability to block based on labels and confidence scores.

With Bot Control for Targeted Bots, you can use bot protection rules to allow verified common bot traffic and, at the same time, to challenge unwanted advanced bot traffic. You can achieve both tasks from the same configuration page without making application or architectural changes. You can also configure fine-grained rule sets. For example, you can configure blocking actions for high-risk bots while allowing for exceptions for known IP ranges.

This release also introduces token domains, which is the ability to use the same AWS WAF web ACL across multiple domain names and Amazon CloudFront distributions to simplify client-side configuration. For example, you can use token domains to accept tokens that are generated by www.example.com for api.example.com and vice versa. In addition, you can now specify a resource path directly in the managed rule configuration, enabling you to require a token only for API calls, but not for cached content such as images.

Bot Control for Targeted Bots sends metrics to Amazon CloudWatch to identify application access trends. The metrics include the percentage of human traffic compared to bot traffic and the count of requests for sensitive web pages such as login and checkout pages. Each rule in Bot Control produces a unique label so that you can review CloudWatch metrics and filter logs to understand traffic patterns. By using these mechanisms, you can identify, isolate, and remediate operational issues.

Walkthrough

In this walkthrough, I will show you how to set up Bot Control for Targeted Bots to help protect a CloudFront distribution.

You will set up an AWS WAF web ACL with an AWS Managed Rule for Bot Control for Targeted Bots. The rule detects bots and then decides the appropriate action:

  • Dynamically rate limit verified bots – Based on traffic history, Bot Control creates an intelligent baseline and then applies rate limits to abnormally high volumes.
  • Enable the challenge action – You have a new option, called challenge, along with the already supported options of count, allow, block, and CAPTCHA. The challenge option initiates a challenge interstitial: Bot Control presents a challenge to the browser and creates a domain token when the challenge is resolved.

Set up Bot Control for Targeted Bots

In this section, I will show you how to set up Bot Control for Targeted Bots by creating a new web ACL or editing an existing one. A boto3 sketch of an equivalent API-based setup follows the console steps.

To set up Bot Control for Targeted Bots

  1. Open the AWS WAF console, and then do one of the following:
    • To create a new web ACL, choose Create a new web ACL.
    • To edit an existing web ACL, choose the name of the ACL.
  2. On the Rules tab, for the Add rules drop-down, select Add managed rule groups.
  3. Add a Bot Control rule set to the web ACL. Choose Edit to edit the rule.
  4. For Bot Control inspection level, select the inspection level for Bot Control. For this walkthrough, we chose Targeted to enable Bot Control for Targeted Bots.
    Figure 1: Bot Control – Select inspection level

  5. Review and select the actions to be taken on each category of bots detected, and then choose Save rule. In our example, we set allow, challenge, and count rules for the categories, as shown in Figure 2.
    Figure 2: Bot Control – Select actions for each category

    You can select different actions for each category based on your application security needs:

    • Allow: Allows the request to be sent to a protected resource.
    • Block: Blocks the request, returning an HTTP 403 (Forbidden) response.
    • Count: Allows the request to be sent to the protected resource while counting detections. The count shows you bot activity that is occurring without blocking or challenging. When you turn on rules for the first time, this information can help you see what the detections are, before you change the actions.
    • CAPTCHA and Challenge: Use CAPTCHA puzzles and silent challenges with tokens to track successful client responses.
  6. In this example, you will configure a scope-down statement to apply Bot Control to a given URI path only.

    On the same page as the previous step, you can add a scope-down statement so that you incur Targeted Bots charges only for the requests where you need protection. There are more examples of how to use scope-down statements in our documentation.

    Select Enable scope-down statement and configure the rule to inspect the URI path, as shown in Figure 3.

    Figure 3: Bot Control – Add the scope-down statement

  7. To add domain names to be protected, scroll to the bottom of the web ACL and choose Edit. In the Token domain list – optional section, enter the domain name or names to which the token verification applies. Tokens that are generated are valid for these domains.
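If you manage web ACLs with the AWS SDK rather than the console, the following boto3 sketch shows an equivalent setup under a few assumptions: the web ACL name, domain names, and URI path are placeholders, and the web ACL is created with CLOUDFRONT scope in us-east-1 (use REGIONAL scope and your own Region for an Application Load Balancer or Amazon API Gateway).

import boto3

# Web ACLs for CloudFront distributions use CLOUDFRONT scope and must be
# created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

bot_control_rule = {
    "Name": "AWS-AWSManagedRulesBotControlRuleSet",
    "Priority": 1,
    "OverrideAction": {"None": {}},
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesBotControlRuleSet",
            # The targeted inspection level enables Bot Control for Targeted Bots.
            "ManagedRuleGroupConfigs": [
                {"AWSManagedRulesBotControlRuleSet": {"InspectionLevel": "TARGETED"}}
            ],
            # Scope-down statement: only inspect requests whose URI path starts
            # with /api/ (placeholder path).
            "ScopeDownStatement": {
                "ByteMatchStatement": {
                    "SearchString": b"/api/",
                    "FieldToMatch": {"UriPath": {}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "STARTS_WITH",
                }
            },
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "bot-control-targeted",
    },
}

wafv2.create_web_acl(
    Name="targeted-bot-control-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[bot_control_rule],
    # Token domains let www.example.com and api.example.com share tokens.
    TokenDomains=["www.example.com", "api.example.com"],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "targeted-bot-control-acl",
    },
)

The AWSManagedRulesBotControlRuleSet configuration with InspectionLevel set to TARGETED is what turns on Bot Control for Targeted Bots; the scope-down statement and TokenDomains parameter mirror steps 6 and 7 above.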

Create the SDK link for the AWS WAF integration

In this section, I’ll show you how to find the AWS WAF SDK and add it to your application pages.

The token SDK manages the token authorization and includes the tokens in the requests that you send to your protected resources. By adding the SDK link to application pages, you can help ensure that the remote procedure calls by your client contain a valid token.

To add the SDK to your application pages

  1. In the AWS WAF console, in the left navigation pane, choose Application integration SDKs.
  2. Under JavaScript SDK, copy the provided code snippet. This code snippet allows for creation of the cryptographic token in the background when the application loads for the first time, providing a better customer experience.
  3. Add the code snippet to your pages. For example, paste the provided script code within the <head> section of the HTML.

When this integration is in place on your application’s pages, you can add AWS WAF rules in your web ACL to block requests that don’t contain a valid token. Replace the <Web ACL integration URL> with the provided integration URL from the AWS WAF console or copy the script tag from the console:

<script type="text/javascript" src="<Web ACL integration URL>/challenge.js" defer></script>

Figure 4 shows the SDK link for application pages.

Figure 4: Bot Control – Add SDK link to application pages

Review metrics

Now that you’ve set up the web ACL and application, you can use the bot visualization dashboard to review bot traffic patterns. Bot rules emit metrics corresponding to their labels, helping you identify which rule within the AWS Managed Rule for Bot Control for Targeted Bots initiated an action. You can also use these labels and rule actions to filter AWS WAF logs so that you can further examine a request.

To view AWS WAF metrics for the distribution

  1. In the AWS WAF console, in the left navigation pane, select Web ACLs.
  2. Select the web ACL that Bot Control is enabled on and then choose the Bot Control tab to view the metrics.
Figure 5: Bot Control – Review web ACL metrics

Best practices

In this section, I describe best practices for your Bot Control setup.

Set priority ordering of AWS WAF rules to help lower costs

You can set the priority of rule groups in a web ACL so that rules match requests more efficiently. AWS WAF takes the action associated with the first rule that a request matches. If the incoming traffic matches a broader rule first (such as an IP set rule at priority 1), the associated action is taken and the request is never analyzed by the Bot Control rule, so it doesn’t incur Bot Control request analysis fees. For example, the following list shows rules ranked in order from highest priority (1) to lowest priority (5):

  1. Use allow and deny lists – provide IP addresses to allow or deny
  2. AWS Managed Rule groups for IP reputation – block bots and other threats
  3. General rate limit – help prevent HTTP flood across the protected resource
  4. AWS WAF Bot Control rule group – scoped-down to exclude static content such as images
  5. Rate limit for login pages – scoped-down for specific URLs and HTTP POST methods

Figure 6 shows the prioritized rules in AWS WAF.

Figure 6: AWS WAF – Web ACL rule order

Use scope-down statements

You can use scope-down statements to limit the requests evaluated for a rule group. For example, a scope-down statement that excludes checking requests for static assets, such as images for a given URI and HTTP method (GET), can help reduce Bot Control costs.

Block requests without tokens

If a request’s token is absent or rejected, you can block that request. For example, you might want to block requests on login or payment processing pages. To block requests with a missing or rejected token, add a rule that runs after the Bot Control rule and blocks requests matching the following labels (a sketch of such a rule follows this list):

  • awswaf:managed:token:rejected – The request token is present but is either corrupt or has an expired challenge timestamp.
  • awswaf:managed:token:absent – The request doesn’t have a token.
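The following is a minimal sketch of such a rule as a boto3 rule dictionary that you could append to the Rules list of your web ACL (for example, the one in the earlier sketch); the rule name, priority, and metric name are placeholders.

# Blocks requests whose token is either rejected or absent. The rule must be
# evaluated after the Bot Control rule, so give it a higher Priority number
# (Bot Control uses Priority 1 in the earlier sketch).
block_missing_token_rule = {
    "Name": "block-absent-or-rejected-token",
    "Priority": 2,
    "Statement": {
        "OrStatement": {
            "Statements": [
                {"LabelMatchStatement": {"Scope": "LABEL", "Key": "awswaf:managed:token:rejected"}},
                {"LabelMatchStatement": {"Scope": "LABEL", "Key": "awswaf:managed:token:absent"}},
            ]
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-absent-or-rejected-token",
    },
}

You could also wrap the OrStatement in an AndStatement with a URI path match if you only want to enforce tokens on login or payment pages.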

Use SDK integration

After you add the token domains and the provided script to your application pages, you can add a rule to block requests that don’t have a token. Use of the SDK helps AWS WAF verify the client application with silent challenges and provides token acquisition and management. The SDK provides the full functionality of both AWS WAF Bot Control and AWS WAF Fraud Control, reducing the need for multiple SDKs if either or both rule groups are used in the web ACL.

Create CloudWatch alarms

You can add CloudWatch alarms to help you assess whether there is activity outside of the norm for your application. For example, you can monitor for a high number of token-absent metrics for a given time period.
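As a sketch, assuming the web ACL and blocking rule names from the earlier examples, the following boto3 call raises an alarm when that rule blocks an unusually high number of requests in a five-minute window. The names and threshold are placeholders; verify the exact metric dimensions that your web ACL reports in the CloudWatch console before relying on this, because regional web ACLs also report a Region dimension.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the block rule for absent/rejected tokens fires more than
# 1,000 times in five minutes.
cloudwatch.put_metric_alarm(
    AlarmName="high-token-absent-or-rejected-blocks",
    Namespace="AWS/WAFV2",
    MetricName="BlockedRequests",
    Dimensions=[
        {"Name": "WebACL", "Value": "targeted-bot-control-acl"},
        {"Name": "Rule", "Value": "block-absent-or-rejected-token"},
    ],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[],  # add an SNS topic ARN here to get notified
)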

Configure a billing alarm

To help you track costs, you can configure a billing alarm that sends an alert when you have exceeded the threshold for your expected costs.

Pricing and availability

Bot Control for Targeted Bots is available today in AWS Regions where AWS WAF is available, excluding AWS GovCloud (US) and China Regions. For information on pricing, see AWS WAF Pricing.

Conclusion

In this post, you learned how to use Bot Control for Targeted Bots to add visibility into bot activity on your website or applications. With Bot Control for common and targeted bots, you can detect, challenge, and block unwanted bot activity. Because Bot Control is customizable, you can tailor how you address legitimate bots while protecting against bots that use advanced techniques to actively avoid detection. For more information and to get started today, see AWS WAF Bot Control.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Etienne Munnich

Etienne works as an Edge Specialist Solutions Architect, assisting startups and companies of all sizes across Asia Pacific to improve their web applications’ performance and security. Based in Sydney, Australia, he has held many roles in tech, ranging from systems integrator to cloud engineer, project manager, and now solutions architect. Follow him on Twitter at @etiennemunnich.

Reduce cost and improve query performance with Amazon Athena Query Result Reuse

Post Syndicated from Theo Tolv original https://aws.amazon.com/blogs/big-data/reduce-cost-and-improve-query-performance-with-amazon-athena-query-result-reuse/

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run on datasets at petabyte scale. You can use Athena to query your S3 data lake for use cases such as data exploration for machine learning (ML) and AI, business intelligence (BI) reporting, and ad hoc querying.

It’s not uncommon for datasets in data lakes to update only daily, or at most a few times per day, yet queries running on these datasets may be repeated more frequently. Previously, all queries resulted in a data scan, even if the same query was repeated again. When the source data hasn’t changed, repeat queries run needlessly, leading to the same results with higher data scan costs and query latency. Wouldn’t it be better if the results of a recent query could be reused instead?

Query Result Reuse is a new feature available in Athena engine version 3 that makes it possible to reuse the results of a previous query. This can improve performance and reduce cost for frequently run queries, by skipping scanning the source data and instead returning a previously calculated result directly. With Query Result Reuse, you can tell Athena that you want to reuse results of a previous query run, with a maximum age setting that controls how recent a previous result has to be.

Athena automatically reuses any previous results that match your query and maximum age setting, or transparently runs the query again if no match is found. If you know that a dataset changes a few times per day, you can, for example, tell Athena to reuse results that are up to an hour old to avoid rerunning most queries, but still get new results when you run a query soon after new data has become available.

In this post, we demonstrate how to reduce cost and improve query performance with the new Query Result Reuse feature.

When should you use Query Result Reuse?

We recommend using Query Result Reuse for every query where the source data doesn’t change frequently. You can configure the maximum age of results to reuse per query, or use the default, which is 60 minutes. In certain cases where queries include non-deterministic functions such as RAND(), the query fetches fresh data from the input source even if the Query Result Reuse feature is enabled.

Query Result Reuse allows results to be shared among users in a workgroup, as long as they have access to the tables and data. This means Query Result Reuse can benefit not only a single user, but also other users in the workgroup who might be running the same queries. One example where this may be especially beneficial is when you have dashboards that are viewed by many users. The dashboard widgets run the same queries for all users, and are therefore accelerated by Query Result Reuse, when enabled.

Another example is if you have a dataset that is updated daily, and many users who all query the most recent data to create reports. Different people might run the same queries as part of their work; with Query Result Reuse, they can collectively avoid running the same query more than once, making everyone more productive and lowering overall cost by avoiding repeated scans of the same data.

Finally, if you have a historical dataset that is frequently queried, but never or very rarely updated, you can configure queries to reuse results that are up to 7 days old to maximize the chances of reusing results and avoid unnecessary costs.

How does Query Result Reuse work?

Query Result Reuse takes advantage of the fact that Athena writes query results to Amazon S3 as a CSV file. Before the introduction of Query Result Reuse, it was possible to reuse query results by reading these files directly. You could also use the ClientRequestToken parameter of the StartQueryExecution API to ensure queries are run only once, and subsequent runs return the same results. With Query Result Reuse, the process of reusing query results is easier and more versatile.

When Athena receives a query with Query Result Reuse enabled, it looks for a result for a query with the same query string that was run in the same workgroup. The query string has to be identical in order to match.

Query Result Reuse is enabled on a per query basis. When you run a query, you specify how old a result can be for it to be reused, from 1 minute up to 7 days. If the query has been run before, and a result exists that matches the request, it’s returned, otherwise the query is run and a new result is calculated. This new result is then available to be reused by subsequent queries.

You can run the query multiple times with different settings for how old a result you can accept. Results can be reused within the same workgroup, even if a different user ran the query previously.

Before a query result is reused, Athena does a few checks to make sure that the user is still allowed to see the results. It checks that the user has access to the tables involved in the query and permission to read the result file on Amazon S3.

There are some situations where query results can’t be reused, for example if the query uses non-deterministic functions, or has AWS Lake Formation fine-grained access controls enabled. These limitations are described in more detail later in this post.

Run queries with Query Result Reuse

In this section, we demonstrate how to run queries with the Query Result Reuse feature via the Athena API, the Athena console, and the JDBC and ODBC drivers.

Run queries using the Athena API

For applications that use the Athena API through the AWS Command Line Interface (AWS CLI) or the AWS SDKs, the StartQueryExecution API call now has the additional parameter ResultReuseConfiguration, where you can enable Query Result Reuse and specify the maximum age of results. For example, when using the AWS CLI, you can run a query with Query Result Reuse enabled as follows:

aws athena start-query-execution \
  --work-group "my_work_group" \
  --query-string "SELECT * FROM my_table LIMIT 10" \
  --result-reuse-configuration \
    "ResultReuseByAgeConfiguration={Enabled=true,MaxAgeInMinutes=60}"

The following code shows how to do this with the AWS SDK for Python:

import boto3

client = boto3.client('athena')
response = client.start_query_execution(
    WorkGroup='my_work_group',
    QueryString='SELECT * FROM my_table LIMIT 10',
    ResultReuseConfiguration={
        'ResultReuseByAgeConfiguration': {
            'Enabled': True,
            'MaxAgeInMinutes': 60
        }
    }
)

These examples assume that my_work_group uses Athena engine v3, that the workgroup has an output location configured, and that the AWS Region has been set in the AWS CLI configuration.

When a query result is reused, you can see in the statistics section of the response from the GetQueryExecution API call that no data was scanned and that results were reused:

{
    "QueryExecution": {
        …
        "Statistics": {
            "EngineExecutionTimeInMillis": 272,
            "DataScannedInBytes": 0,
            "TotalExecutionTimeInMillis": 445,
            "QueryQueueTimeInMillis": 143,
            "ServiceProcessingTimeInMillis": 30,
            "ResultReuseInformation": {
                "ReusedPreviousResult": true
            }
        }
    }
}
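Based on the response shape above, a small boto3 helper (the function name is illustrative) can tell you whether a given query execution reused a previous result:

import boto3

athena = boto3.client('athena')

def was_result_reused(query_execution_id: str) -> bool:
    # query_execution_id comes from the start_query_execution call shown earlier.
    stats = athena.get_query_execution(QueryExecutionId=query_execution_id)[
        'QueryExecution'
    ]['Statistics']
    reuse_info = stats.get('ResultReuseInformation', {})
    return reuse_info.get('ReusedPreviousResult', False)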

Run queries using the Athena console

When you run queries on the Athena console, Query Result Reuse is now enabled by default. You can enable and disable Query Result Reuse in the query editor. You can also choose the pen icon to change the maximum age of results. This setting applies to all queries run on the Athena console.

The following screenshot shows an example query run against AWS CloudTrail logs with Query Result Reuse enabled.

When we ran the query again, the results showed up immediately, and we could see the message “using reused query results” in the Query results pane as a confirmation that the results of our first query had been reused. The Data scanned statistic also showed “-” to indicate that no data was scanned.

Run queries using the JDBC and ODBC drivers

If you use the JDBC or ODBC driver to query Athena, you can now add enableResultReuse=1 to your connection parameters to enable Query Result Reuse, and use ageforResultReuse=60 to set the maximum age to 60 minutes. The drivers automatically apply the setting to all queries running in the context of the connection.

For more information on how to connect to Athena via JDBC and ODBC, refer to Connecting to Amazon Athena with ODBC and JDBC drivers.

Limitations and considerations

Query Result Reuse is supported for most Athena queries, but there are some limitations. We want to ensure that reusing results doesn’t create surprising situations, or expose results that a user shouldn’t have access to. For that reason, Athena always runs a fresh query in the following situations:

  • Non-deterministic functions – Some functions and expressions produce different results from query to query, such as CURRENT_TIME and RAND(). Results for queries that use temporal and non-deterministic expressions and functions aren’t reusable because that could create surprising and inconsistent results.
  • Fine-grained access controls – Row-level and column-level permissions are configured in Lake Formation, and Athena can’t know if these have changed since a previous query result was created. Users in the same workgroup can also have different permissions, and checking all permissions would undo many of the cost and performance savings you get from Query Result Reuse.
  • Federated queries, user-defined functions (UDFs), and external Hive metastores – Users in the same workgroup can have different permissions to invoke the AWS Lambda functions that these features rely on. Athena isn’t able to check that a user who wants to reuse a result has permission to invoke these Lambda functions without running the query, which would negate the cost and performance savings.

Athena detects these conditions automatically and runs the query as if Query Result Reuse wasn’t enabled. You won’t get errors, but you can determine that Query Result Reuse wasn’t in effect by inspecting the query status (see our earlier examples).

Query Result Reuse is available in Athena engine version 3 only.

Conclusion

Query Result Reuse is a new feature in Athena that aims to reduce cost and query response times for datasets that change less frequently than they are queried. For teams that often run the same query, or have dashboards that are used more often than the data changes, Query Result Reuse can result in lower costs and faster results. It’s easy to get started with Query Result Reuse via the Athena console, API, and JDBC/ODBC; all you have to do is set the maximum age of results, and run your queries as usual.

We hope that you will like this new feature, and that it will save cost and improve performance for you and your team!


About the authors

Theo Tolv is a Senior Big Data Architect in the Athena team. He’s worked with small and big data for most of his career and often hangs out on Stack Overflow answering questions about Athena.

Vijay Jain is a Senior Product Manager in Amazon Web Services (AWS) Athena team. He is passionate about building scalable analytics technologies and products working closely with enterprise customers. Outside of work, Vijay likes running and spending time with his family.

How to evaluate and use ECDSA certificates in AWS Certificate Manager

Post Syndicated from Zachary Miller original https://aws.amazon.com/blogs/security/how-to-evaluate-and-use-ecdsa-certificates-in-aws-certificate-manager/

AWS Certificate Manager (ACM) is a managed service that enables you to provision, manage, and deploy public and private SSL/TLS certificates that you can use to securely encrypt network traffic. You can now use ACM to request Elliptic Curve Digital Signature Algorithm (ECDSA) certificates and associate the certificates with AWS services like Application Load Balancer (ALB) or Amazon CloudFront. As a result, you get the benefit of managed renewal, where ACM can automatically renew ECDSA certificates before they expire. Previously, you could only request certificates with an RSA 2048 key algorithm from ACM. ECDSA certificates could be imported to ACM, but imported certificates cannot use managed renewal.

You can request both ECDSA P-256 and P-384 certificates from ACM. If you do not request an ECDSA certificate, ACM will issue an RSA 2048 certificate by default.

In this blog post, we will briefly examine the differences between RSA and ECDSA certificates, discuss some important considerations when evaluating which certificate type to use, and walk through how you can request an ECDSA certificate and associate it with an application load balancer in AWS.

Cryptographic certificates overview

TLS certificates are used to secure network communications and establish the identity of websites over the internet, as well as the identity of resources on private networks. Public certificates that you request through ACM are obtained from Amazon Trust Services, which is an Amazon managed public certificate authority (CA).

Private certificates are issued through certificate authorities, which you can create and manage by using AWS Private Certificate Authority (AWS Private CA).

Both public and private certificates can help customers identify resources on networks and secure communication between these resources. Public certificates identify resources on the public internet, whereas private certificates do the same for private networks. One key difference is that applications and browsers trust public certificates by default, but an administrator must explicitly configure applications and devices to trust private certificates.

RSA and ECDSA primer

RSA and ECDSA are two widely used public-key cryptographic algorithms—algorithms that use two different keys to encrypt and decrypt data. In the case of TLS, a public key is used to encrypt data, and a private key is used to decrypt data. Public key (or asymmetric key) algorithms are not as computationally efficient as symmetric key algorithms like AES. For this reason, public key algorithms like RSA and ECDSA are primarily used to exchange secrets between two parties initiating a TLS connection. These secrets are then used by both parties to decipher the same symmetric key that actually encrypts the data in transit.

RSA stands for Rivest, Shamir, and Adleman: the researchers who first publicly described this algorithm in 1977. The basic functionality of RSA relies on the idea that large prime numbers are very difficult to efficiently factor. ECDSA, or Elliptic Curve Digital Signature Algorithm, is based on certain unique mathematical properties of elliptic curves that make them very useful for cryptographic operations. The cryptographic utility of ECDSA comes from a concept called the discrete logarithm problem.

Considerations when choosing between RSA and ECDSA

What are the important differences between RSA and ECDSA certificates? When should you choose ECDSA certificates to encrypt network traffic? In this section, we’ll examine the security and performance considerations that help to determine whether ECDSA or RSA certificates are the best choice for your workload.

Security

In cryptography, security is measured as the computational work it takes to exhaust all possible values of a symmetric key in an ideal cipher. An ideal cipher is a theoretical algorithm that has no weaknesses, so you must try every possible key to discover which is the correct key. This is similar to the idea of “brute forcing” a password: trying every possible character combination to find the correct password.

Let’s imagine you have a 112-bit key ideal cipher, which means it would take 2^112 tries to exhaust the key space—we would say this cipher has a 112-bit security strength. However, it is important to realize that security strength and key length are not always equal—meaning that an encryption key with a length of 112 bits will not always have a 112-bit security strength.

ECDSA provides higher security strength for lower computational cost. ECDSA P-256, for example, provides 128-bit security strength and is equivalent to an RSA 3072 key. Meanwhile, ECDSA P-384 provides 192-bit security strength, equivalent to the key associated with an RSA 7680 certificate. In other words, an ECDSA P-384 key would require 2^192 tries to exhaust the key space.

The following table provides an in-depth comparison of the different security strengths for RSA key lengths and ECDSA curve types. Note that only RSA 2048 and ECDSA P-256 and P-384 are currently issued by ACM. However, ACM does support the import and usage of the other certificate types listed in the table. For more information, see Importing certificates into AWS Certificate Manager.

Security strength | RSA key length | ECDSA curve type
80-bit            | 1024           | 160
112-bit           | 2048           | 224
128-bit           | 3072           | 256
192-bit           | 7680           | 384
256-bit           | 15360          | 512

Performance

ECDSA provides a higher security strength (for a given key length) than RSA but does not add performance overhead. For example, ECDSA P-256 is as performant as RSA 2048 while providing security strength that is comparable to RSA 3072.

ECDSA certificates also have up to a 50% smaller certificate size when compared to RSA certificates, and are therefore more suitable to protect data-in-transit over low bandwidth or for applications with limited memory and storage, such as Internet of Things (IoT) devices.

Take a look at the following certificate examples; you can see the size difference between RSA and ECDSA certificates.

RSA 2048:
-----BEGIN CERTIFICATE-----
MIIDLjCCAhYCCQCgZT9jmNNbmjANBgkqhkiG9w0BAQsFADBZMQswCQYDVQQGEwJV
UzETMBEGA1UECAwKV2FzaGluZ3RvbjEQMA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UE
CgwGQW1hem9uMRIwEAYDVQQDDAllY2RzYWJsb2cwHhcNMjIxMDE4MTcxMTUyWhcN
MjMxMDE4MTcxMTUyWjBZMQswCQYDVQQGEwJVUzETMBEGA1UECAwKV2FzaGluZ3Rv
bjEQMA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UECgwGQW1hem9uMRIwEAYDVQQDDAll
Y2RzYWJsb2cwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC64mquZoJ5
tJVmiUK0vKRYyYqHYYV/sTqwcW2u7MNP/mvCWd6K8rFBcxdBcscGFZR5tZa7OzTu
XUdwqdj13Gjl/euG4c8rlIWY+5HtybXI7vBRrf6YCNZWIAucRQesycmhtf9bjB7v
mgBpy7XVEweBZ++Ve3IynsNBD0O4C0w5y1HuaFFCsxF+dsrcgofVIMcjIEs/by65
JqIQMvjDExjxkkSFfyhrSd3f78EDp3WtEATD67zMTZGKgHNc1J/7V7vMRfb0qyqr
WxWUQVMRMzo1LdC3vpF57aMP/b6amQBSU5lP0nMBLfqA3oHQ3hvXEUaNrxcvHdyb
CUmd9On5cPMjAgMBAAEwDQYJKoZIhvcNAQELBQADggEBAGudUVlz0x/OJGCORjoo
KuGjvlY2pztMkj3KA6/DO2eYYeYwrAcBuO7Ycc18XDnTwGfsZmkV1INVd2NNoqSX
PxsY21y/7ely8ZK1f7lMCtxWaHdlz4nzxtrEN/YRrFZ0VoGmBgQutRou0fIdn9ax
+J4zUwDQYqXyppIqTmSxajvzrfl0YYUZ5xd4gGGTLah7gzqqv2H/KTAUck0mr9B7
o3hh9Oe8MNsUJhMp1s1c8MTOZzvY+yadDlG4zIXKdZPUxuAvdxpWPEWntsPQIxo7
8m09RAttvA+/WKfuXbCXNILj9MwH6cRAnq/8DVFmgWnZIz4YSpNk9TinN3U0hcoi
PU4=
-----END CERTIFICATE-----

ECDSA P-256 (EC_prime256v1):
-----BEGIN CERTIFICATE-----
MIIBoTCCAUgCCQD+ccox1RgzgTAKBggqhkjOPQQDAjBZMQswCQYDVQQGEwJVUzET
MBEGA1UECAwKV2FzaGluZ3RvbjEQMA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UECgwG
QW1hem9uMRIwEAYDVQQDDAllY2RzYWJsb2cwHhcNMjIxMDE4MTcwNzAzWhcNMjMx
MDE4MTcwNzAzWjBZMQswCQYDVQQGEwJVUzETMBEGA1UECAwKV2FzaGluZ3RvbjEQ
MA4GA1UEBwwHU2VhdHRsZTEPMA0GA1UECgwGQW1hem9uMRIwEAYDVQQDDAllY2Rz
YWJsb2cwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAASsxNpbxi5xcRwIBN1M1M4s
tDyyPILqwGjlvfruLZDqPHB7EVSV78TjN4+/a3KO0bl8cYolLBm31rlLAxZfejgK
MAoGCCqGSM49BAMCA0cAMEQCIF7FSzRha4KATIgj8C6i0NfNomHAX8I0lPtMgHrN
YGOOAiAlBpqNQU8e2i9zHzU9rQx7rZJnm4iepImKvodfSCXJ/A==
-----END CERTIFICATE-----

Consider a small IoT sensor device that tracks temperature in an office building. This device typically has very low storage capacity and compute power, so the smaller ECDSA certificate will be easier to process and store. In the case of an IoT device, you might not be able to store the entire RSA certificate chain on the device due to memory limitations and the larger size of RSA certificates. This can make it more difficult to validate the chain of trust for that certificate.

Using ECDSA, customers can take advantage of the smaller size of the certificates (and the certificate trust chain) and store the entire chain of trust on the IoT device itself, enabling the IoT device to more easily validate the certificate.

When should I use ECDSA certificates from ACM?

In general, you should consider using ECDSA certificates wherever possible, because they provide stronger security (for a given key length) compared to RSA, without impacting performance. You can also choose to issue ECDSA certificates from ACM to implement 128-bit or 192-bit TLS security, where previously you could request up to 112-bit security from ACM by using RSA 2048 certificates.

ECDSA certificates are strongly recommended for applications that need to securely send data over low-bandwidth connections, or when you are using IoT devices that might not have much memory or computational power to store and process the larger certificate sizes that RSA offers.

If your application is not ECDSA compatible, you will need to continue using RSA certificates. RSA 2048 remains the default certificate type issued by ACM, in order to prevent compatibility issues with legacy applications or with applications that do not support ECDSA certificate types. We will provide links to check if your application is compatible with ECDSA certificate types in the next section of this blog.

Getting started with ECDSA certificates

Modern browsers and operating systems are ECDSA compatible. That said, some custom applications might not be ECDSA compatible. You can check whether your calling application is ECDSA compatible by accessing the following links from your application:

ECDSA P-256

ECDSA P-384

When you access one of these links, you should see a message stating “Expected Status: good”. This indicates that the application is ECDSA compatible. See Figure 1 for an example of a successful result.

Figure 1: ECDSA application compatibility example

When you terminate your TLS traffic with ALB, you can work around compatibility concerns by binding both ECDSA and RSA certificates for a given domain. ALB will prioritize and present the ECDSA certificate when the calling application is ECDSA compatible and will use the RSA certificate if the calling application is not ECDSA compatible. We’ll walk through this configuration in the demonstration portion of this post.

How to request an ECDSA certificate from ACM

You can use the ACM console, APIs, or AWS Command Line Interface (AWS CLI) to issue public or private ECDSA P-256 and P-384 TLS certificates. When you request certificates by using the API or AWS CLI, you can use the request-certificate API action with either EC_prime256v1 or EC_secp384r1 as the key-algorithm parameter to request a P-256 or P-384 ECDSA certificate, respectively.
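For example, a hedged boto3 sketch of requesting a public ECDSA P-256 certificate with DNS validation (the domain name is a placeholder) looks like the following:

import boto3

acm = boto3.client("acm")

# Request a public ECDSA P-256 certificate with DNS validation.
response = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    KeyAlgorithm="EC_prime256v1",  # use "EC_secp384r1" for a P-384 certificate
)

print(response["CertificateArn"])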

Certificates have a defined validity period, and ACM will attempt to renew certificates that were issued by ACM and that are in use before they expire. ACM will also attempt to automatically bind the renewed certificates with an integrated service. ACM-issued private ECDSA certificates can also be exported and used on other workloads to terminate TLS traffic.

Associate an ECDSA certificate with an Application Load Balancer for TLS

To demonstrate how to request and use ECDSA certificates from ACM, let’s examine a common use case: requesting a public certificate from ACM and associating it with an ALB. This walkthrough will also include requesting an RSA 2048 certificate and associating it with the same ALB, to facilitate TLS connections for applications that do not support ECDSA. ALB will prioritize and present the ECDSA certificate when the calling application is ECDSA compatible, and will use the RSA certificate if the calling application is not ECDSA compatible.

This procedure has the following prerequisites:

  • An AWS Identity and Access Management (IAM) user or role that has the appropriate permissions to request certificates from ACM and create an ALB
  • A public domain that you own
  • A public subnet, or IAM permissions to create one

To request an ECDSA certificate from ACM

  1. Navigate to the ACM console and choose Request a certificate.
  2. Choose Request a public certificate, and then choose Next.
  3. For Fully qualified domain name, enter your domain name.
  4. Choose DNS validation. DNS validation is recommended wherever possible, because it enables automatic renewal of ACM issued certificates with no action required by the domain owner. If you use Amazon Route 53, you can use ACM to directly update your DNS records. DNS-validated certificates will be renewed by ACM as long as the certificate is in use and the DNS record is in place.
    Figure 2: Requesting a public ECDSA certificate

  5. In the Key algorithm options section, select your preferred algorithm based on your security requirements:
    • ECDSA P-256 — Equivalent in security strength to RSA 3072
    • ECDSA P-384 — Equivalent in security strength to RSA 7680
    Figure 3: Key algorithms

  6. (Optional) Add tags to help you identify and manage your certificate. You can find more information on using tags in Tagging AWS resources in the AWS General Reference.
  7. Choose Request to request the public certificate.

    The certificate will now be in the Pending Validation state until the domain can be validated, either through DNS or email validation, depending on your selection in the previous steps. For information on how to validate ownership of the domain name or names, see Validating domain ownership in the AWS Certificate Manager User Guide.

  8. Take note of the certificate ARN; you will need this later to identify the certificate.

To request an RSA 2048 certificate from ACM

  1. To request a public RSA 2048 certificate, use the same steps noted in the preceding section, but select RSA 2048 in the Key algorithm options section.
  2. Make sure that both certificates you request have the same fully qualified domain name.

    For more information on requesting public certificates from ACM, see Requesting a public certificate.

To create a new Application Load Balancer and associate a default certificate

  1. Navigate to the Amazon Elastic Compute Cloud (EC2) console. In the left navigation pane, under Load Balancing, choose Load Balancers.
  2. Choose Create Load Balancer.

    For this post, we will use an Application Load Balancer. You can view more details on each type of Load Balancer, and see a feature-to-feature breakdown, on the Elastic Load Balancing features page.

  3. For the Application Load Balancer type, choose Create.
  4. Enter a name for your load balancer.
  5. Select the scheme and IP address type of the application load balancer. For this post, we will choose Internet-facing for the scheme and use the IPv4 address type.
    Figure 4: Create an application load balancer

  6. In the Network mapping section of this page, you will need to select a VPC and at least two Availability Zones and one public subnet per zone. If you do not already have a public subnet in two Availability Zones, see these instructions for creating a public subnet.
    Figure 5: Network mapping for ALB

  7. Next, you need to create a secure listener. Under Listeners and routing, choose the HTTPS protocol (Port 443) in the drop-down list.
  8. Under Default action, choose Forward. For Target Group, select a target group for the ALB to send traffic to.
  9. Under Secure listener settings, you will associate the RSA 2048 certificate with the new Application Load Balancer.

    Choose the appropriate security policy for your organization—you can compare policies on this page.

  10. Under Default SSL/TLS certificate, verify that From ACM is selected, and then in the drop-down list, select the RSA certificate you requested earlier.

    Note: We are using the RSA certificate as the default so that the ALB will use this certificate if the connecting client does not support ECDSA or the Server Name Indication (SNI) protocol. This is to maximize availability and compatibility with legacy applications.

    Figure 6: Secure listener settings

  11. (Optional) Add tags to the Application Load Balancer.
  12. Review your selections, and then choose Create load balancer.
    Figure 7: Review and create load balancer

To associate the ECDSA certificate with the Application Load Balancer

  1. In the EC2 console, select the new ALB you just created, and choose the Listeners tab.
  2. In the SSL Certificate column, you should see the default certificate you added when you created the ALB. Choose View/edit certificates to see the full list of certificates associated with this ALB.
    Figure 8: ALB listeners

  3. Under Listener certificates for SNI, choose Add certificate.
    Figure 9: Listener certificates for SNI

  4. Under ACM and IAM certificates, select the ECDSA certificate you requested earlier.

    Note: You can use the certificate ARN to identify the appropriate certificate.

  5. Choose Include as pending below to add the ECDSA certificate to the listener.
    Figure 10: Adding the ECDSA certificate to the load balancer listener

  6. Under Listener certificates for SNI, confirm that the ECDSA certificate is listed as pending, and choose Add pending certificates.
    Figure 11: Confirm addition of pending certificates

Great! We’ve used ACM to request a public ECDSA certificate and a public RSA 2048 certificate. Next, we associated both of these certificates with an Application Load Balancer to facilitate TLS communications between the load balancer and client devices.
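
If you prefer to script these steps, the following is a minimal boto3 sketch of the same workflow. The domain name, Region, and listener ARN are placeholders, both certificates must be issued (validated) before they can be attached, and the RSA certificate is assumed to already be the listener default from the earlier steps.

import boto3

acm = boto3.client("acm", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Request a P-256 ECDSA certificate and an RSA 2048 certificate for the same domain.
ecdsa_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    KeyAlgorithm="EC_prime256v1",
)["CertificateArn"]

rsa_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    KeyAlgorithm="RSA_2048",
)["CertificateArn"]
# rsa_arn is used as the default certificate when you create the HTTPS listener.

# Once the ECDSA certificate is issued, add it to the listener's SNI certificate list.
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/EXAMPLE"
elbv2.add_listener_certificates(
    ListenerArn=listener_arn,
    Certificates=[{"CertificateArn": ecdsa_arn}],
)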

If clients support the SNI protocol, the ALB uses a smart certificate selection algorithm. The load balancer will select the best certificate that the client can support from the certificate list. Certificate selection is based on the following criteria, in the following order:

  • Public key algorithm (prefer ECDSA over RSA)
  • Hashing algorithm (prefer SHA over MD5)
  • Key length (prefer the longest key)
  • Validity period

In the earlier example, this means if clients support SNI and ECDSA, the ECDSA certificate will be prioritized and presented to the client. If the client does not support SNI or ECDSA, the RSA certificate will be used to maximize compatibility with legacy applications.
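
To spot-check which certificate a client actually receives, you can open a TLS connection with SNI and inspect the key type of the leaf certificate. The following sketch assumes the cryptography package is installed and uses a placeholder hostname:

import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def served_key_type(host: str, port: int = 443) -> str:
    # Connect with SNI (server_hostname) and fetch the leaf certificate in DER form.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    key = x509.load_der_x509_certificate(der).public_key()
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"ECDSA ({key.curve.name})"
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA ({key.key_size}-bit)"
    return type(key).__name__

print(served_key_type("www.example.com"))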

Conclusion

In this blog post, we discussed the basic differences between RSA and ECDSA certificates, when you might choose ECDSA over RSA, and how you can use AWS Certificate Manager to request public or private ECDSA certificates. We also covered how to request a public ECDSA certificate from ACM and associate it with an Application Load Balancer. Finally, we showed you how to request an RSA 2048 certificate and associate it with the same load balancer to facilitate TLS for applications that do not support ECDSA certificates.

To learn more about using ACM to issue ECDSA certificates, see our YouTube video: AWS Certificate Manager (ACM) – How to evaluate and use ECDSA certificates. You can also refer to the AWS Certificate Manager documentation for more details, and then get started issuing ECDSA certificates with AWS Certificate Manager.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Zach Miller

Zach is a Senior Security Specialist Solutions Architect at AWS. His background is in data protection and security architecture, focused on a variety of security domains, including cryptography, secrets management, and data classification. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Chandan Kundapur

Chandan is a Senior Technical Product Manager on the AWS Certificate Manager (ACM) team. With over 15 years of cybersecurity experience, he has a passion for driving PKI product strategy.

See yourself in cyber: Highlights from Cybersecurity Awareness Month

Post Syndicated from CJ Moses original https://aws.amazon.com/blogs/security/see-yourself-in-cyber-highlights-from-cybersecurity-awareness-month/

As Cybersecurity Awareness Month comes to a close, we want to share some of the work we’ve done and made available to you throughout October. Over the last four weeks, we have shared insights and resources aligned with this year’s theme—”See Yourself in Cyber”—to help advance awareness training, and inspire people to join the rapidly growing security industry. Here are a few highlights.

Roundtable with the Cybersecurity and Infrastructure Security Agency (CISA): Amazon Chief Security Officer Steve Schmidt hosted CISA director Jen Easterly in Seattle for a roundtable with leaders across higher education, state and local government, and private industry to discuss ways to develop the cybersecurity workforce through skills training, partnerships between government and industry, and creating pathways to cybersecurity careers.

How AWS, Cisco, Netflix & SAP Are Approaching Cybersecurity Awareness Month. I joined Cisco Chief Security and Trust Officer Brad Arkin, Netflix Head of Cloud Security Srinath Kuruvardi, and SAP Chief Trust Officer Elena Kvochko to describe how AWS, Cisco, Netflix, and SAP are instilling strong cybersecurity training and practices within our organizations, with the goal of inspiring other organizations to do the same.

Cybersecurity Awareness Month 2022 Briefing. Amazon Security Director Jenny Brinkley—who leads Amazon’s internal and external awareness training activities—participated in a Cybersecurity Awareness Month panel discussion hosted by the National Cybersecurity Alliance. Jenny met with executives from KnowBe4, Google, NortonLifeLock, and Dell and chatted about how the cybersecurity landscape has changed over the past few years, and how those changes have impacted the perception of security as a part of daily life.

Making Cybersecurity Relevant for Consumers: The Case for Personal Agency. In addition to the briefing, Jenny spoke to the National Cybersecurity Alliance about staying safe online. She highlighted simple steps that everyone can take to be safer online, including staying consistent on software updates for connected devices, using strong passwords, activating multi-factor authentication (MFA) on accounts when possible, and being on the lookout for phishing attempts.

National Cybersecurity Alliance and Nasdaq Cybersecurity Summit. Jenny and Amazon Head of Global Security Training Jyllian Clarke also joined the National Cybersecurity Alliance, Nasdaq, and public and private sector security leaders in New York City for a cybersecurity summit and got to ring the opening bell.

Resources

AWS offers free Cybersecurity Awareness Training to individuals and businesses around the world, and we’re providing complimentary MFA security keys to AWS account owners in the United States. More than 40 security-focused courses are available through AWS Skill Builder, ranging from foundational to advanced content. By subscribing to AWS Skill Builder, you gain access to security-related interactive challenges with AWS Jam, which guides you through solving real-world problems.

Additionally, Amazon and the National Cybersecurity Alliance launched a cybersecurity awareness campaign called Protect & Connect. The campaign includes a public service announcement featuring Prime Video actor Michael B. Jordan and actress-producer Tessa Thompson as “internet bodyguards,” as well as a Protect & Connect microsite for consumers, featuring additional videos on topics such as MFA and how to identify and avoid phishing attempts.

Humanizing security

Cybersecurity can seem like a complex subject but ultimately, it’s all about people. Most of today’s threats need people to activate them, so you need to train people to develop intuition, which is something that can’t be automated. By meeting employees where they are with an engaging approach to awareness training that moves security to the forefront of everything they do, you can promote positive behavioral change, and start building a security-first culture.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

CJ Moses

CJ is the Chief Information Security Officer (CISO) at AWS, where he leads product design and security engineering. His mission is to deliver the economic and security benefits of cloud computing to business and government customers. Previously, CJ led the technical analysis of computer and network intrusion efforts at the U.S. Federal Bureau of Investigation Cyber Division. He also served as a Special Agent with the U.S. Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the information security industry today.

How The Mill Adventure enabled data-driven decision-making in iGaming using Amazon QuickSight

Post Syndicated from Deepak Singh original https://aws.amazon.com/blogs/big-data/how-the-mill-adventure-enabled-data-driven-decision-making-in-igaming-using-amazon-quicksight/

This post is co-written with Darren Demicoli from The Mill Adventure.

The Mill Adventure is an iGaming industry enabler offering customizable turnkey solutions to B2B partners and custom branding enablement for its B2C partners. They provide a complete gaming platform, including licenses and operations, for rapid deployment and success in iGaming, and are committed to improving the iGaming experience by being a differentiator through innovation. The Mill Adventure already provides its services to a number of iGaming brands and seeks to continuously grow through the ranks of the industry.

In this post, we show how The Mill Adventure is helping its partners answer business-critical iGaming questions by building a data analytics application using modern data strategy using AWS. This modern data strategy approach has led to high velocity innovation while lowering the total operating cost.

With a gross market revenue exceeding $70 billion and a global player base of around 3 billion players (per a recent imarc Market Overview 2022-2027), the iGaming industry has, without a doubt, been booming over the past few years. This presents a lucrative opportunity to an ever-growing list of businesses seeking to tap into the market and attract a bigger share as their audience. Needless to say, staying competitive in this somewhat saturated market is extremely challenging. Making data-driven decisions is critical to the growth and success of iGaming businesses.

Business challenges

Gaming companies typically generate a massive amount of data, which could potentially enable meaningful insights and answer business-critical questions. Some of the critical and common business challenges in the iGaming industry are:

  • What impacts the brand’s turnover—its new players, retained players, or a mix of both?
  • How to assess the effectiveness of a marketing campaign? Should a campaign be reinstated? Which games to promote via campaigns?
  • Which affiliates drive quality players that have better conversion rates? Which paid traffic channels should be discontinued?
  • For how long does the typical player stay active within a brand? What is the lifetime deposit from a player?
  • How to improve the registration to first deposit processes? What are the most pressing issues impacting player conversion?

Though sufficient data was captured, The Mill Adventure found two key challenges in their ability to generate actionable insights:

  • Lack of analysis-ready datasets (data was available only in raw, hard-to-use formats)
  • Lack of timely access to business-critical data

For example, The Mill Adventure generates over 50 GB of data daily. Its partners have access to this data. However, due to the data being in a raw form, they find it of little value in answering their business-critical questions. This affects their decision-making processes.

To address these challenges, The Mill Adventure chose to build a modern data platform on AWS that was not only capable of providing timely and meaningful business insights for the iGaming industry, but also efficiently manageable, low-cost, scalable, and secure.

Modern data architecture

The Mill Adventure wanted to build a data analytics platform using a modern data strategy that would grow as the company grows. Key tenets of this modern data strategy are:

  • Build a modern business application and store data in the cloud
  • Unify data from different application sources into a common data lake, preferably in its native format or in an open file format
  • Innovate using analytics and machine learning, with an overarching need to meet security and governance compliance requirements

A modern data architecture on AWS applies these tenets. Two key features that form the basic foundation of a modern data architecture on AWS are serverless and microservices.

The Mill Adventure solution

The Mill Adventure built a serverless iGaming data analytics platform that allows its partners to have quick and easy access to a dashboard with data visualizations driven by the varied sources of gaming data, including real-time streaming data. With this platform, stakeholders can use data to devise strategies and plan for future growth based on past performance, evaluate outcomes, and respond to market events with more agility. Having the capability to access insightful information in a timely manner and respond promptly has substantial impact on the turnover and revenue of the business.

A serverless iGaming platform on AWS

In building the iGaming platform, The Mill Adventure was quick to recognize the benefits of having a serverless microservice infrastructure. We wanted to spend time on innovating and building new applications, not managing infrastructure. AWS services such as Amazon API Gateway, AWS Lambda, Amazon DynamoDB, Amazon Kinesis Data Streams, Amazon Simple Storage Service (Amazon S3), Amazon Athena, and Amazon QuickSight are at the core of this data platform solution. Moving to AWS serverless services has saved time, reduced cost, and improved productivity. A microservice architecture has enabled us to accelerate time to value, increase innovation speed, and reduce the need to re-platform, refactor, and rearchitect in the future.

The following diagram illustrates the data flow from the gaming platform to QuickSight.

The data flow includes the following steps:

  1. As players access the gaming portal, associated business functions such as gaming activity, payment, bonus, accounts management, and session management capture the relevant player actions.
  2. Each business function has a corresponding Lambda-based microservice that handles the ingestion of the data from that business function. For example, the Session service handles player session management. The Payment service handles player funds, including deposits and withdrawals from player wallets. Each microservice stores data locally in DynamoDB and manages the create, read, update, and delete (CRUD) tasks for the data. For event sourcing implementation details, see How The Mill Adventure Implemented Event Sourcing at Scale Using DynamoDB.
  3. Data records resulting from the CRUD outputs are written in real time to Kinesis Data Streams, which forms the primary data source for the analytics dashboards of the platform (a minimal sketch of this step follows the list).
  4. Amazon S3 forms the underlying storage for data in Kinesis Data Streams and forms the internal real-time data lake containing raw data.
  5. The raw data is transformed and optimized through custom-built extract, transform, and load (ETL) pipelines and stored in a different S3 bucket in the data lake.
  6. Both raw and processed data are immediately available for querying via Athena and QuickSight.
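
The following is an illustrative sketch of step 3, not The Mill Adventure's actual code: a Lambda-style handler that writes a microservice's CRUD output to Kinesis Data Streams. The stream name, partition key, and event fields are assumptions for this example.

import json
import os

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = os.environ.get("ANALYTICS_STREAM", "player-events")

def handler(event, context):
    # 'event' is assumed to carry the microservice's CRUD output, such as a payment record.
    record = {
        "event_type": event["event_type"],   # for example, "DEPOSIT_CREATED"
        "player_id": event["player_id"],
        "payload": event["payload"],
        "occurred_at": event["occurred_at"],
    }
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(record).encode("utf-8"),
        PartitionKey=str(record["player_id"]),  # keeps a player's events on the same shard
    )
    return {"status": "queued"}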

Raw data is transformed, optimized, and stored as processed data using an hourly data pipeline to meet analytics and business intelligence needs. The following figure shows an example of record counts and the size of the data being written into Kinesis Data Streams, which eventually needs to be processed from the data lake.

These data pipeline jobs can be broadly classified into six main stages (a minimal sketch of several of these stages follows the list):

  • Cleanup – Filtering out invalid records
  • Deduplicate – Removing duplicate data records
  • Aggregate at various levels – Grouping data at various aggregation levels of interest (such as per player, per session, or per hour or day)
  • Optimize – Writing files to Amazon S3 in optimized Parquet format
  • Report – Triggering connectors with updated data (such as updates to affiliate providers and compliance)
  • Ingest – Triggering an event to ingest data in QuickSight for analytics and visualizations
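
The following is a minimal sketch of the cleanup, deduplicate, aggregate, and optimize stages, assuming the raw records land in the data lake as JSON Lines and that pandas is installed with pyarrow and s3fs. The bucket paths and field names are placeholders, not the actual pipeline configuration.

import pandas as pd

RAW_PATH = "s3://example-raw-data-lake/events/2022/10/01/13/events.jsonl"
PROCESSED_PATH = "s3://example-processed-data-lake/events/date=2022-10-01/hour=13/part-0.parquet"

def process_hour(raw_path: str, processed_path: str) -> None:
    # Cleanup and deduplicate: drop records without a player_id and drop exact replays.
    df = pd.read_json(raw_path, lines=True)
    df = df.dropna(subset=["player_id"])
    df = df.drop_duplicates(subset=["event_id"])

    # Aggregate: per-player, per-event-type totals for the hour.
    hourly = df.groupby(["player_id", "event_type"], as_index=False).agg(
        events=("event_id", "count"),
        amount=("amount", "sum"),
    )

    # Optimize: columnar Parquet keeps Athena scans (and cost) small.
    hourly.to_parquet(processed_path, index=False)

process_hour(RAW_PATH, PROCESSED_PATH)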

The output of this data pipeline is two-fold:

  • A transformed data lake that is designed and optimized for fast query performance
  • A refreshed view of data for all QuickSight dashboards and analyses

Cultivating a data-driven mindset with QuickSight

The Mill Adventure’s partners access their data securely via QuickSight datasets. These datasets are purposefully curated views on top of the transformed data lake. Each partner can access and visualize their data immediately. With QuickSight, partners can build useful dashboards without having deep technical knowledge or familiarity with the internal structure of the data. This approach significantly reduces the time and effort required and speeds up access to valuable gaming insights for business decision-making.

The Mill Adventure also provides each partner with a set of readily available dashboards. These dashboards are built on the years of experience that The Mill Adventure has in the iGaming industry, cover the most common business intelligence requirements, and jumpstart a data-driven mindset.

In the following sections, we provide a high-level overview of some of The Mill Adventure iGaming dashboard features and how these are used to meet the iGaming business analytics needs.

Key performance indicators

This analysis provides a comprehensive set of iGaming key performance indicators (KPIs) across different functional areas, including but not limited to payment activity (deposits and withdrawals), game activity (bets, gross game wins, return to player) and conversion metrics (active customers, active players, unique depositing customers, newly registered customers, new depositing customers, first-time depositors). These are presented concisely in both a quantitative view and in more visual forms.

In the following example KPI report, we can see how by presenting different iGaming metrics for key periods and lifetime, we can identify the overall performance of the brand.

Affiliates analysis

This analysis presents metrics related to the activity generated by players acquired through affiliates. Affiliates usually account for a large share of the traffic driven to gaming sites, and such a report helps identify the most effective affiliates. It informs performance trends per affiliate and compares across different affiliates. By combining data from multiple sources via QuickSight cross-data source joins, affiliate provider-related data such as earnings and clicks can be presented together with other key gaming platform metrics. By having these metrics broken down by affiliate, we can determine which affiliates contribute the most to the brand, as shown in the following example figure.

Cohort analysis

Cohort analyses track the progression of KPIs (such as average deposits) over a period of time for groups of players after their first deposit day. In the following figure, the average deposits per user (ADPU) is presented for players registering in different quarters within the last 2 years. By moving horizontally along each row on the graph, we can see how the ADPU changes for successive quarters for the same group of players. In the following example, the ADPU decreases substantially, indicating higher player churn.

We can use cohort analyses to calculate the churn rate (rate of players who become inactive). Additionally, by averaging the ADPU figures from this analysis, you can extract the lifetime value (LTV) of the ADPU. This shows the average deposit that can be expected to be deposited by players over their lifetime with the brand.
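
The underlying cohort math is straightforward. The following pandas sketch, using made-up sample data and assumed column names, groups deposits by registration quarter (the cohort) and activity quarter, then divides by cohort size to get ADPU:

import pandas as pd

deposits = pd.DataFrame({
    "player_id":     [1, 1, 2, 2, 3],
    "registered_at": pd.to_datetime(["2021-01-10", "2021-01-10", "2021-02-01", "2021-02-01", "2021-07-15"]),
    "deposited_at":  pd.to_datetime(["2021-01-12", "2021-05-03", "2021-02-02", "2021-08-20", "2021-07-16"]),
    "amount":        [50.0, 20.0, 100.0, 40.0, 75.0],
})

deposits["cohort"] = deposits["registered_at"].dt.to_period("Q")
deposits["activity_quarter"] = deposits["deposited_at"].dt.to_period("Q")

# ADPU: total deposits per activity quarter divided by the number of players in the cohort.
cohort_size = deposits.groupby("cohort")["player_id"].nunique()
totals = deposits.pivot_table(index="cohort", columns="activity_quarter",
                              values="amount", aggfunc="sum", fill_value=0)
adpu = totals.div(cohort_size, axis=0)
print(adpu.round(2))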

Player onboarding journey

Player onboarding is not a single-step process. In particular, jurisdictional requirements impose a number of compliance checks that need to be fulfilled along various stages during registration flow. All these, plus other steps along the registration (such as email verification), could pose potential pitfalls for players, leading them to fail to complete registration. Showing these steps in QuickSight funnel visuals helps identify such issues and pinpoint any bottlenecks in such flows, as shown in the following example. Additionally, Sankey visuals are used to monitor player movement across registration steps, identifying steps that need to be optimized.

Campaign outcome analysis

Bonus campaigns are a valuable promotional technique used to reward players and boost engagement. Campaigns can drive turnover and revenue, but there is always an inherent cost associated. It’s critical to assess the performance of campaigns and determine the net outcome. We have built a specific analysis to simplify the task of evaluating these promotions. A number of key metrics related to players activated by campaigns are available. These include both monetary incentives for game activity and deposits and other details related to player demographics (such as country, age group, gender, and channel). Individual campaign performance is analyzed and high-performance ones are identified.

In the following example, the figure on the left shows a time series distribution of deposits coming from campaigns in comparison to the global ones. The figure on the right shows a geographic plot of players activated from selected campaigns.

Demographics distribution analysis

Brands may seek to improve player engagement and retention by tailoring their content for their player base. They need to collect and understand information about their players’ demographics. Players’ demographic distribution varies from brand to brand, and the outcome of actions taken on different brands will vary due to this distribution. Keeping an eye on this demographic (age, country, gender) distribution helps shape a brand strategy in the best way that suits the player base and helps choose the right promotions that appeal most to its audience.

Through visuals such as the following example, it’s possible to quickly analyze the distribution of the selected metric along different demographic categories.

In addition, grouping players by the number of days since registration indicates which players are making a higher contribution to revenue, whether it is existing players or newly registered players. In the following figure, we can see that players who registered in the last 3 months continually account for the highest contribution to deposits. In addition, the proportion of deposits coming from the other two bands of players isn’t increasing, indicating an issue with player retention.

Compliance and responsible gaming

The Mill Adventure treats player protection with the utmost priority. Each iGaming regulated market has its own rules that need to be followed by the gaming operators. These include a number of compliance reports that need to be regularly sent to authorities in the respective jurisdictions. This process was simplified for new brands by creating a common reports template and automating the report creation in QuickSight. This helps new B2B brands meet these reporting requirements quickly and with minimal effort.

In addition, a number of control reports highlighting different areas of player protection are in place. As shown in the following example, responsible gaming reports such as those outlining player behavior deviations help identify accounts with problematic gambling patterns.

Players whose gaming pattern varies from the identified norm are flagged for inspection. This is useful to identify players who may need intervention.

Assessing game play and releases

It’s important to measure the performance and popularity of new games post release. Metrics such as unique player participation and player stakes are monitored during the initial days after the release, as shown in the following figures.

Not only does this help evaluate the overall player engagement, but it can also give a clear indication of how these games will perform in the future. By identifying popular games, a brand may choose to focus marketing campaigns on those games, and therefore ensure that it’s promoting games that appeal to its player base.

As shown in these example dashboards, we can use QuickSight to design and create business analytics insights of the iGaming data. This helps us answer real-life business-critical questions and take measurable actions using these insights.

Conclusion

In the iGaming industry, decisions not backed up by data are like an attempt to hit the bullseye blindfolded. With QuickSight, The Mill Adventure empowers its B2B partners and customers to harness data in a timely and convenient manner and support decision-making with winning strategies. Ultimately, in addition to gaining a competitive edge in maximizing revenue opportunities, improved decision-making will also lead to enhanced player experiences.

Reach out to The Mill Adventure and kick-start your iGaming journey today.

Explore the rich set of out-of-the-box Amazon QuickSight ML Insights. Amazon QuickSight Q adds natural language querying capabilities to your dashboards. For more information and resources on how to get started with a free trial, visit Amazon QuickSight.


About the authors

Darren Demicoli is a Senior Devops and Business Intelligence Engineer at The Mill Adventure. He has worked in different roles in technical infrastructure, software development and database administration and has been building solutions for the iGaming sector for the past few years. Outside work, he enjoys travelling, exploring good food and spending time with his family.

Padmaja Suren is a Technical Business Development Manager serving the Public Sector Field Community in Market Intelligence on Analytics. She has 20+ years of experience in building scalable data platforms using a variety of technologies. At AWS she has served as Specialist Solution Architect on services such as Database, Analytics and QuickSight. Prior to AWS, she has implemented successful data and BI initiatives for diverse industry sectors in her capacity as Datawarehouse and BI Architect. She dedicates her free time on her passion project SanghWE which delivers psychosocial education for sexual trauma survivors to heal and recover.

Deepak Singh is a Solution Architect at AWS with specialization in business intelligence and analytics. Deepak has worked across a number of industry verticals such as Finance, Healthcare, Utilities, Retail, and High Tech. Throughout his career, he has focused on solving complex business problems to help customers achieve impactful business outcomes using applied intelligence solutions and services.

AWS successfully renews GSMA security certification for US East (Ohio) and Europe (Paris) Regions

Post Syndicated from Janice Leung original https://aws.amazon.com/blogs/security/aws-successfully-renews-gsma-security-certification-for-us-east-ohio-and-europe-paris-regions/

Amazon Web Services is pleased to announce that our US East (Ohio) and Europe (Paris) Regions have been re-certified through October 2023 by the GSM Association (GSMA) under its Security Accreditation Scheme Subscription Management (SAS-SM) with scope Data Centre Operations and Management (DCOM).

The US East (Ohio) and Europe (Paris) Regions first obtained GSMA certification in September and October 2021, respectively. This renewal demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers. AWS customers who provide an embedded Universal Integrated Circuit Card (eUICC) for mobile devices can run their remote provisioning applications with confidence in the AWS Cloud in the GSMA-certified Regions.

For up-to-date information related to the certification, visit the AWS Compliance Program page and choose GSMA under Europe, Middle East & Africa.

AWS was evaluated by independent third-party auditors selected by GSMA. The Certificate of Compliance that shows that AWS achieved GSMA compliance status is available on the GSMA website and through AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, you can submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Janice Leung

Janice is a security audit program manager at AWS, based in New York. She leads security audits across Europe and has previously worked in security assurance and technology risk management in the financial industry for 10 years.

Upgrade to Athena engine version 3 to increase query performance and access more analytics features

Post Syndicated from Blayze Stefaniak original https://aws.amazon.com/blogs/big-data/upgrade-to-athena-engine-version-3-to-increase-query-performance-and-access-more-analytics-features/

Customers tell us they want to have stronger performance and lower costs for their data analytics applications and workloads. Customers also want to use AWS as a platform that hosts managed versions of their favorite open-source projects, which will frequently adopt the latest features from the open-source communities. With Amazon Athena engine version 3, we continue to increase performance, provide new features and now deliver better currency with the Trino and Presto projects.

Athena is an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Customers such as Orca Security, the Agentless Cloud Security Platform, are already realizing the benefits of using Athena engine version 3 with the Apache Iceberg table format.

“At Orca Security, we are excited about the launch of Athena engine version 3,” says Arie Teter, VP R&D at Orca Security. “With Athena engine version 3, we will be able to query our massive petabyte-scale data lake more efficiently and at a lower cost. We are especially excited about being able to leverage all the latest Trino features with Athena’s new engine in order to deliver our customers the best-of-breed, ML-driven anomaly detection solution.”

In this post, we discuss benefits of Athena engine version 3, performance benchmark results for different table formats and information about upgrading to engine version 3.

New features, more often

One of the most exciting aspects of engine version 3 is its new continuous integration approach to open source software management that will improve currency with the Trino and PrestoDB projects. This approach enables Athena to deliver increased performance and new features at an even faster pace.

At AWS, we are committed to bringing the value of open source to our customers and providing contributions to open source communities. The Athena development team is actively contributing bug fixes and security, scalability, performance, and feature enhancements back to these open-source code bases, so anyone using Trino, PrestoDB and Apache Iceberg can benefit from the team’s contributions. For more information on AWS’s commitment to the open-source community, refer to Open source at AWS.

Athena engine version 3 incorporates over 50 new SQL functions, and 30 new features from the open-source Trino project. For example, Athena engine version 3 supports T-Digest functions that can be used to approximate rank-based statistics with high accuracy, new Geospatial functions to run optimized Geospatial queries, and new query syntaxes such as MATCH_RECOGNIZE for identifying data patterns in applications such as fraud detection and sensor data analysis.
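
As an illustration of one of these functions, the following snippet runs a T-Digest approximation of a 95th percentile through the Athena API. The database, table, column, workgroup, and output location are placeholders for this sketch.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

QUERY = """
SELECT value_at_quantile(tdigest_agg(response_time_ms), 0.95) AS p95_response_time_ms
FROM example_db.example_requests
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    WorkGroup="engine-v3-workgroup",
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])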

Athena engine version 3 also gives you more AWS-specific features. For example, we have worked closely with the AWS Glue data catalog team to improve Athena’s metadata retrieval time, which we explain in the section “Faster query planning with AWS Glue Data Catalog” below.

For more information about what’s new in Athena engine version 3, refer to the Athena engine version 3 Improvements and new features.

Faster runtime, lower cost

Last year, we shared benchmark testing on Athena engine version 2 using TPC-DS benchmark queries at 3 TB scale and observed that query performance improved by three times and cost decreased by 70% as a result of reduced scanned data. These improvements have been a combination of enhancements developed by Athena and AWS engineering teams as well as contributions from the PrestoDB and Trino open-source communities.

The new engine version 3 allows Athena to continue delivering performance improvements at a rapid pace. We performed benchmark testing on engine version 3 using TPC-DS benchmark queries at 3 TB scale, and observed a 20% query performance improvement compared to the latest release of engine version 2. Athena engine version 3 includes performance improvements across operators, clauses, and decoders: for example, faster joins involving comparisons with the <, <=, >, and >= operators; queries that contain JOIN, UNION, UNNEST, or GROUP BY clauses; and queries using the IN predicate with a short list of constants. Athena engine version 3 also provides query execution improvements that reduce the amount of data scanned, which gives you additional performance gains. With Athena, you are charged based on the amount of data scanned by each query, so this also translates to lower costs. For more information, refer to Amazon Athena pricing.

Faster query planning with AWS Glue Data Catalog

Athena engine version 3 provides better integration with the AWS Glue Data Catalog to improve query planning performance by up to ten times. Query planning is the process of listing the instructions the query engine will follow in order to run a query. During query planning, Athena uses the AWS Glue API to retrieve information such as table and partition metadata and column statistics. As the number of tables increases, the number of calls to the AWS Glue API for metadata also increases, which adds query latency. In engine version 3, we reduced this AWS Glue API overhead, which brings down the overall query planning time. For smaller datasets and datasets with a large number of tables, the total runtime is reduced significantly because query planning time makes up a higher percentage of the total runtime.

Figure 1 below charts the top 10 queries from the TPC-DS benchmark with the most performance improvement from engine version 2 to engine version 3 based on the Amazon CloudWatch metric for total runtime. Each query involves joining multiple tables with complex predicates.

Faster query runtime with Apache Iceberg integration

Athena engine version 3 provides better integration with the Apache Iceberg table format. Features such as Iceberg’s hidden partitioning now augment Athena optimizations such as partition pruning and dynamic filtering to reduce data scanned and improve query performance in engine version 3. You do not need to maintain partition columns or even understand the physical table layout to load data into the table and achieve good query performance.
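
For example, an Iceberg table can be created in Athena with a hidden partition transform on a timestamp column, as in the following sketch; the database, table, column names, and S3 locations are placeholders. Queries that filter on event_ts are then pruned automatically, without a separate partition column.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

DDL = """
CREATE TABLE example_db.events_iceberg (
  event_id string,
  player_id bigint,
  event_ts timestamp,
  amount double
)
PARTITIONED BY (day(event_ts))
LOCATION 's3://example-data-lake/iceberg/events/'
TBLPROPERTIES ('table_type' = 'ICEBERG')
"""

athena.start_query_execution(
    QueryString=DDL,
    WorkGroup="engine-v3-workgroup",
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)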

We performed TPC-DS benchmark testing by loading data into the Apache Iceberg table format, with hidden partitions configured, and compared the performance between Athena engine version 2 and 3. Figure 2 below is a chart of the top 10 query improvements, which all include complex predicates. The top query, query 52, has five WHERE predicates and two GROUP BY operations. Compared to engine version 2, the query runs thirteen times faster with sixteen times less data scanned on engine version 3.

Upgrading to Athena engine version 3

To use Athena engine version 3, you can create a new workgroup, or configure an existing workgroup, and select the recommended Athena engine version 3. Any Athena workgroup can upgrade from engine version 2 to engine version 3 without interruption in your ability to submit queries. For more information and instructions for changing your Athena engine version, refer to Changing Athena engine versions.
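
A minimal boto3 sketch of the upgrade for an existing workgroup follows; the workgroup name is a placeholder, and the console steps above achieve the same result.

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Switch the workgroup's selected engine version to engine version 3.
athena.update_work_group(
    WorkGroup="primary",
    ConfigurationUpdates={
        "EngineVersion": {"SelectedEngineVersion": "Athena engine version 3"}
    },
)

# Confirm the selected and effective engine versions for the workgroup.
workgroup = athena.get_work_group(WorkGroup="primary")
print(workgroup["WorkGroup"]["Configuration"]["EngineVersion"])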

Athena engine version 3 has feature parity with all major features from Athena engine version 2. There are no changes required by you to use features like dynamic partition pruning, Apache Iceberg and Apache Hudi table formats, AWS Lake Formation governed tables integration, and Athena Federated Query in engine version 3. For more information on Athena features, refer to Amazon Athena features, and the Amazon Athena User Guide.

Athena engine version 3 includes additional improvements to support ANSI SQL compliance. This results in some changes to syntax, data processing, and timestamps that may cause errors when running the same queries in the new engine version. For information about error messages, causes, and suggested solutions, refer to Athena engine version 3 Limitations, Breaking changes, Data processing changes, and Timestamp changes.

To make sure that your Athena engine version upgrade goes smoothly, we recommend the following practices to facilitate your upgrade process. After you have confirmed your query behavior works as you expect, you can safely upgrade your existing Athena workgroups.

  • Review the Athena engine version 3 Limitations and Breaking changes and update any affected queries.
  • Test in pre-production to validate and qualify your queries against Athena engine version 3 by creating a test workgroup or upgrading an existing pre-production environment. For example, you can create a new test workgroup running engine version 3 to run integration tests from your pre-production or staging environment, and monitor for failures or performance regressions. For information about CloudWatch metrics and dimensions published by Athena, refer to Monitoring Athena queries with CloudWatch metrics.
  • Upgrade each query based on metrics to test your queries against an Athena engine version 3 workgroup. For example, you can create a new workgroup with engine version 3 alongside your existing engine version 2 workgroup. You can send a small percentage of queries to the engine version 3 workgroup, monitor for failures or performance regressions, then increase the number of queries if they’re successful and performant. Repeat until all your queries have been migrated to Athena engine version 3.

With our simplified automatic engine upgrade process, you can configure existing workgroups to be automatically upgraded to engine version 3 without requiring manual review or intervention. The upgrade behavior is as follows:

  • If Query engine version is set to Automatic, your workgroup will remain on engine version 2 pending the automatic upgrade, and Athena will choose when to upgrade the workgroup to engine version 3. Before upgrading a workgroup, we perform a set of validation tests to confirm that its queries perform correctly and efficiently on engine version 3. Because our validation is performed on a best effort basis, we recommend you perform your own validation testing to ensure all queries run as expected.
  • If Query engine version is set to Manual, you will have the ability to select your version. The default choice is set to engine version 3, with the ability to toggle to engine version 2.

Conclusion

This post discussed Athena engine version 3 benefits, performance benchmark results, and how you can start using engine version 3 today with minimal work required. You can get started with Athena engine version 3 by using the Athena Console, the AWS CLI, or the AWS SDK. To learn more about Athena, refer to the Amazon Athena User Guide.

Thanks for reading this post! If you have questions on Athena engine version 3, don’t hesitate to leave a comment in the comments section.


About the authors

Blayze Stefaniak is a Senior Solutions Architect for the Technical Strategist Program supporting Executive Customer Programs in AWS Marketing. He has experience working across industries including healthcare, automotive, and public sector. He is passionate about breaking down complex situations into something practical and actionable. In his spare time, you can find Blayze listening to Star Wars audiobooks, trying to make his dogs laugh, and probably talking on mute.

Daniel Chen is a Senior Product Manager at Amazon Web Services (AWS) Athena. He has experience in Banking and Capital Market of financial service industry and works closely with enterprise customers building data lakes and analytical applications on the AWS platform. In his spare time, he loves playing tennis and ping pong.

Theo Tolv is a Senior Big Data Architect in the Athena team. He’s worked with small and big data for most of his career and often hangs out on Stack Overflow answering questions about Athena.

Jack Ye is a software engineer of the Athena Data Lake and Storage team. He is an Apache Iceberg Committer and PMC member.

Best practices for setting up Amazon Macie with AWS Organizations

Post Syndicated from Jonathan Nguyen original https://aws.amazon.com/blogs/security/best-practices-for-setting-up-amazon-macie-with-aws-organizations/

In this post, we’ll walk through the best practices to implement before you enable Amazon Macie across all of your AWS accounts within AWS Organizations.

Amazon Macie is a data classification and data protection service that uses machine learning and pattern matching to help secure your critical data in AWS. To do this, Macie first automatically provides an inventory of Amazon Simple Storage Service (Amazon S3) buckets in AWS accounts managed by Macie and identifies S3 buckets with security risks, including unencrypted buckets, publicly accessible buckets, and buckets shared with AWS accounts external to AWS Organizations. Second, Macie applies machine learning and pattern matching techniques to the buckets you select to discover, identify, and create alerts for sensitive data, such as personally identifiable information (PII). With the visibility provided by Macie, you can centrally manage your sensitive data findings across your data estate and automate and take actions on Macie findings.

By enabling Amazon Macie within AWS Organizations, you immediately start receiving the benefits of viewing your Macie policy findings and sensitive data findings from jobs that ran for member AWS accounts. When you enable Macie for member accounts, a service-linked role is created within each member AWS account. Macie uses a service-linked role (AWSServiceRoleForAmazonMacie) to monitor resources on your behalf. The service-linked role has a trust relationship with the Macie service (macie.amazonaws.com). For more information about using Macie in your AWS Organizations architecture, see the AWS Security Reference Architecture (AWS SRA).

The best practices we’ll walk through include how to create least-privilege AWS Identity and Access Management (IAM) policies for Macie-delegated administrators and for security engineers who will use Macie on a day-to-day basis. We’ll also show you how to create classification buckets, provide you with the correct resource permissions to allow the Macie service-linked role in each AWS account, and cover how to troubleshoot common issues.

IAM roles to provision for Amazon Macie

The least-privilege principle is important when managing access to sensitive data within your AWS accounts. In this section, we’ll show you how to create least-privilege IAM roles for the following personas for Macie:

  1. Data administrator
  2. Data security engineers
  3. DevOps/DevSecOps engineer
  4. Macie sensitive data findings reviewer

The personas can vary based on your organization, and this list is primarily meant to serve as an example. You will need to align the appropriate permissions to each role in order to enable Macie with the principle of least privilege. You can create your own customer managed policies after you know the specific permissions required for each persona.

Important: In general, AWS strongly recommends you limit the use of wildcards where possible. However, in some of the persona policies that follow, wildcards are necessary to accomplish the task. To implement the principle of least privilege where wildcards must be used, you should put limits on the resources that the persona can access. You can do this by adding condition keys for Macie; or if you deployed Macie by using AWS Organizations, you can add a condition for aws:ResourceOrgId.

Persona 1: Data administrator

This persona is a data administrator who is responsible for setting up and configuring Macie within AWS Organizations. To enforce separation of duties, this persona is not able to view or access Macie findings. You can perform the following steps to verify that the entity has the required permissions to enable the Macie-delegated administrator, and onboard the member AWS accounts within AWS Organizations. You can find the full procedure for each step by following the links to the Macie User Guide.

  1. Verify your permissions
  2. Designate the delegated Macie administrator account
  3. Automatically enable and add new organization accounts
  4. Enable and add existing organization accounts

It’s important to note that Macie is a Regional service. This means that the designation of a Macie administrator account is a Regional designation. A Macie administrator account in a specific AWS Region can manage Macie for member accounts only in that Region. To centrally manage Macie accounts in multiple Regions, the management account must log in to each Region where the organization uses Macie, and then designate the Macie administrator account in each of those Regions. You can use a single Macie administrator account to centrally manage up to 5,000 AWS accounts.
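
Because the designation is Regional, automation typically repeats it in each Region where Macie is used. The following is a rough boto3 sketch run from the Organizations management account; the administrator account ID and Region list are placeholders.

import boto3

MACIE_ADMIN_ACCOUNT_ID = "111122223333"
REGIONS = ["us-east-1", "eu-west-1"]

for region in REGIONS:
    macie = boto3.client("macie2", region_name=region)
    # Designates the delegated Macie administrator account for this Region.
    macie.enable_organization_admin_account(adminAccountId=MACIE_ADMIN_ACCOUNT_ID)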

In the following policy, replace <account-id> with the Macie-delegated administrator account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OrganizationsReadAccess",
            "Effect": "Allow",
            "Action": [
                "organizations:ListDelegatedAdministrators",
                "organizations:ListAccounts",
                "organizations:DescribeOrganization",
                "organizations:ListAWSServiceAccessForOrganization"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AWSServiceAccess",
            "Effect": "Allow",
            "Action": "organizations:EnableAWSServiceAccess",
            "Resource": "*",
            "Condition": {
                "StringLikeIfExists": {
                    "organizations:ServicePrincipal": "macie.amazonaws.com"
                }
            }
        },
        {
            "Sid": "RegisterDelegatedAdministrator",
            "Effect": "Allow",
            "Action": "organizations:RegisterDelegatedAdministrator",
            "Resource": "arn:*:organizations::*:<account-id>",
            "Condition": {
                "StringLikeIfExists": {
                    "organizations:ServicePrincipal": "macie.amazonaws.com"
                }
            }
        }
    ]
}

Persona 2: Data security engineer

This persona is a data security engineer who has day-to-day responsibility for reviewing Macie findings or Macie sensitive data discovery job configurations. Depending on your use case, you may need to separate this persona into two distinct personas where one is responsible to view Macie findings and the other to set Macie job configurations. To allow an IAM principal read-only permissions to view the Macie dashboard, configurations, and features, you can use the following policy. To enforce least privilege and restrict the resources to the Macie-delegated administrator, replace <region> with the AWS Region in which the delegated administrator is designated, and replace <account-id> with the Macie delegated administrator account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MacieJobConfiguration",
            "Effect": "Allow",
            "Action": [
                "macie2:GetFindingsFilter",
                "macie2:DescribeClassificationJob",
                "macie2:GetCustomDataIdentifier",
                "macie2:BatchGetCustomDataIdentifiers",
                "macie2:ListTagsForResource",
                "macie2:GetMember",
                "macie2:GetAllowList"
            ],
            "Resource": [
                "arn:aws:macie2:<region>:<account-id>:custom-data-identifier/*",
                "arn:aws:macie2:<region>:<account-id>:findings-filter/*",
                "arn:aws:macie2:<region>:<account-id>:member/*",
                "arn:aws:macie2:<region>:<account-id>:classification-job/*",
                "arn:aws:macie2:<region>:<account-id>:allow-list/*"
            ]
        },
        {
            "Sid": "MacieFindings",
            "Effect": "Allow",
            "Action": [
                "macie2:ListFindings",
                "macie2:ListClassificationJobs",
                "macie2:ListFindingsFilters",
                "macie2:GetFindings",
                "macie2:GetUsageTotals",
                "macie2:GetSensitiveDataOccurrencesAvailability",
                "macie2:GetFindingsPublicationConfiguration",
                "macie2:GetSensitiveDataOccurrences",
                "macie2:GetClassificationExportConfiguration",
                "macie2:GetUsageStatistics",
                "macie2:GetRevealConfiguration",
                "macie2:GetFindingStatistics",
                "macie2:GetBucketStatistics",
                "macie2:GetMacieSession",
                "macie2:ListMembers",
                "macie2:ListAllowLists",
                "macie2:DescribeBuckets",
                "macie2:ListCustomDataIdentifiers",
                "macie2:ListManagedDataIdentifiers",
                "macie2:SearchResources",
                "macie2:ListInvitations"
            ],
            "Resource": "*"
        }
    ]
}

Persona 3: DevOps/DevSecOps engineer

This persona is a DevOps or DevSecOps engineer who is responsible for building and maintaining applications that run on AWS resources. These application builders typically receive top-level security guidance from central security, and they are directly responsible for the security of the applications that they design, build, and operate in AWS. DevSecOps engineers might need limited additional IAM permissions to configure Macie discovery jobs, depending on how Macie will be used within AWS Organizations. To allow an IAM principal the ability to pause or stop Macie jobs, you can add the following policy. Be sure to replace <region> with the AWS Region in which the delegated administrator is designated, and replace <account-id> with the Macie delegated administrator AWS account number.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MacieUpdateJobs",
            "Effect": "Allow",
            "Action": [
                "macie2:UpdateClassificationJob",
                "macie2:DescribeClassificationJob"
            ],
            "Resource": "arn:aws:macie2:<region>:<account-id>:classification-job/*"
        },
        {
            "Sid": "MacieListJobs",
            "Effect": "Allow",
            "Action": [
                "macie2:GetClassificationExportConfiguration",
                "macie2:GetMacieSession",
                "macie2:ListClassificationJobs"
            ],
            "Resource": "*"
        }
    ]
}

Persona 4: Macie sensitive data findings reviewer

This persona is a reviewer (usually a security engineer) who is responsible for investigating the sensitive data associated with Macie findings. There are a number of ways this persona can be set up, based on your specific use case and the needs of your organization. In this section, we will describe two of the options for setting up this persona.

Option 1: Enable and use Macie to retrieve and reveal sensitive data samples from the delegated Macie account where findings are consolidated

In this option, Macie doesn’t use the Macie service-linked role for your account to perform these tasks. Instead, you use your IAM identity to locate, retrieve, encrypt, and reveal the samples for sensitive findings. You can retrieve and reveal sensitive data samples for a finding if you’re allowed to access the requisite resources and data, and you’re allowed to perform the requisite actions. All the requisite actions are logged in AWS CloudTrail. In the following policy, be sure to replace <account-id>, <region>, and <key-id> with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "MacieReveal",
            "Effect": "Allow",
"Action": [
"macie2: UpdateRevealConfiguration",
"macie2:GetRevealConfiguration
],
            "Resource": " arn:aws:macie2:*:<account-id>:*"
        },
        {
            "Sid": "KMSPermissions",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:GenerateDataKey"
],
"Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
        }

    ]
}

Option 2: Create IAM roles to review findings and objects in the same AWS account where objects are located

For a command line utility to help you investigate the sensitive data, you can use the Macie Finding Data Reveal project. The Macie Finding Data Reveal project needs permissions to invoke macie:GetFindings on the account and s3:GetObject on the specific object reported in the finding.

In the following policy, be sure to replace <DOC-EXAMPLE-BUCKET> with the values for the S3 bucket where the finding is reported; and replace <account-id>, <region>, and <key-id> with your own values. You will also need to configure the KMS key and S3 bucket resource policies to allow permissions to your IAM role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeMacieFindings",
            "Effect": "Allow",
            "Action": "macie2:GetFindings",
            "Resource": "*"
        },
        {
            "Sid": "ReportedS3Object",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": " arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*"
        },
     {
               "Sid": "KMSPermissions",
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt",
                   "kms:DescribeKey",
                   "kms:GenerateDataKey"
   ],
   "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
        }
    ]
}

If you use an IAM role in the same AWS account, you can specify permissions to access the object and encryption key by using resource policies, and you can leave off the ReportedS3Object and KMSPermissions statement ID (Sid).

Apply SCPs to restrict unauthorized changes to Macie

After you create the personas, you need to verify that the Macie configurations to manage Macie members within AWS Organizations are only updated by authorized IAM principals. The following is an example service control policy (SCP) that you can use to prevent users from disabling Macie, or from modifying Macie configurations within the organization. Make sure to replace <account-id> and <data-admin-role-name> with your own values for the authorized IAM principal.

Note: When you use SCPs within a multi-account structure, it is important to keep in mind quotas that affect AWS Organizations.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictAmazonMacie",
            "Effect": "Deny",
            "Action": [
                "macie2:DeleteMember",
                "macie2:DisableMacie",
                "macie2:DisableOrganizationAdminAccount",
                "macie2:DisassociateFromAdministratorAccount",
                "macie2:DisassociateMember",
                "macie2:UpdateMacieSession",
                "macie2:UpdateMemberSession"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalArn": [
                        "arn:aws:iam::<account-id>:role/<data-admin-role-name>"
                    ]
                }
            }
        }
    ]
}
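
If you manage SCPs from the command line, the following is a minimal sketch of creating and attaching this policy; the file name, policy ID, and target ID are placeholders, and the commands typically run from the organization's management account.

# Hypothetical file name; save the SCP above as restrict-macie-scp.json first
aws organizations create-policy \
  --name RestrictAmazonMacie \
  --description "Prevent unauthorized changes to Amazon Macie" \
  --type SERVICE_CONTROL_POLICY \
  --content file://restrict-macie-scp.json

# Attach the policy to an organizational unit or account (placeholder IDs)
aws organizations attach-policy \
  --policy-id <policy-id> \
  --target-id <organizational-unit-or-account-id>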

Allow the Macie service-linked IAM role to scan S3 objects

When Macie analyzes objects, it needs permission to decrypt encrypted files. This is important so that you don’t have blind spots in your data protection initiatives.

Before you run a Macie job against S3 objects, make sure that existing KMS keys that are used to encrypt the S3 buckets also grant the Macie service-linked IAM role in the AWS account the necessary permissions to decrypt the S3 objects. For more information, see Service-linked roles for Amazon Macie. To confirm that Macie can scan encrypted objects, the associated KMS key resource policies must allow the Macie service-linked role to use the KMS key to decrypt objects.
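
One quick way to review an existing key policy before you run a job is the AWS KMS get-key-policy call; the key ID below is a placeholder, and default is the only key policy name that AWS KMS supports.

# Print the key policy so you can confirm the Macie service-linked role is allowed kms:Decrypt
aws kms get-key-policy \
  --key-id <key-id> \
  --policy-name default \
  --output text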

Furthermore, depending on the object’s type of encryption, Macie might not be able to fully scan the object. The following list summarizes the types of object encryption and the ability Macie has to scan the object. For more information, see Macie supported encryption types.

  • Client-side encryption – Macie cannot decrypt and analyze the object. Macie can only store and report metadata for the object.
  • Server-side encryption with Amazon S3 managed keys (SSE-S3) – Macie can decrypt and analyze the object.
  • Server-side encryption with AWS managed AWS KMS encryption (AWS-KMS) – Macie can decrypt and analyze the object.
  • Server-side encryption with customer managed AWS KMS encryption (SSE-KMS) – Macie can decrypt and analyze the object if Macie is authorized to use the KMS key. Otherwise, Macie can only store and report metadata for the object.
  • Server-side encryption with customer provided keys (SSE-C) – Macie cannot decrypt and analyze the object. Macie can only store and report metadata for the object.

Investigating failed Macie scans of S3 objects

If Macie is unable to scan an S3 object, you can view the logs in the S3 bucket that is configured in the Macie delegated administrator account for sensitive data discovery results, or in centralized AWS CloudTrail logs. The following are common reasons why Macie might not be able to scan S3 objects, along with the steps to remediate each issue.
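
As a starting point for that investigation, the following sketch uses the CloudTrail LookupEvents API to pull recent KMS Decrypt events and filter them client-side for the Macie service-linked role; the time window is illustrative and assumes CloudTrail is enabled in the account and Region you query.

# Look up recent Decrypt events and keep only those involving the Macie service-linked role
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=Decrypt \
  --start-time 2022-12-01T00:00:00Z \
  --end-time 2022-12-31T23:59:59Z \
  --max-results 50 \
  --query "Events[?contains(CloudTrailEvent, 'AWSServiceRoleForAmazonMacie')].CloudTrailEvent"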

KMS implicit deny

The Macie service-linked role (AWSServiceRoleForAmazonMacie) is not authorized to decrypt S3 objects in Macie member accounts, because no resource-based policy allows the kms:Decrypt action. Check for the following error message in AWS CloudTrail if the AWS KMS resource-based policy implicitly denies the Macie service-linked role. Your error message will show <account-id> and <region> as your own values.

sourceIPAddress: "macie.amazonaws.com" and eventSource : "kms.amazonaws.com" and eventName : "Decrypt" and errorCode : "AccessDenied" Filter the results by error message: “User: arn:aws:sts::<account-id>:assumed-role/AWSServiceRoleForAmazonMacie/classifier-content-fetcher is not authorized to perform: kms:Decrypt on resource: arn:aws:kms:<region>:key/key-id because no resource-based policy allows the kms:Decrypt action…”

In order to remediate a KMS implicit deny error for a customer-managed key, add the following to the customer managed key policy. Be sure to replace <account_name> with your own value.

{
    "Sid": "Allow Macie Decrypt S3",
    "Effect": "Allow",
    "Principal": {
        "AWS": "*"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:PrincipalArn": "arn:aws:iam::<account_name>:role/aws-service-role/macie.amazonaws.com/AWSServiceRoleForAmazonMacie"
        }
    }
}

KMS explicit deny

The Macie service-linked role (AWSServiceRoleForAmazonMacie) is not authorized to decrypt S3 objects in Macie member accounts, because resource-based policies explicitly deny the kms:Decrypt action for the Macie service-linked role. Check for the following error message in AWS CloudTrail if the AWS KMS resource-based policy explicitly denies the Macie service-linked role. Your error message will show <account_name> and <region> as your own values.

sourceIPAddress : "macie.amazonaws.com" and eventSource : "kms.amazonaws.com" and eventName : "Decrypt" and errorCode : "AccessDenied" Filter the results by error message:
“User:arn:aws:sts::<account_name>:assumed-role/AWSServiceRoleForAmazonMacie/classifier-content-fetcher is not authorized to perform: kms:Decrypt on resource: arn:aws:kms:<region>:key/key-id with an explicit deny in resource-based policy…”

In order to remediate a KMS explicit deny error, update the key policy so that the deny statement no longer applies to the Macie service-linked role, and confirm that the role is allowed to use the kms:Decrypt and kms:DescribeKey actions. The following is an example of a deny statement that blocks the Macie service-linked role. Be sure to replace <account_name> with your own value.

{
    "Sid": "Deny Macie Decrypt S3",
    "Effect": "Deny",
    "Principal": {
        "AWS": "*"
    },
    "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
    ],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "aws:PrincipalArn": "arn:aws:iam::<account_name>:role/aws-service-role/macie.amazonaws.com/AWSServiceRoleForAmazonMacie"
        }
    }
}

S3 explicit deny

The Macie service-linked role (AWSServiceRoleForAmazonMacie) is explicitly denied in the S3 bucket policy. Check for the following error messages in AWS CloudTrail for S3 explicit deny.

  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketEncryption" and errorCode: "ServerSideEncryptionConfigurationNotFoundError" and errorMessage: "The server side encryption configuration was not found"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketReplication" and errorCode: "ReplicationConfigurationNotFoundError" and errorMessage: "The replication configuration was not found"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketTagging" and errorCode: "NoSuchTagSet" and errorMessage: "The TagSet does not exist"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketAcl" and responseElements: "null"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketPublicAccessBlock" and responseElements: "null"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketLocation" and responseElements: "null"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketVersioning" and responseElements: "null"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketPolicy" and errorCode: "NoSuchBucketPolicy" and errorMessage: "The bucket policy does not exist"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketEncryption" and responseElements: "null"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketPolicy" and responseElements: "null"

Note: Nearly all S3 explicit deny and S3 object ownership error messages have the same event names. See the Ensure S3 and KMS resource policy compliance section in this post to view the S3 object ownership setting.

Macie cannot decrypt and analyze S3 objects if there is an explicit deny in the S3 bucket policy. The following is an example of an S3 bucket policy that explicitly denies the Macie service-linked role. Be sure to replace <DOC-EXAMPLE-BUCKET> and <account_id> with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ExplicitDeny",
            "Effect": "Deny",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObject",
                "s3:GetObjectTagging"
            ],
            "Resource": [
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*",
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::<account_id>:role/aws-service-role/macie.amazonaws.com/AWSServiceRoleForAmazonMacie"
                }
            }
        }
    ]
}

Macie can decrypt and analyze S3 objects if there is no explicit deny in the S3 bucket policy. The following is an example S3 bucket policy statement that explicitly allows the Macie service-linked role access to your S3 bucket. Be sure to replace <DOC-EXAMPLE-BUCKET> and <account-id> with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Allow Macie S3 Read",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetReplicationConfiguration",
                "s3:GetObject*",
                "s3:GetLifecycleConfiguration",
                "s3:GetEncryptionConfiguration",
                "s3:GetBucket*"
            ],
            "Resource": [
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>/*",
                "arn:aws:s3:::<DOC-EXAMPLE-BUCKET>"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::<account-id>:role/aws-service-role/macie.amazonaws.com/AWSServiceRoleForAmazonMacie“
                }
            }
        }
    ]
  }

S3 Object Ownership

Macie is unable to scan S3 objects that are owned by another AWS account because of access control list (ACL) settings and permissions. Event names are identical for both S3 explicit deny errors and S3 Object Ownership errors; S3 explicit deny also produces the following two additional events.

  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketEncryption" and errorCode: "ServerSideEncryptionConfigurationNotFoundError" and errorMessage: "The server side encryption configuration was not found"
  • userIdentity.sessionContext.sessionIssuer.userName: "AWSServiceRoleForAmazonMacie" and eventSource: "s3.amazonaws.com" and eventName: "GetBucketPolicy" and errorCode: "NoSuchBucketPolicy" and errorMessage: "The bucket policy does not exist"

The S3 Object Ownership feature has the following three settings that you can use to control ownership of objects that are uploaded to your bucket, and to disable or enable ACLs. We recommend that you disable ACLs on your S3 buckets.

  • Bucket owner enforced (recommended) – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect permissions to data in the S3 bucket. The bucket uses policies to define access control.
  • Bucket owner preferred – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the bucket-owner-full-control canned ACL.
  • Object writer (default) – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.

In order to remediate an S3 object ownership issue, there are two options available:

Option 1: Change object ownership settings to bucket owner enforced (recommended). When you disable ACLs, it changes the ownership of existing objects to the bucket owner account. You should consider the following scenarios prior to changing the S3 Object Ownership setting.

Consider this scenario: S3 objects in the source bucket (account A) are encrypted with a customer managed key, and you copy them to a destination bucket (account B) that has the Object writer ownership setting and its own customer managed key. If you copy the objects from the source bucket (account A) to the destination bucket (account B) without specifying a customer managed key in the copy command, and the destination bucket’s (account B) ownership setting is Bucket owner enforced (ACLs disabled), then ownership of the objects changes to the bucket owner, and the objects’ server-side encryption uses the encryption settings of the destination bucket (account B).

However, if you specify a customer managed key during the S3 copy command, the objects’ server-side encryption remains with the source bucket’s (account A) customer managed key.
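
If you decide to proceed with Option 1 after weighing these scenarios, the following is a minimal sketch of the CLI call that switches a bucket to the Bucket owner enforced setting; the bucket name is a placeholder.

# Disable ACLs by enforcing bucket-owner ownership on the bucket
aws s3api put-bucket-ownership-controls \
  --bucket <DOC-EXAMPLE-BUCKET> \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'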

Option 2: Use S3 Batch Operations to copy objects and set ACLs. Changing the object ownership setting to Bucket owner preferred applies only to new objects, not to existing objects. You can use a one-time batch operation to set ACLs on existing objects.
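
For a single existing object, the ACL change that such a batch job applies at scale looks like the following sketch; the bucket and key names are placeholders, and S3 Batch Operations runs the same PutObjectAcl operation against every object listed in your manifest.

# Grant the bucket owner full control of one existing object
aws s3api put-object-acl \
  --bucket <DOC-EXAMPLE-BUCKET> \
  --key <object-key> \
  --acl bucket-owner-full-control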

Ensure S3 and KMS resource policy compliance

Another best practice to follow when you enable Macie with AWS Organizations is to use Macie to verify your organization’s policy compliance for S3 objects and KMS resources. In the Macie delegated administrator account, the summary page provides an overview of S3 data security and access control across your organization in AWS Organizations. Users can view information about the S3 security posture, such as whether S3 buckets are public, whether server-side encryption is enabled on S3 buckets, and whether S3 buckets are shared inside or outside of your organization. Data privacy and compliance groups get organization-wide visibility across their accounts and buckets.

Your organization is responsible for introducing guardrails based on your organization’s security policies. To automate compliance checks for S3 objects and KMS resources, make sure to update your continuous integration and continuous deployment (CI/CD) pipeline. This will allow you to set up continuous compliance checks for the Macie service-linked role by using tools like CloudFormation Guard or Open Policy Agent.

To check S3 Object Ownership settings, you can use AWS Command Line Interface (AWS CLI) commands, because Macie and AWS Config currently do not report S3 object ownership as part of the resource configuration. Run the following command in each AWS account within AWS Organizations, replacing <DOC-EXAMPLE-BUCKET> with your own value. You can script this to list the AWS accounts in your organization, list the S3 buckets in each account, and then get each bucket’s ownership configuration, as shown in the sketch after the command.

aws s3api get-bucket-ownership-controls --bucket <DOC-EXAMPLE-BUCKET>
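
The following is a minimal bash sketch of that scripting idea, limited to the buckets in the current account; extending it across the organization with aws organizations list-accounts and cross-account role assumption depends on your environment and is left as an assumption.

# List the buckets in this account and print each bucket's ObjectOwnership setting
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  echo "Bucket: $bucket"
  aws s3api get-bucket-ownership-controls --bucket "$bucket" \
    --query 'OwnershipControls.Rules[0].ObjectOwnership' --output text 2>/dev/null \
    || echo "  No ownership controls configured"
done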

After checking the bucket ownership settings, you can run the following AWS CLI command to view the ownership of objects in the bucket, making sure to replace <DOC-EXAMPLE-BUCKET> (and the 'CURRENT-ID' placeholder in the query) with your own values.

aws s3api list-objects-v2 --bucket <DOC-EXAMPLE-BUCKET> --fetch-owner --query "Contents[?Owner.ID!='CURRENT-ID'].{Key:Key,Owner:Owner.DisplayName}" --output text

Additional Macie best practices

You should also consider the following recommendations before you enable Macie, so that you can manage Macie findings and member accounts efficiently at scale:

  • Enable Macie using AWS Organizations to manage multiple accounts and to govern your environment as you grow and scale your AWS resources.
  • Enable Macie in all Regions where you have workloads with S3 buckets.
  • Enable Security Hub and Amazon Macie integration to send Macie findings to Security Hub (enabled by default).
  • Enable Security Hub Region aggregation to consolidate Macie findings in a single Region.
  • Ingest Macie logs from Amazon CloudWatch Logs to enable custom alerting for Macie sensitive data discovery job results.
  • In Macie settings, turn on the Auto-enable setting. That way, Macie will automatically be enabled for new accounts when the accounts are added to your organization in AWS Organizations.
  • Store sensitive data discovery results in an S3 bucket, with default encryption enabled, after you have configured your Macie delegated administrator account.

Conclusion

In this blog post, we walked you through the best practices to implement before you enable Amazon Macie across your AWS accounts within AWS Organizations. In order to efficiently use Macie within AWS Organizations, it is important to understand why failures can occur, how to investigate the logs, and how to remediate the issues for both existing and future resources.

Now that you have a better understanding of how to prepare for using Macie, try running a Macie sensitive data discovery job. The next aspect to start thinking about is how to review and respond to Macie findings. You can deploy another solution to automatically send notifications with Slack when Macie findings are generated.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the Amazon Macie forum.

Want more AWS Security news? Follow us on Twitter.

Jonathan Nguyen

Jonathan Nguyen

Jonathan is a Shared Delivery Team Senior Security Consultant at AWS. His background is in AWS Security with a focus on threat detection and incident response. Today, he helps enterprise customers develop a comprehensive security strategy and deploy security solutions at scale, and he trains customers on AWS Security best practices.

Ajay Rawat

Ajay Rawat

Ajay is a Security Consultant in a shared delivery team at AWS. He is a technology enthusiast who enjoys working with customers to solve their technical challenges and to improve their security posture in the cloud.

Customize Amazon QuickSight dashboards with the new bookmarks functionality

Post Syndicated from Mei Qu original https://aws.amazon.com/blogs/big-data/customize-amazon-quicksight-dashboards-with-the-new-bookmarks-functionality/

Amazon QuickSight users can now add bookmarks to dashboards, saving customized dashboard preferences into a list of bookmarks for easy one-click access to specific views of the dashboard, without having to manually change multiple filters and parameters every time. Combined with the “Share this view” functionality, you can now also share your bookmark views with other readers for easy collaboration and discussion.

In this post, we introduce the bookmark functionality and its capabilities, and demonstrate typical use cases. We also discuss several scenarios extending the usage of bookmarks for sharing and presentation.

Create a bookmark

Open the published dashboard that you want to view and make changes to the filters, controls, or select the sheet that you want. For example, you can filter to the Region that interests you, or you can select a specific date range using a sheet control on the dashboard.

To create a bookmark of this view, choose the bookmark icon, then choose Add bookmark.

Enter a name, then choose Save.

The bookmark is saved, and the dashboard name on the banner updates with the bookmark name.

You can return to the original dashboard view that the author published at any time by going back to the bookmark icon and choosing Original dashboard.

Rename or delete a bookmark

To rename or delete a bookmark, on the bookmark pane, choose the context menu (three vertical dots) and choose Rename or Delete.

Although you can make any of these edits to bookmarks you have created, the Original dashboard view can’t be renamed, deleted, or updated, because it’s what the author has published.

Update a bookmark

At any time, you can change a bookmark dashboard view and update the bookmark to always reflect those changes.

After you make your edits, on the banner, you can see Modified next to the bookmark you’re currently editing.

To save your updates, on the bookmarks pane, choose the context menu and choose Update.

Make a bookmark the default view

After you create a bookmark, you can set it as the default view of the dashboard that you see when you open the dashboard in a new session. This doesn’t affect anyone else’s view of the dashboard. You can also set the Original dashboard that the author published as the default view. When a default view is set, any time that you open the dashboard, the bookmark view is presented to you, regardless of the changes you made during your last session. However, if no default bookmark has been set, QuickSight will persist your current filter selections when you leave the dashboard. This way, readers can still pick up where they left off and don’t have to re-select filters.

To do this, in the Bookmarks pane, choose the context menu (the three dots) for the bookmark that you want to set as your default view, then choose Set as default.

Share a bookmark

After you create a bookmark, you can share a URL link for the view with others who have permission to view the dashboard. They can then save that view as their own bookmark. The shared view is a snapshot of your bookmark, meaning that if you make any updates to your bookmark after generating the URL link, the shared view doesn’t change.

To do this, choose the bookmark that you want to share so that the dashboard updates to that view. Choose the share icon, then choose Share this view. You can then copy the generated URL to share it with others.

Presentation with bookmarks

One extended use case of the new bookmarks feature is the ability to create a presentation through visuals in a much more elegant fashion. Previously, you may have had to change multiple filters and controls before arriving at your next presentation. Now you can save each presentation as a bookmark beforehand and just go through each bookmark like a slideshow. By creating a snapshot for each of the configurations you want to show, you create a smoother and cleaner experience for your audience.

For example, let’s prepare a presentation through bookmarks using a Restaurant Operations dashboard. We can start with the global view of all the states and their related KPIs.

Best vs. worst performing restaurant

We’re interested in analyzing the difference between the best performing and worst performing store based on revenue. If we had to filter this dashboard on the fly, it would be much more time-consuming because we would have to manually enter both store names into the filter. However, with bookmarks, we can pre-filter to the two addresses, save the bookmark, and be automatically directed to the desired stores when we select it.

Worst performing restaurant analysis

When comparing the best and worst performing stores, we see that the worst performing store (10 Resort Plz) had a below average revenue amount in December 2021. We’re interested in seeing how we can better boost sales around the holiday season and if there is something the owner lacked in that month. We can create a bookmark that directs us to the Owner View with the filters selected to December 2021 and the address and city pre-filtered.

Pennsylvania December 2021 analysis

After looking at this bookmark, we found that in the month of December, the call for support score in the restaurant category was high. Additionally, in the week of December 19, 2021 product sales were at the monthly low. We want to see how these KPIs compare to other stores in the same state for that week. Is it just a seasonal trend, or do we need to look even deeper and take action? Again, we can create a bookmark that gives us this comparison on a state level.

From the last bookmark, we can see that the other store in the same state (Pennsylvania) also had lower than average product sales in the week of December 19, 2021. For the purpose of this demo, we can go ahead and stop here, but we could create even more bookmarks to help answer additional questions, such as if we see this trend across other cities and states.

With these three bookmarks, we have created a top-down presentation looking at the worst performing store and its relevant metrics and comparisons. If we wanted to present this, it would be as simple as cycling through the bookmarks we’ve created to put together a cohesive presentation instead of having distracting onscreen filtering.

Conclusion

In this post, we discussed how to create, update, rename, and delete bookmarks, how to set a bookmark as the default view, and how to share bookmarks. We also demonstrated an extended use case: building a presentation with bookmarks. The new bookmarks functionality is now generally available in all supported QuickSight Regions.

We’re looking forward to your feedback and stories on how you apply bookmarks for your business needs.

Did you know that you can also ask questions of your data in natural language with QuickSight Q? Watch this video to learn more.


About the Authors

Mei Qu is a Business Intelligence Engineer on the AWS QuickSight ProServe Team. She helps customers and partners leverage their data to develop key business insights and monitor ongoing initiatives. She also develops QuickSight proof of concept projects, delivers technical workshops, and supports implementation projects.

Emily Zhu is a Senior Product Manager at Amazon QuickSight, AWS’s cloud-native, fully managed SaaS BI service. She leads the development of QuickSight analytics and query capabilities. Before joining AWS, she worked for several years in the Amazon Prime Air drone delivery program and at The Boeing Company as a senior strategist. Emily is passionate about the potential of cloud-based BI solutions and looks forward to helping customers advance their data-driven strategy making.

AWS achieves its second ISMAP authorization in Japan

Post Syndicated from Hidetoshi Takeuchi original https://aws.amazon.com/blogs/security/aws-achieves-its-second-ismap-authorization-in-japan/

Earning and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). Our customers’ security requirements drive the scope and portfolio of the compliance reports, attestations, and certifications we pursue. We’re excited to announce that AWS has achieved authorization under the Information System Security Management and Assessment Program (ISMAP) program, effective from April 1, 2022 to March 31, 2023. The authorization scope covers a total of 145 AWS services (an increase of 22 services over the previous authorization) across 22 AWS Regions, including the Asia Pacific (Tokyo) Region and the Asia Pacific (Osaka) Region. This is the second time AWS has undergone an assessment since ISMAP was first published by the ISMAP steering committee in March 2020.

ISMAP is a Japanese government program for assessing the security of public cloud services. The purpose of ISMAP is to provide a common set of security standards for cloud service providers (CSPs) to comply with as a baseline requirement for government procurement. ISMAP introduces security requirements for cloud domains, practices, and procedures that CSPs must implement. CSPs must engage with an ISMAP-approved third-party assessor to assess compliance with the ISMAP security requirements in order to apply as an ISMAP-registered CSP. The ISMAP program will evaluate the security of each CSP and register those that satisfy the Japanese government’s security requirements. Upon successful ISMAP registration of CSPs, government procurement departments and agencies can accelerate their engagement with the registered CSPs and contribute to the smooth introduction of cloud services in government information systems.

The achievement of this authorization demonstrates the proactive approach AWS has taken to help customers meet compliance requirements set by the Japanese government and to deliver secure AWS services to our customers. Service providers and customers of AWS can use the ISMAP authorization of AWS services to support their own ISMAP authorization programs. The full list of 145 ISMAP-authorized AWS services is available on the AWS Services in Scope by Compliance Program webpage, and you can also use the ISMAP Customer Package on AWS Artifact. You can confirm the AWS ISMAP authorization status and find detailed scope information on the ISMAP Portal.

As always, we are committed to bringing new services and Regions into the scope of our ISMAP program, based on your business needs. If you have any questions, don’t hesitate to contact your AWS Account Manager.

If you have feedback about this post, submit comments in the Comments section below.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Hidetoshi Takeuchi

Hidetoshi Takeuchi

Hidetoshi is the Audit Program Manager for the Asia Pacific Region, leading Japan security certification and authorization programs. Hidetoshi has worked in information technology security, risk management, security assurance, and technology audits for the past 25 years. He is passionate about delivering programs that build customers’ trust and provide them with assurance on cloud security.