Post Syndicated from original https://xkcd.com/2523/
A Conversation About the Feature Film “Mass”
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=STf8Ig0YPk4
The Future of Voting Rights
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=-IGtsPFI3BI
A Conversation About “The Many Saints of Newark”
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=pniWB-qW5Mk
Ratiu: A tale of two toolchains and glibc
Post Syndicated from original https://lwn.net/Articles/871451/rss
Adrian Ratiu writes
on the Collabora blog
about the challenges that face developers trying to build the GNU C
Library with the LLVM compiler.
Is it worth it to fix glibc (and other projects which support only
GCC) to build with LLVM? Is it better to just replace them with
alternatives already supporting LLVM? Is it best to use both GCC
and LLVM, each for their respective supported projects? This post is an exploration starting from these questions but does not attempt to give any definite answers. The intent here is not to be divisive and controversial, but to raise awareness by describing parts of the current status quo and to encourage collaboration.
Bottomley: Linux Plumbers Conference Matrix and BBB integration
Post Syndicated from original https://lwn.net/Articles/871450/rss
James Bottomley explains
how the integration of Matrix and BigBlueButton was done for the
just-concluded Linux Plumbers Conference.
One thing that emerged from our initial disaster with Matrix on the
first day is that we failed to learn from the experiences of other
open source conferences (i.e. FOSDEM, which used Matrix and ran
into the same problems). So, an object of this post is to document
for posterity what we did and how to repeat it.
[$] User-space interrupts
Post Syndicated from original https://lwn.net/Articles/871113/rss
The term “interrupt” brings to mind a signal that originates in the
hardware and which is handled in the kernel; even software interrupts are a
kernel concept. But there is, it seems, a use case for enabling user-space
processes to send interrupts directly to each other. An upcoming Intel
processor generation includes support for this capability; at the 2021 Linux Plumbers Conference,
Sohil Mehta ran a
Kernel-Summit session on how Linux might support that feature.
Field Notes: How to Scale Your Networks on Amazon Web Services
Post Syndicated from Androski Spicer original https://aws.amazon.com/blogs/architecture/field-notes-how-to-scale-your-networks-on-amazon-web-services/
As AWS adoption increases throughout an organization, the number of networks and virtual private clouds (VPCs) needed to support them also increases. Customers can see growth to tens or hundreds of VPCs, or, in the case of enterprises, thousands.
Generally, this increase in VPCs is driven by the need to:
- Simplify routing, connectivity, and isolation boundaries
- Reduce network infrastructure cost
- Reduce management overhead
Overview of solution
This blog post provides the guidance customers need to achieve their desired outcomes, through a series of real-world scenarios customers encounter on their journey to building a well-architected network environment on AWS. These challenges range from the need to centralize networking resources and reduce complexity and cost, to implementing security techniques that help workloads meet industry- and customer-specific operational compliance requirements.
The scenarios presented here form the foundation and starting point from which the intended guidance is provided. These scenarios start as simple, but gradually increase in complexity. Each scenario tackles different questions customers ask AWS solutions architects, service teams, professional services, and other AWS professionals, on a daily basis.
Some of these questions are:
- What does centralized DNS look like on AWS, and how should I approach and implement it?
- How do I reduce the cost and complexity associated with Amazon Virtual Private Cloud (Amazon VPC) interface endpoints for AWS services by centralizing usage that is spread across many AWS accounts?
- What does centralized packet inspection look like on AWS, and how should we approach it?
This blog post will answer these questions, and more.
Prerequisites
This blog post assumes that the reader has some understanding of AWS networking basics outlined in the blog post One to Many: Evolving VPC Design. It also assumes that the reader understands industry-wide networking basics.
Simplify routing, connectivity, and isolation boundaries
Simplification in routing starts with selecting the correct layer 3 technology. In the past, customers used a combination of VPC peering, Virtual Gateway configurations, and the Transit VPC Solution to achieve inter–VPC routing, and routing to on-premises resources. These solutions presented challenges in configuration and management complexity, as well as security and scaling.
To solve these challenges, AWS introduced AWS Transit Gateway. Transit Gateway is a regional virtual router to which customers can attach their VPCs, site-to-site virtual private networks (VPNs), Transit Gateway Connect attachments, AWS Direct Connect gateways, and cross-region transit gateway peering connections, and between which they can configure routing. Transit Gateway scales up to 5,000 attachments, so a customer can start with one VPC attachment and scale to thousands of attachments across thousands of accounts. Each VPC, Direct Connect gateway, and peer transit gateway connection receives up to 50 Gbps of bandwidth.
Routing happens at layer 3 through a transit gateway. If route propagation and association are enabled at transit gateway creation time, AWS creates a transit gateway with a default route table to which all attachments are automatically associated and into which their routes are automatically propagated. This creates a network where all attachments can route to each other.
Adding VPN or Direct Connect gateway attachments to on-premises networks will allow all attached VPCs and networks to easily route to on-premises networks. Some customers require isolation boundaries between routing domains. This can be achieved with Transit Gateway.
Let’s review a use case where a customer with two spoke VPCs and a shared services VPC (shared-services-vpc-A) would like to:
- Allow all spoke VPCs to access the shared services VPC
- Disallow access between spoke VPCs
To achieve this, the customer needs to:
- Create a transit gateway with the name tgw-A and two route tables with the names spoke-tgw-route-table and shared-services-tgw-route-table.
  - When creating the transit gateway, disable automatic association and propagation to the default route table.
  - Enable equal-cost multi-path routing (ECMP) and use a unique Border Gateway Protocol (BGP) autonomous system number (ASN).
- Associate all spoke VPCs with the spoke-tgw-route-table.
  - Do not propagate their routes to this route table.
  - Propagate their routes to the shared-services-tgw-route-table.
- Associate the shared services VPC with the shared-services-tgw-route-table, and propagate (or statically add) its routes to the spoke-tgw-route-table.
- Add a default or summarized route, with the transit gateway as the next hop, to the route tables of the shared services and spoke VPCs.
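As a rough sketch, the steps above map onto AWS CLI calls like the following; the transit gateway, route table, and attachment IDs (tgw-..., tgw-rtb-..., tgw-attach-...) are placeholders, and the ASN is only an example value:

```shell
# Create the transit gateway with default route table association and
# propagation disabled, ECMP support for VPNs enabled, and a unique ASN
aws ec2 create-transit-gateway \
    --description tgw-A \
    --options AmazonSideAsn=64512,DefaultRouteTableAssociation=disable,DefaultRouteTablePropagation=disable,VpnEcmpSupport=enable

# Create the two route tables
aws ec2 create-transit-gateway-route-table --transit-gateway-id tgw-EXAMPLE \
    --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=spoke-tgw-route-table}]'
aws ec2 create-transit-gateway-route-table --transit-gateway-id tgw-EXAMPLE \
    --tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=shared-services-tgw-route-table}]'

# For each spoke VPC attachment: associate it with the spoke route table,
# and propagate its routes only into the shared services route table
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-SPOKE \
    --transit-gateway-attachment-id tgw-attach-SPOKE-VPC-1
aws ec2 enable-transit-gateway-route-table-propagation \
    --transit-gateway-route-table-id tgw-rtb-SHAREDSVC \
    --transit-gateway-attachment-id tgw-attach-SPOKE-VPC-1
```

Because the spoke attachments are associated with a route table that only contains the shared services routes, spoke-to-spoke traffic has no route and is dropped, which yields the required isolation.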
After successfully deploying this configuration, the customer decides to:
- Allow all VPCs access to on-premises resources through AWS site-to-site VPNs.
- Require an aggregated bandwidth of 10 Gbps across this VPN.
To achieve this, the customer needs to:
- Create four site-to-site VPNs between the transit gateway and the on-premises routers, with BGP as the routing protocol.
  - An AWS site-to-site VPN has two VPN tunnels, and each tunnel has a dedicated bandwidth of 1.25 Gbps; with ECMP across four VPNs, the tunnels provide the required 10 Gbps in aggregate.
  - Read more on how to configure ECMP for site-to-site VPNs.
- Create a third transit gateway route table with the name WAN-connections-route-table.
- Associate all four VPNs with the WAN-connections-route-table.
- Propagate the routes from the spoke and shared services VPCs to the WAN-connections-route-table.
- Propagate the VPN attachment routes to the spoke-tgw-route-table and shared-services-tgw-route-table.
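A sketch of the VPN side with the AWS CLI; the customer gateway, transit gateway, route table, and attachment IDs are placeholders, and the first command would be repeated against each of the four customer gateway endpoints:

```shell
# One of the four BGP-based site-to-site VPNs attached to the transit gateway
# (dynamic/BGP routing is the default when StaticRoutesOnly is not set)
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-EXAMPLE \
    --transit-gateway-id tgw-EXAMPLE

# Associate the resulting VPN attachment with the WAN route table
aws ec2 associate-transit-gateway-route-table \
    --transit-gateway-route-table-id tgw-rtb-WAN \
    --transit-gateway-attachment-id tgw-attach-VPN-1
```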
Building on this progress, the customer has decided to deploy another transit gateway and shared services VPC in another AWS Region. They would like both shared service VPCs to be connected.
To accomplish these requirements, the customer needs to:
- Create a transit gateway with the name tgw-B in the new region.
- Create a transit gateway peering connection between tgw-A and tgw-B. Ensure peering requests are accepted.
- Statically add a route to the shared-services-tgw-route-table in region A that uses the transit gateway peering attachment as the next hop for traffic destined to the VPC Classless Inter-Domain Routing (CIDR) range of shared-services-vpc-B. Then, in region B, add a route to the shared-services-tgw-route-table that uses the transit gateway peering attachment as the next hop for traffic destined to the VPC CIDR range of shared-services-vpc-A.
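The static cross-region routes can be added with create-transit-gateway-route; the route table and peering attachment IDs, the CIDR ranges, and the regions below are all placeholders:

```shell
# In region A: send traffic for shared-services-vpc-B over the peering attachment
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-SHAREDSVC-A \
    --destination-cidr-block 10.2.0.0/16 \
    --transit-gateway-attachment-id tgw-attach-PEERING \
    --region us-east-1

# In region B: the mirror route back to shared-services-vpc-A
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-SHAREDSVC-B \
    --destination-cidr-block 10.1.0.0/16 \
    --transit-gateway-attachment-id tgw-attach-PEERING \
    --region eu-west-1
```

Note that transit gateway peering attachments do not propagate routes dynamically, which is why both sides need a static route.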
Reduce network infrastructure cost
It is important to design your network to eliminate unnecessary complexity and management overhead, and to optimize cost. To achieve this, use centralization. Instead of creating the network infrastructure that every VPC needs inside each VPC, deploy these resources in a shared services VPC and share them throughout your entire network. The infrastructure is then created only once, which reduces both cost and management overhead.
Some VPC components that can be centralized are network address translation (NAT) gateways, VPC interface endpoints, and AWS Network Firewall. Third-party firewalls can also be centralized.
Let’s take a look at a few use cases that build on the previous use cases.
The customer has made the decision to allow access to AWS Key Management Service (AWS KMS) and AWS Secrets Manager from their VPCs.
To reduce the proliferation of cost, management overhead, and complexity that can occur when working with this VPC feature, the customer should centralize their VPC interface endpoints.
To centralize these endpoints, the customer should:
- Deploy VPC interface endpoints for AWS KMS and Secrets Manager inside shared-services-vpc-A and shared-services-vpc-B, with Private DNS disabled on each endpoint.
- Use the default AWS DNS names for AWS KMS and Secrets Manager to create an Amazon Route 53 private hosted zone (PHZ) for each of these services. These are:
  - kms.<region>.amazonaws.com
  - secretsmanager.<region>.amazonaws.com
- Authorize each spoke VPC to associate with the PHZ in its respective region. This can be done from the AWS Command Line Interface (AWS CLI) with the command aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id> --region <AWS-REGION>.
- Create an A record in each PHZ. In the creation process, for the Route to option, select the VPC Endpoint Alias and add the respective VPC interface endpoint DNS hostname that is not Availability Zone specific (for example, vpce-0073b71485b9ad255-mu7cd69m.ssm.ap-south-1.vpce.amazonaws.com).
- Associate each spoke VPC with the available PHZs, using the CLI command aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id> --region <AWS-REGION>.
This concludes the configuration for centralized VPC interface endpoints for AWS KMS and Secrets Manager. You can learn more about cross-account PHZ association configuration.
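Putting the two CLI steps together, the cross-account PHZ association is a two-step handshake; the hosted zone ID, VPC ID, and region below are placeholders:

```shell
# Step 1, run in the shared services account that owns the PHZ:
# authorize the spoke VPC to associate with the zone
aws route53 create-vpc-association-authorization \
    --hosted-zone-id Z0123456789EXAMPLE \
    --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123EXAMPLE

# Step 2, run in the spoke account that owns the VPC:
# complete the association
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0123456789EXAMPLE \
    --vpc VPCRegion=eu-west-1,VPCId=vpc-0abc123EXAMPLE
```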
After successfully implementing centralized VPC interface endpoints, the customer has decided to centralize:
- Internet access.
- Packet inspection for East-West and North-South internet traffic using a pair of firewalls that support the Geneve protocol.
To achieve this, the customer should use the AWS Gateway Load Balancer (GWLB), Amazon VPC endpoint services, GWLB endpoints, and transit gateway route table configurations.
To accomplish these centralization requirements, the customer should create:
- A VPC with the name security-egress VPC.
- A GWLB, and an Auto Scaling group with at least two instances of the customer's firewall, evenly distributed across private subnets in different Availability Zones.
- A target group for use with the GWLB, with the Auto Scaling group associated to it.
- An AWS endpoint service using the GWLB as the entry point, then interface endpoints for this endpoint service inside the same set of private subnets (or a dedicated /28 set of subnets for the interface endpoints).
- Two NAT gateways spread across two public subnets in different Availability Zones.
- A transit gateway attachment request from the security-egress VPC, ensuring that:
  - Transit gateway appliance mode is enabled for this attachment, as it ensures bidirectional traffic is forwarded to the same transit gateway attachment interfaces.
  - Transit gateway-specific subnets are used to host the attachment interfaces.
- Route tables in the security-egress VPC configured as follows:
  - Private subnet route table:
    - Add a default route to the NAT gateway.
    - Add summarized routes, with the transit gateway as the next hop, for all networks connected to the transit gateway that you intend to route to.
  - Public subnet route table:
    - Add a default route to the internet gateway.
    - Add summarized routes, with the GWLB endpoints as the next hop, for all private networks you intend to route to.
Transit Gateway configuration
- Create a new transit gateway route table with the name transit-gateway-egress-route-table.
  - Propagate all spoke and shared services VPC routes to it.
  - Associate the security-egress VPC with this route table.
- Add a default route to the spoke-tgw-route-table and shared-services-tgw-route-table that points to the security-egress VPC attachment, and remove all VPC attachment routes from both route tables.
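The key routing change, sketched with placeholder route table and attachment IDs, is pointing the default route of the spoke and shared services route tables at the security-egress VPC attachment, so all traffic flows through the firewalls:

```shell
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-SPOKE \
    --destination-cidr-block 0.0.0.0/0 \
    --transit-gateway-attachment-id tgw-attach-SECURITY-EGRESS
aws ec2 create-transit-gateway-route \
    --transit-gateway-route-table-id tgw-rtb-SHAREDSVC \
    --destination-cidr-block 0.0.0.0/0 \
    --transit-gateway-attachment-id tgw-attach-SECURITY-EGRESS
```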
Conclusion
In this blog post, we went on a network architecture journey that started with a use case of routing domain isolation. This is a scenario most customers confront when getting started with Transit Gateway. Gradually, we built upon this use case and increased its complexity by exploring other real-world scenarios that customers confront when designing multi-Region networks across multiple AWS accounts.
Regardless of the complexity, these use cases were accompanied by guidance that helps customers achieve a reduction in cost and complexity throughout their entire network on AWS.
When designing your networks, design for scale. Use AWS services that let you achieve scale without the complexity of managing the underlying infrastructure.
Also, simplify your network by centralizing repeatable resources. If more than one VPC requires access to the same resource, find ways to centralize access to it, which curbs the proliferation of these resources. DNS, packet inspection, and VPC interface endpoints are good examples of resources that should be centralized.
Thank you for reading. Hopefully you found this blog post useful.
Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.
AWS Cloud Control API, a Uniform API to Access AWS & Third-Party Services
Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/announcing-aws-cloud-control-api/
Today, I am happy to announce the availability of AWS Cloud Control API, a set of common application programming interfaces (APIs) designed to make it easy for developers to manage their AWS and third-party services.
AWS delivers the broadest and deepest portfolio of cloud services. Builders leverage these to build any type of cloud infrastructure. It started with Amazon Simple Storage Service (Amazon S3) 15 years ago and has grown to more than 200 services. Each AWS service has a specific API with its own vocabulary, input parameters, and error reporting. For example, you use the S3 CreateBucket API to create an Amazon Simple Storage Service (Amazon S3) bucket and the Amazon Elastic Compute Cloud (Amazon EC2) RunInstances API to create an EC2 instance.
Some of you use AWS APIs to build infrastructure-as-code, some to inspect and automatically improve your security posture, some others for configuration management, or to provision and to configure high performance compute clusters. The use cases are countless.
As applications and infrastructures become increasingly sophisticated and you work across more AWS services, it becomes increasingly difficult to learn and manage distinct APIs. This challenge is exacerbated when you also use third-party services in your infrastructure, since you have to build and maintain custom code to manage both the AWS and third-party services together.
Cloud Control API is a standard set of APIs to Create, Read, Update, Delete, and List (CRUDL) resources across hundreds of AWS services (more being added) and dozens of third-party services (and growing). It exposes five common verbs (CreateResource, GetResource, UpdateResource, DeleteResource, ListResources) to manage the lifecycle of resources. For example, to create an Amazon Elastic Container Service (Amazon ECS) cluster or an AWS Lambda function, you call the same CreateResource API, passing as parameters the type and attributes of the resource you want to create: an Amazon ECS cluster or a Lambda function. The input parameters are defined by a unified resource model using JSON. Similarly, the return types and error messages are uniform across all verbs and all resources.
Cloud Control API provides support for hundreds of AWS resources today, and we will continue to add support for existing AWS resources across services such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) in the coming months. It will support new AWS resources typically on the day of launch.
Until today, when I wanted to get the details about a Lambda function or an Amazon Kinesis stream, I used the get-function API to call Lambda and the describe-stream API to call Kinesis. Notice in the example below how different these two API calls are: they have different names, different naming conventions, different JSON outputs, etc.
aws lambda get-function --function-name TictactoeDatabaseCdkStack
{
"Configuration": {
"FunctionName": "TictactoeDatabaseCdkStack",
"FunctionArn": "arn:aws:lambda:us-west-2:0123456789:function:TictactoeDatabaseCdkStack",
"Runtime": "nodejs14.x",
"Role": "arn:aws:iam::0123456789:role/TictactoeDatabaseCdkStack",
"Handler": "framework.onEvent",
"CodeSize": 21539,
"Timeout": 900,
"MemorySize": 128,
"LastModified": "2021-06-07T11:26:39.767+0000",
...
aws kinesis describe-stream --stream-name AWSNewsBlog
{
"StreamDescription": {
"Shards": [
{
"ShardId": "shardId-000000000000",
"HashKeyRange": {
"StartingHashKey": "0",
"EndingHashKey": "340282366920938463463374607431768211455"
},
"SequenceNumberRange": {
"StartingSequenceNumber": "49622132796672989268327879810972713309953024040638611458"
}
}
],
"StreamARN": "arn:aws:kinesis:us-west-2:012345678901:stream/AWSNewsBlog",
"StreamName": "AWSNewsBlog",
"StreamStatus": "ACTIVE",
"RetentionPeriodHours": 24,
"EncryptionType": "NONE",
"KeyId": null,
"StreamCreationTimestamp": "2021-09-17T14:58:20+02:00"
}
}
In contrast, when using Cloud Control API, I use the single API name get-resource, and I receive a consistent output.
aws cloudcontrol get-resource \
--type-name AWS::Kinesis::Stream \
--identifier NewsBlogDemo
{
"TypeName": "AWS::Kinesis::Stream",
"ResourceDescription": {
"Identifier": "NewsBlogDemo",
"Properties": "{\"Arn\":\"arn:aws:kinesis:us-west-2:486652066693:stream/NewsBlogDemo\",\"RetentionPeriodHours\":168,\"Name\":\"NewsBlogDemo\",\"ShardCount\":3}"
}
}
Similarly, to create the resource above, I used the create-resource API.
aws cloudcontrol create-resource \
--type-name AWS::Kinesis::Stream \
    --desired-state '{"Name":"NewsBlogDemo","RetentionPeriodHours":168,"ShardCount":3}'
In my opinion, there are three types of builders that are going to adopt Cloud Control API:
Builders
The first community is builders using AWS service APIs to manage their infrastructure or their customers' infrastructure: the ones requiring low-level AWS service APIs rather than higher-level tools. For example, I know companies that manage AWS infrastructure on behalf of their clients. Many have developed solutions to list and describe all resources deployed in their clients' AWS accounts, for management and billing purposes. Often, they built specific tools to address their requirements, but find it hard to keep up with new AWS services and features. Cloud Control API simplifies this type of tool by offering a consistent, resource-centric approach. It makes it easier to keep up with new AWS services and features.
Another example: Stedi is a developer-focused platform for building automated Electronic Data Interchange (EDI) solutions that integrate with any business system. “We have a strong focus on infrastructure as code (IaC) within Stedi and have been looking for a programmatic way to discover and delete legacy cloud resources that are no longer managed through CloudFormation – helping us reduce complexity and manage cost,” said Olaf Conjin, Serverless Engineer at Stedi, Inc. “With AWS Cloud Control API, our teams can easily list each of these legacy resources, cross-reference them against CloudFormation managed resources, apply additional logic and delete the legacy resources. By deleting these unused legacy resources using Cloud Control API, we can manage our cloud spend in a simpler and faster manner. Cloud Control API allows us to remove the need to author and maintain custom code to discover and delete each type of resource, helping us improve our developer velocity”.
APN Partners
The second community that benefits from Cloud Control API is APN Partners, such as HashiCorp (maker of Terraform) and Pulumi, and other APN Partners offering solutions that rely on AWS service APIs. When AWS releases a new service or feature, our partners' engineering teams need to learn, integrate, and test a new set of AWS service APIs to expose it in their offerings. This is a time-consuming process and often leads to a lag between the AWS release and the availability of the service or feature in their solutions. With the new Cloud Control API, partners can now build a single REST API code base, using unified API verbs, common input parameters, and common error types. They just have to merge the standardized, pre-defined uniform resource model to interact with new AWS services exposed as REST resources.
Launch Partners
HashiCorp and Pulumi are our launch partners; both solutions are integrated with Cloud Control API today.
HashiCorp provides cloud infrastructure automation software that enables organizations to provision, secure, connect, and run any infrastructure for any application. “AWS Cloud Control API makes it easier for our teams to build solutions to integrate with new and existing AWS services,” said James Bayer – EVP Product, HashiCorp. “Integrating HashiCorp Terraform with AWS Cloud Control API means developers are able to use the newly released AWS features and services, typically on the day of launch.”
Pulumi’s new AWS Native Provider, powered by the AWS Cloud Control API, “gives Pulumi’s users faster access to the latest AWS innovations, typically the day they launch, without any need for us to manually implement support,” said Joe Duffy, CEO at Pulumi. “The full surface area of AWS resources provided by AWS Cloud Control API can now be automated from familiar languages like Python, TypeScript, .NET, and Go, with standard IDEs, package managers, and test frameworks, with high fidelity and great quality. Using this new provider, developers and infrastructure teams can develop and ship modern AWS applications and infrastructure faster and with more confidence than ever before.”
To learn more about HashiCorp and Pulumi’s integration with Cloud Control API, refer to their blog post and announcements. I will add the links here as soon as they are available.
AWS Customers
The third type of builder that will benefit from Cloud Control API is AWS customers using solutions such as Terraform or Pulumi. For example, when using the new Terraform AWS Cloud Control provider or Pulumi's AWS Native Provider, you benefit from the availability of new AWS services and features typically on the day of launch.
Now that you understand the benefits, let’s see Cloud Control API in action.
How Does It Work?
To start using Cloud Control API, I first make sure I use the latest AWS Command Line Interface (CLI) version. Depending on how the CLI was installed, there are different methods to update the CLI. Cloud Control API is available from our AWS SDKs as well.
To create an AWS Lambda function, I first create an index.py handler, zip it, and upload the zip file to one of my private buckets. I make sure that the S3 bucket is in the same AWS Region where I will create the Lambda function:
cat << EOF > index.py
import json
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
EOF
zip index.zip index.py
aws s3 cp index.zip s3://private-bucket-seb/index.zip
Then, I call the create-resource API, passing the same set of arguments as required by the corresponding CloudFormation resource. In this example, the Code, Role, Runtime, and Handler arguments are mandatory, as per the CloudFormation AWS::Lambda::Function documentation.
aws cloudcontrol create-resource \
--type-name AWS::Lambda::Function \
--desired-state '{"Code":{"S3Bucket":"private-bucket-seb","S3Key":"index.zip"},"Role":"arn:aws:iam::0123456789:role/lambda_basic_execution","Runtime":"python3.9","Handler":"index.lambda_handler"}' \
--client-token 123
{
"ProgressEvent": {
"TypeName": "AWS::Lambda::Function",
"RequestToken": "56a0782b-2b26-491c-b082-18f63d571bbd",
"Operation": "CREATE",
"OperationStatus": "IN_PROGRESS",
"EventTime": "2021-09-26T12:05:42.210000+02:00"
}
}
I may call the same command again, with the same client token, to get the status or to learn about an eventual error:
aws cloudcontrol create-resource \
--type-name AWS::Lambda::Function \
--desired-state '{"Code":{"S3Bucket":"private-bucket-seb","S3Key":"index.zip"},"Role":"arn:aws:iam::0123456789:role/lambda_basic_execution","Runtime":"python3.9","Handler":"index.lambda_handler"}' \
--client-token 123
{
"ProgressEvent": {
"TypeName": "AWS::Lambda::Function",
"Identifier": "ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9",
"RequestToken": "f634b21d-22ed-41bb-9612-8740297d20a3",
"Operation": "CREATE",
"OperationStatus": "SUCCESS",
"EventTime": "2021-09-26T19:46:46.643000+02:00"
}
}
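The request status can also be polled directly with the get-resource-request-status API, using the RequestToken returned in the ProgressEvent (the token below is the one from the response above):

```shell
aws cloudcontrol get-resource-request-status \
    --request-token f634b21d-22ed-41bb-9612-8740297d20a3
```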
Here, the OperationStatus is SUCCESS and the function name is ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9 (I can pass my own name if I want something more descriptive 🙂).
I then invoke the Lambda function to ensure it works as expected:
aws lambda invoke \
--function-name ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9 \
out.txt && cat out.txt && rm out.txt
{
"StatusCode": 200,
"ExecutedVersion": "$LATEST"
}
{"statusCode": 200, "body": "\"Hello from Lambda!\""}
When finished, I delete the Lambda function using Cloud Control API:
aws cloudcontrol delete-resource \
--type-name AWS::Lambda::Function \
--identifier ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9
{
"ProgressEvent": {
"TypeName": "AWS::Lambda::Function",
"Identifier": "ukjfq7sqG15LvfC30hwbRAMfR-96K3UNUCxNd9",
"RequestToken": "8923991d-72b3-4981-8160-4d9a585965a3",
"Operation": "DELETE",
"OperationStatus": "IN_PROGRESS",
"EventTime": "2021-09-26T20:06:22.013000+02:00"
}
}
Idempotency
You might have noticed the client-token parameter I passed to the create-resource API call. Create, Update, and Delete requests all accept a ClientToken, which is used to ensure idempotency of the request.
- We recommend always passing a client token. This will disambiguate requests in case a retry is needed. Otherwise, you may encounter unexpected errors like ConcurrentOperationException or AlreadyExists.
- We recommend that client tokens always be unique for every single request, such as by passing a UUID.
One More Thing
At the heart of AWS Cloud Control API's source of data is the CloudFormation Public Registry, which my colleague Steve announced earlier this month in this blog post. It allows anyone to expose a set of AWS resources through CloudFormation and AWS CDK. This is the mechanism AWS service teams now use to release their services and features as CloudFormation and AWS CDK resources. Multiple third-party vendors are also publishing their solutions in the CloudFormation Public Registry. All published resources are modeled with a standard schema that defines the resource, its properties, and their attributes in a uniform way.
AWS Cloud Control API is a CRUDL API layer on top of resources published in the CloudFormation Public Registry. Any resource published in the registry exposes its attributes with standard JSON schemas. The resource can then be created, updated, deleted, or listed using Cloud Control API with no additional work.
For example, imagine I decide to expose a public CloudFormation stack to let any AWS customer create VPN servers, based on EC2 instances. I model the VPNServer resource type and publish it in the CloudFormation Public Registry. With no additional work on my side, my custom resource VPNServer is now available to all AWS customers through the Cloud Control API. It is also automatically available through solutions like HashiCorp's Terraform and Pulumi, and potentially others that adopt Cloud Control API in the future.
It is worth mentioning that Cloud Control API is not aimed at replacing the traditional AWS service-level APIs. They are still there and always will be, but we think that Cloud Control API is easier and more consistent to use, and you should consider it for new applications.
Availability and Pricing
Cloud Control API is available in all AWS Regions, except China.
You pay only for the usage of the underlying AWS resources, such as CloudWatch Logs or Lambda function invocations, or for the number and duration of handler operations associated with using third-party resources (such as Datadog monitors or MongoDB Atlas clusters). There are no minimum fees and no required upfront commitments.
I can’t wait to discover what you are going to build on top of this new Cloud Control API. Go build!
COVID Long-Haulers Are Fighting for Their Future
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=1eXiPYtaHSA
The Future of the GOP With Senator Marco Rubio
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=tPDvHQm4Ay4
A Conversation With Hillary Rodham Clinton
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=px-7lYq0RAc
The Broadway Sinfonietta
Post Syndicated from The Atlantic original https://www.youtube.com/watch?v=G3Ozyu1NjOw
Use Amazon Athena and Amazon QuickSight in a cross-account environment
Post Syndicated from Lotfi Mouhib original https://aws.amazon.com/blogs/big-data/use-amazon-athena-and-amazon-quicksight-in-a-cross-account-environment/
Many AWS customers use a multi-account strategy to host applications for different departments within the same company. However, you might deploy services like Amazon QuickSight using a single-account approach, which raises challenges when you need to use QuickSight in combination with Amazon Athena to build reports and dashboards. With the recently announced built-in support for cross-account Data Catalogs in Athena, you can now use AWS Glue Data Catalogs in different accounts to create datasets, and build reports and dashboards from a single AWS account using QuickSight and Athena – creating a serverless data visualization solution that lets you share insights from all your data to all your users.
In this post, I show you how to use this new feature to set up cross-account access to Athena for QuickSight.
Solution overview
To set up cross-account access, you complete the following steps:
- Grant QuickSight cross-account access to an AWS Glue Data Catalog.
- Register the Data Catalog in Athena.
- Grant QuickSight cross-account access to an Amazon Simple Storage Service (Amazon S3) bucket.
- Add the shared bucket to QuickSight.
- Connect QuickSight to Athena.
The following architecture shows the deployment steps.
Grant cross-account access for the Data Catalog
QuickSight uses a service role to interact with other AWS services. QuickSight creates this role for you under the name aws-quicksight-s3-consumers-role-v0. You need this role to allow access to the Data Catalog cross-account share. To allow the QuickSight service role (Account A, the borrower account) to access the Data Catalog (Account B, the owner account), you need to grant cross-account access by updating the AWS Glue resource policy.
In the AWS account of the Data Catalog, complete the following steps:
- On the AWS Glue console, choose Settings in the navigation pane.
- Under Permissions, enter the following resource policy:
- Choose Save.
The resource policy gives QuickSight access to all the databases and tables in the Data Catalog. You can further scope it down by adding the name of the tables and databases to the resource element.
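The policy itself did not survive in this excerpt. As a sketch, a cross-account AWS Glue resource policy granting the QuickSight service role read access could look like the following; the account IDs, Region, and exact list of glue: actions are illustrative assumptions, not taken from the original post:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/service-role/aws-quicksight-s3-consumers-role-v0"
      },
      "Action": [
        "glue:GetDatabase",
        "glue:GetDatabases",
        "glue:GetTable",
        "glue:GetTables",
        "glue:GetPartition",
        "glue:GetPartitions"
      ],
      "Resource": [
        "arn:aws:glue:us-east-1:444455556666:catalog",
        "arn:aws:glue:us-east-1:444455556666:database/*",
        "arn:aws:glue:us-east-1:444455556666:table/*"
      ]
    }
  ]
}
```

Here 111122223333 stands in for Account A (the QuickSight account) and 444455556666 for Account B (the Data Catalog owner).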
The following screenshot shows the Settings page on the AWS Glue console and the catalog UI for updating the resource permission.
Register the Data Catalog in Athena
Now you need to register the shared Data Catalog with Athena in the AWS account (borrower) that hosts QuickSight.
- On the Athena console, choose Data sources in the navigation pane.
- Choose Connect data source.
- For Choose where your data is located, select Query data in Amazon S3.
- For Choose a metadata catalog, select AWS Glue Data Catalog.
- Choose AWS Glue Data Catalog in another account.
- For Connection details, enter a Data Catalog name, optional description, and the Data Catalog owner’s AWS account ID.
- Choose Register.
When you complete these steps, you can see the borrowed catalog on the Data sources page on the Athena console.
Grant QuickSight cross-account access to an S3 bucket
Creating a resource policy on the Data Catalog to allow cross-account access for QuickSight is not sufficient. You also need to grant QuickSight access to the S3 bucket where the data is stored. You use the same QuickSight service role that we used for the Data Catalog to update the S3 bucket policy.
In the account of the Data Catalog, complete the following steps:
- On the Amazon S3 console, choose Buckets.
- Choose the bucket that you want to create a policy for, or whose policy you want to edit.
- Choose Permissions.
- Enter the following policy:
- Choose Save changes.
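The bucket policy referenced above is likewise missing from this excerpt. A minimal sketch that grants the QuickSight service role in Account A read access to the bucket in Account B could look like this; the bucket name and account ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/service-role/aws-quicksight-s3-consumers-role-v0"
      },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket",
        "arn:aws:s3:::amzn-s3-demo-bucket/*"
      ]
    }
  ]
}
```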
Add the shared S3 bucket to QuickSight
The last step before you can connect QuickSight to Athena is to add the S3 bucket (Account B) as a resource that the QuickSight service role (Account A) can access. To allow your QuickSight service role access to the S3 bucket in another account, perform the following steps:
- On the QuickSight console, on the account drop-down menu, choose Manage QuickSight.
- Choose Security & permissions.
- Choose Add or remove.
- Choose Details.
- Choose Select S3 buckets.
- Under Use a different bucket, add your bucket.
- Choose Finish.
Connect QuickSight to Athena
After you set up the necessary permissions, you can follow the instructions in this section to add a dataset in Athena by using the remote (borrowed) Data Catalog.
- On the QuickSight console, choose Datasets in the navigation pane.
- Choose New dataset.
- Create a new connection profile by providing a data source name and Athena workgroup.
- Choose Validate connection.
- Choose Create data source.
- In the Choose your table section, for Catalog, choose the catalog you created in Athena.
- Choose a database and table, and then choose Select.
- Choose Edit/Preview data.
- To create a dataset and analyze the data using the table, choose Visualize.
Conclusion
This post showed how to use the built-in support for a cross-account Data Catalog in Athena with QuickSight when the Data Catalog and the S3 bucket containing the data are in a different account. This feature greatly reduces operational overhead by having a single account manage the Data Catalog and its data.
After you set up your data sources, you can join your data across these various sources. You can also use these new data sources to gain further insights from your data by setting up ML Insights in QuickSight and creating graphical representations of your data using QuickSight visuals.
Lotfi is a Senior Solutions Architect working for the Public Sector team with Amazon Web Services. He helps public sector customers across EMEA realize their ideas, build new services, and innovate for citizens. In his spare time, Lotfi enjoys cycling and running.
Now — AWS Step Functions Supports 200 AWS Services To Enable Easier Workflow Automation
Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/now-aws-step-functions-supports-200-aws-services-to-enable-easier-workflow-automation/
Today AWS Step Functions expands the number of supported AWS services from 17 to over 200 and AWS API Actions from 46 to over 9,000 with its new capability AWS SDK Service Integrations.
When developers build distributed architectures, one of the patterns they use is the workflow-based orchestration pattern. This pattern is helpful for workflow automation inside a service to perform distributed transactions. An example of a distributed transaction is all the tasks required to handle an order and keep track of the transaction status at all times.
Step Functions is a low-code visual workflow service used for workflow automation and service orchestration that helps you apply this pattern. Developers use Step Functions with managed services such as AWS artificial intelligence services, Amazon Simple Storage Service (Amazon S3), and Amazon DynamoDB.
Introducing Step Functions AWS SDK Service Integrations
Until today, when developers were building workflows that integrate with AWS services, they had to choose from the 46 service integrations that Step Functions provided. If the service integration was not available, they had to code the integration in an AWS Lambda function. This was not ideal, as it added complexity and cost to the application.
Now, with Step Functions AWS SDK Service Integrations, developers can integrate their state machines directly with any AWS service that has AWS SDK support.
You can create state machines that use AWS SDK Service Integrations with Amazon States Language (ASL), the AWS Cloud Development Kit (AWS CDK), or visually using AWS Step Functions Workflow Studio. To get started, create a new Task state, then call the AWS SDK service directly from the Resource field of that task state. To do this, use the following syntax.
arn:aws:states:::aws-sdk:serviceName:apiAction.[serviceIntegrationPattern]
Let me show you how to get started with a demo.
Demo
In this demo, you build an application that, when given a video file stored in S3, transcribes it and translates the text from English to Spanish.
Let’s build this demo with Step Functions. The state machine, with the service integrations, integrates directly to S3, Amazon Transcribe, and Amazon Translate. The API for transcribing is asynchronous. To verify that the transcribing job is completed, you need a polling loop, which waits for it to be ready.
Create the state machine
To follow along with this demo, you need to complete these prerequisites:
- An S3 bucket where you will put the original file that you want to process
- A video or audio file in English stored in that bucket
- An S3 bucket where you want the processing to happen
I will show you how to do this demo using the AWS Management Console. If you want to deploy this demo as infrastructure as code, deploy the AWS CloudFormation template for this project.
To get started with this demo, create a new standard state machine. Choose the option Write your workflow in code to build the state machine using ASL. Create a name for the state machine and create a new role.
Start a transcription job
To get started working on the state machine definition, choose Edit on the state machine.
The following piece of ASL code is a state machine with two tasks that are using the new AWS SDK Service Integrations capability. The first task is copying the file from one S3 bucket to another, and the second task is starting the transcription job by directly calling Amazon Transcribe.
For using this new capability from Step Functions, the state type needs to be a Task. You need to specify the service name and API action using this syntax: “arn:aws:states:::aws-sdk:serviceName:apiAction.<serviceIntegrationPattern>”. Use camelCase for apiAction names in the Resource field, such as “copyObject”, and use PascalCase for parameter names in the Parameters field, such as “CopySource”.
For the parameters, find the name and required parameters in the AWS API documentation for this service and API action.
{
"Comment": "A state machine that processes a video file",
"StartAt": "GetSampleVideo",
"States": {
"GetSampleVideo": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:s3:copyObject",
"Parameters": {
"Bucket.$": "$.S3BucketName",
"Key.$": "$.SampleDataInputKey",
"CopySource.$": "States.Format('{}/{}',$.SampleDataBucketName,$.SampleDataInputKey)"
},
"ResultPath": null,
"Next": "StartTranscriptionJob"
},
"StartTranscriptionJob": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:transcribe:startTranscriptionJob",
"Parameters": {
"Media": {
"MediaFileUri.$": "States.Format('s3://{}/{}',$.S3BucketName,$.SampleDataInputKey)"
},
"TranscriptionJobName.$": "$$.Execution.Name",
"LanguageCode": "en-US",
"OutputBucketName.$": "$.S3BucketName",
"OutputKey": "transcribe.json"
},
"ResultPath": "$.transcription",
"End": true
}
}
}
In the previous piece of code, you can see an interesting use case of the intrinsic functions that ASL provides: you can construct a string from different parameters. Using intrinsic functions in combination with AWS SDK Service Integrations allows you to manipulate data without needing a Lambda function. For example, this line:
"MediaFileUri.$": "States.Format('s3://{}/{}',$.S3BucketName,$.SampleDataInputKey)"
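To make the behavior of States.Format concrete, here is a plain-Python equivalent of its placeholder substitution. This is illustrative only, not part of the Step Functions runtime, and the bucket and key values are made up:

```python
def states_format(template: str, *args) -> str:
    # Mimic ASL States.Format: replace each '{}' with the next argument, in order
    for arg in args:
        template = template.replace("{}", str(arg), 1)
    return template

# The same construction as the MediaFileUri line above
print(states_format("s3://{}/{}", "my-input-bucket", "sample-video.mp4"))
# → s3://my-input-bucket/sample-video.mp4
```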
Give permissions to the state machine
If you start the execution of the state machine now, it will fail. This state machine doesn’t have permissions to access the S3 buckets or use Amazon Transcribe. Step Functions can’t autogenerate IAM policies for most AWS SDK Service Integrations, so you need to add those to the role manually.
Add those permissions to the IAM role that was created for this state machine. You can find a quick link to the role in the state machine details. Attach the “AmazonTranscribeFullAccess” and the “AmazonS3FullAccess” policies to the role.
Running the state machine for the first time
Now that the permissions are in place, you can run this state machine. This state machine takes as input the name of the S3 bucket where the original video is uploaded, the name of the file, and the name of the S3 bucket where you want to store the file and do all the processing.
For this to work, this file needs to be a video or audio file and it needs to be in English. When the transcription job is done, it saves the result in the bucket you specify in the input with the name transcribe.json.
{
"SampleDataBucketName": "<name of the bucket where the original file is>",
"SampleDataInputKey": "<name of the original file>",
"S3BucketName": "<name of the bucket where the processing will happen>"
}
As StartTranscriptionJob is an asynchronous call, you won’t see the results right away. The state machine is only calling the API, and then it completes. You need to wait until the transcription job is ready and then see the results in the output bucket in the file transcribe.json.
Adding a polling loop
Because you want to translate the text using your transcriptions results, your state machine needs to wait for the transcription job to complete. For building an API poller in a state machine, you can use a Task, Wait, and Choice state.
- Task state gets the job status. In your case, it is calling the service Amazon Transcribe and the API getTranscriptionJob.
- Wait state waits for 20 seconds, as the transcription job’s length depends on the size of the input file.
- Choice state moves to the right step based on the result of the job status. If the job is completed, it moves to the next step in the machine, and if not, it keeps on waiting.
Wait state
The first of the states you are going to add is the Wait state. This is a simple state that waits for 20 seconds.
"Wait20Seconds": {
"Type": "Wait",
"Seconds": 20,
"Next": "CheckIfTranscriptionDone"
},
Task state
The next state to add is the Task state, which calls the API getTranscriptionJob. For calling this API, you need to pass the transcription job name. This state returns the job status that is the input of the Choice state.
"CheckIfTranscriptionDone": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:transcribe:getTranscriptionJob",
"Parameters": {
"TranscriptionJobName.$": "$.transcription.TranscriptionJob.TranscriptionJobName"
},
"ResultPath": "$.transcription",
"Next": "IsTranscriptionDone?"
},
Choice state
The Choice state has one rule that checks if the transcription job status is completed. If that rule is true, then it goes to the next state. If not, it goes to the Wait state.
"IsTranscriptionDone?": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.transcription.TranscriptionJob.TranscriptionJobStatus",
"StringEquals": "COMPLETED",
"Next": "GetTranscriptionText"
}
],
"Default": "Wait20Seconds"
},
Getting the transcription text
In this step you are extracting only the transcription text from the output file returned by the transcription job. You need only the transcribed text, as the result file has a lot of metadata that makes the file too long and confusing to translate.
This is a step that you would generally do with a Lambda function. But you can do it directly from the state machine using ASL.
First you need to create a state using AWS SDK Service Integration that gets the result file from S3. Then use another ASL intrinsic function to convert the file text from a String to JSON.
In the next state you can process the file as a JSON object. This state is a Pass state, which cleans the output from the previous state to get only the transcribed text.
"GetTranscriptionText": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:s3:getObject",
"Parameters": {
"Bucket.$": "$.S3BucketName",
"Key": "transcribe.json"
},
"ResultSelector": {
"filecontent.$": "States.StringToJson($.Body)"
},
"ResultPath": "$.transcription",
"Next": "PrepareTranscriptTest"
},
"PrepareTranscriptTest" : {
"Type": "Pass",
"Parameters": {
"transcript.$": "$.transcription.filecontent.results.transcripts[0].transcript"
},
"Next": "TranslateText"
},
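The JSONPath in the Pass state above drills into the standard shape of a Transcribe result file. In plain Python terms, with a made-up transcript value, the selection is equivalent to:

```python
import json

# A minimal stand-in for the structure of transcribe.json (values are made up)
filecontent = json.loads(
    '{"jobName": "demo", "results": {"transcripts": [{"transcript": "hello world"}]}}'
)

# Equivalent of the path "$.transcription.filecontent.results.transcripts[0].transcript"
transcript = filecontent["results"]["transcripts"][0]["transcript"]
print(transcript)
# → hello world
```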
Translating the text
After preparing the transcribed text, you can translate it. For that, you use the Amazon Translate API translateText directly from the state machine. This is the last state of the state machine, and it returns the translated text in its output.
"TranslateText": {
"Type": "Task",
"Resource": "arn:aws:states:::aws-sdk:translate:translateText",
"Parameters": {
"SourceLanguageCode": "en",
"TargetLanguageCode": "es",
"Text.$": "$.transcript"
},
"ResultPath": "$.translate",
"End": true
}
Add the permissions to the state machine to call the Translate API, by attaching the managed policy “TranslateReadOnly”.
Now with all these in place, you can run your state machine. When the state machine finishes running, you will see the translated text in the output of the last state.
Important things to know
Here are some things that will help you to use AWS SDK Service Integration:
- Call AWS SDK services directly from the ASL in the resource field of a task state. To do this, use the following syntax: arn:aws:states:::aws-sdk:serviceName:apiAction.[serviceIntegrationPattern]
- Use camelCase for apiAction names in the Resource field, such as “copyObject”, and use PascalCase for parameter names in the Parameters field, such as “CopySource”.
- Step Functions can’t autogenerate IAM policies for most AWS SDK Service Integrations, so you need to add those to the IAM role of the state machine manually.
- Take advantage of ASL intrinsic functions, as those allow you to manipulate the data and avoid using Lambda functions for simple transformations.
Get started today!
AWS SDK Service Integration is generally available in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Milan), Africa (Cape Town) and Asia Pacific (Tokyo). It will be generally available in all other commercial regions where Step Functions is available in the coming days.
Learn more about this new capability by reading its documentation.
— Marcia
Hardening Your VPN
Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/09/hardening-your-vpn.html
The NSA and CISA have released a document on how to harden your VPN.
Increased interest in Spanish media after the La Palma volcanic eruption
Post Syndicated from João Tomé original https://blog.cloudflare.com/increased-interest-in-spanish-media-after-the-la-palma-volcanic-eruption/
The Internet is a valuable source of knowledge but also a deeply interesting, interconnected, and complex place. And with Cloudflare Radar (our Internet trends and insights free tool for everyone — including journalists, like I was for several years) you get a sense of different trends in the collection of networks that form the Internet.
We saw that over the past week or so in Spain. Radar shows a clear increase in interest in Spanish media outlets (in comparison with the preceding days and Sundays) after the news of the eruption in La Palma (one of the Spanish Canary Islands) broke on Sunday, September 19.
That is particularly clear looking at El País, one of the best-known media outlets in the country. Using our Global Popularity Ranking Trend available on Radar, we can see that ElPais.com jumped several positions in our ranking of most popular domains after September 19. That change is clear in the last seven days, but especially in the last 30, putting El País near the top 3,000 most popular domains in the world.
Elpais.com
A similar trend is seen on the El Mundo website, which had its highest days of global popularity on Wednesday and Thursday of last week. And Spanish public radio and television, RTVE, after a week of growing popularity, reached the top 1,200 of our Global Popularity Ranking last Friday, climbing more than 100 positions after the news of the volcano’s eruption broke.
Rtve.es
There is a world of trends and even human habits (different from country to country) to discover on our Cloudflare Radar platform. Start here.
More about the volcanic eruption:
Live blog from ElPais (in Spanish) — https://elpais.com/espana/2021-09-22/ultimas-noticias-del-volcan-en-erupcion-en-la-palma-en-directo-la-ultima-hora-de-cumbre-vieja-en-canarias.html
Stable kernel updates
Post Syndicated from original https://lwn.net/Articles/871425/rss
Stable kernels 5.14.9, 5.10.70, and 5.4.150 have been released with the usual set
of important fixes. Users of those series should upgrade.
Enable Security Hub PCI DSS standard across your organization and disable specific controls
Post Syndicated from Pablo Pagani original https://aws.amazon.com/blogs/security/enable-security-hub-pci-dss-standard-across-your-organization-and-disable-specific-controls/
At this time, enabling the PCI DSS standard from within AWS Security Hub enables this compliance framework only within the Amazon Web Services (AWS) account you are presently administering.
This blog post showcases a solution that can be used to customize the configuration and deployment of the PCI DSS compliance standard using AWS Security Hub across multiple AWS accounts and AWS Regions managed by AWS Organizations. It also demonstrates how to disable specific standards or controls that aren’t required by your organization to meet its compliance requirements. This solution can be used as a baseline for implementation when creating new AWS accounts through the use of AWS CloudFormation StackSets.
Solution overview
Figure 1 that follows shows a sample account setup using the automated solution in this blog post to enable PCI DSS monitoring and reporting across multiple AWS accounts using AWS Organizations. The hierarchy depicted is of one management account used to monitor two member accounts with infrastructure spanning across multiple Regions. Member accounts are configured to send their Security Hub findings to the designated Security Hub management account for centralized compliance management.
Prerequisites
The following prerequisites must be in place in order to enable the PCI DSS standard:
- A designated administrator account for Security Hub.
- Security Hub enabled in all the desired accounts and Regions.
- Access to the management account for the organization. The account must have the required permissions for stack set operations.
- Choose the deployment targets (accounts and Regions) in which you want to enable the PCI DSS standard. Typically, you set this on the accounts where Security Hub is already enabled, or on the accounts where PCI workloads reside.
- (Optional) If you find standards or controls that aren’t applicable to your organization, get the Amazon Resource Names (ARNs) of the desired standards or controls to disable.
Solution Resources
The CloudFormation template that you use in the following steps contains:
- An AWS Lambda function—SHLambdaFunction—to configure and deploy the setup procedures within Security Hub.
- An AWS Identity and Access Management (IAM) role—SHLambdaRole—with the required permissions needed to deploy the solution.
- A custom resource—SHConfiguration—triggers the Lambda function to begin setup procedures.
Solution deployment
To set up this solution for automated deployment, stage the following CloudFormation StackSet template for rollout via the AWS CloudFormation service. The stack set runs across the organization at the root or organizational units (OUs) level of your choice. You can choose which Regions to run this solution against and also to run it each time a new AWS account is created.
To deploy the solution
- Open the AWS Management Console.
- Download the sh-pci-enabler.yaml template and save it to an Amazon Simple Storage Services (Amazon S3) bucket on the management account. Make a note of the path to use later.
- Navigate to CloudFormation service on the management account. Select StackSets from the menu on the left, and then choose Create StackSet.
- On the Choose a template page, go to Specify template and select Amazon S3 URL and enter the path to the sh-pci-enabler.yaml template you saved in step 2 above. Choose Next.
- Enter a name and (optional) description for the StackSet. Choose Next.
- (Optional) On the Configure StackSet options page, go to Tags and add tags to identify and organize your stack set, and then choose Next.
- On the Set deployment options page, select the desired Regions, and then choose Next.
- Review the definition and select I acknowledge that AWS CloudFormation might create IAM resources. Choose Submit.
- After you choose Submit, you can monitor the creation of the StackSet from the Operations tab to ensure that deployment is successful.
Disable standards that don’t apply to your organization
To disable a standard that isn’t required by your organization, you can use the same template and steps as described above with a few changes as explained below.
To disable standards
- Start by opening the SH-PCI-enabler.yaml template and saving a copy under a new name.
- In the template, look for sh.batch_enable_standards. Change it to sh.batch_disable_standards.
- Locate standardArn=f"arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1" and change it to the desired ARN. To find the correct standard ARN, you can use the AWS Command Line Interface (AWS CLI) or AWS CloudShell to run the command aws securityhub describe-standards.
Note: Be sure to keep the f before the quotation marks and replace any Region you might get from the command with the {region} variable. If the CIS standard doesn’t have the Region defined, remove the variable.
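To illustrate the note about keeping the f prefix, here is how that line resolves in Python once the {region} variable is substituted; the Region value is just an example:

```python
region = "us-east-1"  # example; the Lambda substitutes the deployment Region here
standardArn = f"arn:aws:securityhub:{region}::standards/pci-dss/v/3.2.1"
print(standardArn)
# → arn:aws:securityhub:us-east-1::standards/pci-dss/v/3.2.1
```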
Disable controls that don’t apply to your organization
When you enable a standard, all of the controls for that standard are enabled by default. If necessary, you can disable specific controls within an enabled standard.
When you disable a control, the check for the control is no longer performed, no additional findings are generated for that control, and the related AWS Config rules that Security Hub created are removed.
Security Hub is a regional service. When you disable or enable a control, the change is applied in the Region that you specify in the API request. Also, when you disable an entire standard, Security Hub doesn’t track which controls were disabled. If you enable the standard again later, all of the controls in that standard will be enabled.
To disable a list of controls
- Open the Security Hub console and select Security standards from the left menu. For each check you want to disable, select Finding JSON and make a note of each StandardsControlArn to add to your list.
Note: Another option is to use the DescribeStandardsControls API to create a list of StandardsControlArn to be disabled.
- Download the StackSet SH-disable-controls.yaml template to your computer.
- Use a text editor to open the template file.
- Locate the list of controls to disable, and edit the template to replace the provided list of StandardsControlArn with your own list of controls to disable, as shown in the following example. Use a comma as the delimiter for each ARN.
- Save your changes to the template.
- Follow the same steps you used to deploy the PCI DSS standard, but use your edited template.
Note: The region and account_id are set as variables, so you decide in which accounts and Regions to disable the controls from the StackSet deployment options (step 8 in Deploy the solution).
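The example list that the editing step refers to is not reproduced in this excerpt. As a sketch, a comma-delimited list of StandardsControlArn values in the template could look like the following; the control IDs (PCI.IAM.3, PCI.AutoScaling.1) are hypothetical examples, not a recommendation of which controls to disable:

```
arn:aws:securityhub:{region}:{account_id}:control/pci-dss/v/3.2.1/PCI.IAM.3,
arn:aws:securityhub:{region}:{account_id}:control/pci-dss/v/3.2.1/PCI.AutoScaling.1
```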
Troubleshooting
The following are issues you might encounter when you deploy this solution:
- StackSets deployment errors: Review the troubleshooting guide for CloudFormation StackSets.
- Dependencies issues: To modify the status of any standard or control, Security Hub must be enabled first. If it’s not enabled, the operation will fail. Make sure you meet the prerequisites listed earlier in this blog post. Use CloudWatch logs to analyze possible errors from the Lambda function to help identify the cause.
- StackSets race condition error: When creating new accounts, the Organizations service enables Security Hub in the account, and invokes the stack sets during account creation. If the stack set runs before the Security Hub service is enabled, the stack set can’t enable the PCI standard. If this happens, you can fix it by adding the Amazon EventBridge rule as shown in SH-EventRule-PCI-enabler.yaml. The EventBridge rule invokes the SHLambdaFunctionEB Lambda function after Security Hub is enabled.
Conclusion
The AWS Security Hub PCI DSS standard is fundamental for any company involved with storing, processing, or transmitting cardholder data. In this post, you learned how to enable or disable a standard or specific controls in all your accounts throughout the organization to proactively monitor your AWS resources. Frequently reviewing failed security checks, prioritizing their remediation, and aiming for a Security Hub score of 100 percent can help improve your security posture.
Further reading
- Getting started with Security Hub
- PCI DSS v3.2.1 Security Hub user guide
- Disabling or enabling a security standard
- Disabling and enabling individual controls
- How to set up a recurring Security Hub summary email
If you have feedback about this post, submit comments in the Comments section below. If you have questions, please start a new thread on the Security Hub forum.
Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.
Security updates for Thursday
Post Syndicated from original https://lwn.net/Articles/871424/rss
Security updates have been issued by Debian (libxstream-java, uwsgi, and weechat), Fedora (libspf2, libvirt, mingw-python3, mono-tools, python-flask-restx, and sharpziplib), Mageia (gstreamer, libgcrypt, libgd, mosquitto, php, python-pillow, qtwebengine5, and webkit2), openSUSE (postgresql12 and postgresql13), SUSE (haproxy, postgresql12, postgresql13, and rabbitmq-server), and Ubuntu (commons-io and linux-oem-5.13).