Tag Archives: security

Setting the Record Straight on Bloomberg BusinessWeek’s Erroneous Article

Post Syndicated from Stephen Schmidt original https://aws.amazon.com/blogs/security/setting-the-record-straight-on-bloomberg-businessweeks-erroneous-article/

Today, Bloomberg BusinessWeek published a story claiming that AWS was aware of modified hardware or malicious chips in SuperMicro motherboards in Elemental Media’s hardware at the time Amazon acquired Elemental in 2015, and that Amazon was aware of modified hardware or chips in AWS’s China Region.

As we shared with Bloomberg BusinessWeek multiple times over the last couple months, this is untrue. At no time, past or present, have we ever found any issues relating to modified hardware or malicious chips in SuperMicro motherboards in any Elemental or Amazon systems. Nor have we engaged in an investigation with the government.

There are so many inaccuracies in this article as it relates to Amazon that they’re hard to count. We will name only a few of them here. First, when Amazon was considering acquiring Elemental, we did a lot of due diligence with our own security team, and also commissioned a single external security company to do a security assessment for us as well. That report did not identify any issues with modified chips or hardware. As is typical with most of these audits, it offered some recommended areas to remediate, and we fixed all critical issues before the acquisition closed. This was the sole external security report commissioned. Bloomberg has admittedly never seen our commissioned security report nor any other (and refused to share any details of any purported other report with us).

The article also claims that after learning of hardware modifications and malicious chips in Elemental servers, we conducted a network-wide audit of SuperMicro motherboards and discovered the malicious chips in a Beijing data center. This claim is similarly untrue. The first and most obvious reason is that we never found modified hardware or malicious chips in Elemental servers. Aside from that, we never found modified hardware or malicious chips in servers in any of our data centers. And, this notion that we sold off the hardware and data center in China to our partner Sinnet because we wanted to rid ourselves of SuperMicro servers is absurd. Sinnet had been running these data centers since we launched in China, they owned these data centers from the start, and the hardware we “sold” to them was a transfer-of-assets agreement mandated by new China regulations for non-Chinese cloud providers to continue to operate in China.

Amazon employs stringent security standards across our supply chain – investigating all hardware and software prior to going into production and performing regular security audits internally and with our supply chain partners. We further strengthen our security posture by implementing our own hardware designs for critical components such as processors, servers, storage systems, and networking equipment.

Security will always be our top priority. AWS is trusted by many of the world’s most risk-sensitive organizations precisely because we have demonstrated this unwavering commitment to putting their security above all else. We are constantly vigilant about potential threats to our customers, and we take swift and decisive action to address them whenever they are identified.

– Steve Schmidt, Chief Information Security Officer

Daniel Schwartz-Narbonne shares how automated reasoning is helping achieve the provable security of AWS boot code

Post Syndicated from Supriya Anand original https://aws.amazon.com/blogs/security/automated-reasoning-provable-security-of-boot-code/

I recently sat down with Daniel Schwartz-Narbonne, a software development engineer in the Automated Reasoning Group (ARG) at AWS, to learn more about the groundbreaking work his team is doing in cloud security. The team uses automated reasoning, a technology based on mathematical logic, to prove that key components of the cloud are operating as intended. ARG recently hit a milestone by leveraging this technology to prove the memory safety of boot code components. Boot code is the foundation of the cloud. Proving the memory safety of boot code is akin to verifying that the foundation of your house is secure—it lets you build upon it without worry. Daniel shared details with the AWS Security Blog team about the project’s accomplishments and how it will help solve cloud security challenges.

Daniel Schwartz-Narbonne discusses how automated reasoning, a branch of AI tech, can help prove the security of boot code

Tell me about yourself: what made you decide to become a software engineer with the Automated Reasoning Group?

I wanted to become an engineer because I like building things and understanding how they work. I get satisfaction out of producing something tangible. I went into cloud security because I believe it’s a major area of opportunity in the computing industry right now. As the cloud continues to scale in response to customer demand, we’re going to need more and more automation around security to meet this demand.

I was offered the opportunity to work with ARG after I finished up my post-doc at NYU. Byron Cook, the director of ARG, was starting up a team with the mission of using formal reasoning methods to solve real-world problems in the cloud. Joining ARG was an opportunity for me to help pioneer the use of automated reasoning for cloud security.

How would you describe automated reasoning?

Automated reasoning uses mathematical analysis to understand what’s happening in a complex computer system. The technique takes a system and a question you might have about the system—like “is the system memory safe?”—and reformulates the question as a set of mathematical properties. Then it uses automated reasoning tools called “constraint solvers” to analyze these properties and provide an answer. We’re using this technology to provide higher levels of cloud security assurance for AWS customers via features that protect key components of the cloud, including IAM permissions, networking controls, verification for security protocols and source code of foundational software packages in use at AWS. Links to this work can be found at the bottom of this post.

What is the Boot Code Verification Project?

The Boot Code Verification Project is one of several ARG projects that apply automated reasoning techniques to the foundational elements of cloud security. In this case, we’re looking at boot code. Boot code is the first code that starts when you turn on a computer. It’s the foundation for all computer code, which makes its security critical. This is joint work with my ARG colleagues Michael Tautschnig and Mark Tuttle and with infrastructure engineers.

Why is boot code so difficult to secure?

Ensuring boot code security by using traditional techniques, such as penetration testing and unit testing, is hard. You can only achieve visibility into code execution via debug ports, which means you have almost no ability to single-step the boot code for debugging. You often can’t instrument the boot code, either, because this can break the build process: the instrumented code may exceed the size of the ROM targeted by the build process. Extracting the data collected by instrumentation is also difficult because the boot code has no access to a file system to record the data, and the memory available for storing the data may be limited.

Our aim is to gain increased confidence in the correctness of the boot code by using automated reasoning, instead. Applying automated reasoning to boot code has challenges, however. A big one is that boot code directly interfaces with hardware. Hardware can, for example, modify the value of memory locations through the use of memory-mapped input/output (IO). We developed techniques for modeling the effect that hardware can have on executing boot code. One technique we successfully tried is using model checking to symbolically represent all the effects that hardware could have on the memory state of the boot code. This required close collaboration with our product teams to understand AWS data center hardware and then design and validate a model based on these specifications. To ensure future code revisions maintain the properties we have validated, our analysis is embedded into the continuous integration flow. In such a workflow, each change by the developers triggers automated verification.
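
To make the continuous-integration step concrete, here is a minimal sketch of what such a CI gate could look like: a small Python script that re-runs a CBMC proof on every commit and blocks the merge if any property fails. The file paths, the harness name, and the unwinding bound are assumptions made for illustration; the actual AWS pipeline is more elaborate (it uses AWS Batch and HTML reports, as described later in this interview).

# ci_proof_gate.py -- hedged sketch of a CI gate around a CBMC proof.
# Assumptions (not from the interview): the proof harness lives at
# proofs/boot_harness.c and CBMC is installed on the build machine.
import subprocess
import sys

CBMC_CMD = [
    "cbmc",
    "proofs/boot_harness.c",   # harness modeling hardware inputs (hypothetical path)
    "--bounds-check",          # check that array accesses stay in bounds
    "--pointer-check",         # check that pointer dereferences are valid
    "--unwind", "64",          # bound loop unrolling for the solver (illustrative value)
]

def main() -> int:
    result = subprocess.run(CBMC_CMD, capture_output=True, text=True)
    print(result.stdout)
    if result.returncode != 0:
        # CBMC exits non-zero when a property is violated or an error occurs.
        print("Memory-safety proof failed; blocking this change.", file=sys.stderr)
        return 1
    print("Memory-safety proof passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())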

We published the full technical details, including the process by which we were able to prove the memory safety of boot code, in Model Checking Boot Code from AWS Data Centers, a peer-reviewed scientific publication at the Computer-Aided Verification Conference, the leading academic conference on automated reasoning.

You mention model checking. Can you explain what that is?

A software model checker is a tool that examines every path through a computer program from every possible input. There are different kinds of model checkers, but our model checker is based on a constraint solver (called a SAT solver, or a Satisfiability solver) that can test whether a given set of constraints is satisfiable. To understand how it works, first remember that each line of a computer program describes a particular change in the state of the computer (for example, turning on the device). Our model checker describes each change as an equation that shows how the computer’s state has changed. If you describe each line of code in a program this way, the result is a set of equations that describes a set of constraints upon all the ways that the program can change the state of the computer. We hand these constraints and a question (“Is there a bug?”) to a constraint solver, which then determines if the computer can ever reach a state in which the question (“Is there a bug?”) is true.
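
To make this concrete, here is a toy illustration (not the ARG tooling) using the Python bindings of the Z3 constraint solver, a solver in the same family as the SAT solvers described above. Two pseudocode lines that compute an index and write to a 16-element buffer are turned into equations, and the solver is asked whether an out-of-bounds write is reachable; the buffer size, variable names, and bounds are invented for the example.

# toy_model_check.py -- hedged sketch; requires the z3-solver package.
from z3 import Ints, Or, Solver, sat

# Program under analysis (pseudocode):
#   line 1:  i = msg_len - 1
#   line 2:  buf[i] = 0        # buf has BUF_SIZE elements
BUF_SIZE = 16

msg_len, i = Ints("msg_len i")

s = Solver()
s.add(msg_len >= 0, msg_len <= 255)    # any message length the caller may pass
s.add(i == msg_len - 1)                # equation describing the state change on line 1
s.add(Or(i < 0, i >= BUF_SIZE))        # the question: can the write on line 2 go out of bounds?

if s.check() == sat:
    # The solver returns a concrete counterexample, e.g. msg_len = 0.
    print("bug reachable:", s.model())
else:
    print("the write is always in bounds")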

What is memory safety? Why is it so crucial to prove the memory safety of boot code?

A proof of memory safety gives you assurance that certain security issues cannot arise. Memory safety states that every operation in a program can only write to the variables it has access to, within the bounds of those variables. A classic example is a buffer that stores data coming in from a message on the network. If the message is larger than the buffer in which it’s stored, then you’ll overwrite the buffer, as well as whatever comes after the buffer. If trusted data stored after the buffer is overwritten, then the value of this trusted data is under the control of the adversary inducing the buffer overflow—and your system’s security is now at risk.

Boot code is written in C, a language that does not have the dynamic run-time support for memory safety found in other programming languages. The Boot Code Verification Project uses automated reasoning technology to prove memory safety of the boot code for every possible input.

What has the Boot Code Verification Project accomplished?

We’ve achieved two major accomplishments. The first is the concrete proof we’ve delivered. We have demonstrated that for every boot configuration, device configuration, possible boot source, and second stage binary, AWS boot code is memory safe.

The second accomplishment is more forward-looking. We haven’t just validated a piece of code—we’ve validated a methodology for testing security critical C code at AWS. As we describe in our paper, completing this proof required us to make significant advances in program analysis tooling, ranging from the way we handle memory-mapped IO, to a more efficient symbolic implementation of memcpy, to new tooling that can analyze the unusual linker configurations used in boot code. We made the tooling easier to use, with AWS Batch scripts that allow automatic proof re-runs, and HTML-based reports that make it easy to dive in and understand code. We expect to build on these improvements as we continue to apply automated reasoning to the AWS cloud.

Is your work open source?

We use the model checker CBMC (C Bounded Model Checker), which is available on GitHub under the open source Berkeley Software Distribution license. AWS is committed to the open source community, and we have a number of other projects that you can also find on GitHub.

Overall, how does the Boot Code Verification Project benefit customers?

Customers ask how AWS secures their data. This project is a part of the answer, providing assurance about how AWS protects the low-level code running in its data centers. Given that all systems and processes run on top of this code, customers need to know that measures are in place to keep it continuously safe.

We also believe that technology powered by automated reasoning has wider applicability. Our team has created tools like Zelkova, which we’ve embedded in a variety of AWS services to help customers validate their security-critical code. Because the Boot Code Verification Project is based on an existing open source project, wider applications of our methodology have also been documented in a variety of scientific publications that you can find on the AWS Provable Security page under “Insight papers.” We encourage customers to check out our resources and comment below!

Want more AWS Security news? Follow us on Twitter.

Author

Supriya Anand

Supriya is a Content Strategist at AWS working with the Automated Reasoning Group.

How to clone an AWS CloudHSM cluster across regions

Post Syndicated from Tracy Pierce original https://aws.amazon.com/blogs/security/how-to-clone-an-aws-cloudhsm-cluster-across-regions/

You can use AWS CloudHSM to generate, store, import, export, and manage your cryptographic keys. It also lets you compute message digests and hash-based message authentication codes (HMACs), as well as cryptographically sign data and verify signatures. To help ensure data redundancy and simplify disaster recovery, you’ll typically clone your AWS CloudHSM cluster into a different AWS region. This allows you to synchronize keys, including non-exportable keys, across regions. Non-exportable keys are keys that can never leave the CloudHSM device in plaintext. They reside on the CloudHSM device and are encrypted for security purposes.

You clone a cluster to another region in a two-step process. First, you copy a backup to the destination region. Second, you create a new cluster from this backup. In this post, I’ll show you how to set up one cluster in region 1, and how to use the new CopyBackupToRegion feature to clone the cluster and hardware security modules (HSMs) to a virtual private cloud (VPC) in region 2.

Note: This post doesn’t include instructions on how to set up a cross-region VPC to synchronize HSMs across the two cloned clusters. If you need to do that, read this article.

Solution overview

To complete this solution, you can use either the AWS Command Line Interface (AWS CLI) or the AWS CloudHSM API. For this post, I’ll use the AWS CLI to copy the cluster backup from region 1 to region 2, and then I’ll launch a new cluster from that copied backup.

The following diagram illustrates the process covered in the post.
 

Figure 1: Architecture diagram

Here’s how the process works:

  1. AWS CloudHSM creates a backup of the cluster and stores it in an S3 bucket owned by AWS CloudHSM.
  2. You run the CLI/API command to copy the backup to another AWS region.
  3. When the backup copy is completed, you use that backup to create a cluster and HSMs.

Note: Backups can’t be copied into or out of AWS GovCloud (US) because it’s a restricted region.

As with all cluster backups, when you copy the backup to a new AWS region, it’s stored in an Amazon S3 bucket owned by an AWS CloudHSM account. AWS CloudHSM manages the security and storage of cluster backups for you. This means the backup in both regions will also have the durability of Amazon S3, which is 99.999999999%. The backup in region 2 will also be encrypted and secured in the same way as your backup in region 1. You can read more about the encryption process of your AWS CloudHSM backups here.
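
If you prefer to script the two-step clone instead of using the console or the raw CLI commands, the same operations are available through boto3. The following is a hedged sketch only; the region names, backup ID, and subnet ID are placeholders you would replace with your own values, and the copied backup must reach the READY state before the new cluster can be created from it.

# clone_cluster_sketch.py -- hedged boto3 sketch of the two-step clone.
import boto3

source = boto3.client("cloudhsmv2", region_name="us-east-1")       # region 1
destination = boto3.client("cloudhsmv2", region_name="us-east-2")  # region 2

# Step 1: copy an existing backup from region 1 to region 2.
copy = source.copy_backup_to_region(
    DestinationRegion="us-east-2",
    BackupId="backup-4kuraxsqetz",    # placeholder backup ID from region 1
)
new_backup_id = copy["DestinationBackup"]["BackupId"]

# Step 2: once the copied backup is READY, create the cloned cluster from it.
destination.create_cluster(
    HsmType="hsm1.medium",
    SourceBackupId=new_backup_id,
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder private subnet in region 2
)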

Any HSMs created in this cloned cluster will have the same users and keys as the original cluster at the time the backup was taken. From this point on, you must manually keep the cloned clusters in sync. Specifically:

  • If you create users after creating your new cluster from the backup, you must create them on both clusters manually.
  • If you change the password for a user in one cluster, you must change the password on the cloned clusters to match.
  • If you create more keys in one cluster, you must sync them to at least one HSM in the cloned cluster. Note that after you sync the key from cluster 1 to cluster 2, the CloudHSM automated cluster synchronization will take care of syncing the keys within the 2nd cluster.

Prerequisites

Some items that will need to be in place for this to work are:

Important note: Syncing keys across clusters in more than one region will only work if all clusters are created from the same backup. This is because synchronization requires the same secret key, called a masking key, to be present on the source and destination HSM. The masking key is specific to each cluster. It can’t be exported, and can’t be used for any purpose other than synchronizing keys across HSMs in a cluster.

Step 1: Create your first cluster in region 1

Follow the links in each sub-step below to the documentation page for more information and setup requirements:

  1. Create the cluster. To do this, you will run the command below via CLI. You will want to replace the placeholder <SUBNET ID 1> with one of your private subnets.
    $ aws cloudhsmv2 create-cluster --hsm-type hsm1.medium --subnet-ids <SUBNET ID 1>
  2. Launch your Amazon Elastic Compute Cloud (Amazon EC2) client (in the public subnet). You can follow the steps here to launch an EC2 Instance.
  3. Create the first HSM (in the private subnet). To do this, you will run the command below via CLI. You will want to replace the placeholder <CLUSTER ID> with the ID given from the ‘Create the cluster’ command above. You’ll replace <AVAILABILITY ZONE> with the AZ matching your private subnet. For example, us-east-1a.
    $ aws cloudhsmv2 create-hsm --cluster-id <CLUSTER ID> --availability-zone <AVAILABILITY ZONE>
  4. Initialize the cluster. Initializing your cluster requires creating a self-signed certificate and using that to sign the cluster’s Certificate Signing Request (CSR). You can view an example here of how to create and use a self-signed certificate. Once you have your certificate, you will run the command below to initialize the cluster with it. You will want to replace the placeholder <CLUSTER ID> with your cluster id from step 1.
    $ aws cloudhsmv2 initialize-cluster --cluster-id <CLUSTER ID> --signed-cert file://<CLUSTER ID>_CustomerHsmCertificate.crt --trust-anchor file://customerCA.crt

    Note: Don’t forget to place a copy of the certificate used to sign your cluster’s CSR into the /opt/cloudhsm/etc directory to ensure a continued secure connection.

  5. Install the cloudhsm-client software. Once the Amazon EC2 client is launched, you’ll need to download and install the cloudhsm-client software. You can do this by running the command below from the CLI:
    wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL6/cloudhsm-client-latest.el6.x86_64.rpm

    Once downloaded, you’ll install by running this command:
    sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm

  6. The last step in initializing the cluster requires you to configure the cloudhsm-client to point to the ENI IP of your first HSM. You do this on your EC2 client by running this command:
    $ sudo /opt/cloudhsm/bin/configure -a <IP ADDRESS>
    Replace the <IP ADDRESS> placeholder with your HSM’s ENI IP. The cloudhsm-client comes pre-installed with a Python script called “configure” located in the /opt/cloudhsm/bin/ directory. This will update your /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg and
    /opt/cloudhsm/etc/cloudhsm_client.cfg files with your HSM’s IP address. This ensures your client can connect to your cluster.
  7. Activate the cluster. To activate, you must launch the cloudhsm-client by running this command, which logs you into the cluster:

    $ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg

    Then, you need to enable the secure communication by running this command:

    aws-cloudhsm>enable_e2e

    If you’ve placed the certificate in the correct directory, you should see a response like this on the command line:

    E2E enabled on server 0(server1)

    If you run the command listUsers you’ll see a PRECO user:

    
    aws-cloudhsm>listUsers
    Users on server 0(server1):
    Number of users found:2
    	User ID		User Type	User Name
    	     1		PRECO		admin
    	     2		AU		app_user
    
    

    Change the password for this user to complete the activation process. You do this by first logging in using the command below:

    aws-cloudhsm>loginHSM PRECO admin password

    Once logged in, change the password using this command:

    
    aws-cloudhsm>changePswd PRECO admin <NEW PASSWORD>
    ***************************CAUTION******************************
    This is a CRITICAL operation, should be done on all nodes in the
    cluster. Cav server does NOT synchronize these changes with the 
    nodes on which this operation is not executed or failed, please
    ensure this operation is executed on all nodes in the cluster.
    ****************************************************************
    
    Do you want to continue(y/n)?Y
    Changing password for admin(PRECO) on 1 nodes
    

    Once completed, log out using the command logout, then log back in with the new password, using the command loginHSM PRECO admin <NEW PASSWORD>.

    Doing this allows you to create the first crypto user (CU). You create the user by running the command:
    aws-cloudhsm>createUser <USERTYPE (ex: CO, CU)> <USERNAME> <PASSWORD>
    Replace the placeholder values in this command. The <USERTYPE> can be a CO (crypto officer) or a CU (crypto user). You can find more information about user types here. You’ll replace the placeholders <USERNAME> and <PASSWORD> with a real username and password combination. Crypto users are permitted to create and share keys on the CloudHSM.

    Run the command quit to exit this tool.

Step 2: Trigger a backup of your cluster

To trigger a backup that will be copied to region 2 to create your new cluster, add an HSM to your cluster in region 1. You can do this via the console or CLI. The backup that is created will contain all users (COs, CUs, and appliance users), all key material on the HSMs, and the configurations and policies associated with them. The user portion is extremely important because keys can only be synced across clusters to the same user. Make a note of the backup ID because you will need it later. You can find this by logging into the AWS console and navigating to the CloudHSM console, then selecting Backups. There will be a list of backup IDs, cluster IDs, and creation times. Make sure to select the backup ID specifically created for the cross-region copy.
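
If you prefer not to use the console, you can also look up the backup ID programmatically. The following is a hedged boto3 sketch; the cluster ID is a placeholder, and it simply picks the most recently created backup for that cluster.

# find_backup_sketch.py -- hedged sketch: find the newest backup for a cluster.
import boto3

client = boto3.client("cloudhsmv2", region_name="us-east-1")

backups = client.describe_backups(
    Filters={"clusterIds": ["cluster-kzlczlspnho"]}   # placeholder cluster ID
)["Backups"]

# Pick the most recent backup and note its ID for the cross-region copy.
newest = max(backups, key=lambda b: b["CreateTimestamp"])
print(newest["BackupId"], newest["BackupState"])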

Step 3: Create a key on your cluster in Region 1

There are many ways to create a key. I’m using key_mgmt_util because it’s an easy and straightforward method using CLI commands instead of SDK libraries. Start by connecting to the EC2 client instance that you launched above and ensuring the cloudhsm-client is running. If you aren’t sure, run this command:

$ sudo start cloudhsm-client

Now, launch the key_mgmt_util by running this command:

$ /opt/cloudhsm/bin/key_mgmt_util

When you see the prompt, log in as a CU to create your key, replacing <USERNAME> and <PASSWORD> with an actual CU user’s username and password:

Command: loginHSM -u CU -s <USERNAME> -p <PASSWORD>

To create the key for this example, we’re going to use the key_mgmt_util to generate a symmetric key. Note the -nex parameter is what makes this key non-exportable. An example command is below:

Command: genSymKey -t 31 -s 32 -l aes256 -nex

In the above command:

  1. genSymKey creates the Symmetric key
  2. -t chooses the key type, which in this case is AES
  3. -s states the key size, which in this case is 32 bytes
  4. -l creates a label to easily recognize the key by
  5. -nex makes the key non-exportable

The HSM will return a key handle. This is used as an identifier to reference the key in future commands. Make a note of the key handle because you will need it later. Here’s an example of the full output in which you can see the key handle provided is 37:


Command:genSymKey -t 31 -s 32 -l aes256 -nex
	Cfm3GenerateSymmetricKey returned: 0x00 : HSM Return: SUCCESS
	Symmetric Key Created.   Key Handle: 37
	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 4: Copy your backup from region 1 to region 2 and create a cluster from the backup

To copy your backup from region 1 to region 2, from your EC2 client you’ll need to run the command that appears after these important notes:

  • Make sure the proper permissions are applied for the IAM role or user configured for the CLI. You’ll want to be a CloudHSM administrator for these actions. The instructions here show you how to create an admin user for this process, and here is an example of the permissions policy:
    
    {
       "Version":"2018-06-12",
       "Statement":{
          "Effect":"Allow",
          "Action":[
             "cloudhsm:*",
             "ec2:CreateNetworkInterface",
             "ec2:DescribeNetworkInterfaces",
             "ec2:DescribeNetworkInterfaceAttribute",
             "ec2:DetachNetworkInterface",
             "ec2:DeleteNetworkInterface",
             "ec2:CreateSecurityGroup",
             "ec2:AuthorizeSecurityGroupIngress",
             "ec2:AuthorizeSecurityGroupEgress",
             "ec2:RevokeSecurityGroupEgress",
             "ec2:DescribeSecurityGroups",
             "ec2:DeleteSecurityGroup",
             "ec2:CreateTags",
             "ec2:DescribeVpcs",
             "ec2:DescribeSubnets",
             "iam:CreateServiceLinkedRole"
          ],
          "Resource":"*"
       }
    }
    
    

  • To copy the backup over, you need to know the destination region, the source cluster ID, and/or the source backup ID. You can find the source cluster ID and/or the source backup ID in the CloudHSM console.
  • If you use only the cluster ID, the most recent backup of the associated cluster will be chosen for copy. If you specify the backup ID, that associated backup will be copied. If you don’t know these IDs, run the describe-clusters or describe-backups commands.

Here’s the example command:


$ aws cloudhsmv2 copy-backup-to-region --destination-region <DESTINATION REGION> --cluster-id <CLUSTER ID> --backup-id <BACKUP ID>
{
    "DestinationBackup": {
        "BackupId": "backup-gdnekhcxf4n",
        "CreateTimestamp": 1531742400,
        "BackupState": "CREATE_IN_PROGRESS",
        "SourceCluster": "cluster-kzlczlspnho",
        "SourceBackup": "backup-4kuraxsqetz",
        "SourceRegion": "us-east-1"
    }
}

Once the backup has been copied to region 2, you’ll see a new backup ID in your console. This is what you’ll use to create your new cluster. You can follow the steps here to create your new cluster from this backup. This cluster will launch already initialized for you, but it will still need HSMs added into it to complete the activation process. Make sure you copy over the cluster certificate from the original cluster to the new region. You can do this by opening two terminal sessions, one for each HSM. Open the certificate file on the HSM in cluster 1 and copy it. On the HSM in cluster 2, create a file and paste the certificate over. You can use any text editor you like to do this. This certificate is required to establish the encrypted connection between your client and HSM instances.

You should also make sure you’ve added the cloned cluster’s Security Group to your EC2 client instance to allow connectivity. You do this by selecting the Security Group for your EC2 client in the EC2 console, and selecting Add rules. You’ll add a rule allowing traffic, with the source being the Security Group ID of your cluster.

Finally, take note of the ENI IP for the HSM because you’ll need it later. You can find this in your CloudHSM Console by clicking on the cluster for more information.

Step 5: Create a new configuration file with one ENI IP from both clusters

To sync a key from a cluster in region 1 to a cluster in region 2, you must create a configuration file that contains at least one ENI IP of an HSM in both clusters. This is required to allow the cloudhsm-client to communicate with both clusters at the same time. This is where the masking key we mentioned earlier comes into play as the syncKey command uses that to copy keys between clusters. This is why the cluster in region 2 must be created from a backup of the cluster in region 1. For the new configuration file, I’m going to copy over the original file /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg to a new file. Name this SyncClusters.cfg. You’re going to edit this new configuration file to have the ENI IP of the HSM in the cluster of region 1 and the ENI IP of the HSM in the cluster of region 2. It should look something like this:


{
    "scard": {
        "certificate": "cert-sc",
        "enable": "no",
        "pkey": "pkey-sc",
        "port": 2225
    },
    "servers": [
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        },
        {
            "CAfile": "",
            "CApath": "/opt/cloudhsm/etc/certs",
            "certificate": "/opt/cloudhsm/etc/client.crt",
            "e2e_encryption": {
                "enable": "yes",
                "owner_cert_path": "/opt/cloudhsm/etc/customerCA.crt"
            },
            "enable": "yes",
            "hostname": "",
            "name": "",
            "pkey": "/opt/cloudhsm/etc/client.key",
            "port": 2225,
            "server_ssl": "yes",
            "ssl_ciphers": ""
        }
    ]
}

To verify connectivity to both clusters, start the cloudhsm-client using the modified configuration file. The command will look similar to this:

$ /opt/cloudhsm/bin/cloudhsm_mgmt_util /opt/cloudhsm/etc/SyncClusters.cfg

After connection, you should see something similar to this, with one IP from cluster 1 and one IP from cluster 2:


Connecting to the server(s), it may take time
depending on the server(s) load, please wait...
Connecting to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225...
Connected to server '<CLUSTER-1-IP>': hostname '<CLUSTER-1-IP>', port 2225.
Connecting to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225...
Connected to server '<CLUSTER-2-IP>': hostname '<CLUSTER-2-IP>', port 2225.

If you run the command info server from the prompt, you’ll see a list of servers your client is connected to. Make note of these because they’ll be important when syncing your keys. Typically, you’ll see server 0 as your first HSM in cluster 1 and server 1 as your first HSM in cluster 2.

Step 6: Sync your key from the cluster in region 1 to the cluster in region 2

You’re ready to sync your keys. Make sure you’ve logged in as the Crypto Officer (CO) user. Only the CO user can perform management functions on the cluster (for example, syncing keys).

Note: These steps are all performed at the server prompt, not the aws-cloudhsm prompt.

First, run the command listUsers to get the user IDs of the user that created the keys. Here’s an example:


server0>listUsers
Users on server 0(<CLUSTER-1-IP>):
Number of users found:3

User Id    User Type	User Name     MofnPubKey     LoginFailureCnt	 2FA
1		CO   	admin         NO               0	 	 NO
2		AU   	app_user      NO               0	 	 NO
3		CU   	<USERNAME>    NO               0	 	 NO

Make note of the user ID because you’ll need it later; in this case, it’s 3. Now, you need to see the key handles that you want to sync. You either noted this from earlier, or you can find this by running the findAllKeys command with the parameter for user 3. Here’s an example:


server0>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success

In this case, the key handle I want to sync is 37. When running the command syncKey, you’ll input the key handle and the server you want to sync it to (the destination server). Here’s an example:

server0>syncKey 37 1

In this example, 37 is the key handle, and 1 is the destination HSM. You’ll run the exit command to back out to the cluster prompt, and from here you can run findAllKeys again, which should show the same key handle on both clusters.


aws-cloudhsm>findAllKeys 3 0
Keys on server 0(<CLUSTER-1-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 0(<CLUSTER-1-IP>)
Keys on server 1(<CLUSTER-2-IP>):
Number of keys found 1
number of keys matched from start index 0::1
37
findAllKeys success on server 1(<CLUSTER-2-IP>)

Repeat this process with all keys you want to sync between clusters.

Summary

I walked you through how to create a cluster, trigger a backup, copy that backup to a new region, launch a new cluster from that backup, and then sync keys across clusters. This will help reduce disaster recovery time, while helping to ensure that your keys are secure in multiple regions should a failure occur.

Remember to always manually update users across clusters after the initial backup copy and cluster creation because these updates aren’t automatic. You must also run the syncKey command on any keys created after this point.

You’re now set up for fault tolerance in your AWS CloudHSM environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS CloudHSM forum.

Want more AWS Security news? Follow us on Twitter.

Author

Tracy Pierce

Tracy Pierce is a Senior Cloud Support Engineer at AWS. She enjoys the peculiar culture of Amazon and uses that to ensure every day is exciting for her fellow engineers and customers alike. Customer Obsession is her highest priority and she shows this by improving processes, documentation, and building tutorials. She has her AS in Computer Security & Forensics from SCTD, SSCP certification, AWS Developer Associate certification, and AWS Security Specialist certification. Outside of work, she enjoys time with friends, her Great Dane, and three cats. She keeps work interesting by drawing cartoon characters on the walls at request.

Models for Electronic Identification

Post Syndicated from Bozho original https://techblog.bozho.net/models-for-electronic-identification/

Electronic identity is an important concept as it lies on the crossroads of the digital, the physical and the legal worlds. How do you prove your identity without being physically present? I’ve previously given an overview talk on electronic identification, tackled the high-level and philosophical aspects, and criticized biometric-only identification as insecure. Now I want to list a few practical ways in which eID can be implemented.

First, it’s important to mention the eIDAS regulation, which makes electronic identity a legal reality. It says what an electronic identification scheme is and what its legal effects are (proving that you are who you claim to be). And it defines multiple “levels of assurance”, i.e. the levels of security of a method of authentication. But it is a broad framework that doesn’t tell you how to do things in the technical world.

And while electronic identification is mostly cited in the context of government and municipal services, it applies to private companies as well. Currently in the US, for example, the SSN is used for electronic identification. This is a very poor approach, as leaking the SSN allows for identity theft. In the EU there are many different approaches, from the Estonian PKI-based approach to the UK’s Verify initiative, which relies on the databases of private companies.

You can see electronic identity as a more legally meaningful login. You still perform a login, in many cases using a username and password as one of the factors, but it carries additional information – who the actual person behind that login is. In some cases it doesn’t even have to give information on who the person is – it can just confirm that such a person exists, along with some attributes of that person – age (e.g. if you want to purchase alcohol online), city of registration (e.g. if you want to use municipal services), conviction status (e.g. if applying to be a driver for an Uber-like service). It is also very useful when doing anti-money-laundering checks (e.g. if you are a payment provider, an online currency or cryptocurrency exchange, etc.)

Electronic identification schemes can be public or private. Public ones are operated by governments (federal or state in the case of the US) or by a particular institution (e.g. the one that issues driver’s licenses). Private ones can be operated by any company that has had the ability to verify your physical identity – e.g. by you going and signing a contract with them: a bank, a telecom, a utility company.

I will use “authentication” and “identification” interchangeably, and for practical purposes this is sort-of true. They differ, of course, as authentication is proving you are who you are, and identification is uniquely identifying you among others (and they have slightly different meanings when it comes to cryptography, but let’s not get carried away in terminology).

Enrollment is the process of signing you up in the electronic identification scheme’s database. It can include a typical online registration step, but it has to do proper identity verification. This can be done in three ways:

  • In-person – you physically go to a counter to have your identity verified. This is easy in the EU, where ID cards exist, and a bit harder in the US, where you are not required to actually have an identity document (though you may have one of several). In that case you’d have to bring a birth certificate, utility bills, or whatever the local legislation requires
  • Online – any combination of the following may be deemed acceptable, depending on the level of assurance needed: a videoconferencing call; a selfie with an identity document; a separate picture of an identity document; camera-based liveness detection; matching of the selfie with a government-registered photo. Basically, a way to establish that 1. I have this document and 2. I am the person on the document. This could be automated or manual, but it does not require physical presence.
  • By proxy – by relying on another eID provider that can confirm your identity. This is an odd option, but you can cascade eID schemes.

And then there’s the technical aspects – what do you add to “username and password” to make identity theft less likely or nearly impossible:

  • OTP (one-time passwords). This can be a hardware OTP token (e.g. RSA SecurID) or a software-based TOTP (like Google Authenticator); a minimal TOTP sketch appears after this list. The principle of both is the same – the client and the server share a secret and, based on the current time, generate a 6-digit password. Note that storing the secrets on the server side is not trivial – ideally that should be on an HSM (hardware security module) that can do native OTP, otherwise the secrets can leak and your users can be impersonated (the HSM is supposed to not let any secret key leave the hardware). There are less secure OTP approaches, like SMS or other types of messages – you generate one and send it to a registered phone, Viber, Telegram, email, etc. Banks often use that for their login, but it cannot be used across organizations, as it would require the secrets to be shared. Because the approach is centralized, you can easily revoke an OTP, e.g. declare a phone or OTP device as stolen and then go get a new one / register a new phone.
  • PKI-based authentication – when you verify the person’s identity, have them generate a private key, and issue an X.509 certificate for the corresponding public key. That way the user can use the private key to authenticate (the most straightforward way – TLS mutual authentication, where the user signs a challenge with the private key to confirm they are the “owner” of the certificate). The certificate would normally hold some identifier which can then be used to fetch data from databases. Alternatively, the data can be on the certificate itself, but that has some privacy implications and is rarely a good option. This option can be used across institutions, as you can prove you are the person that owns a private key without the other side needing to share a secret with you. They only need the certificate, and it is public anyway. Another benefit of PKI-based authentication is revocability – in case the user’s private key is somehow compromised, you can easily revoke the certificate (publish it in a CRL, for example).
  • Biometrics – when you are enrolled, you scan a fingerprint, a palm, an iris, a retina, or whatever the current cool biometric tech is. I often argue that this cannot be your main factor of authentication. It can and sometimes should be used as an additional safeguard, but it has a big problem – it cannot be revoked. Once a biometric identifier is leaked, it is impossible to stop people from using it. And while they may not be able to fool scanners (although for fingerprints that has been proven easy in the past), the scanners communicate with a server which performs authentication. An attacker may simply spoof a scanner and make it seem to the server that the biometric data was properly obtained. If that has to be avoided, the scanners themselves have to be identified by signing a challenge with a private key in a secure hardware module, which makes the whole process too complicated to be meaningful. But then again – the biometric factor is useful and important, as we’ll see below.
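
As a minimal sketch of the TOTP mechanism mentioned in the first bullet (standard RFC 6238 over RFC 4226, not tied to any particular product), here is how both the client and the server can derive the same 6-digit code from a shared secret and the current time; the base32 secret is an invented example value.

# totp_sketch.py -- hedged sketch of time-based one-time passwords (RFC 6238).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, time_step: int = 30, digits: int = 6) -> str:
    # The shared secret is usually distributed base32-encoded (as in authenticator apps).
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // time_step            # both sides compute the same counter
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example only; never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))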

The typical “factors” in an authentication process are: something you know (passwords), something you have (OTP token, smartcard, phone) and something you are (biometrics). The “something you have” part is what generates multiple variations to the PKI approach mentioned above:

  • Use unprotected storage on a computer to store the private key – obviously not secure enough, as the private key can be easily extracted and your identity can thus be stolen. But it has to be mentioned, as it can be useful in lower-risk scenarios
  • Use a smartcard – a hardware device that can handle PKI operations (signing, encryption) and does not let private keys leave the hardware. Smartcards are tricky, as they require a reader (usually plugged in via USB) and vendor-specific drivers and “magic” to have browser support. Depending on the circumstances, it could be a good approach, as it is the most secure – there is no way for someone to impersonate you, apart from stealing your smartcard and knowing both your smartcard PIN and your password. The problem of plugging in the smartcard can be alleviated by utilizing NFC with a smartphone (so you just place the card on the back of the smartphone in order to authenticate), but that leads to a lot of other problems, e.g. how to protect the communication from eavesdropping and MITM attacks (as far as I know, there is no established standard for that, except for NFC-SEC, which I think is not universally supported). The smartcard can be put on a national ID card, on a separate card, or even the chips in existing bank cards can be reused (though issuers are reluctant to share the chip with functionality (applets) other than EMV).
  • Use a smartphone – smartphones these days have secure storage capabilities (e.g. Android’s Trusted Execution Environment or the iPhone’s Secure Enclave). A few years ago, when I did a more thorough review, these secure modules were not perfect and there were known attacks, but they have certainly improved. You can in many cases rely on a smartphone to protect the private key. Then, in order to authenticate, you’d need a PIN or biometrics to unlock the phone. Here’s where biometrics come in really handy – because they don’t leave the phone, even if leaked, they cannot be used to impersonate you. They can only be used to potentially make a fake fingerprint to unlock the phone (which would also have to be stolen). And of course, there’s still the password (“something you know”).
  • Remote HSM – the private keys can be stored remotely, on a hardware security module, so that they cannot physically leave the hardware. However, the hardware is not under your physical control, and unlocking it requires just a PIN, which turns this scheme into just “something you know” (times two, if you add the password). Remote identification and remote signing schemes are becoming popular, and in order for them to be secure, you also have to somehow associate the device with the particular user and their private key on the HSM. This can be done through a combination of ways, including the IMEI of the phone (which is spoofable, though) and some of the aforementioned options – the protected storage of the phone and OTPs handled behind the scenes. (Note: the keys on the HSM should be in the protected storage. Having them in an external database encrypted by the master key is not good enough, as they can still leak.) If you are going to rely on the smartphone’s secure storage anyway, what’s the benefit of the remote HSM? It’s twofold: first, losing the phone doesn’t mean you cannot use the same key again, and second, it reduces the risk of leaking the key, as the HSM is theoretically more secure than the smartphone storage
  • Hybrid / split key – the last two approaches – the smartphone secure storage and the remote HSM – can be combined for additional security. You can have the key split in two – part on the smartphone, part on the HSM. That way you reduce the risk of the key leaking. Losing the phone, however, would mean the key has to be regenerated and new certificates issued, but that may be okay depending on the use case.
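
Whichever of these options holds the private key, the authentication step itself usually boils down to the same primitive mentioned in the PKI bullet above: the server sends a random challenge and the client proves possession of the key by signing it. A minimal sketch with the Python cryptography package follows; in a real scheme the key would live in a smartcard, secure enclave, or HSM rather than in process memory, and the server would verify against the public key from the user’s certificate.

# challenge_signing_sketch.py -- hedged sketch; requires the "cryptography" package.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: a key pair is generated on the user's device; the public key
# would normally be placed in an X.509 certificate issued by the scheme.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Authentication: the server sends a fresh random challenge, the device signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Server side: verify the signature against the certificate's public key.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge signature valid - the user holds the private key")
except InvalidSignature:
    print("verification failed")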

As you can see, smartphone secure storage is becoming an important aspect of electronic identification that is both secure and usable. It allows easily adding a biometric factor without the need to be able to revoke it. And it doesn’t rely on a clunky smartcard that you can’t easily plug in.

This is not everything that can be said about secure electronic identification, but I think it’s enough detail to get a good picture. It’s not trivial, and getting it wrong may lead to real-world damages. It is viewed as primarily government-related, but the eID market in Europe is likely going to grow (partly thanks to eIDAS unifying the legislation), and many private providers will take part in that. In the US, the problem of identity theft and the horrible practice of using the SSN for authentication are being recognized, and it’s likely that legislative efforts will follow to put electronic identification on track and in turn foster a market for eID solutions (which is currently a patchwork of scanning and manual verification of documents).

The ultimate goal is to be both secure and usable. And that’s always hard. But thanks to the almost ubiquitous smartphone, it is now possible (though backup options should exist for people who don’t have smartphones). Electronic identification is a key enabler for the so-called “digital transformation”, and getting it right is crucial for the digital economy. Apologies for the generic high-level sentence, but I do think we should have technical discussions at the same time as policy discussions, otherwise the two diverge and policy monsters are born.

The post Models for Electronic Identification appeared first on Bozho's tech blog.

Protecting your API using Amazon API Gateway and AWS WAF — Part 2

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/protecting-your-api-using-amazon-api-gateway-and-aws-waf-part-2/

This post courtesy of Heitor Lessa, AWS Specialist Solutions Architect – Serverless

In Part 1 of this blog, we described how to protect your API provided by Amazon API Gateway using AWS WAF. In this blog, we show how to use API keys between an Amazon CloudFront distribution and API Gateway to secure access to your API in API Gateway in addition to your preferred authorization (AuthZ) mechanism already set up in API Gateway. For more information about AuthZ mechanisms in API Gateway, see Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway.

We also extend the AWS CloudFormation stack previously used so that it automates the creation of the additional resources needed for this solution.

The following are alternative solutions to using an API key, depending on your security requirements:

  • Using a randomly generated HTTP secret header in CloudFront and verifying it with API Gateway request validation
  • Signing incoming requests with Lambda@Edge and verifying them with API Gateway Lambda authorizers

Requirements

To follow along, you need full permissions to create, update, and delete API Gateway, CloudFront, Lambda, and CloudWatch Events through AWS CloudFormation.

Extending the existing AWS CloudFormation stack

First, click here to download the full template. Then follow these steps to update the existing AWS CloudFormation stack:

  1. Go to the AWS Management Console and open the AWS CloudFormation console.
  2. Select the stack that you created in Part 1, right-click it, and select Update Stack.
  3. For option 2, choose Choose file and select the template that you downloaded.
  4. Fill in the required parameters as shown in the following image.

Here’s more information about these parameters:

  • API Gateway to send traffic to – We use the same API Gateway URL as in Part 1 except without the URL scheme (https://): cxm45444t9a.execute-api.us-east-2.amazonaws.com/prod
  • Rotating API Keys – We define Daily and use 2018-04-03 as the timestamp value to append to the API key name

Continue with the AWS CloudFormation console to complete the operation. It might take a couple of minutes to update the stack because CloudFront takes its time to propagate changes across all points of presence.

Enabling API Keys in the example Pet Store API

While the stack completes in the background, let’s enable the use of API Keys in the API that CloudFront will send traffic to.

  1. Go to the AWS Management Console and open the API Gateway console.
  2. Select the API that you created in Part 1 and choose Resources.
  3. Under /pets, choose GET and then choose Method Request.
  4. For API Key Required, choose the dropdown menu and choose true.
  5. To save this change, select the highlighted check mark as shown in the following image.

Next, we need to deploy these changes so that requests sent to /pets fail if an API key isn’t present.

  1. Choose Actions and select Deploy API.
  2. Choose the Deployment stage dropdown menu and select the stage you created in Part 1.
  3. Add a deployment description such as “Requires API Keys under /pets” and choose Deploy.

When the deployment succeeds, you’re redirected to the API Gateway Stage page. There you can use the Invoke URL to confirm that a request sent without an API key fails.

This failure is expected and proves that our deployed changes are working. Next, let’s try to access the same API but this time through our CloudFront distribution.

  1. From the AWS Management Console, open the AWS Cloudformation console.
  2. Select the stack that you created in Part 1 and choose Outputs at the bottom left.
  3. On the CFDistribution line, copy the URL. Before you paste it into a new browser tab or window, append ‘/pets’ to it.

As opposed to our first attempt without an API key, we receive a JSON response from the PetStore API. This is because CloudFront is injecting an API key before it forwards the request to the PetStore API. The following image demonstrates both of these tests:

  1. Successful request when accessing the API through CloudFront
  2. Unsuccessful request when accessing the API directly through its Invoke URL

This works as a secret between CloudFront and API Gateway, which could be any agreed random secret that can be rotated like an API key. However, it’s important to know that the API key is a feature to track or meter API consumers’ usage. It’s not a secure authorization mechanism and therefore should be used only in conjunction with an API Gateway authorizer.

Rotating API keys

API keys are automatically rotated based on the schedule (e.g., daily or monthly) that you chose when updating the AWS CloudFormation stack. This requires no maintenance or intervention on your part. In this section, we explain how this process works under the hood and what you can do if you want to manually trigger an API key rotation.

The AWS CloudFormation template that we downloaded and used to update our stack does the following in addition to Part 1.

Introduce a Timestamp parameter that is appended to the API key name

Parameters:
  Timestamp:
    Type: String
    Description: Fill in this format <Year>-<Month>-<Day>
    Default: 2018-04-02

Create an API Gateway key, API Gateway usage plan, associate the new key with the API gateway given as a parameter, and configure the CloudFront distribution to send a custom header when forwarding traffic to API Gateway

CFDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Logging:
        IncludeCookies: 'false'
        Bucket: !Sub ${S3BucketAccessLogs}.s3.amazonaws.com
        Prefix: cloudfront-logs
      Enabled: 'true'
      Comment: API Gateway Regional Endpoint Blog post
      Origins:
        -
          Id: APIGWRegional
          DomainName: !Select [0, !Split ['/', !Ref ApiURL]]
          CustomOriginConfig:
            HTTPPort: 443
            OriginProtocolPolicy: https-only
          OriginCustomHeaders:
            - 
              HeaderName: x-api-key
              HeaderValue: !Ref ApiKey
              ...

ApiUsagePlan:
  Type: AWS::ApiGateway::UsagePlan
  Properties:
    Description: CloudFront usage only
    UsagePlanName: CloudFront_only
    ApiStages:
      - 
        ApiId: !Select [0, !Split ['.', !Ref ApiURL]]
        Stage: !Select [1, !Split ['/', !Ref ApiURL]]

ApiKey: 
  Type: "AWS::ApiGateway::ApiKey"
  Properties: 
    Name: !Sub "CloudFront-${Timestamp}"
    Description: !Sub "CloudFormation API Key ${Timestamp}"
    Enabled: true

ApiKeyUsagePlan:
  Type: "AWS::ApiGateway::UsagePlanKey"
  Properties:
    KeyId: !Ref ApiKey
    KeyType: API_KEY
    UsagePlanId: !Ref ApiUsagePlan

As shown in the ApiKey resource, we append the given Timestamp to Name as well as use it in the API Gateway usage plan key resource. This means that whenever the Timestamp parameter changes, AWS CloudFormation triggers a resource replacement and updates every resource that depends on that API key. In this case, that includes the AWS CloudFront configuration and API Gateway usage plan.

But what does the rotation schedule that you chose at the beginning of this blog mean in this example?

Create a scheduled activity to trigger a Lambda function on a given schedule

Parameters:
...
  ApiKeyRotationSchedule: 
    Description: Schedule to rotate API Keys e.g. Daily, Monthly, Bimonthly basis
    Type: String
    Default: Daily
    AllowedValues:
      - Daily
      - Fortnightly
      - Monthly
      - Bimonthly
      - Quarterly
    ConstraintDescription: Must be any of the available options

Mappings: 

  ScheduleMap: 
    CloudwatchEvents: 
      Daily: "rate(1 day)"
      Fortnightly: "rate(14 days)"
      Monthly: "rate(30 days)"
      Bimonthly: "rate(60 days)"
      Quarterly: "rate(90 days)"

Resources:
...
  RotateApiKeysScheduledJob: 
    Type: "AWS::Events::Rule"
    Properties: 
      Description: "ScheduledRule"
      ScheduleExpression: !FindInMap [ScheduleMap, CloudwatchEvents, !Ref ApiKeyRotationSchedule]
      State: "ENABLED"
      Targets: 
        - 
          Arn: !GetAtt RotateApiKeysFunction.Arn
          Id: "RotateApiKeys"

The RotateApiKeysScheduledJob resource shows that the schedule you selected from the drop-down menu when updating the AWS CloudFormation stack is converted to a CloudWatch Events rule. This rule in turn triggers a Lambda function that is defined in the same template.

RotateApiKeysFunction:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "index.lambda_handler"
    Role: !GetAtt RotateApiKeysFunctionRole.Arn
    Runtime: python3.6
    Environment:
      Variables:
        StackName: !Ref "AWS::StackName"
    Code:
      ZipFile: !Sub |
        import datetime
        import os

        import boto3
        from botocore.exceptions import ClientError

        session = boto3.Session()
        cfn = session.client('cloudformation')

        timestamp = datetime.date.today()
        params = {
            'StackName': os.getenv('StackName'),
            'UsePreviousTemplate': True,
            'Capabilities': ["CAPABILITY_IAM"],
            'Parameters': [
                {
                    'ParameterKey': 'ApiURL',
                    'UsePreviousValue': True
                },
                {
                    'ParameterKey': 'ApiKeyRotationSchedule',
                    'UsePreviousValue': True
                },
                {
                    'ParameterKey': 'Timestamp',
                    'ParameterValue': str(timestamp)
                },
            ],
        }

        def lambda_handler(event, context):
            """Updates the CloudFormation stack with a new timestamp and returns the CloudFormation response."""
            try:
                response = cfn.update_stack(**params)
            except ClientError as err:
                if "No updates are to be performed" in err.response['Error']['Message']:
                    return {"message": err.response['Error']['Message']}
                else:
                    raise Exception("An error happened while updating the stack: {}".format(err))

            return response

All this Lambda function does is trigger an AWS CloudFormation stack update through the API (exactly what you did through the console, but programmatically) and set the Timestamp parameter to the current date. As a result, the API key is rotated and the CloudFront distribution configuration is updated to match.

This gives you the flexibility to change the API key rotation schedule at any time without writing or maintaining any code. You can also rotate the key on demand by manually updating the AWS CloudFormation stack’s Timestamp parameter.
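
If you prefer to script a manual rotation, you can make the same stack update call that the Lambda function makes from your own workstation. The following is a minimal boto3 sketch; the stack name is a placeholder and the parameter keys are assumed to match the template shown above.

import datetime

import boto3

cfn = boto3.client('cloudformation')

# Placeholder stack name; replace with the name of your own stack.
STACK_NAME = 'api-gateway-cloudfront-blog'

# Setting Timestamp to today's date forces AWS CloudFormation to replace the
# API key and update everything that references it.
cfn.update_stack(
    StackName=STACK_NAME,
    UsePreviousTemplate=True,
    Capabilities=['CAPABILITY_IAM'],
    Parameters=[
        {'ParameterKey': 'ApiURL', 'UsePreviousValue': True},
        {'ParameterKey': 'ApiKeyRotationSchedule', 'UsePreviousValue': True},
        {'ParameterKey': 'Timestamp', 'ParameterValue': str(datetime.date.today())},
    ],
)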

Next Steps

We hope you found the information in this post helpful. You can use it to build a mechanism that allows traffic to reach API Gateway only through CloudFront, so that clients cannot bypass the AWS WAF rules that Part 1 set up.

Keep the following important notes in mind about this solution:

  • It assumes that you already have a strong AuthZ mechanism, managed by API Gateway, to control access to your API.
  • The API Gateway usage plan and other resources created in this solution work only for APIs created in the same account (the ApiUrl parameter).
  • If you already use API keys for tracking API usage, consider using either of the following solutions as a replacement:
    • Use a random HTTP header value in the CloudFront origin configuration and use API Gateway request validation to verify it, instead of relying on API keys alone.
    • Combine Lambda@Edge and an API Gateway custom authorizer to sign and verify incoming requests using a shared secret known only to the two. This is a more advanced technique.

MagPi 73: make a video game!

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-73-make-video-game/

Hi folks, Rob from The MagPi here! As far back as I can remember, I always wanted to learn to code to make a video game. I’m technically working on one right now! It’s wildly behind my self-imposed schedule, though. If you too wish to learn how to make games, then check out issue 73 of The MagPi, out today!

Make video games in the latest issue of The MagPi!

Let’s play a game

There are many kinds of video games these days, and many tools to make creating them easy. In this month’s main feature, we take you from making a purely narrative experience in Twine, all the way up to programming a simple 8-bit game for Pico-8. Don’t forget our ongoing series on how to make games in C/C++ and Pygame as well!

Make games today on your Pi!

Boost your home security

If making games isn’t quite your thing, then we also have a feature for our more serious-minded readers on how to secure your home using a Raspberry Pi. We show you how to set up a CCTV camera, an IoT doorbell, and a door security monitor too.

Home security made easy with a Raspberry Pi

Maker Faire Tokyo

We also have a bumper five pages on Maker Faire Tokyo and the Japanese Raspberry Pi community! I went out there earlier this month and managed to drag myself away from the Gundam Base and the Mandarake in Akihabara long enough to see some of the incredible and inventive things Japanese makers had created.

See our report from Maker Faire Tokyo!

All of this along with our usual selection of tutorials, projects, and reviews? We spoil you.

Amazing projects to inspire!

Get The MagPi 73

You can get The MagPi 73 today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days for a print copy. You can also get the new issue online from our store, or digitally via our Android or iOS apps. And don’t forget, there’s always the free PDF as well.

Rolling subscription offer!

Want to support the Raspberry Pi Foundation and the magazine? You can now take out a monthly £5 subscription to the magazine, effectively creating a rolling pre-order system that saves you money on each issue.

The MagPi subscription offer — The MagPi 73

You can also take out a twelve-month print subscription and get a Pi Zero W plus case and adapter cables absolutely free! This offer does not currently have an end date.

That’s it for now, see ya real soon!

Edit: I’m sure he’ll run out of Star Trek GIFs eventually – Alex

The post MagPi 73: make a video game! appeared first on Raspberry Pi.

How to use AWS Secrets Manager to rotate credentials for all Amazon RDS database types, including Oracle

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/

You can now use AWS Secrets Manager to rotate credentials for Oracle, Microsoft SQL Server, or MariaDB databases hosted on Amazon Relational Database Service (Amazon RDS) automatically. Previously, I showed how to rotate credentials for a MySQL database hosted on Amazon RDS automatically with AWS Secrets Manager. With today’s launch, you can use Secrets Manager to automatically rotate credentials for all types of databases hosted on Amazon RDS.

In this post, I review the key features of Secrets Manager. You’ll then learn:

  1. How to store the database credential for the superuser of an Oracle database hosted on Amazon RDS
  2. How to store the Oracle database credential used by an application
  3. How to configure Secrets Manager to rotate both Oracle credentials automatically on a schedule that you define

Key features of Secrets Manager

AWS Secrets Manager makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. The key features of this service include the ability to:

  1. Secure and manage secrets centrally. You can store, view, and manage all your secrets centrally. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. You can use fine-grained IAM policies or resource-based policies to control access to your secrets. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  2. Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for all Amazon RDS databases (MySQL, PostgreSQL, Oracle, Microsoft SQL Server, MariaDB, and Amazon Aurora). You can also extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets; a skeleton of such a function appears after this list.
  3. Transmit securely. Secrets are transmitted securely over Transport Layer Security (TLS) protocol 1.2. You can also use Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink to keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity.
  4. Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts, licensing fees, or infrastructure and personnel costs. For example, a typical production-scale web application will generate an estimated monthly bill of $6. If you follow the instructions in this blog post, your estimated monthly bill for Secrets Manager will be $1. Note: you may incur additional charges for using Amazon RDS and AWS Lambda, if you’ve already consumed the free tier for these services.
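
For secrets other than Amazon RDS credentials, the rotation Lambda function you supply is invoked once per rotation step with a Step value of createSecret, setSecret, testSecret, or finishSecret. The following Python skeleton is only a sketch of that contract; the generate_new_secret_value helper and the pass branches stand in for logic that is specific to your secret type.

import boto3

secretsmanager = boto3.client('secretsmanager')

def lambda_handler(event, context):
    """Skeleton rotation handler; Secrets Manager invokes it once per rotation step."""
    arn = event['SecretId']
    token = event['ClientRequestToken']
    step = event['Step']

    if step == 'createSecret':
        # Generate the new secret value and store it as the AWSPENDING version.
        new_value = generate_new_secret_value()  # your own logic, not shown here
        secretsmanager.put_secret_value(
            SecretId=arn,
            ClientRequestToken=token,
            SecretString=new_value,
            VersionStages=['AWSPENDING'],
        )
    elif step == 'setSecret':
        pass  # Apply the AWSPENDING value to the target service (for example, change a password).
    elif step == 'testSecret':
        pass  # Verify that the AWSPENDING value works against the target service.
    elif step == 'finishSecret':
        # Promote AWSPENDING to AWSCURRENT by moving the staging label.
        metadata = secretsmanager.describe_secret(SecretId=arn)
        current_version = next(
            version for version, stages in metadata['VersionIdsToStages'].items()
            if 'AWSCURRENT' in stages
        )
        secretsmanager.update_secret_version_stage(
            SecretId=arn,
            VersionStage='AWSCURRENT',
            MoveToVersionId=token,
            RemoveFromVersionId=current_version,
        )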

Now that you’re familiar with Secrets Manager features, I’ll show you how to store and automatically rotate credentials for an Oracle database hosted on Amazon RDS. I divided these instructions into three phases:

  1. Phase 1: Store and configure rotation for the superuser credential
  2. Phase 2: Store and configure rotation for the application credential
  3. Phase 3: Retrieve the credential from Secrets Manager programmatically

Prerequisites

To follow along, your AWS Identity and Access Management (IAM) principal (user or role) requires the SecretsManagerReadWrite AWS managed policy to store the secrets. Your principal also requires the IAMFullAccess AWS managed policy to create and configure permissions for the IAM role used by Lambda for executing rotations. You can use IAM permissions boundaries to grant an employee the ability to configure rotation without also granting them full administrative access to your account.

Phase 1: Store and configure rotation for the superuser credential

From the Secrets Manager console, on the right side, select Store a new secret.

Since I’m storing credentials for a database hosted on Amazon RDS, I select Credentials for RDS database. Next, I input the user name and password for the superuser. I start by securing the superuser because it’s the most powerful database credential and has full access to the database.
 

Figure 1: For “Select secret type,” choose “Credentials for RDS database”

For this example, I choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS Key Management Service (AWS KMS). To learn more, read the Using Your AWS KMS CMK documentation.
 

Figure 2: Choose either DefaultEncryptionKey or use a CMK

Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance oracle-rds-database from the list, and then I select Next.

I then specify values for Secret name and Description. For this example, I use Database/Development/Oracle-Superuser as the name and enter a description of this secret, and then select Next.
 

Figure 3: Provide values for “Secret name” and “Description”

Since this database is not yet being used, I choose to enable rotation. To do so, I select Enable automatic rotation, and then set the rotation interval to 60 days. Remember, if this database credential is currently being used, first update the application (see phase 3) to use Secrets Manager APIs to retrieve secrets before enabling rotation.
 

Figure 4: Select “Enable automatic rotation”

Next, Secrets Manager requires permissions to rotate this secret on my behalf. Because I’m storing the credentials for the superuser, Secrets Manager can use this credential to perform rotations. Therefore, on the same screen, I select Use this secret, and then select Next.

Finally, I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored a secret in Secrets Manager.

Note: Secrets Manager will now create a Lambda function in the same VPC as my Oracle database and trigger this function periodically to change the password for the superuser. I can view the name of the Lambda function on the Rotation configuration section of the Secret Details page.

The banner on the next screen confirms that I’ve successfully configured rotation and the first rotation is in progress, which enables me to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
 

Figure 5: The confirmation notification
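
If you want to confirm the rotation configuration from code rather than the console, the describe_secret API returns the rotation state. Here’s a minimal boto3 sketch, assuming the secret name used above:

import boto3

secretsmanager = boto3.client('secretsmanager')

details = secretsmanager.describe_secret(SecretId='Database/Development/Oracle-Superuser')

print(details['RotationEnabled'])                          # True once rotation is configured
print(details['RotationRules']['AutomaticallyAfterDays'])  # 60 in this example
print(details['RotationLambdaARN'])                        # the Lambda function that performs rotations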

Phase 2: Store and configure rotation for the application credential

The superuser is a powerful credential that should be used only for administrative tasks. To enable your applications to access a database, create a unique database credential per application and grant these credentials limited permissions. You can use these database credentials to read or write to database tables required by the application. As a security best practice, deny the ability to perform management actions, such as creating new credentials.

In this phase, I will store the credential that my application will use to connect to the Oracle database. To get started, from the Secrets Manager console, on the right side, select Store a new secret.

Next, I select Credentials for RDS database, and input the user name and password for the application credential.

I continue to use the default encryption key. I select the DB instance oracle-rds-database, and then select Next.

I specify values for Secret Name and Description. For this example, I use Database/Development/Oracle-Application-User as the name and enter a description of this secret, and then select Next.

I now configure rotation. Once again, since my application is not using this database credential yet, I’ll configure rotation as part of storing this secret. I select Enable automatic rotation, and set the rotation interval to 60 days.

Next, Secrets Manager requires permissions to rotate this secret on behalf of my application. Earlier in the post, I mentioned that application credentials have limited permissions and are unable to change their own password. Therefore, I will use the superuser credential, Database/Development/Oracle-Superuser, that I stored in Phase 1 to rotate the application credential. With this configuration, Secrets Manager creates a clone application user.
 

Figure 6: Select the superuser credential

Note: Creating a clone application user is the preferred mechanism of rotation because the old version of the secret continues to operate and handle service requests while the new version is prepared and tested. There’s no application downtime while changing between versions.

I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored the application credential in Secrets Manager.

As mentioned in Phase 1, AWS Secrets Manager creates a Lambda function in the same VPC as the database and then triggers this function periodically to rotate the secret. Since I chose to use the existing superuser secret to rotate the application secret, I will grant the rotation Lambda function permissions to retrieve the superuser secret. To grant this permission, I first select role from the confirmation banner.
 

Figure 7: Select the “role” link that’s in the confirmation notification

Next, in the Permissions tab, I select SecretsManagerRDSMySQLRotationMultiUserRolePolicy0. Then I select Edit policy.
 

Figure 8: Edit the policy on the “Permissions” tab

In this step, I update the policy (see below) and select Review policy. When following along, remember to replace the placeholder ARN-OF-SUPERUSER-SECRET with the ARN of the secret you stored in Phase 1.


{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Sid": "GrantPermissionToUse",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "ARN-OF-SUPERUSER-SECRET"
    }
  ]
}

Here’s what it will look like:
 

Figure 9: Edit the policy

Next, I select Save changes. I have now completed all the steps required to configure rotation for the application credential, Database/Development/Oracle-Application-User.

Phase 3: Retrieve the credential from Secrets Manager programmatically

Now that I have stored the secret in Secrets Manager, I add code to my application to retrieve the database credential from Secrets Manager. This code sets up the client and then retrieves and decrypts the secret Database/Development/Oracle-Application-User.
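
Here’s a minimal boto3 sketch of that retrieval (error handling omitted):

import json

import boto3

secretsmanager = boto3.client('secretsmanager')

response = secretsmanager.get_secret_value(SecretId='Database/Development/Oracle-Application-User')

# Secrets stored through the RDS integration are JSON documents that include,
# among other fields, the username and password.
credentials = json.loads(response['SecretString'])
db_user = credentials['username']
db_password = credentials['password']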

Remember, applications require permissions to retrieve the secret, Database/Development/Oracle-Application-User, from Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read the secret from Secrets Manager. This policy also uses the resource element to limit my application to read only the Database/Development/Oracle-Application-User secret from Secrets Manager. You can refer to the Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.


{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RetrieveDbCredentialFromSecretsManager",
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:<AWS-REGION>:<ACCOUNT-NUMBER>:secret:Database/Development/Oracle-Application-User"
  }
}

In the above policy, remember to replace the placeholder <AWS-REGION> with the AWS region that you’re using and the placeholder <ACCOUNT-NUMBER> with the number of your AWS account.

Summary

I explained the key benefits of Secrets Manager as they relate to RDS and showed you how to help meet your compliance requirements by configuring Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

Centralizing security with Amazon API Gateway and cross-account AWS Lambda authorizers

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/centralizing-security-with-amazon-api-gateway-and-cross-account-aws-lambda-authorizers/

This post courtesy of Diego Natali, AWS Solutions Architect

Customers often have multiple teams working on APIs. They might have separate teams working on individual API functionality, and another handling secure access control.

You can now use an AWS Lambda function from a different AWS account as your API integration backend. Cross-account Lambda authorizers allow multiple teams with different AWS accounts to develop and manage access control in Amazon API Gateway. This makes it easy to centrally manage and share the Lambda integration function across multiple APIs.

In this post, I explore an API where the API Gateway API belongs to one account (API), and the Lambda authorizer belongs to a different account (Security Team).

This setup can be useful for centralizing the protection of APIs, when a specific team handles the Lambda authorizer and enforces security. APIs from different AWS accounts within an organization can use a centralized Lambda authorizer for better management and security control.

Example scenario

In this example, I use the Lambda authorizer example from the Use API Gateway Lambda Authorizers topic. Don’t use it in a production environment. However, it is useful for understanding how a Lambda authorizer works.
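
For reference, a TOKEN authorizer receives the token from the configured header and must return an IAM policy that allows or denies execute-api:Invoke on the method ARN. The sketch below shows that general shape in Python, purely for illustration; the walkthrough itself uses the Node.js sample from the documentation.

def lambda_handler(event, context):
    """Illustrative TOKEN authorizer: allow or deny based on the token value."""
    token = event.get('authorizationToken', '')
    effect = 'Allow' if token == 'allow' else 'Deny'

    return {
        'principalId': 'user',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': effect,
                'Resource': event['methodArn'],
            }],
        },
    }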

Prerequisites

  • Two AWS accounts, one of which can be used for the “Security Team” account and the other for the “API” account.
  • The AWS CLI installed on both AWS accounts.

Create the Lambda authorizer

The first step is to create a Lambda authorizer in the Security Team account.

  1. Log in to the Security Team account.
  2. Open the Lambda console.
  3. Choose Create function, Author from scratch.
  4. For Name, enter LambdaAuthorizer.
  5. For Runtime, choose Node.js 6.10.
  6. For Role, choose Create new role from template(s). For Role Name, enter LambdaAuthorizer-role. For Policy templates, choose Simple Microservice Permission.
  7. Choose Create function.
  8. For Function Code, copy and paste the source code from Create a Lambda Function for a Lambda Authorizer of the TOKEN type.
  9. Choose Save.
  10. In the upper-right corner, find the ARN for the Lambda authorizer and save the string for later.

Create an API

The next step is to create a new API with Amazon API Gateway and then add a new API mock method to simulate a response from the API.

  1. Log in to the API account.
  2. Open the API Gateway console.
  3. Choose Create API.
  4. For API name, enter APIblogpost. For Endpoint Type, choose Edge optimized.
  5. Choose Create API.
  6. Choose Actions, Create Method, GET.
  7. Choose the tick symbol to add the new method.
  8. For Integration type, choose Mock.
  9. Choose Save.

Now that you have a new API method, protect it with the Lambda authorizer provided by the Security Team.

  1. In the Amazon API Gateway console, select the APIblogpost API.
  2. Choose Authorizers, Create New Authorizer.
  3. For Name, enter SecurityTeamAuthorizer.
  4. For Lambda Function, select the region where you created the Lambda authorizer. For ARN, enter the value for the Lambda authorizer that you saved earlier.
  5. For Token Source*, enter Authorizer and choose Create.

At this point, the Add Permission to Lambda Function dialog box displays a command such as the following:

aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:XXXXXXXXXXXXXX:function:LambdaAuthorizer" --source-arn "arn:aws:execute-api:us-east-1:XXXXXXXXXXXXXX:jrp5uzygs0/authorizers/AUTHORIZER_ID" --principal apigateway.amazonaws.com --statement-id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --action lambda:InvokeFunction

Save this command for later so you can replace AUTHORIZER_ID with the authorizer ID of the API account before you execute this command in the Security Team account.

To find out the authorizer ID, use the AWS CLI.
1. From the command above, get the API Gateway API ID. For example:

arn:aws:execute-api:us-east-1:XXXXXXXXXXXXXX:jrp5uzygs0/authorizers/AUTHORIZER_ID

2. Open a terminal window and enter the following command:

aws apigateway get-authorizers --rest-api-id jrp5uzygs0 --region us-east-1

Output:

{
	"items": [{
		"authType": "custom",
		"name": "SecurityTeamAuthorizer",
		"authorizerUri": "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:LambdaAuthorizer /invocations",
		"identitySource": "method.request.header.Authorizer",
		"type": "TOKEN",
		"id": "9vb60i"
	}]
}

From the output, get the authorizer ID, in this case, 9vb60i.

Allow API Gateway to invoke the Lambda authorizer

To allow the API account to execute the Lambda authorizer from the Security Team account, copy and paste the command from the Add Permission to Lambda Function dialog box. Before executing the command, replace AUTHORIZER_ID with the authorizer ID discovered earlier, in this case, 9vb60i.

aws lambda add-permission --function-name "arn:aws:lambda:us-east-1:XXXXXXXXXXXX:function:LambdaAuthorizer" --source-arn "arn:aws:execute-api:us-east-1:XXXXXXXXXXXX:jrp5uzygs0/authorizers/9vb60i" --principal apigateway.amazonaws.com --statement-id XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX --action lambda:InvokeFunction

Output:

{
  "Statement": "{\"Sid\":\"XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX \",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:us-east-1: XXXXXXXXXXXX:function:LambdaAuthorizer \",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:us-east-1: XXXXXXXXXXXX:jrp5uzygs0/authorizers/9vb60i\"}}}"
}

Now, the API authorizer can invoke the Lambda authorizer in the Security Team account.

Protect the API with the authorizer

Now that the authorizer has been configured correctly, you can protect the GET method of the APIblogpost API with the newly created authorizer and then deploy the API.

  1. In the API Gateway console, select APIblogpost.
  2. Choose Resources, GET, Method Request.
  3. Edit Authorization, select SecurityTeamAuthorizer, and then choose the tick symbol to save.
  4. Choose Actions, Deploy API.
  5. In the Deployment stage, choose [New Stage]. For Stage name*, enter Dev. Choose Deploy.
  6. The page automatically redirects to the dev Stage Editor for your API, which shows the Invoke URL value.

Test the API with cURL

To test the endpoint, you can use cURL. If the TOKEN contains the word “allow”, the Lambda authorizer allows you to call the API. The following example shows that the API returned 200, which means the request was successful:

curl -o /dev/null -s -w "%{http_code}\n"  https://jrp5uzygs0.execute-api.us-east-1.amazonaws.com/dev --header "Authorizer: allow"

200

If you pass the TOKEN “deny”, you see that the API returns a 403 Forbidden, as that account is not allowed to make the API call:

curl -o /dev/null -s -w "%{http_code}\n"  https://jrp5uzygs0.execute-api.us-east-1.amazonaws.com/dev --header "Authorizer: deny"

403

By looking at the CloudTrail event in the Security Team account (XXXXXXXXXX69), you can see that the lambdaAuthorizer invocation comes from the API account (XXXXXXXXXX78):

{
	"eventVersion": "1.06",
	"userIdentity": {
		"type": "AWSService",
		"invokedBy": "apimanager.amazonaws.com"
	},
	"eventTime": "2018-05-29T20:09:15Z",
	"eventSource": "lambda.amazonaws.com",
	"eventName": "Invoke",
	"awsRegion": "us-east-1",
	"sourceIPAddress": "apimanager.amazonaws.com",
	"userAgent": "apimanager.amazonaws.com",
	"requestParameters": {
		"functionName": "arn:aws:lambda:us-east-1:XXXXXXXXXX69:function:lambdaAuthorizer ",
		"sourceArn": "arn:aws:execute-api:us-east-1:XXXXXXXXXX78:jrp5uzygs0/authorizers/9vb60i",
		"contentType": "application/json"
	},
	"responseElements": null,
	"additionalEventData": {
		"functionVersion": "arn:aws:lambda:us-east-1:XXXXXXXXXX69:function:lambdaAuthorizer:$LATEST"
	},
	"requestID": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
	"eventID": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
	"readOnly": false,
	"resources": [{
		"accountId": "XXXXXXXXXX69",
		"type": "AWS::Lambda::Function",
		"ARN": "arn:aws:lambda:us-east-1:XXXXXXXXXX69:function:lambdaAuthorizer "
	}],
	"eventType": "AwsApiCall",
	"managementEvent": false,
	"recipientAccountId": "XXXXXXXXXX69",
	"sharedEventID": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}

Conclusion

I hope this post was useful for understanding how cross-account Lambda authorizers can segregate and delegate roles within your organization when working with APIs. Having a centralized Lambda authorizer guarantees that you can enforce similar security measures across all your APIs, increasing security and governance within your organization.

Maintaining Transport Layer Security all the way to your container part 2: Using AWS Certificate Manager Private Certificate Authority

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/maintaining-transport-layer-security-all-the-way-to-your-container-part-2-using-aws-certificate-manager-private-certificate-authority/

This post contributed by AWS Senior Cloud Infrastructure Architect Anabell St Vincent and AWS Solutions Architect Alex Kimber.

The previous post, Maintaining Transport Layer Security All the Way to Your Container, covered how the layer 4 Network Load Balancer can be used to maintain Transport Layer Security (TLS) all the way from the client to running containers.

In this post, we discuss the various options available for ensuring that certificates can be securely and reliably made available to containers. By simplifying the process of distributing or generating certificates and other secrets, it’s easier for you to build inherently secure architectures without compromising scalability.

There are several ways to achieve this:

1. Storing the certificate and private key in the Docker image

Certificates and keys can be included in the Docker image and made available to the container at runtime. This approach makes the deployment of containers with certificates and keys simple and easy.

However, there are some drawbacks. First, the certificates and keys need to be created, stored securely, and then included in the Docker image. There are some manual or additional automation steps required to securely create, retrieve, and include them for every new revision of the Docker image.

The following example Docker file creates an NGINX container that has the certificate and the key included:

FROM nginx:alpine

# Copy in secret materials
RUN mkdir -p /root/certs/nginxdemotls.com
COPY nginxdemotls.com.key /root/certs/nginxdemotls.com/nginxdemotls.com.key
COPY nginxdemotls.com.crt /root/certs/nginxdemotls.com/nginxdemotls.com.crt
RUN chmod 400 /root/certs/nginxdemotls.com/nginxdemotls.com.key

# Copy in nginx configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY nginxdemo.conf /etc/nginx/conf.d
COPY nginxdemotls.conf /etc/nginx/conf.d

# Create folders to hold web content and copy in HTML files.
RUN mkdir -p /var/www/nginxdemo.com
RUN mkdir -p /var/www/nginxdemotls.com

COPY index.html /var/www/nginxdemo.com/index.html
COPY indextls.html /var/www/nginxdemotls.com/index.html

From a security perspective, this approach has additional drawbacks. Because certificates and private keys are bundled with the Docker images, anyone with access to a Docker image can also retrieve the certificate and private key.
Another drawback is that certificates are not updated automatically: the Docker image must be re-created to include any renewed certificates, and running containers must either be restarted with the new image or have their certificates updated in place.

2. Storing the certificates in AWS Systems Manager Parameter Store and Amazon S3

The post Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks explains how you can use Systems Manager Parameter Store to store secrets. Some customers use Parameter Store to keep their secrets for simpler retrieval, as well as fine-grained access control. Parameter Store allows for securing data using AWS Key Management Service (AWS KMS) for the encryption. Each encryption key created in KMS can be accessed and controlled using AWS Identity and Access Management (IAM) roles in addition to key policy functionality within KMS. This approach allows for resource-level permissions to each item that is stored in Parameter Store, based on the KMS key used for the encryption.

Some certificates can be stored in Parameter Store using the ‘Secure String’ type and using KMS for encryption. With this approach, you can make an API call to retrieve the certificate when the container is deployed. As mentioned earlier, the access to the certificate can be based on the role used to retrieve the certificate. The advantage of this approach is that the certificate can be replaced. The next time the container is deployed, it picks up the new certificate and there is no need to update the Docker image.
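
As a rough illustration of that retrieval, a container entrypoint could fetch the certificate with a few lines of boto3; the parameter name and output path below are assumptions for this example.

import boto3

ssm = boto3.client('ssm')

# Hypothetical parameter name; WithDecryption asks Parameter Store to use KMS
# to decrypt the SecureString value before returning it.
parameter = ssm.get_parameter(Name='/demo/nginxdemotls.com/certificate', WithDecryption=True)

with open('/root/certs/nginxdemotls.com.crt', 'w') as cert_file:
    cert_file.write(parameter['Parameter']['Value'])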

Currently, there is a limit of 4,096 characters on what can be stored in Parameter Store. This may not be sufficient for some types of certificates. For example, some x509 certificates include the chain and so can exceed the 4,096-character limit. To avoid this size limitation, Amazon S3 can be used together with Parameter Store: the certificate can be stored in Amazon S3 and encrypted with KMS, while the private key or password is stored in Parameter Store.

With this approach, there is no limitation on certificate length and the private key remains secured with KMS. However, it does involve some additional complexity in setting up the process of creating the certificates, storing them in S3, and then storing the password or private keys in Parameter Store. That is in addition to securing, trusting, and auditing the system handling the private keys and certificates.

3. Storing the certificates in AWS Secrets Manager

AWS Secrets Manager offers a number of features to allow you to store and manage credentials, keys, and other secret materials. This eliminates the need to store these materials with the application code and instead allows them to be referenced on demand. By centralizing the management of secret materials, this single service can manage fine-grained access control through granular IAM policies as well as the revocation and rotation, all through API calls.

All materials stored in the AWS Secrets Manager are encrypted with the customer’s choice of KMS key. The post AWS Secrets Manager: Store, Distribute, and Rotate Credentials Securely shows how AWS Secrets Manager can be used to store RDS database credentials. However, the same process can apply to TLS certificates and keys.

Secrets currently have a limit of 4,096 characters. This approach may be unsuitable for some x509 certificates that include the chain and can exceed this limit. This limit applies to the sum of all key-value pairs within a single secret, so certificates and keys may need to be stored in separate secrets.

After the secure material is in place, it can be retrieved by the container instance at runtime via the AWS Command Line Interface (AWS CLI) or directly from within the application code. All that’s required is for the container task role to have the requisite permissions in IAM to read the secrets.

With the exception of rotating RDS credentials, AWS Secrets Manager requires the user to provide Lambda function code, which is called on a configurable schedule to manage the rotation. This rotation would need to consider the generation of new keys and certificates and redeploying the containers.

4. Using self-signed certificates, generated as the Docker container is created

The advantage of this approach is that it allows the use of TLS communications without any of the complexity of distributing certificates or private keys. However, this approach does require implicit trust of the server. Some applications may generate warnings that there is no acceptable root of trust.

5. Building and managing a private certificate authority

A private certificate authority (CA) can offer greater security and flexibility than the solutions outlined earlier. Typically, a private CA solution would manage the following for each ‘Common name’:

  • A private key
  • A certificate, created with the private key
  • Lists of certificates issued and those that have been revoked
  • Policies for managing certificates, for example which services have the right to make a request for a new certificate
  • Audit logs to track the lifecycle of certificates, in particular to ensure timely renewal where necessary

It is possible for an organization to build and maintain their own certificate issuing platform. This approach requires the implementation of a platform that is highly available and secure. These types of systems add to the overall overhead of maintaining infrastructures from a security, availability, scalability, and maintenance perspective. Some customers have also implemented Lambda functions to achieve the same functionality when it comes to issuing private certificates.

While it’s possible to build a private CA for internal services, there are some challenges to be aware of. Any solution should provide a number of features that are key to ensuring appropriate management of the certificates throughout their lifecycle.

For instance, the solution must support the creation, tracking, distribution, renewal, and revocation of certificates. All of these operations must be provided with the requisite security and authentication controls to ensure that certificates are distributed appropriately.

Scalability is another consideration. As applications become increasingly stateless and elastic, it’s conceivable that certificates may be required for every new container instance or wildcard certificates created to support an environment. Whatever CA solution is implemented must be ready to accommodate such a load while also providing high availability.

These types of approaches have drawbacks from various perspectives:

  • Custom code can be hard to maintain
  • Additional security measures must be implemented
  • Certificate renewal and revocation mechanisms also must be implemented
  • The platform must be maintained and kept up-to-date from a patching perspective while maintaining high availability

6. Using the new ACM Private CA to issue private certificates

ACM Private CA offers a secure, managed infrastructure to support the issuance and revocation of private digital certificates. It supports RSA and ECDSA key types for CA keys used for the creation of new certificates, as well as certificate revocation lists (CRLs) to inform clients when a certificate should no longer be trusted. Currently, ACM Private CA does not offer root CA support.

The following screenshot shows a subordinate certificate that is available for use:

The private key for any private CA that you create with ACM Private CA is created and stored in a FIPS 140-2 Level 3 Hardware Security Module (HSM) managed by AWS. The ACM Private CA is also integrated with AWS CloudTrail, which allows you to record the audit trail of API calls made using the AWS Management Console, AWS CLI, and AWS SDKs.

Setting up ACM Private CA requires a root CA. This can be used to sign a certificate signing request (CSR) for the new subordinate CA, which is then imported into ACM Private CA. After this is complete, it’s possible for containers within your platform to generate their own key pairs at runtime using OpenSSL. They can then use the private key to create a CSR and ultimately receive their own certificate.

More specifically, the container would complete the following steps at runtime:

  1. Add OpenSSL to the Docker image (if it is not already included).
  2. Generate a key pair (a cryptographically related private and public key).
  3. Use that private key to make a CSR.
  4. Call the ACM Private CA API or CLI issue-certificate operation, which issues a certificate based on the CSR.
  5. Call the ACM Private CA API or CLI get-certificate operation, which returns an issued certificate.
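
Expressed as code, steps 2 through 5 might look something like the following sketch, which uses the Python cryptography package and boto3. The CA ARN and subject name are placeholders, and production code would add waiting and error handling (get-certificate can briefly report the request as still in progress after issuance).

import boto3
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 2: generate a key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 3: build a CSR signed with that private key.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'service.internal.example')]))
    .sign(private_key, hashes.SHA256())
)

acm_pca = boto3.client('acm-pca')
ca_arn = 'arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE'  # placeholder ARN

# Step 4: ask ACM Private CA to issue a certificate for the CSR.
issued = acm_pca.issue_certificate(
    CertificateAuthorityArn=ca_arn,
    Csr=csr.public_bytes(serialization.Encoding.PEM),
    SigningAlgorithm='SHA256WITHRSA',
    Validity={'Value': 90, 'Type': 'DAYS'},
)

# Step 5: retrieve the issued certificate.
certificate = acm_pca.get_certificate(
    CertificateAuthorityArn=ca_arn,
    CertificateArn=issued['CertificateArn'],
)['Certificate']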

The following diagram shows these steps:

The authorization to successfully request a certificate is controlled via IAM policies, which can be attached via a role to the Amazon ECS task. Containers require the ‘Allow’ effect for at least the acm-pca:GetCertificate and acm-pca:IssueCertificate actions. The following is a sample IAM policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": "acm-pca:*",
            "Resource": "arn:aws:acm-pca:us-east-1:1234567890:certificate-authority/2c4ccba1-215e-418a-a654-aaaaaaaa"
        }
    ]
}

For additional security, it is possible to store the certificate and keys in a temporary volume mounted in memory through the ‘tmpfs’ parameter. With this option enabled, the secure material is never written to the filesystem of the host machine.

Note: This feature is not currently available for containers run on AWS Fargate.

The task now has the necessary materials and starts up. Clients should be able to establish the trust hierarchy from the server, through ACM Private CA to the root or intermediate CA.

One consideration to be aware of is that ACM Private CA currently has a limit of 50,000 certificates for each CA in each Region. If the requirement is for each short-lived container instance to have a separate certificate, then this limit could be reached.

Summary

The approaches outlined in this post describe the available options for ensuring that generation, storage, or distribution of sensitive material is done efficiently and securely. It should also be done in a way that supports the ephemeral, automatic scaling capabilities of container-based architectures. ACM Private CA gives you a single interface to manage public and now private certificates, and it integrates seamlessly with other AWS services.

If you have questions or suggestions, please comment below.

Proving Digital Events (Without Blockchain)

Post Syndicated from Bozho original https://techblog.bozho.net/proving-digital-events-without-blockchain/

Recently technical and non-technical people alike started to believe that the best (and only) way to prove that something has happened in an information system is to use a blockchain. But there are other ways to achieve that that are arguably better and cheaper. Of course, blockchain can be used to do that, and it will do it well, but it is far from the only solution to this problem.

The way blockchain proves that some event has occurred is by putting it into a tamper-evident data structure (a hash chain of the roots of merkle trees of transactions) and distributing that data structure across multiple independent actors so that “tamper-evident” becomes “tamper-proof” (sort of). So if an event is stored on a blockchain, and the chain is intact (and others have confirmed it’s intact), this is a technical guarantee that it had indeed happened and was neither back-dated nor modified.

An important note here – I’m stressing “digital” events, because no physical event can be truly guaranteed electronically. The fact that someone has to enter the physical event into a digital system makes this process error-prone and the question becomes “was the event correctly recorded” rather than “was it modified once it was recorded”. And yes, you can have “certified” / “approved” recording devices that automate transferring physical events to the digital realm, e.g. certified speed cameras, but the certification process is a separate topic. So we’ll stay purely in the digital realm (and ignore all provenance use cases).

There are two aspects to proving digital events – technical and legal. Once you get to court, you’re unlikely to be able to easily convince a judge that “byzantine fault tolerance guarantees tamper-proof hash chains”. You need a legal framework to allow for treating digital proofs as legally binding.

Luckily, Europe has such a legal framework – Regulation (EU) 910/2014. It classifies trust services in three categories – basic, advanced and qualified. Qualified ones are always supplied by a qualified trust service provider. The benefit of qualified signatures and timestamps is that the burden of proof is on the one claiming that the event didn’t actually happen (or was modified). If a digital event is signed with a qualified electronic signature or timestamped with a qualified timestamp, and someone challenges that occurrence of the event, it is they that should prove that it didn’t happen.

Advanced and basic services still bear legal strength – you can bring a timestamped event to court and prove that you’ve kept your keys securely so that nobody could have backdated an event. And the court should acknowledge that, because it’s in the law.

Having said that, the blockchain, even if it’s technically more secure, is not the best option from a legal point of view. Timestamps on blocks are not put there by qualified trust service providers, but by nodes on the system, and therefore could be seen as non-qualified electronic timestamps. Signatures on transactions have a similar problem – they are signed by anonymous actors on the network, rather than individuals whose identity is linked to the signature, therefore making them legally weaker.

On the technical side, we have been able to prove events even before blockchain. With digital signatures and trusted timestamps. Once you do a, say, RSA signature (encrypt the hash of the content with your private key, so that anyone knowing your public key can decrypt it and match it to the hash of the content you claim to have signed, thus verifying that it is indeed you who signed it), you cannot deny having signed it (non-repudiation). The signature also protects the integrity of the data (it can’t be changed without breaking the signature). It is also known who signed it, owning the private key (authentication). Having these properties on a piece of data (“event”) you can use it to prove that this event has indeed occurred.
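
To make that concrete, here is a minimal sketch using the Python cryptography library – sign a piece of data with a private key, then verify the signature with the corresponding public key. In a real setup the key pair would come from your PKI rather than being generated on the spot.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Generated here only for the example; in practice the key pair comes from your PKI.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

event_data = b'user 42 approved payment 1337'

# Sign the event data (the library hashes it and signs the hash).
signature = private_key.sign(event_data, padding.PKCS1v15(), hashes.SHA256())

# Anyone holding the public key can check integrity and origin;
# verify() raises InvalidSignature if the data or the signature was tampered with.
public_key.verify(signature, event_data, padding.PKCS1v15(), hashes.SHA256())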

You can’t, however, prove when it occurred – for that you need trusted timestamping. Usually this means a third-party provider signing the data you send them and including the current timestamp in the signed response. That way, using public key cryptography and a few centralized authorities (the CA and the TSA), we’ve been able to prove the existence of digital events.

And yes, relying on centralized infrastructure is not perfect. But apart from a few extreme cases, you don’t need 100% protection for 100% of your events. That is not to say that you should go entirely unprotected and hope that an event has occurred simply because it is in some log file.

Relying on plain log files for proving things happened is a “no-go”, as I’ve explained in a previous post about audit trail. You simply can’t prove you didn’t back-date or modify the event data.

But you can rely on good old PKI to prove digital events (of course, blockchain also relies on public key cryptography). And the blockchain approach will not necessarily be better in court.

In a private blockchain you can, of course, utilize centralized components, like a TSA (Time stamping authority) or a CA to get the best of both worlds. And adding hash chains and merkle trees to the mix is certainly great from a technical perspective (which is what I’ve been doing recently). But you don’t need a distributed consensus in order to prove something digital happened – we’ve had the tools for that even before proof-of-work distributed consensus existed.

The post Proving Digital Events (Without Blockchain) appeared first on Bozho's tech blog.

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks on the VPC console or via AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.
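
The console steps below can also be expressed as a single API call. Here’s a boto3 sketch; the subnet IDs are placeholders, while the VPC ID, security group, and service name match the values used later in this walkthrough.

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-5ad42b3c',
    ServiceName='com.amazonaws.us-west-2.secrets-manager',
    SubnetIds=['subnet-0a1b2c3d', 'subnet-1b2c3d4e', 'subnet-2c3d4e5f'],  # placeholder subnet IDs
    SecurityGroupIds=['sg-07e4197d'],
    PrivateDnsEnabled=True,
)

print(response['VpcEndpoint']['VpcEndpointId'])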

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in VPC named vpc-5ad42b3c. This application requires a database password to an RDS instance running in the same VPC. I have stored the database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secrets-manager.
     
    Figure 2: Options to select when creating an endpoint

  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination helps improve fault tolerance and make the endpoint accessible from a different AZ in case of an AZ failure. Here, I specify subnet IDs for availability zones us-west-2a, us-west-2b, and us-west-2c:
     
    Figure 3: Specifying subnet IDs

  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname https://secretsmanager.<region>.amazonaws.com. to the private IP addresses associated with the VPC endpoint specific DNS hostname. As a result, you can access the Secrets Manager VPC Endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.
     
    Figure 4: The “Enable Private DNS Name” checkbox

  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.
     
    Figure 5: Associate a security group and create the endpoint

  7. To view the details of the endpoint you created, select the link on the console.
     
    Figure 6: Viewing the endpoint details

  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c where my RDS instance and application also resides.
     
    Figure 7: The “Details” tab

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance hosted within VPC named vpc-5ad42b3c and Secrets Manager will be within the AWS network. This connection will use the VPC endpoint and I can use it to retrieve my RDS database secret stored in Secrets Manager. I can retrieve the secret via the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Since my AWS CLI is configured for us-west-2 region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint since I enabled support for Private DNS hostname while creating the VPC endpoint. The above command will result in the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 

Summary

I’ve shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. VPC endpoints for Secrets Manager help you meet compliance and regulatory requirements that limit public internet connectivity within your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or AWS CLI. Once created, your applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Control access to your APIs using Amazon API Gateway resource policies

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/control-access-to-your-apis-using-amazon-api-gateway-resource-policies/

This post courtesy of Tapodipta Ghosh, AWS Solutions Architect

Amazon API Gateway provides you with a simple, flexible, secure, and fully managed service that lets you focus on building core business services. API Gateway supports multiple mechanisms of access control using AWS Identity and Access Management (IAM), AWS Lambda authorizers, and Amazon Cognito.

You may want to enforce strict control on the locations from which your APIs are invoked. For example, if you are an AWS Partner who offers APIs over a SaaS model, you can take advantage of the new Amazon API Gateway resource policies feature to control access to your APIs using predefined IP address ranges. API Gateway resource policies are JSON policy documents that you attach to an API to control whether a specified principal (typically, an IAM user or role) can invoke the API.

After a customer subscribes to your SaaS product in AWS Marketplace, you can ask for IP address ranges in the registration information. Then you can enable access to your API from only those IP addresses, making it a secure integration. For example, if you know that your customers are spread across a certain geography, you could blacklist all other countries. Alternatively, if you have global customers, you can whitelist only specific IP address ranges.

What problems do resource policies solve?

In a distributed development team with separate AWS accounts, integration testing can be challenging. Allowing users from a different AWS account to access your API requires writing and maintaining code for assuming a role in the API owner's account. Also, if you work with a third party, you have to write a Lambda authorizer to implement a bearer token–based authorization scheme.

Now, you can use resource policies much like S3 bucket policies, to provide overarching controls on your APIs without writing custom authorizers or complicated application logic. In this post, I demonstrate how you can use API Gateway resource policies to enable users from a different AWS account to access your API securely. You can also allow the API to be invoked only from specified source IP address ranges or CIDR blocks, without writing any code.

Solution overview

Imagine a company has two teams, Team A and Team B. Team B has created an API that is backed by a Lambda function and a DynamoDB database. They want to make the API public to third parties. First, they want Team A to run integration tests. After the API goes live, Team B wants to allow only users who access the API from a known IP address range.

The following diagram shows the sequence:
Flow Diagram

Start with building an API. For this walkthrough, use a SAM template and the AWS CLI to create the API. For the code to create an API and attach the resource policy to it, see the Sam-moviesapi-resourcepolicy GitHub repo.

Here’s a walkthrough of the steps, so you can get a deeper understanding of what’s happening under the covers.

  • Create the API
  • Turn on IAM authentication
  • Grant user access
  • Test the access permissions

Create the API

Assume that you are hosting the API in AccountB. Run the following commands:

git clone https://github.com/aws-samples/aws-sam-movies-api-resource-policy.git
mkdir ./build

cp -p -r ./movies ./build/movies

pip install -r requirements.txt -t ./build

aws cloudformation package --template-file template.yaml --output-template-file template-out.yaml --s3-bucket $S3Bucket --profile AccountB

aws cloudformation deploy --template-file template-out.yaml --stack-name apigw-resource-policies-demo --capabilities CAPABILITY_IAM --profile AccountB

Note: You’ll need an S3 bucket to store your artifact for the “package” step.

Turn on IAM authentication

After the movie API is set up, turn on IAM authentication, so that it’s protected from unauthenticated attempts.
It should look like the following screenshot:
Figure: IAM authentication turned on
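
If you'd rather script this step than click through the console, here's a minimal boto3 sketch; the API ID, resource ID, and stage name are placeholders for illustration only:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Switch the GET method to IAM (SigV4) authorization.
apigw.update_method(
    restApiId="qxz8y9c8a4",   # placeholder API ID
    resourceId="abc123",      # placeholder resource ID for the movies resource
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"},
    ],
)

# Redeploy so the change takes effect on the stage.
apigw.create_deployment(restApiId="qxz8y9c8a4", stageName="Prod")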

Also, make sure that you are getting a valid response when you make a GET request, as shown in the following screenshot:

Grant user access

Now grant AccountA user access to your API. In the API Gateway console, choose Movies API, Resource Policy.

Note: All the IP address ranges recorded in this post are for illustration purposes only.

Here is a screenshot of how it would look in the console:

The entire policy is listed here:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account_idA>:user/<user>",
                    "arn:aws:iam::<account_idA>:root"
                ]
            },
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*/*/*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": " 203.0.113.0/24"
                }
            }
        }
    ]
}

Here are a few points worth noting. The first policy statement shows how you could provide granular access to certain API IDs down to the specific resource paths in the resource section of the policy. To provide the AccountA user with access only to GET requests, change the resource line to the following:

"Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*/GET/*"

In the second statement, you are whitelisting the entire 203.0.113.0/24 network to make all calls to the API.

Whitelisting IP addresses is a good way to start when launching an API for the first time, but maintaining an up-to-date list can prove challenging. For a stable product, blacklisting bad actors might be more practical.

A blacklist implementation could look like the following:

{
	"Effect": "Deny",
	"Principal": "*",
	"Action": "execute-api:Invoke",
	"Resource": "arn:aws:execute-api:us-east-1:<account_idB>:qxz8y9c8a4/*",
	"Condition": {
		"IpAddress": {
			"aws:SourceIp": "203.0.113.0/24"
		}
	}
}

Suppose you have access logs turned on for the API and your log analysis tool has flagged bad actors from a particular IP address range, for example 203.0.113.0/24. You can then blacklist that range in the resource policy.
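
As a sketch of how that update could be automated (the account ID, API ID, and stage name below are placeholders, and in practice you would merge the deny statement with your existing policy statements), the resource policy can be replaced with boto3 and the API redeployed:

import json

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

deny_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:111122223333:qxz8y9c8a4/*",
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}

# Apply a policy containing the deny statement above.
apigw.update_rest_api(
    restApiId="qxz8y9c8a4",
    patchOperations=[{"op": "replace", "path": "/policy", "value": json.dumps(deny_policy)}],
)

# Resource policy changes take effect on the stage only after a new deployment.
apigw.create_deployment(restApiId="qxz8y9c8a4", stageName="Prod")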

Test the access permissions

You can now test, using Postman, that the user from AccountA can indeed call the API hosted in AccountB. Also verify that attempts from other accounts are rejected.

In the following examples, the AWS Signature is configured with the AccessKey and SecretKey values of the AccountA user who was granted access to the API.

Successful response from the authorized AccountA user: 200 OK

Failure from an unauthorized account/user: 401 Unauthorized
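
If you prefer to script the check instead of using Postman, here's a minimal sketch that signs the request with SigV4 using a local credentials profile; the profile name, API ID, stage, and path are placeholders, and the requests library is assumed to be installed:

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Credentials for the AccountA user that the resource policy allows.
session = boto3.Session(profile_name="AccountA")
credentials = session.get_credentials().get_frozen_credentials()

url = "https://qxz8y9c8a4.execute-api.us-east-1.amazonaws.com/Prod/movies"  # placeholder

# Sign the GET request for the execute-api service in us-east-1.
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(request)

response = requests.get(url, headers=dict(request.headers.items()))
print(response.status_code)  # expect 200 for the allowed user, an error otherwise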

Summary

In this post, I showed you the different ways that you can use resource policies to lock down access to your API. Want to restrict a dev API endpoint to the office IP address range? Now you can. Cross-account API access is also made much simpler without having to write complex authentication/authorization schemes.

Podcast: We developed Amazon GuardDuty to meet scaling demands, now it could assist with compliance considerations such as GDPR

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/podcast-we-developed-amazon-guardduty-to-meet-scaling-demands-now-it-could-assist-with-compliance-considerations-such-as-gdpr/

It isn't simple to meet AWS's scaling requirements when creating a threat detection monitoring service. Our service teams have to maintain the ability to deliver at a rapid pace. That led to the question: What can be done to make a security service as frictionless as possible for the business?

Core parts of our internal solution can now be found in Amazon GuardDuty, which doesn’t require deployment of software or security infrastructure. Instead, GuardDuty uses machine learning to monitor metadata for access activity such as unusual API calls. This method turned out to be highly effective. Because it worked well for us, we thought it would work well for our customers, too. Additionally, when we externalized the service, we enabled it to be turned on with a single click. The customer response to Amazon GuardDuty has been positive with rapid adoption since launch in late 2017.
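
As a rough illustration of that one-click simplicity (a hedged sketch, not drawn from the podcast; the Region is just an example), enabling GuardDuty programmatically comes down to a single boto3 call, after which findings can be listed as they accumulate:

import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Enabling GuardDuty for this account and Region is a single call.
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Findings accumulate as GuardDuty analyzes account activity metadata.
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
print(f"Detector {detector_id} currently has {len(finding_ids)} findings")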

The service's monitoring capabilities and threat detections could become increasingly helpful to customers concerned with data privacy or facing regulations such as the EU's General Data Protection Regulation (GDPR). Listen to the podcast with Senior Product Manager Michael Fuller to learn how Amazon GuardDuty could be leveraged to meet your compliance considerations.

How Security Mindfulness Can Help Prevent Data Disasters

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/what-is-cyber-security/

A locked computer screen

A few years ago, I was surprised by a request to consult with the Pentagon on cybersecurity. It surprised me because I have no military background, and because it was the Pentagon, which I suspected already knew a thing or two about security.

I learned that the consulting project was to raise the awareness of cybersecurity among the people who work at the Pentagon and on military bases. The problem they were having was that some did not sufficiently consider the issue of cybersecurity when they dealt with email, file attachments, and passwords, and in their daily interactions with fellow workers and outside vendors and consultants. If these sound like the same vulnerabilities that the rest of us have, you’re right. It turned out that the military was no different than we are in tackling the problem of cybersecurity in their day-to-day tasks.

That’s a problem. These are the people whose primary job requirement is to be vigilant against threats, and yet some were less than vigilant with their computer and communications systems.

But, more than highlighting a problem with just the military, it made me realize that this problem likely extended beyond the military. If the people responsible for defending the United States can’t take cybersecurity seriously, then how can the rest of us be expected to do so?

And, perhaps even more challenging: how do those of us in the business of protecting data and computer assets fix this problem?

I believe that the campaign I created to address this problem for the Pentagon also has value for other organizations and businesses. We all need to understand how to maintain and encourage security mindfulness as we interact with computer systems and other people.

Technology is Not Enough

We continually focus on what we can do with software and hardware to fight against cyber attacks. “Fighting fire with fire” is a natural and easy way of thinking.

The problem is that the technology used to attack us will continually evolve, which means that our technological responses must similarly evolve. The attackers have the natural advantage. They can innovate and we, the defenders, can only respond. It will continue like that, with attacks and defenses leapfrogging each other over and over while we, the defenders, try to keep up. It’s a game where we can never get ahead because the attackers have a multitude of weaknesses to exploit while the defenders have to guess which vulnerability will be exploited next. It’s enough to want to put the challenge out of your mind completely.

So, what’s the answer?

Let's go back to the Pentagon's request. It struck me that what the Pentagon was asking me to do was a classic marketing branding campaign. They wanted to make people more aware of something and to think in a certain manner about it. In this case, instead of making people think that using a certain product would make them happier and more successful, the task was to take a vague threat that wasn't high on people's list of things to worry about and turn it into something that engaged them sufficiently that they changed their behavior.

I didn’t want to try to make cyber attacks more scary — an idea that I rejected outright — but I did want to try to make people understand the real threat of cyber attacks to themselves, their families, and their livelihoods.

Managers and sysadmins face this challenge daily. They make systems as secure as possible, they install security updates, they create policies for passwords, email, and file handling, yet breaches still happen. It’s not that workers are oblivious to the problem, or don’t care about it. It’s just that they have plenty of other things to worry about, and it’s easy to forget about what they should be doing to thwart cyber attacks. They aren’t being mindful of the possibility of intrusions.

Raising Cybersecurity Awareness

People respond most effectively to challenges that are immediate and present. Abstract threats and unlikely occurrences don’t rise sufficiently above the noise level to register in our consciousness. When a flood is at your door, the threat is immediate and we respond. Our long-term health is important enough that we take action to protect it through insurance, check-ups, and taking care of ourselves because we have been educated or seen what happens if we neglect those preparations.

Both of the examples above — one immediate and one long-term — have gained enough mindfulness that we do something about them.

The problem is that there are so many possible threats to us that to maintain our sanity we ignore all but the most immediate and known threats. A threat becomes real once we’ve experienced it as a real danger. If someone has experienced a cyber attack, the experience likely resulted in a change in behavior. A shift in mindfulness made it less likely that the event would occur again due to a new level of awareness of the threat.

Making Mindfulness Work

One way to make an abstract threat seem more real and more possible is to put it into a context that the person is already familiar with. It then becomes more real and more of a possibility.

That's what I did for the Pentagon. I put together a campaign to raise the level of mindfulness of the threat of cyberattack by associating it with something they were already familiar with and considered serious.

I chose the physical battlefield. I branded the threat of cyber attack as the "Silent Battlefield." This took something that was not a visible, physical threat and turned it into something that was already perceived as a place where actual threats exist: the battlefield. Cyber warfare is silent compared to physical combat, of course, so the branding associated it with the field of combat. At the same time, it perhaps also made the threat more insidious; cyber warfare is silent. You don't hear a shell whistling through the air to warn you of the coming damage. When the enemy is silent, your only choice is to be mindful of the threat and, therefore, prepared.

Can this approach work in other contexts, say, a business office, an IT department, a school, or a hospital? I believe it can if the right cultural context is found to increase mindfulness of the problem and how to combat it.

First, find a correlative for the threat that makes it real in that particular environment. For the military, it was the battlefield. For a hospital, the correlative might be a disease attempting to invade a body.

Second, use a combination of words, pictures, audio, and video to get the concept across. This is a branding campaign, so just like a branding campaign for a product or service, multiple exposures and multiple delivery mechanisms will increase the effectiveness of the campaign.

Third, frame security measures as positive rather than negative. Focus on the achievement of a positive outcome rather than the avoidance of a negative result. Examples of positive framing of security measures include:

  • backing up regularly enabled the restoration of an important document that was lost or an earlier draft of a plan containing important information
  • recognizing suspicious emails and attachments avoided malware and downtime
  • showing awareness of various types of phishing campaigns enabled the productive continuation of business
  • creating and using unique and strong passwords and multi-factor verification for accounts avoided having to recreate accounts, credentials, and data
  • showing insight into attempts at social engineering and manipulation was evidence of intelligence and value to the organization

Fourth, demonstrate successful outcomes by highlighting thwarted cyber incursions. Give credit to those who are modeling a proactive attitude. Everyone in the organization should reinforce the messages and give positive reinforcement to effective measures when they are employed.

Other things to do to increase mindfulness are:

  • Reduce stress. A stressful workplace reduces anyone's ability to be mindful. Remove other threats so there are fewer things to worry about.
  • Encourage a "do one thing now" attitude. Be very clear about what's important. Make sure that security mindfulness is considered important enough to devote time to.
  • Show positive results and emphasize victories. Highlight behaviors and actions that defeated attempts to breach security and resulted in good outcomes. Make it personal by giving credit to individuals who have done something specific that worked.

You don’t have to study at a zendō to develop the prerequisite mindfulness to improve computer security. If you’re the person whose job it is to instill mindfulness, you need to understand how to make the threats of malware, ransomware, and other security vectors real to the people who must be vigilant against them every day, and find the cultural and psychological context that works in their environment.

If you can find a way to encourage that security mindfulness, you’ll create an environment where a concern for security is part of the culture and thereby greatly increase the resistance of your organization against cyber attacks.

The post How Security Mindfulness Can Help Prevent Data Disasters appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, networking, and security, and how to add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/756489/rss

Security updates have been issued by CentOS (procps, xmlrpc, and xmlrpc3), Debian (batik, prosody, redmine, wireshark, and zookeeper), Fedora (jasper, kernel, poppler, and xmlrpc), Mageia (git and wireshark), Red Hat (rh-java-common-xmlrpc), Slackware (git), SUSE (bzr, dpdk-thunderxdpdk, and ocaml), and Ubuntu (exempi).

AWS Resources Addressing Argentina’s Personal Data Protection Law and Disposition No. 11/2006

Post Syndicated from Leandro Bennaton original https://aws.amazon.com/blogs/security/aws-and-resources-addressing-argentinas-personal-data-protection-law-and-disposition-no-112006/

We have two new resources to help customers address their data protection requirements in Argentina. These resources specifically address the needs outlined under the Personal Data Protection Law No. 25.326, as supplemented by Regulatory Decree No. 1558/2001 (“PDPL”), including Disposition No. 11/2006. For context, the PDPL is an Argentine federal law that applies to the protection of personal data, including during transfer and processing.

A new webpage focused on data privacy in Argentina features FAQs, helpful links, and whitepapers that provide an overview of PDPL considerations, as well as our security assurance frameworks and international certifications, including ISO 27001, ISO 27017, and ISO 27018. You’ll also find details about our Information Request Report and the high bar of security at AWS data centers.

Additionally, we’ve released a new workbook that offers a detailed mapping as to how customers can operate securely under the Shared Responsibility Model while also aligning with Disposition No. 11/2006. The AWS Disposition 11/2006 Workbook can be downloaded from the Argentina Data Privacy page or directly from this link. Both resources are also available in Spanish from the Privacidad de los datos en Argentina page.

Want more AWS Security news? Follow us on Twitter.

 

Storing Encrypted Credentials In Git

Post Syndicated from Bozho original https://techblog.bozho.net/storing-encrypted-credentials-in-git/

We all know that we should not commit any passwords or keys to the repo with our code (no matter if public or private). Yet, thousands of production passwords can be found on GitHub (and probably thousands more in internal company repositories). Some have tried to fix that by removing the passwords (once they learned it’s not a good idea to store them publicly), but passwords have remained in the git history.

Knowing what not to do is the first and very important step. But how do we store production credentials? Database credentials, system secrets (e.g., for HMACs), access keys for 3rd-party services like payment providers or social networks. There doesn't seem to be an agreed-upon solution.

I've previously argued with the 12-factor app recommendation to use environment variables – if you have a few, that might be okay, but when the number of variables grows (as in any real application), it becomes impractical. And you can set environment variables via a bash script, but you'd have to store it somewhere. And in fact, even separate environment variables should be stored somewhere.

This somewhere could be a local directory (risky), a shared storage, e.g. FTP or S3 bucket with limited access, or a separate git repository. I think I prefer the git repository as it allows versioning (Note: S3 also does, but is provider-specific). So you can store all your environment-specific properties files with all their credentials and environment-specific configurations in a git repo with limited access (only Ops people). And that’s not bad, as long as it’s not the same repo as the source code.

Such a repo would look like this:

project
└─── production
|   |   application.properties
|   |   keystore.jks
└─── staging
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client1
|   |   application.properties
|   |   keystore.jks
└─── on-premise-client2
|   |   application.properties
|   |   keystore.jks

Since many companies are using GitHub or BitBucket for their repositories, storing production credentials on a public provider may still be risky. That’s why it’s a good idea to encrypt the files in the repository. A good way to do it is via git-crypt. It is “transparent” encryption because it supports diff and encryption and decryption on the fly. Once you set it up, you continue working with the repo as if it’s not encrypted. There’s even a fork that works on Windows.

You simply run git-crypt init (after you’ve put the git-crypt binary on your OS Path), which generates a key. Then you specify your .gitattributes, e.g. like that:

secretfile filter=git-crypt diff=git-crypt
*.key filter=git-crypt diff=git-crypt
*.properties filter=git-crypt diff=git-crypt
*.jks filter=git-crypt diff=git-crypt

And you’re done. Well, almost. If this is a fresh repo, everything is good. If it is an existing repo, you’d have to clean up your history which contains the unencrypted files. Following these steps will get you there, with one addition – before calling git commit, you should call git-crypt status -f so that the existing files are actually encrypted.

You’re almost done. We should somehow share and backup the keys. For the sharing part, it’s not a big issue to have a team of 2-3 Ops people share the same key, but you could also use the GPG option of git-crypt (as documented in the README). What’s left is to backup your secret key (that’s generated in the .git/git-crypt directory). You can store it (password-protected) in some other storage, be it a company shared folder, Dropbox/Google Drive, or even your email. Just make sure your computer is not the only place where it’s present and that it’s protected. I don’t think key rotation is necessary, but you can devise some rotation procedure.

The git-crypt authors claim it shines when it comes to encrypting just a few files in an otherwise public repo, and recommend looking at git-remote-gcrypt otherwise. But as there are often non-sensitive parts of environment-specific configurations, you may not want to encrypt everything. And I think it's perfectly fine to use git-crypt even in a separate-repo scenario. And even though encryption is an okay approach to protect credentials in your source code repo, it's still not necessarily a good idea to have the environment configurations in the same repo. Especially given that different people/teams manage these credentials. Even in small companies, maybe not all members have production access.

The outstanding question in this case is: how do you sync the properties with code changes? Sometimes the code adds new properties that should be reflected in the environment configurations. There are two scenarios here – first, properties that can vary across environments but have sensible default values (e.g. scheduled job periods), and second, properties that require explicit configuration (e.g. database credentials). The former can have the default values bundled in the code repo and therefore in the release artifact, allowing external files to override them. The latter should be announced to the people who do the deployment so that they can set the proper values.
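
As a minimal sketch of the first scenario (file names, keys, and the APP_CONFIG variable are illustrative, not prescribed), defaults bundled with the release artifact can be merged with an environment-specific file from the credentials repo:

import os

def load_props(path):
    """Parse simple key=value lines, ignoring blanks and comments."""
    props = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, value = line.split("=", 1)
                    props[key.strip()] = value.strip()
    return props

# Defaults ship with the code; the external file (from the ops-managed repo)
# overrides them at deployment time.
defaults = load_props("defaults.properties")
overrides = load_props(os.environ.get("APP_CONFIG", "/etc/myapp/application.properties"))
config = {**defaults, **overrides}

# Must be supplied by the environment-specific file, never by the defaults.
db_password = config["database.password"]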

The whole process of having versioned environment-specific configurations is actually quite simple and logical, even with the encryption added to the picture. And I think it's a good security practice we should try to follow.

The post Storing Encrypted Credentials In Git appeared first on Bozho's tech blog.