Tag Archives: https

OMG The Stupid It Burns

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/omg-stupid-it-burns.html

This article, pointed out by @TheGrugq, is stupid enough that it’s worth rebutting.

The article starts with the question “Why did the lessons of Stuxnet, Wannacry, Heartbleed and Shamoon go unheeded?“. It then proceeds to ignore the lessons of those things.
Some of the actual lessons should be things like how Stuxnet crossed air gaps, how Wannacry spread through flat Windows networking, how Heartbleed comes from technical debt, and how Shamoon furthers state aims by causing damage.
But this article doesn’t cover the technical lessons. Instead, it thinks the lesson should be the moral one: that we should take these things more seriously. But that’s stupid. It’s the sort of lesson taught by people who know nothing about the topic. When you have nothing of value to contribute to a topic, you can always take the moral high road and criticize everyone else as morally weak for not taking it seriously enough. Obviously, since doctors haven’t cured cancer yet, it’s because they don’t take the problem seriously.
The article continues to ignore the lesson of these cyber attacks and instead regales us with a list of military lessons from WW I and WW II. This makes the same flaw that many in the military make, trying to understand cyber through analogies with the real world. It’s not that such lessons could have no value, it’s that this article contains a poor list of them. It seems to consist of a random list of events that appeal to the author rather than events that have bearing on cybersecurity.
Then, in case we don’t get the point, the article bullies us with hyperbole, cliches, buzzwords, bombastic language, famous quotes, and citations. It’s hard to see how most of them actually apply to the text. Rather, it seems like they are included simply because he really really likes them.
The article invests much effort in discussing the buzzword “OODA loop”. Most attacks in cyberspace don’t have one. Instead, attackers flail around, trying lots of random things, overcoming defense with brute-force rather than an understanding of what’s going on. That’s obviously the case with Wannacry: it was an accident, with the perpetrator experimenting with what would happen if they added the ETERNALBLUE exploit to their existing ransomware code. The consequence was beyond anybody’s ability to predict.
You might claim that this is just the first stage, that they’ll loop around, observe Wannacry’s effects, orient themselves, decide, then act upon what they learned. Nope. Wannacry burned the exploit. It’s essentially removed any vulnerable systems from the public Internet, thereby making it impossible to use what they learned. It’s still active a year later, with infected systems behind firewalls busily scanning the Internet so that if you put a new system online that’s vulnerable, it’ll be taken offline within a few hours, before any other evildoer can take advantage of it.
See what I’m doing here? Learning the actual lessons of things like Wannacry? The thing the above article fails to do??
The article has a humorous paragraph on “defense in depth”, misunderstanding the term. To be fair, it’s the cybersecurity industry’s fault: they adopted then redefined the term. That’s why there’s two separate articles on Wikipedia: one for the old military term (as used in this article) and one for the new cybersecurity term.
As used in the cybersecurity industry, “defense in depth” means having multiple layers of security. Many organizations put all their defensive efforts on the perimeter, and none inside a network. The idea of “defense in depth” is to put more defenses inside the network. For example, instead of just one firewall at the edge of the network, put firewalls inside the network to segment different subnetworks from each other, so that a ransomware infection in the customer support computers doesn’t spread to sales and marketing computers.
The article talks about exploiting WiFi chips to bypass defense in depth measures like browser sandboxes. This conflates different types of attacks. A WiFi attack is usually considered a local attack, from somebody next to you in a bar, rather than a remote attack from a server in Russia. Moreover, far from disproving “defense in depth”, such WiFi attacks highlight the need for it. Namely, phones need to be designed so that successful exploitation of other microprocessors (namely, the WiFi, Bluetooth, and cellular baseband chips) can’t directly compromise the host system. In other words, once exploited with “Broadpwn”, a hacker would need to extend the exploit chain with another vulnerability in the host’s Broadcom WiFi driver rather than immediately exploiting a DMA attack across PCIe. This suggests that if PCIe is used to interface to peripherals in the phone, an IOMMU should be used, for “defense in depth”.
Cybersecurity is a young field. There are lots of useful things that outsider non-techies can teach us. Lessons from military history would be well-received.
But that’s not this story. Instead, this story is by an outsider telling us we don’t know what we are doing, that they do, and who then proceeds to prove they don’t know what they are doing. Their argument is based on moral suasion and on bullying us with what appears on the surface to be intellectual rigor, but which is in fact devoid of anything smart.
My fear, here, is that I’m going to be in a meeting where somebody has read this pretentious garbage and explains to me why “defense in depth” is wrong and how we need to OODA faster. I’d rather nip this in the bud by pointing out that if you found anything interesting in that article, you are wrong.

RDS for Oracle: Extending Outbound Network Access to use SSL/TLS

Post Syndicated from Surya Nallu original https://aws.amazon.com/blogs/architecture/rds-for-oracle-extending-outbound-network-access-to-use-ssltls/

In December 2016, we launched the Outbound Network Access functionality for Amazon RDS for Oracle, enabling customers to use their RDS for Oracle database instances to communicate with external web endpoints using the utl_http and utl_tcp packages, and to send emails through utl_smtp. We extended the functionality by adding the option of using custom DNS servers, allowing such outbound network access to use any DNS server a customer chooses. These releases enabled HTTP, TCP, and SMTP communication originating from RDS for Oracle instances – limited to non-secure (non-SSL) channels.

To overcome this limitation for SSL connections, we recently published a whitepaper that guides you through the process of creating customized Oracle wallet bundles on your RDS for Oracle instances. By making use of such wallets, you can now extend the Outbound Network Access capability so that external communications happen over secure (SSL/TLS) connections. This opens up new use cases for your RDS for Oracle instances.

With the right set of certificates imported into your RDS for Oracle instances (through Oracle wallets), your database instances can now:

  • Communicate with a HTTPS endpoint: Using utl_http, access a resource such as https://status.aws.amazon.com/robots.txt
  • Download files from Amazon S3 securely: Using a presigned URL from Amazon S3, you can now download any file over SSL
  • Extend Oracle database links to use SSL: Database links between RDS for Oracle instances can now use SSL, as long as the instances have the SSL option installed
  • Send email over SMTPS: You can now integrate with Amazon SES to send emails from your database instances, or with any other generic SMTPS provider that can be integrated in the same way

These are just a few high-level examples of the new use cases that the whitepaper opens up. As a reminder, always ensure that security best practices are in place when making use of Outbound Network Access (as detailed in the whitepaper).

About the Author

Surya Nallu is a Software Development Engineer on the Amazon RDS for Oracle team.

The End of Google Cloud Messaging, and What it Means for Your Apps

Post Syndicated from Zach Barbitta original https://aws.amazon.com/blogs/messaging-and-targeting/the-end-of-google-cloud-messaging-and-what-it-means-for-your-apps/

On April 10, 2018, Google announced the deprecation of its Google Cloud Messaging (GCM) platform. Specifically, the GCM server and client APIs are deprecated and will be removed as soon as April 11, 2019.  What does this mean for you and your applications that use Amazon Simple Notification Service (Amazon SNS) or Amazon Pinpoint?

First, nothing will break now or after April 11, 2019. GCM device tokens are completely interchangeable with the newer Firebase Cloud Messaging (FCM) device tokens. If you have existing GCM tokens, you’ll still be able to use them to send notifications. This statement is also true for GCM tokens that you generate in the future.

On the back end, we’ve already migrated Amazon SNS and Amazon Pinpoint to the server endpoint for FCM (https://fcm.googleapis.com/fcm/send). As a developer, you don’t need to make any changes as a result of this deprecation.

We created the following mini-FAQ to address some of the questions you may have as a developer who uses Amazon SNS or Amazon Pinpoint.

If I migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. Your ability to connect to your applications and send messages through both Amazon SNS and Amazon Pinpoint doesn’t change. We’ll update the documentation for Amazon SNS and Amazon Pinpoint soon to reflect these changes.

If I don’t migrate to FCM from GCM, can I still use Amazon Pinpoint and Amazon SNS?

Yes. If you do nothing, your existing credentials and GCM tokens will still be valid. All applications that you previously set up to use Amazon Pinpoint or Amazon SNS will continue to work normally. When you call the API for Amazon Pinpoint or Amazon SNS, we initiate a request to the FCM server endpoint directly.
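
For illustration, here’s a minimal sketch (not part of the original post) of publishing a push notification through Amazon SNS with an existing GCM/FCM platform application; the platform application ARN and device token below are hypothetical placeholders.

# Minimal sketch: publish to a device registered under a GCM/FCM platform application in SNS.
# The platform application ARN and device token are hypothetical placeholders.
import boto3

sns = boto3.client("sns")

# Register (or re-register) a device token with an existing platform application.
endpoint = sns.create_platform_endpoint(
    PlatformApplicationArn="arn:aws:sns:us-east-1:123456789012:app/GCM/MyAndroidApp",
    Token="example-device-registration-token",
)

# Publish a message to that device; SNS forwards it to the FCM server endpoint on your behalf.
sns.publish(
    TargetArn=endpoint["EndpointArn"],
    Message="Hello from Amazon SNS",
)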

What are the differences between Amazon SNS and Amazon Pinpoint?

Amazon SNS makes it easy for developers to set up, operate, and send notifications at scale, affordably and with a high degree of flexibility. Amazon Pinpoint has many of the same messaging capabilities as Amazon SNS, with the same levels of scalability and flexibility.

The main difference between the two services is that Amazon Pinpoint provides both transactional and targeted messaging capabilities. By using Amazon Pinpoint, marketers and developers can not only send transactional messages to their customers, but can also segment their audiences, create campaigns, and analyze both application and message metrics.

How do I migrate from GCM to FCM?

For more information about migrating from GCM to FCM, see Migrate a GCM Client App for Android to Firebase Cloud Messaging on the Google Developers site.

If you have any questions, please post them in the comments section, or in the Amazon Pinpoint or Amazon SNS forums.

Implementing safe AWS Lambda deployments with AWS CodeDeploy

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/implementing-safe-aws-lambda-deployments-with-aws-codedeploy/

This post courtesy of George Mao, AWS Senior Serverless Specialist – Solutions Architect

AWS Lambda and AWS CodeDeploy recently made it possible to automatically shift incoming traffic between two function versions based on a preconfigured rollout strategy. This new feature allows you to gradually shift traffic to the new function. If there are any issues with the new code, you can quickly rollback and control the impact to your application.

Previously, you had to manually move 100% of traffic from the old version to the new version. Now, you can have CodeDeploy automatically execute pre- or post-deployment tests and automate a gradual rollout strategy. Traffic shifting is built right into the AWS Serverless Application Model (SAM), making it easy to define and deploy your traffic shifting capabilities. SAM is an extension of AWS CloudFormation that provides a simplified way of defining serverless applications.

In this post, I show you how to use SAM, CloudFormation, and CodeDeploy to accomplish an automated rollout strategy for safe Lambda deployments.

Scenario

For this walkthrough, you write a Lambda application that returns a count of the S3 buckets that you own. You deploy it and use it in production. Later on, you receive requirements that tell you that you need to change your Lambda application to count only buckets that begin with the letter “a”.

Before you make the change, you need to be sure that your new Lambda application works as expected. If it does have issues, you want to minimize the number of impacted users and roll back easily. To accomplish this, you create a deployment process that publishes the new Lambda function, but does not send any traffic to it. You use CodeDeploy to execute a PreTraffic test to ensure that your new function works as expected. After the test succeeds, CodeDeploy automatically shifts traffic gradually to the new version of the Lambda function.

Your Lambda function is exposed as a REST service via an Amazon API Gateway deployment. This makes it easy to test and integrate.

Prerequisites

To execute the SAM and CloudFormation deployment, you must have the following IAM permissions:

  • cloudformation:*
  • lambda:*
  • codedeploy:*
  • iam:create*

You may use the AWS SAM Local CLI or the AWS CLI to package and deploy your Lambda application. If you choose to use SAM Local, be sure to install it onto your system. For more information, see AWS SAM Local Installation.

All of the code used in this post can be found in this GitHub repository: https://github.com/aws-samples/aws-safe-lambda-deployments.

Walkthrough

For this post, use SAM to define your resources because it comes with built-in CodeDeploy support for safe Lambda deployments.  The deployment is handled and automated by CloudFormation.

SAM allows you to define your Serverless applications in a simple and concise fashion, because it automatically creates all necessary resources behind the scenes. For example, if you do not define an execution role for a Lambda function, SAM automatically creates one. SAM also creates the CodeDeploy application necessary to drive the traffic shifting, as well as the IAM service role that CodeDeploy uses to execute all actions.

Create a SAM template

To get started, write your SAM template and call it template.yaml.

AWSTemplateFormatVersion : '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: An example SAM template for Lambda Safe Deployments.

Resources:

  returnS3Buckets:
    Type: AWS::Serverless::Function
    Properties:
      Handler: returnS3Buckets.handler
      Runtime: nodejs6.10
      AutoPublishAlias: live
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "s3:ListAllMyBuckets"
            Resource: '*'
      DeploymentPreference:
          Type: Linear10PercentEvery1Minute
          Hooks:
            PreTraffic: !Ref preTrafficHook
      Events:
        Api:
          Type: Api
          Properties:
            Path: /test
            Method: get

  preTrafficHook:
    Type: AWS::Serverless::Function
    Properties:
      Handler: preTrafficHook.handler
      Policies:
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "codedeploy:PutLifecycleEventHookExecutionStatus"
            Resource:
              !Sub 'arn:aws:codedeploy:${AWS::Region}:${AWS::AccountId}:deploymentgroup:${ServerlessDeploymentApplication}/*'
        - Version: "2012-10-17"
          Statement: 
          - Effect: "Allow"
            Action: 
              - "lambda:InvokeFunction"
            Resource: !Ref returnS3Buckets.Version
      Runtime: nodejs6.10
      FunctionName: 'CodeDeployHook_preTrafficHook'
      DeploymentPreference:
        Enabled: false
      Timeout: 5
      Environment:
        Variables:
          NewVersion: !Ref returnS3Buckets.Version

This template creates two functions:

  • returnS3Buckets
  • preTrafficHook

The returnS3Buckets function is where your application logic lives. It’s a simple piece of code that uses the AWS SDK for JavaScript in Node.JS to call the Amazon S3 listBuckets API action and return the number of buckets.

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			callback(null, {
				statusCode: 200,
				body: allBuckets.length
			});
		}
	});	
}

Review the key parts of the SAM template that defines returnS3Buckets:

  • The AutoPublishAlias attribute instructs SAM to automatically publish a new version of the Lambda function for each new deployment and link it to the live alias.
  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides the function with permission to call listBuckets.
  • The DeploymentPreference attribute configures the type of rollout pattern to use. In this case, you are shifting traffic in a linear fashion, moving 10% of traffic every minute to the new version. For more information about supported patterns, see Serverless Application Model: Traffic Shifting Configurations.
  • The Hooks attribute specifies that you want to execute the preTrafficHook Lambda function before CodeDeploy automatically begins shifting traffic. This function should perform validation testing on the newly deployed Lambda version. This function invokes the new Lambda function and checks the results. If you’re satisfied with the tests, instruct CodeDeploy to proceed with the rollout via an API call to: codedeploy.putLifecycleEventHookExecutionStatus.
  • The Events attribute defines an API-based event source that can trigger this function. It accepts requests on the /test path using an HTTP GET method.

The preTrafficHook function that performs this validation looks like the following:
'use strict';

const AWS = require('aws-sdk');
const codedeploy = new AWS.CodeDeploy({apiVersion: '2014-10-06'});
var lambda = new AWS.Lambda();

exports.handler = (event, context, callback) => {

	console.log("Entering PreTraffic Hook!");
	
	// Read the DeploymentId & LifecycleEventHookExecutionId from the event payload
    var deploymentId = event.DeploymentId;
	var lifecycleEventHookExecutionId = event.LifecycleEventHookExecutionId;

	var functionToTest = process.env.NewVersion;
	console.log("Testing new function version: " + functionToTest);

	// Perform validation of the newly deployed Lambda version
	var lambdaParams = {
		FunctionName: functionToTest,
		InvocationType: "RequestResponse"
	};

	var lambdaResult = "Failed";
	lambda.invoke(lambdaParams, function(err, data) {
		if (err){	// an error occurred
			console.log(err, err.stack);
			lambdaResult = "Failed";
		}
		else{	// successful response
			var result = JSON.parse(data.Payload);
			console.log("Result: " +  JSON.stringify(result));

			// Check the response for valid results
			// The response will be a JSON payload with statusCode and body properties. ie:
			// {
			//		"statusCode": 200,
			//		"body": 51
			// }
			if(result.body == 9){	
				lambdaResult = "Succeeded";
				console.log ("Validation testing succeeded!");
			}
			else{
				lambdaResult = "Failed";
				console.log ("Validation testing failed!");
			}

			// Complete the PreTraffic Hook by sending CodeDeploy the validation status
			var params = {
				deploymentId: deploymentId,
				lifecycleEventHookExecutionId: lifecycleEventHookExecutionId,
				status: lambdaResult // status can be 'Succeeded' or 'Failed'
			};
			
			// Pass AWS CodeDeploy the prepared validation test results.
			codedeploy.putLifecycleEventHookExecutionStatus(params, function(err, data) {
				if (err) {
					// Validation failed.
					console.log('CodeDeploy Status update failed');
					console.log(err, err.stack);
					callback("CodeDeploy Status update failed");
				} else {
					// Validation succeeded.
					console.log('Codedeploy status updated successfully');
					callback(null, 'Codedeploy status updated successfully');
				}
			});
		}  
	});
}

The hook is hardcoded to check that the number of S3 buckets returned is 9.

Review the key parts of the SAM template that defines preTrafficHook:

  • The Policies attribute specifies additional policy statements that SAM adds onto the automatically generated IAM role for this function. The first statement provides permissions to call the CodeDeploy PutLifecycleEventHookExecutionStatus API action. The second statement provides permissions to invoke the specific version of the returnS3Buckets function to test.
  • This function has traffic shifting disabled by setting Enabled: false under its DeploymentPreference attribute.
  • The FunctionName attribute explicitly tells CloudFormation what to name the function. Otherwise, CloudFormation creates the function with the default naming convention: [stackName]-[FunctionName]-[uniqueID].  Name the function with the “CodeDeployHook_” prefix because the CodeDeployServiceRole role only allows InvokeFunction on functions named with that prefix.
  • Set the Timeout attribute to allow enough time to complete your validation tests.
  • Use an environment variable to inject the ARN of the newest deployed version of the returnS3Buckets function. The ARN allows the function to know the specific version to invoke and perform validation testing on.

Deploy the function

Your SAM template is all set and the code is written—you’re ready to deploy the function for the first time. Here’s how to do it via the SAM CLI. Replace “sam” with “aws cloudformation” to use the AWS CLI and CloudFormation instead.

First, package the function. This command returns a CloudFormation importable file, packaged.yaml.

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml

Now deploy everything:

sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

At this point, both Lambda functions have been deployed within the CloudFormation stack mySafeDeployStack. The returnS3Buckets function has been deployed as version 1:

SAM automatically created a few things, including the CodeDeploy application, with the deployment pattern that you specified (Linear10PercentEvery1Minute). There is currently one deployment group, with no action, because no deployments have occurred. SAM also created the IAM service role that this CodeDeploy application uses:

There is a single managed policy attached to this role, which allows CodeDeploy to invoke any Lambda function that begins with “CodeDeployHook_”.

An API called safeDeployStack has been set up. It targets your Lambda function with the /test resource using the GET method. When you test the endpoint, API Gateway executes the returnS3Buckets function, which returns the number of S3 buckets that you own. In this case, it’s 51.

Publish a new Lambda function version

Now implement the requirements change, which is to make returnS3Buckets count only buckets that begin with the letter “a”. The code now looks like the following (see returnS3BucketsNew.js in GitHub):

'use strict';

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
	console.log("I am here! " + context.functionName  +  ":"  +  context.functionVersion);

	s3.listBuckets(function (err, data){
		if(err){
			console.log(err, err.stack);
			callback(null, {
				statusCode: 500,
				body: "Failed!"
			});
		}
		else{
			var allBuckets = data.Buckets;

			console.log("Total buckets: " + allBuckets.length);
			//callback(null, allBuckets.length);

			//  New Code begins here
			var counter=0;
			for(var i  in allBuckets){
				if(allBuckets[i].Name[0] === "a")
					counter++;
			}
			console.log("Total buckets starting with a: " + counter);

			callback(null, {
				statusCode: 200,
				body: counter
			});
			
		}
	});	
}

Repackage and redeploy with the same two commands as earlier:

sam package --template-file template.yaml --s3-bucket mybucket --output-template-file packaged.yaml
	
sam deploy --template-file packaged.yaml --stack-name mySafeDeployStack --capabilities CAPABILITY_IAM

CloudFormation understands that this is a stack update instead of an entirely new stack. You can see that reflected in the CloudFormation console:

During the update, CloudFormation deploys the new Lambda function as version 2 and adds it to the “live” alias. There is no traffic routing there yet. CodeDeploy now takes over to begin the safe deployment process.

The first thing CodeDeploy does is invoke the preTrafficHook function. Verify that this happened by reviewing the Lambda logs and metrics:

The function should progress successfully, invoke Version 2 of returnS3Buckets, and finally invoke the CodeDeploy API with a success code. After this occurs, CodeDeploy begins the predefined rollout strategy. Open the CodeDeploy console to review the deployment progress (Linear10PercentEvery1Minute):

Verify the traffic shift

During the deployment, verify that the traffic shift has started to occur by running the test periodically. As the deployment shifts towards the new version, a larger percentage of the responses return 9 instead of 51. These numbers match the bucket counts from earlier: 51 buckets in total, and 9 that begin with the letter “a”.

A minute later, you see 10% more traffic shifting to the new version. The whole process takes 10 minutes to complete. After completion, open the Lambda console and verify that the “live” alias now points to version 2:

After 10 minutes, the deployment is complete and CodeDeploy signals success to CloudFormation and completes the stack update.

Check the results

If you invoke the function alias manually, you see the results of the new implementation.

aws lambda invoke --function-name [lambda arn to live alias] out.txt

You can also execute the prod stage of your API and verify the results by issuing an HTTP GET to the invoke URL:
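
Here’s a minimal sketch (not from the original post) of issuing that GET from Python; the invoke URL is a hypothetical placeholder for your own API Gateway stage URL.

# Minimal sketch: GET the API Gateway stage and print the bucket count returned by the live alias.
# The URL below is a hypothetical placeholder.
import urllib.request

url = "https://abc123.execute-api.us-east-1.amazonaws.com/Prod/test"  # hypothetical invoke URL
with urllib.request.urlopen(url) as response:
    print(response.status, response.read().decode("utf-8"))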

Summary

This post has shown you how you can safely automate your Lambda deployments using the Lambda traffic shifting feature. You used the Serverless Application Model (SAM) to define your Lambda functions and configured CodeDeploy to manage your deployment patterns. Finally, you used CloudFormation to automate the deployment and updates to your function and PreTraffic hook.

Now that you know all about this new feature, you’re ready to begin automating Lambda deployments with confidence that things will work as designed. I look forward to hearing about what you’ve built with the AWS Serverless Platform.

Telegram Founder Pledges Millions in Bitcoin For VPNs and “Digital Resistance”

Post Syndicated from Andy original https://torrentfreak.com/telegram-founder-pledges-millions-in-bitcoin-for-vpns-and-digital-resistance-180418/

Starting yesterday, Russia went to war with free cross-platform messaging app Telegram. Authorities including the FSB wanted access to Telegram’s encryption keys, but the service refused to hand them over.

As a result, the service – which serviced 200,000,000 people in March alone – came under massive attack. Supported by a court ruling obtained last Friday, authorities ordered ISPs to block huge numbers of IP addresses in an effort to shut Telegram down.

Amazon and Google, whose services Telegram uses, were both hit with censorship measures, with around 1.8 million IP addresses belonging to the Internet giants blocked in an initial wave of action. But the government was just getting warmed up.

In an update posted to Twitter from Switzerland late last night, Telegram founder Pavel Durov confirmed that Russia had massively stepped up the fight against his encrypted messaging platform.

Of course, 15 million IP addresses is a huge volume, particularly since ‘just’ 14 million of Telegram’s users are located in Russia – that’s more than one IP address for each of them. As a result, there are reports of completely unrelated services being affected by the ban, which is to be expected given its widespread nature. But Russia doesn’t want to stop there.

According to Reuters, local telecoms watchdog Rozcomnadzor asked both Google and Apple [Update: and APKMirror] to remove Telegram from their app stores, to prevent local citizens from gaining access to the software itself. It is unclear whether either company intends to comply but as yet, neither has responded publicly nor taken any noticeable action.

An announcement from Durov last night thanked the companies for not complying with the Russian government’s demands, noting that the efforts so far had proven mostly futile.

“Despite the ban, we haven’t seen a significant drop in user engagement so far, since Russians tend to bypass the ban with VPNs and proxies. We also have been relying on third-party cloud services to remain partly available for our users there,” Durov wrote on Telegram.

“Thank you for your support and loyalty, Russian users of Telegram. Thank you, Apple, Google, Amazon, Microsoft – for not taking part in political censorship.”

Durov noted that Russia accounts for around 7% of Telegram’s userbase, a figure that could be compensated for with organic growth in just a couple of months, even if Telegram lost access to the entire market. However, the action only appears to have lit a fire under the serial entrepreneur, who now has declared a war of his own against censorship.

“To support internet freedoms in Russia and elsewhere I started giving out bitcoin grants to individuals and companies who run socks5 proxies and VPN,” Durov said.

“I am happy to donate millions of dollars this year to this cause, and hope that other people will follow. I called this Digital Resistance – a decentralized movement standing for digital freedoms and progress globally.”

As founder of not only Telegram but also vKontakte, Russia’s answer to Facebook, Durov is a force to be reckoned with. As such, his promises are unlikely to be hollow ones. While Russia has drawn a line in the sand on encryption, it appears to have energized Durov to take a stand, one that could have a positive effect on anti-censorship measures both in Russia and further afield.


Notes on setting up Raspberry Pi 3 as WiFi hotspot

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/notes-on-setting-up-raspberry-pi-3-as.html

I want to sniff the packets for IoT devices. There are a number of ways of doing this, but one straightforward mechanism is configuring a “Raspberry Pi 3 B” as a WiFi hotspot, then running tcpdump on it to record all the packets that pass through it. Google gives lots of results on how to do this, but they all demand that you have the precise hardware, WiFi hardware, and software that the authors do, so that’s a pain.

I got it working using the instructions here. There are a few additional notes, which is why I’m writing this blogpost, so I remember them.
https://www.raspberrypi.org/documentation/configuration/wireless/access-point.md

I’m using the RPi-3-B and not the RPi-3-B+, and the latest version of Raspbian at the time of this writing, “Raspbian Stretch Lite 2018-3-13”.

Some things didn’t work as described. The first is that it couldn’t find the package “hostapd”. The solution was to run “apt-get update” a second time.

The second problem was an error message about NAT not working when trying to set the masquerade rule. That’s because the ‘upgrade’ updates the kernel, making the running system out-of-date with the files on disk. The solution is to make sure you reboot after upgrading.

Thus, what you do at the start is:

apt-get update
apt-get upgrade
apt-get update
shutdown -r now

Then it’s just “apt-get install tcpdump” and start capturing on wlan0. This will get the non-monitor-mode Ethernet frames, which is what I want.

The Copyright Directive: a compromise draft from the Bulgarian Presidency, April 2018

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/14/copyrigt_dir_bg_pres_compr/

The latest news on the progress of the Directive on Copyright in the Digital Single Market, and the Bulgarian Presidency’s role in the process of reaching agreement on the texts.

Two documents, published on the website of the Austrian Parliament, are circulating on Twitter.

The first is the Bulgarian Presidency’s compromise text, which will be discussed on Monday, April 16.


The second contains clarifications and proposals on the contested provisions, including Articles 11 and 13, again put forward by the Bulgarian Presidency.


Securing messages published to Amazon SNS with AWS PrivateLink

Post Syndicated from Otavio Ferreira original https://aws.amazon.com/blogs/security/securing-messages-published-to-amazon-sns-with-aws-privatelink/

Amazon Simple Notification Service (SNS) now supports VPC Endpoints (VPCE) via AWS PrivateLink. You can use VPC Endpoints to privately publish messages to SNS topics, from an Amazon Virtual Private Cloud (VPC), without traversing the public internet. When you use AWS PrivateLink, you don’t need to set up an Internet Gateway (IGW), Network Address Translation (NAT) device, or Virtual Private Network (VPN) connection. You don’t need to use public IP addresses, either.

Using VPC Endpoints doesn’t require code changes and can bring additional security to pub/sub messaging use cases that rely on SNS. VPC Endpoints help promote data privacy and are aligned with assurance programs, including the Health Insurance Portability and Accountability Act (HIPAA), FedRAMP, and others discussed below.

VPC Endpoints for SNS in action

Here’s how VPC Endpoints for SNS works. The following example is based on a banking system that processes mortgage applications. This banking system, which has been deployed to a VPC, publishes each mortgage application to an SNS topic. The SNS topic then fans out the mortgage application message to two subscribing AWS Lambda functions:

  • Save-Mortgage-Application stores the application in an Amazon DynamoDB table. As the mortgage application contains personally identifiable information (PII), the message must not traverse the public internet.
  • Save-Credit-Report checks the applicant’s credit history against an external Credit Reporting Agency (CRA), then stores the final credit report in an Amazon S3 bucket.

The following diagram depicts the underlying architecture for this banking system:
 
Diagram depicting the architecture for the example banking system
 
To protect applicants’ data, the financial institution responsible for developing this banking system needed a mechanism to prevent PII data from traversing the internet when publishing mortgage applications from their VPC to the SNS topic. Therefore, they created a VPC endpoint to enable their publisher Amazon EC2 instance to privately connect to the SNS API. As shown in the diagram, when the VPC endpoint is created, an Elastic Network Interface (ENI) is automatically placed in the same VPC subnet as the publisher EC2 instance. This ENI exposes a private IP address that is used as the entry point for traffic destined to SNS. This ensures that traffic between the VPC and SNS doesn’t leave the Amazon network.

Set up VPC Endpoints for SNS

The process for creating a VPC endpoint to privately connect to SNS doesn’t require code changes: access the VPC Management Console, navigate to the Endpoints section, and create a new Endpoint. Three attributes are required (a scripted equivalent is sketched after this list):

  • The SNS service name.
  • The VPC and Availability Zones (AZs) from which you’ll publish your messages.
  • The Security Group (SG) to be associated with the endpoint network interface. The Security Group controls the traffic to the endpoint network interface from resources in your VPC. If you don’t specify a Security Group, the default Security Group for your VPC will be associated.
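
For teams that prefer automation over the console, here’s a minimal sketch (not from the original post) of creating the same interface endpoint with boto3; the VPC, subnet, and security group IDs are hypothetical placeholders.

# Minimal sketch: create an interface VPC endpoint for the SNS API with boto3.
# All resource IDs below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.sns",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # lets the standard SNS endpoint name resolve to the private ENI
)
print(response["VpcEndpoint"]["VpcEndpointId"])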

Help ensure your security and compliance

SNS can support messaging use cases in regulated market segments, such as healthcare provider systems subject to the Health Insurance Portability and Accountability Act (HIPAA) and financial systems subject to the Payment Card Industry Data Security Standard (PCI DSS), and is also in scope for a number of AWS assurance programs.

The SNS API is served through HTTP Secure (HTTPS), and encrypts all messages in transit with Transport Layer Security (TLS) certificates issued by Amazon Trust Services (ATS). The certificates verify the identity of the SNS API server when encrypted connections are established. The certificates help establish proof that your SNS API client (SDK, CLI) is communicating securely with the SNS API server. A Certificate Authority (CA) issues the certificate to a specific domain. Hence, when a domain presents a certificate that’s issued by a trusted CA, the SNS API client knows it’s safe to make the connection.

Summary

VPC Endpoints can increase the security of your pub/sub messaging use cases by allowing you to publish messages to SNS topics, from instances in your VPC, without traversing the internet. Setting up VPC Endpoints for SNS doesn’t require any code changes because the SNS API address remains the same.

VPC Endpoints for SNS is now available in all AWS Regions where AWS PrivateLink is available. For information on pricing and regional availability, visit the VPC pricing page.
For more information and on-boarding, see Publishing to Amazon SNS Topics from Amazon Virtual Private Cloud in the SNS documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Amazon SNS forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Two hearings of Mark Zuckerberg before congressional committees in the US

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/10/fb_hearings/

Mark Zuckerberg is due to appear at two hearings, on April 10 and 11, 2018, before three congressional committees in the US.

Zuckerberg’s prepared written testimony can be read in the American media.

The testimony has two parts, one on Cambridge Analytica and one on Russia-related election interference, and on each of the two issues Zuckerberg lays out what happened and what Facebook is doing in response.

According to preliminary announcements, the hearings will take place today at 21:15 Bulgarian time (before two Senate committees, live here: https://cs.pn/2IxEXj7) and tomorrow at 17:00 Bulgarian time (before a House of Representatives committee, live here: https://cs.pn/2uMK392).

In the meantime, he has also held preliminary meetings with members of Congress, while the media keep suggesting new questions to put to him.

But Zeynep Tufekci’s view is different: what should Congress ask him? Nothing. They would do better to get on with their own work.

 

User Authentication Best Practices Checklist

Post Syndicated from Bozho original https://techblog.bozho.net/user-authentication-best-practices-checklist/

User authentication is functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.

Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have a vulnerable password reset functionality, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. OWASP Top 10 is always something you should read, every year. But that might not be enough.

So, let’s start. I’ll try to be concise, but I’ll include as many of the related pitfalls as I can cover – e.g. what could go wrong with the user session after users log in:

  • Store passwords with bcrypt/scrypt/PBKDF2. No MD5 or SHA, as they are not good for password storing. Long salt (per user) is mandatory (the aforementioned algorithms have it built in). If you don’t and someone gets hold of your database, they’ll be able to extract the passwords of all your users. And then try these passwords on other websites.
  • Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks). Force HTTPS if user opens a plain-text version.
  • Mark cookies as secure. Makes cookie theft harder.
  • Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in.
  • Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
  • Have a same-origin policy
  • Logout – let your users logout by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy)
  • Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For ajax-heavy website you can have regular ajax-polling that keeps the session alive while the page stays open.
  • Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
  • Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explains it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches. Store a password-reset token in the user profile table and then send it as a parameter in the link. Or do not store anything in the database, but send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp). That way you have expiring links (rather than one-time links); a minimal sketch of this follows the list. The HMAC relies on a secret key, so the links can’t be spoofed. It seems there’s no consensus, as the OWASP guide takes a slightly different approach
  • One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can have that approach instead of (rather than in addition to) passwords.
  • Limit login attempts – brute-force through a web UI should not be possible; therefore you should block login attempts if they become too many. One approach is to just block them based on IP. The other one is to block them based on account attempted. (Spring example here). Which one is better – I don’t know. Both can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience.
  • Don’t leak information through error messages – you shouldn’t allow attackers to figure out if an email is registered or not. If an email is not found, upon login report just “Incorrect credentials”. On passwords reset, it may be something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people don’t often remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
  • Make sure you use JWT only if it’s really necessary and be careful of the pitfalls.
  • Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
  • For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
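
To make the expiring, HMAC-signed reset link from the “forgotten password” bullet concrete, here is a minimal Python sketch under stated assumptions: the secret key, token format, and expiry window are illustrative, not a prescribed implementation.

# Minimal sketch of an expiring, HMAC-signed password reset token (userId:expiresTimestamp:hmac).
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-long-random-server-side-secret"  # hypothetical placeholder

def make_reset_token(user_id: str, ttl_seconds: int = 3600) -> str:
    # The expiry timestamp is embedded in the token, so no database state is needed.
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{user_id}:{expires}".encode("utf-8")
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{user_id}:{expires}:{sig}"

def verify_reset_token(token: str) -> bool:
    try:
        user_id, expires, sig = token.split(":")
    except ValueError:
        return False
    payload = f"{user_id}:{expires}".encode("utf-8")
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels; also reject links past their expiry.
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()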

I’m sure I’m missing something. And you see it’s complicated. Sadly we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome, that you almost always get at least some of it wrong.

The post User Authentication Best Practices Checklist appeared first on Bozho's tech blog.

Rotate Amazon RDS database credentials automatically with AWS Secrets Manager

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/

Recently, we launched AWS Secrets Manager, a service that makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can configure Secrets Manager to rotate secrets automatically, which can help you meet your security and compliance needs. Secrets Manager offers built-in integrations for MySQL, PostgreSQL, and Amazon Aurora on Amazon RDS, and can rotate credentials for these databases natively. You can control access to your secrets by using fine-grained AWS Identity and Access Management (IAM) policies. To retrieve secrets, employees replace plaintext secrets with a call to Secrets Manager APIs, eliminating the need to hard-code secrets in source code or update configuration files and redeploy code when secrets are rotated.

In this post, I introduce the key features of Secrets Manager. I then show you how to store a database credential for a MySQL database hosted on Amazon RDS and how your applications can access this secret. Finally, I show you how to configure Secrets Manager to rotate this secret automatically.

Key features of Secrets Manager

These features include the ability to:

  • Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for Amazon RDS databases for MySQL, PostgreSQL, and Amazon Aurora. You can extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets. For example, you can create an AWS Lambda function to rotate OAuth tokens used in a mobile application. Users and applications retrieve the secret from Secrets Manager, eliminating the need to email secrets to developers or update and redeploy applications after AWS Secrets Manager rotates a secret.
  • Secure and manage secrets centrally. You can store, view, and manage all your secrets. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. Using fine-grained IAM policies, you can control access to secrets. For example, you can require developers to provide a second factor of authentication when they attempt to retrieve a production database credential. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  • Monitor and audit easily. Secrets Manager integrates with AWS logging and monitoring services to enable you to meet your security and compliance requirements. For example, you can audit AWS CloudTrail logs to see when Secrets Manager rotated a secret or configure AWS CloudWatch Events to alert you when an administrator deletes a secret.
  • Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts or licensing fees.

Get started with Secrets Manager

Now that you’re familiar with the key features, I’ll show you how to store the credential for a MySQL database hosted on Amazon RDS. To demonstrate how to retrieve and use the secret, I use a python application running on Amazon EC2 that requires this database credential to access the MySQL instance. Finally, I show how to configure Secrets Manager to rotate this database credential automatically. Let’s get started.

Phase 1: Store a secret in Secrets Manager

  1. Open the Secrets Manager console and select Store a new secret.
     
    Secrets Manager console interface
     
  2. I select Credentials for RDS database because I’m storing credentials for a MySQL database hosted on Amazon RDS. For this example, I store the credentials for the database superuser. I start by securing the superuser because it’s the most powerful database credential and has full access over the database.
     
    Store a new secret interface with Credentials for RDS database selected
     

    Note: For this example, you need permissions to store secrets in Secrets Manager. To grant these permissions, you can use the AWSSecretsManagerReadWriteAccess managed policy. Read the AWS Secrets Manager Documentation for more information about the minimum IAM permissions required to store a secret.

  3. Next, I review the encryption setting and choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS KMS.
     
    Select the encryption key interface
     
  4. Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance mysql-rds-database, and then I select Next.
     
    Select the RDS database interface
     
  5. In this step, I specify values for Secret Name and Description. For this example, I use Applications/MyApp/MySQL-RDS-Database as the name and enter a description of this secret, and then select Next.
     
    Secret Name and description interface
     
  6. For the next step, I keep the default setting Disable automatic rotation because my secret is used by my application running on Amazon EC2. I’ll enable rotation after I’ve updated my application (see Phase 2 below) to use Secrets Manager APIs to retrieve secrets. I then select Next.

    Note: If you’re storing a secret that you’re not using in your application, select Enable automatic rotation. See our AWS Secrets Manager getting started guide on rotation for details.

     
    Configure automatic rotation interface
     

  7. Review the information on the next screen and, if everything looks correct, select Store. We’ve now successfully stored a secret in Secrets Manager.
  8. Next, I select See sample code.
     
    The See sample code button
     
  9. Take note of the code samples provided. I will use this code to update my application to retrieve the secret using Secrets Manager APIs.
     
    Python sample code
     

Phase 2: Update an application to retrieve secret from Secrets Manager

Now that I have stored the secret in Secrets Manager, I update my application to retrieve the database credential from Secrets Manager instead of hard coding this information in a configuration file or source code. For this example, I show how to configure a python application to retrieve this secret from Secrets Manager.

  1. I connect to my Amazon EC2 instance via Secure Shell (SSH).
  2. Previously, I configured my application to retrieve the database user name and password from the configuration file. Below is the source code for my application.
    import MySQLdb
    import config

    def no_secrets_manager_sample():

        # Get the user name, password, and database connection information from a config file.
        database = config.database
        user_name = config.user_name
        password = config.password

        # Use the user name, password, and database connection information to connect to the database
        db = MySQLdb.connect(database.endpoint, user_name, password, database.db_name, database.port)

  3. I use the sample code from Phase 1 above and update my application to retrieve the user name and password from Secrets Manager. This code sets up the client and retrieves and decrypts the secret Applications/MyApp/MySQL-RDS-Database. I’ve added comments to the code to make the code easier to understand.
    # Use the code snippet provided by Secrets Manager.
    import boto3
    from botocore.exceptions import ClientError

    def get_secret():
        # Define the secret you want to retrieve
        secret_name = "Applications/MyApp/MySQL-RDS-Database"
        # Define the Secrets Manager endpoint your code should use
        endpoint_url = "https://secretsmanager.us-east-1.amazonaws.com"
        region_name = "us-east-1"

        # Set up the client
        session = boto3.session.Session()
        client = session.client(
            service_name='secretsmanager',
            region_name=region_name,
            endpoint_url=endpoint_url
        )

        # Use the client to retrieve the secret
        try:
            get_secret_value_response = client.get_secret_value(
                SecretId=secret_name
            )
        # Error handling to make it easier for your code to tolerate faults
        except ClientError as e:
            if e.response['Error']['Code'] == 'ResourceNotFoundException':
                print("The requested secret " + secret_name + " was not found")
            elif e.response['Error']['Code'] == 'InvalidRequestException':
                print("The request was invalid due to:", e)
            elif e.response['Error']['Code'] == 'InvalidParameterException':
                print("The request had invalid params:", e)
        else:
            # Decrypted secret using the associated KMS CMK
            # Depending on whether the secret was a string or binary, one of these fields will be populated
            if 'SecretString' in get_secret_value_response:
                secret = get_secret_value_response['SecretString']
            else:
                binary_secret_data = get_secret_value_response['SecretBinary']

        # Your code goes here.

  4. Applications require permissions to access Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I will attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read secret from Secrets Manager. This policy also uses the resource element to limit my application to read only the Applications/MyApp/MySQL-RDS-Database secret from Secrets Manager. You can visit the AWS Secrets Manager Documentation to understand the minimum IAM permissions required to retrieve a secret.
    {
        "Version": "2012-10-17",
        "Statement": {
            "Sid": "RetrieveDbCredentialFromSecretsManager",
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:::secret:Applications/MyApp/MySQL-RDS-Database"
        }
    }

Phase 3: Enable Rotation for Your Secret

Rotating secrets periodically is a security best practice because it reduces the risk of misuse of secrets. Secrets Manager makes it easy to follow this security best practice and offers built-in integrations for rotating credentials for MySQL, PostgreSQL, and Amazon Aurora databases hosted on Amazon RDS. When you enable rotation, Secrets Manager creates a Lambda function and attaches an IAM role to this function to execute rotations on a schedule you define.

Note: Configuring rotation is a privileged action that requires several IAM permissions and you should only grant this access to trusted individuals. To grant these permissions, you can use the AWS IAMFullAccess managed policy.

Next, I show you how to configure Secrets Manager to rotate the secret Applications/MyApp/MySQL-RDS-Database automatically. (An equivalent API call is sketched after the console steps below.)

  1. From the Secrets Manager console, I go to the list of secrets and choose the secret I created in the first step Applications/MyApp/MySQL-RDS-Database.
     
    List of secrets in the Secrets Manager console
     
  2. I scroll to Rotation configuration, and then select Edit rotation.
     
    Rotation configuration interface
     
  3. To enable rotation, I select Enable automatic rotation. I then choose how frequently I want Secrets Manager to rotate this secret. For this example, I set the rotation interval to 60 days.
     
    Edit rotation configuration interface
     
  4. Next, Secrets Manager requires permissions to rotate this secret on your behalf. Because I’m storing the superuser database credential, Secrets Manager can use this credential to perform rotations. Therefore, I select Use the secret that I provided in step 1, and then select Next.
     
    Select which secret to use in the Edit rotation configuration interface
     
  5. The banner on the next screen confirms that I have successfully configured rotation and the first rotation is in progress, which enables you to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
     
    Confirmation banner message
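
If you prefer to configure rotation programmatically, here is a rough boto3 sketch of an equivalent call. It is my own illustration rather than the console flow above, and the rotation Lambda ARN is a placeholder (Secrets Manager creates this function for you when you enable rotation from the console).

import boto3

secretsmanager = boto3.client('secretsmanager', region_name='us-east-1')

# Sketch: rotate the secret every 60 days using a placeholder rotation function ARN
secretsmanager.rotate_secret(
    SecretId='Applications/MyApp/MySQL-RDS-Database',
    RotationLambdaARN='arn:aws:lambda:us-east-1:123456789012:function:MySecretsManagerRotationFunction',
    RotationRules={'AutomaticallyAfterDays': 60}
)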
     

Summary

I introduced AWS Secrets Manager, explained the key benefits, and showed you how to help meet your compliance requirements by configuring AWS Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

The Effects of Media Consolidation: Scripted

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/04/04/sinclair-2/

 Sinclair Broadcast Group is the largest TV group in the United States by number of outlets and also by coverage. It is known for news content and programming that promotes conservative political positions and supports the Republican Party.

 When Trump says the following:

 

here is what follows in Sinclair's outlets: a cascade of identical, scripted performances, as the video shows:

“Some members of the media use their platforms to push their own personal bias. This is extremely dangerous to our democracy.”

The video has also been published by the NYT:

 

It shows the effects of media consolidation on the right to information. It also shows how this administration labels criticism as fake.

+ an op-ed from yesterday

 

Amazon Transcribe Now Generally Available

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-transcribe-now-generally-available/


At AWS re:Invent 2017 we launched Amazon Transcribe in private preview. Today we’re excited to make Amazon Transcribe generally available for all developers. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capabilities to their applications. We’ve iterated on customer feedback in the preview to make a number of enhancements to Amazon Transcribe.

New Amazon Transcribe Features in GA

To start off, we’ve made the SampleRate parameter optional, which means you only need to know the file type of your media and the input language. We’ve added two new features – the ability to differentiate multiple speakers in the audio to provide more intelligible transcripts (“who spoke when”), and a custom vocabulary to improve the accuracy of speech recognition for product names, industry-specific terminology, or names of individuals. To refresh our memories on how Amazon Transcribe works, let’s look at a quick example. I’ll convert this audio in my S3 bucket.

import boto3
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="TranscribeDemo",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/randhunt-transcribe-demo-us-east-1/out.mp3"}
)

This will output JSON similar to this (I’ve stripped out most of the response) with individual speakers identified:

{
  "jobName": "reinvent",
  "accountId": "1234",
  "results": {
    "transcripts": [
      {
        "transcript": "Hi, everybody, i'm randall ..."
      }
    ],
    "speaker_labels": {
      "speakers": 2,
      "segments": [
        {
          "start_time": "0.000000",
          "speaker_label": "spk_0",
          "end_time": "0.010",
          "items": []
        },
        {
          "start_time": "0.010000",
          "speaker_label": "spk_1",
          "end_time": "4.990",
          "items": [
            {
              "start_time": "1.000",
              "speaker_label": "spk_1",
              "end_time": "1.190"
            },
            {
              "start_time": "1.190",
              "speaker_label": "spk_1",
              "end_time": "1.700"
            }
          ]
        }
      ]
    },
    "items": [
      {
        "start_time": "1.000",
        "end_time": "1.190",
        "alternatives": [
          {
            "confidence": "0.9971",
            "content": "Hi"
          }
        ],
        "type": "pronunciation"
      },
      {
        "alternatives": [
          {
            "content": ","
          }
        ],
        "type": "punctuation"
      },
      {
        "start_time": "1.190",
        "end_time": "1.700",
        "alternatives": [
          {
            "confidence": "1.0000",
            "content": "everybody"
          }
        ],
        "type": "pronunciation"
      }
    ]
  },
  "status": "COMPLETED"
}
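
The transcript itself lives in a results file whose location you can read back from the job once it finishes. Here is a small polling sketch of my own (not from the launch announcement); it reuses the TranscribeDemo job name from above and assumes the returned TranscriptFileUri is reachable from wherever the code runs.

import json
import time
import urllib.request

import boto3

transcribe = boto3.client("transcribe")

# Poll until the job finishes, then download the JSON document shown above
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="TranscribeDemo")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    uri = job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"]
    results = json.loads(urllib.request.urlopen(uri).read())
    print(results["results"]["transcripts"][0]["transcript"])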

Custom Vocabulary

Now if I needed to have a more complex technical discussion with a colleague, I could create a custom vocabulary. A custom vocabulary is specified as an array of strings passed to the CreateVocabulary API, and you can include your custom vocabulary in a transcription job by passing in the name as part of the Settings in a StartTranscriptionJob API call. An individual vocabulary can be as large as 50KB and each phrase must be less than 256 characters. If I wanted to transcribe the recordings of my high school AP Biology class, I could create a custom vocabulary in Python like this:

import boto3
transcribe = boto3.client("transcribe")
transcribe.create_vocabulary(
    LanguageCode="en-US",
    VocabularyName="APBiology",
    Phrases=[
        "endoplasmic-reticulum",
        "organelle",
        "cisternae",
        "eukaryotic",
        "ribosomes",
        "hepatocytes",
        "cell-membrane"
    ]
)

I can refer to this vocabulary later on by the name APBiology and update it programmatically based on any errors I may find in the transcriptions.
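
For example, a transcription job that uses the vocabulary might look like the following sketch; the job name and the S3 URI are placeholders for an actual class recording.

import boto3

transcribe = boto3.client("transcribe")

# Sketch: pass the custom vocabulary by name via Settings
transcribe.start_transcription_job(
    TranscriptionJobName="APBiologyLecture01",
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "https://s3.amazonaws.com/my-bucket/ap-biology-lecture-01.mp3"},
    Settings={"VocabularyName": "APBiology"}
)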

Available Now

Amazon Transcribe is available now in US East (N. Virginia), US West (Oregon), US East (Ohio) and EU (Ireland). Transcribe’s free tier gives you 60 minutes of transcription for free per month for the first 12 months with a pay-as-you-go model of $0.0004 per second of transcribed audio after that, with a minimum charge of 15 seconds.

When combined with other tools and services, I think Transcribe opens up entirely new opportunities for application development. I’m excited to see what technologies developers build with this new service.

Randall

Engineering deep dive: Encoding of SCTs in certificates

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2018/04/04/sct-encoding.html

Let’s Encrypt recently launched SCT embedding in certificates
(https://community.letsencrypt.org/t/signed-certificate-timestamps-embedded-in-certificates/57187).
This feature allows browsers to check that a certificate was submitted to a
Certificate Transparency log (https://en.wikipedia.org/wiki/Certificate_Transparency).
As part of the launch, we did a thorough review to confirm that the encoding of
Signed Certificate Timestamps (SCTs) in our certificates matches the relevant
specifications. In this post, I’ll dive into the details. You’ll learn more about
X.509, ASN.1, DER, and TLS encoding, with references to the relevant RFCs.

Certificate Transparency offers three ways to deliver SCTs to a browser: in a
TLS extension, in stapled OCSP, or embedded in a certificate. We chose to
implement the embedding method because it would just work for Let’s Encrypt
subscribers without additional work. In the SCT embedding method, we submit
a “precertificate” with a poison extension (see footnote 1) to a set of
CT logs, and get back SCTs. We then issue a real certificate based on the
precertificate, with two changes: the poison extension is removed, and the SCTs
obtained earlier are added in another extension.

Given a certificate, let’s first look for the SCT list extension. According to CT
(RFC 6962 section 3.3, https://tools.ietf.org/html/rfc6962#section-3.3),
the extension OID for a list of SCTs is 1.3.6.1.4.1.11129.2.4.2. An OID (object
ID, http://www.hl7.org/Oid/information.cfm) is a series of integers, hierarchically
assigned and globally unique. They are used extensively in X.509, for instance
to uniquely identify extensions.

We can download an example certificate
(https://acme-v01.api.letsencrypt.org/acme/cert/031f2484307c9bc511b3123cb236a480d451),
and view it using OpenSSL (if your OpenSSL is old, it may not display the
detailed information):

$ openssl x509 -noout -text -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451

CT Precertificate SCTs:
Signed Certificate Timestamp:
Version : v1(0)
Log ID : DB:74:AF:EE:CB:29:EC:B1:FE:CA:3E:71:6D:2C:E5:B9:
AA:BB:36:F7:84:71:83:C7:5D:9D:4F:37:B6:1F:BF:64
Timestamp : Mar 29 18:45:07.993 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:44:02:20:7E:1F:CD:1E:9A:2B:D2:A5:0A:0C:81:E7:
13:03:3A:07:62:34:0D:A8:F9:1E:F2:7A:48:B3:81:76:
40:15:9C:D3:02:20:65:9F:E9:F1:D8:80:E2:E8:F6:B3:
25:BE:9F:18:95:6D:17:C6:CA:8A:6F:2B:12:CB:0F:55:
FB:70:F7:59:A4:19
Signed Certificate Timestamp:
Version : v1(0)
Log ID : 29:3C:51:96:54:C8:39:65:BA:AA:50:FC:58:07:D4:B7:
6F:BF:58:7A:29:72:DC:A4:C3:0C:F4:E5:45:47:F4:78
Timestamp : Mar 29 18:45:08.010 2018 GMT
Extensions: none
Signature : ecdsa-with-SHA256
30:46:02:21:00:AB:72:F1:E4:D6:22:3E:F8:7F:C6:84:
91:C2:08:D2:9D:4D:57:EB:F4:75:88:BB:75:44:D3:2F:
95:37:E2:CE:C1:02:21:00:8A:FF:C4:0C:C6:C4:E3:B2:
45:78:DA:DE:4F:81:5E:CB:CE:2D:57:A5:79:34:21:19:
A1:E6:5B:C7:E5:E6:9C:E2

Now let’s go a little deeper. How is that extension represented in
the certificate? Certificates are expressed in
ASN.1 (https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One),
which generally refers to both a language for expressing data structures
and a set of formats for encoding them. The most common format,
DER (https://en.wikipedia.org/wiki/X.690#DER_encoding),
is a tag-length-value format. That is, to encode an object, first you write
down a tag representing its type (usually one byte), then you write
down a number expressing how long the object is, then you write down
the object contents. This is recursive: an object can contain multiple
objects within it, each of which has its own tag, length, and value.

One of the cool things about DER and other tag-length-value formats is that you
can decode them to some degree without knowing what they mean. For instance, I
can tell you that 0x30 means the data type “SEQUENCE” (a struct, in ASN.1
terms), and 0x02 means “INTEGER”, then give you this hex byte sequence to
decode:

30 06 02 01 03 02 01 0A

You could tell me right away that decodes to:

SEQUENCE
  INTEGER 3
  INTEGER 10

Try it yourself with this great JavaScript ASN.1 decoder
(https://lapo.it/asn1js/#300602010302010A). However, you wouldn’t know
what those integers represent without the corresponding ASN.1 schema (or
“module”). For instance, if you knew that this was a piece of DogData, and the
schema was:

DogData ::= SEQUENCE {
  legs INTEGER,
  cutenessLevel INTEGER
}

You’d know this referred to a three-legged dog with a cuteness level of 10.
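
To make the tag-length-value idea concrete, here is a small decoder sketch of my own (it is not code from Let’s Encrypt, and it only handles what the example above needs):

def der_decode(data, offset=0):
    """Decode one DER tag-length-value triple starting at offset."""
    tag = data[offset]
    length = data[offset + 1]
    if length & 0x80:
        # Long form: the low bits say how many length bytes follow
        num_len_bytes = length & 0x7F
        length = int.from_bytes(data[offset + 2:offset + 2 + num_len_bytes], "big")
        header = 2 + num_len_bytes
    else:
        header = 2
    value = data[offset + header:offset + header + length]
    return tag, value, offset + header + length

raw = bytes.fromhex("300602010302010A")
tag, body, _ = der_decode(raw)                 # 0x30: SEQUENCE
pos = 0
while pos < len(body):
    tag, value, pos = der_decode(body, pos)    # 0x02: INTEGER
    print(hex(tag), int.from_bytes(value, "big"))

Running it prints the integers 3 and 10, matching the decoding above.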

We can take some of this knowledge and apply it to our certificates. As a first
step, convert the above certificate to hex with
xxd -ps < Downloads/031f2484307c9bc511b3123cb236a480d451. You can then copy
and paste the result into https://lapo.it/asn1js, or use this handy link:
https://lapo.it/asn1js/#3082062F30820517A0030201020212031F2484307C9BC511B3123CB236A480D451300D06092A864886F70D01010B0500304A310B300906035504061302555331163014060355040A130D4C6574277320456E6372797074312330210603550403131A4C6574277320456E637279707420417574686F72697479205833301E170D3138303332393137343530375A170D3138303632373137343530375A302D312B3029060355040313223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D30820122300D06092A864886F70D01010105000382010F003082010A0282010100BCEAE8F504D9D91FCFC69DB943254A7FED7C6A3C04E2D5C7DDD010CBBC555887274489CA4F432DCE6D7AB83D0D7BDB49C466FBCA93102DC63E0EB1FB2A0C50654FD90B81A6CB357F58E26E50F752BF7BFE9B56190126A47409814F59583BDD337DFB89283BE22E81E6DCE13B4E21FA6009FC8A7F903A17AB05C8BED85A715356837E849E571960A8999701EAE9CE0544EAAB936B790C3C35C375DB18E9AA627D5FA3579A0FB5F8079E4A5C9BE31C2B91A7F3A63AFDFEDB9BD4EA6668902417D286BE4BBE5E43CD9FE1B8954C06F21F5C5594FD3AB7D7A9CBD6ABF19774D652FD35C5718C25A3BA1967846CED70CDBA95831CF1E09FF7B8014E63030CE7A776750203010001A382032A30820326300E0603551D0F0101FF0404030205A0301D0603551D250416301406082B0601050507030106082B06010505070302300C0603551D130101FF04023000301D0603551D0E041604148B3A21ABADF50C4B30DCCD822724D2C4B9BA29E3301F0603551D23041830168014A84A6A63047DDDBAE6D139B7A64565EFF3A8ECA1306F06082B0601050507010104633061302E06082B060105050730018622687474703A2F2F6F6373702E696E742D78332E6C657473656E63727970742E6F7267302F06082B060105050730028623687474703A2F2F636572742E696E742D78332E6C657473656E63727970742E6F72672F302D0603551D110426302482223563396137662E6C652D746573742E686F66666D616E2D616E64726577732E636F6D3081FE0603551D200481F63081F33008060667810C0102013081E6060B2B0601040182DF130101013081D6302606082B06010505070201161A687474703A2F2F6370732E6C657473656E63727970742E6F72673081AB06082B0601050507020230819E0C819B54686973204365727469666963617465206D6179206F6E6C792062652072656C6965642075706F6E2062792052656C79696E67205061727469657320616E64206F6E6C7920696E206163636F7264616E636520776974682074686520436572746966696361746520506F6C69637920666F756E642061742068747470733A2F2F6C657473656E63727970742E6F72672F7265706F7369746F72792F30820104060A2B06010401D6790204020481F50481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2300D06092A864886F70D01010B0500038201010095F87B663176776502F792DDD232C216943C7803876FCBEB46393A36354958134482E0AFEED39011618327C2F0203351758FEB420B73CE6C797B98F88076F409F3903F343D1F5D9540F41EF47EB39BD61B62873A44F00B7C8B593C6A416458CF4B5318F35235BC88EABBAA34F3E3F81BD3B047E982EE1363885E84F76F2F079F2B6EEB4ECB58EFE74C8DE7D54DE5C89C4FB5BB0694B837BD6F02BAFD5A6C007D1B93D25007BDA9B2BDBF82201FE1B76B628CE34E2D974E8E623EC57A5CB53B435DD4B9993ADF6BA3972F2B29D259594A94E17BBE06F34AAE5CF0F50297548C4DFFC5566136F78A3D3B324EAE931A14EB6BE6DA1D538E48CF077583C67B52E7E8
You can also run openssl asn1parse -i -inform der -in Downloads/031f2484307c9bc511b3123cb236a480d451 to use OpenSSL’s parser, which is less easy to use in some ways, but easier to copy and paste.

In the decoded data, we can find the OID 1.3.6.1.4.1.11129.2.4.2, indicating
the SCT list extension. Per RFC 5280, section 4.1
(https://tools.ietf.org/html/rfc5280#page-17), an extension is defined:

Extension ::= SEQUENCE {
  extnID OBJECT IDENTIFIER,
  critical BOOLEAN DEFAULT FALSE,
  extnValue OCTET STRING
  -- contains the DER encoding of an ASN.1 value
  -- corresponding to the extension type identified
  -- by extnID
}

We’ve found the extnID. The “critical” field is omitted because it has the
default value (false). Next up is the extnValue. This has the type
OCTET STRING, which has the tag “0x04”. OCTET STRING means “here’s
a bunch of bytes!” In this case, as described by the spec, those bytes
happen to contain more DER. This is a fairly common pattern in X.509
to deal with parameterized data. For instance, this allows defining a
structure for extensions without knowing ahead of time all the structures
that a future extension might want to carry in its value. If you’re a C
programmer, think of it as a void* for data structures. If you prefer Go,
think of it as an interface{}.

Here’s that extnValue:

04 81 F5 0481F200F0007500DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419007700293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478000001627313EB2A0000040300483046022100AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC10221008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2

That’s tag “0x04”, meaning OCTET STRING, followed by “0x81 0xF5”, meaning
“this string is 245 bytes long” (the 0x81 prefix is part of variable length
number encoding; see footnote 2).

According to RFC 6962, section 3.3
(https://tools.ietf.org/html/rfc6962#section-3.3), “obtained SCTs
can be directly embedded in the final certificate, by encoding the
SignedCertificateTimestampList structure as an ASN.1 OCTET STRING
and inserting the resulting data in the TBSCertificate as an X.509v3
certificate extension.”

So, we have an OCTET STRING, all’s good, right? Except if you remove the
tag and length from extnValue to get its value, you’re left with:

04 81 F2 00F0007500DB74AFEEC…

There’s that “0x04” tag again, but with a shorter length. Why
do we nest one OCTET STRING inside another? It’s because the
contents of extnValue are required by RFC 5280 to be valid DER, but a
SignedCertificateTimestampList is not encoded using DER (more on that
in a minute). So, by RFC 6962, a SignedCertificateTimestampList is wrapped in an
OCTET STRING, which is wrapped in another OCTET STRING (the extnValue).

Once we decode that second OCTET STRING, we’re left with the contents:

00F0007500DB74AFEEC…

“0x00” isn’t a valid tag in DER. What is this? It’s TLS encoding. This is
defined in RFC 5246, section 4 (the TLS 1.2 RFC;
https://tools.ietf.org/html/rfc5246#section-4).
TLS encoding, like ASN.1, has both a way to define data
structures and a way to encode those structures. TLS encoding differs
from DER in that there are no tags, and lengths are only encoded when necessary for
variable-length arrays. Within an encoded structure, the type of a field is determined by
its position, rather than by a tag. This means that TLS-encoded structures are
more compact than DER structures, but also that they can’t be processed without
knowing the corresponding schema. For instance, here’s the top-level schema from
RFC 6962, section 3.3 (https://tools.ietf.org/html/rfc6962#section-3.3):

The contents of the ASN.1 OCTET STRING embedded in an OCSP extension
or X509v3 certificate extension are as follows:

    opaque SerializedSCT<1..2^16-1>;

    struct {
        SerializedSCT sct_list <1..2^16-1>;
    } SignedCertificateTimestampList;

Here, "SerializedSCT" is an opaque byte string that contains the
serialized TLS structure.

Right away, we’ve found one of those variable-length arrays. The length of such
an array (in bytes) is always represented by a length field just big enough to
hold the max array size. The max size of an sct_list is 65535 bytes, so the
length field is two bytes wide. Sure enough, those first two bytes are “0x00
0xF0”, or 240 in decimal. In other words, this sct_list will have 240 bytes. We
don’t yet know how many SCTs will be in it. That will become clear only by
continuing to parse the encoded data and seeing where each struct ends (spoiler
alert: there are two SCTs!).
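
In code, walking that outer list is just a matter of reading those two-byte lengths. Here is a sketch (again mine, not from the post):

def split_sct_list(extension_value):
    """Split a TLS-encoded SignedCertificateTimestampList into SerializedSCT blobs."""
    total = int.from_bytes(extension_value[:2], "big")   # length of sct_list, e.g. 0x00F0
    body = extension_value[2:2 + total]
    scts, pos = [], 0
    while pos < len(body):
        n = int.from_bytes(body[pos:pos + 2], "big")     # length of one SerializedSCT
        scts.append(body[pos + 2:pos + 2 + n])
        pos += 2 + n
    return scts

Applied to the bytes starting with 0x00F0 above, it returns two blobs, one of 0x75 bytes and one of 0x77 bytes: the two SCTs.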

Now we know the first SerializedSCT starts with 0075…. SerializedSCT
is itself a variable-length field, this time containing opaque bytes (much like OCTET STRING
back in the ASN.1 world). Like SignedCertificateTimestampList, it has a max size
of 65535 bytes, so we pull off the first two bytes and discover that the first
SerializedSCT is 0x0075 (117 decimal) bytes long. Here’s the whole thing, in
hex:

00DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64000001627313EB19000004030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419

This can be decoded using the TLS encoding struct defined in RFC 6962, section 3.2
(https://tools.ietf.org/html/rfc6962#page-13):

enum { v1(0), (255) }
    Version;

struct {
    opaque key_id[32];
} LogID;

opaque CtExtensions<0..2^16-1>;

struct {
    Version sct_version;
    LogID id;
    uint64 timestamp;
    CtExtensions extensions;
    digitally-signed struct {
        Version sct_version;
        SignatureType signature_type = certificate_timestamp;
        uint64 timestamp;
        LogEntryType entry_type;
        select(entry_type) {
            case x509_entry: ASN.1Cert;
            case precert_entry: PreCert;
        } signed_entry;
        CtExtensions extensions;
    };
} SignedCertificateTimestamp;

Breaking that down:

# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct
04030046304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419

To understand the “digitally-signed struct,” we need to turn back to RFC 5246,
section 4.7 (https://tools.ietf.org/html/rfc5246#section-4.7). It says:

A digitally-signed element is encoded as a struct DigitallySigned:

struct {
    SignatureAndHashAlgorithm algorithm;
    opaque signature<0..2^16-1>;
} DigitallySigned;

And in section 7.4.1.4.1 (https://tools.ietf.org/html/rfc5246#section-7.4.1.4.1):

enum {
    none(0), md5(1), sha1(2), sha224(3), sha256(4), sha384(5),
    sha512(6), (255)
} HashAlgorithm;

enum { anonymous(0), rsa(1), dsa(2), ecdsa(3), (255) }
    SignatureAlgorithm;

struct {
    HashAlgorithm hash;
    SignatureAlgorithm signature;
} SignatureAndHashAlgorithm;

We have “0x0403”, which corresponds to sha256(4) and ecdsa(3). The next two
bytes, “0x0046”, tell us the length of the “opaque signature” field, 70 bytes in
decimal. To decode the signature, we reference RFC 4492 section 5.4
(https://tools.ietf.org/html/rfc4492#page-20), which says:

The digitally-signed element is encoded as an opaque vector <0..2^16-1>, the
contents of which are the DER encoding corresponding to the
following ASN.1 notation.

Ecdsa-Sig-Value ::= SEQUENCE {
    r INTEGER,
    s INTEGER
}

Having dived through two layers of TLS encoding, we are now back in ASN.1 land!
We decode the remaining bytes
(https://lapo.it/asn1js/#304402207E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD30220659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419)
into a SEQUENCE containing two INTEGERS. And we’re done! Here’s the whole
extension decoded:

# Extension SEQUENCE – RFC 5280
30
# length 0x0104 bytes (260 decimal)
820104
# OBJECT IDENTIFIER
06
# length 0x0A bytes (10 decimal)
0A
# value (1.3.6.1.4.1.11129.2.4.2)
2B06010401D679020402
# OCTET STRING
04
# length 0xF5 bytes (245 decimal)
81F5
# OCTET STRING (embedded) – RFC 6962
04
# length 0xF2 bytes (242 decimal)
81F2
# Beginning of TLS encoded SignedCertificateTimestampList – RFC 5246 / 6962
# length 0xF0 bytes
00F0
# opaque SerializedSCT<1..2^16-1>
# length 0x75 bytes
0075
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
DB74AFEECB29ECB1FECA3E716D2CE5B9AABB36F7847183C75D9D4F37B61FBF64
# uint64 timestamp (milliseconds since the epoch)
000001627313EB19
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature<0..2^16-1>;
# length 0x0046
0046
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
44 # length 0x44 bytes
02 # r INTEGER
20 # length 0x20 bytes
# value
7E1FCD1E9A2BD2A50A0C81E713033A0762340DA8F91EF27A48B3817640159CD3
02 # s INTEGER
20 # length 0x20 bytes
# value
659FE9F1D880E2E8F6B325BE9F18956D17C6CA8A6F2B12CB0F55FB70F759A419
# opaque SerializedSCT<1..2^16-1>
# length 0x77 bytes
0077
# Version sct_version v1(0)
00
# LogID id (aka opaque key_id[32])
293C519654C83965BAAA50FC5807D4B76FBF587A2972DCA4C30CF4E54547F478
# uint64 timestamp (milliseconds since the epoch)
000001627313EB2A
# CtExtensions extensions (zero-length array)
0000
# digitally-signed struct – RFC 5246
# SignatureAndHashAlgorithm (ecdsa-sha256)
0403
# opaque signature<0..2^16-1>;
# length 0x0048
0048
# DER-encoded Ecdsa-Sig-Value – RFC 4492
30 # SEQUENCE
46 # length 0x46 bytes
02 # r INTEGER
21 # length 0x21 bytes
# value
00AB72F1E4D6223EF87FC68491C208D29D4D57EBF47588BB7544D32F9537E2CEC1
02 # s INTEGER
21 # length 0x21 bytes
# value
008AFFC40CC6C4E3B24578DADE4F815ECBCE2D57A579342119A1E65BC7E5E69CE2

One surprising thing you might notice: in the first SCT, r and s are thirty-two
bytes long. In the second SCT, they are both thirty-three bytes long, and have a
leading zero. Integers in DER are two’s complement, so if the leftmost bit is
set, they are interpreted as negative. Since r and s are positive, if the
leftmost bit would be a 1, an extra byte has to be added so that the leftmost
bit can be 0.
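
Here is a quick sketch of my own showing that padding rule when encoding a positive INTEGER in DER:

def der_positive_integer(value):
    """DER-encode a non-negative INTEGER (short-form length only)."""
    body = value.to_bytes((value.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        # A set top bit would read as negative in two's complement, so pad with 0x00
        body = b"\x00" + body
    return bytes([0x02, len(body)]) + body

print(der_positive_integer(0x7E).hex())   # 02017e   (no padding needed)
print(der_positive_integer(0xAB).hex())   # 020200ab (padded, because the top bit was set)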

This is a little taste of what goes into encoding a certificate. I hope it was
informative! If you’d like to learn more, I recommend “A Layman’s Guide to a
Subset of ASN.1, BER, and DER” (http://luca.ntop.org/Teaching/Appunti/asn1.html).

<p><a name="poison"></a>Footnote 1: A &ldquo;poison extension&rdquo; is defined by <a href="https://tools.ietf.org/html/rfc6962#section-3.1">RFC 6962
section 3.1</a>:</p>

The Precertificate is constructed from the certificate to be issued by adding a special
critical poison extension (OID `1.3.6.1.4.1.11129.2.4.3`, whose
extnValue OCTET STRING contains ASN.1 NULL data (0x05 0x00))

In other words, it’s an empty extension whose only purpose is to ensure that
certificate processors will not accept precertificates as valid certificates. The
specification ensures this by setting the “critical” bit on the extension, which
ensures that code that doesn’t recognize the extension will reject the whole
certificate. Code that does recognize the extension specifically as poison
will also reject the certificate.

<p><a name="variable-length"></a>Footnote 2: Lengths from 0-127 are represented by
a single byte (short form). To express longer lengths, more bytes are used (long form).
The high bit (0x80) on the first byte is set to distinguish long form from short
form. The remaining bits are used to express how many more bytes to read for the
length. For instance, 0x81F5 means &ldquo;this is long form because the length is
greater than 127, but there&rsquo;s still only one byte of length (0xF5) to decode.&rdquo;</p>

Tag Amazon EBS Snapshots on Creation and Implement Stronger Security Policies

Post Syndicated from Woo Kim original https://aws.amazon.com/blogs/compute/tag-amazon-ebs-snapshots-on-creation-and-implement-stronger-security-policies/

This blog was contributed by Rucha Nene, Sr. Product Manager for Amazon EBS

AWS customers use tags to track ownership of resources, implement compliance protocols, control access to resources via IAM policies, and drive their cost accounting processes. Last year, we made tagging for Amazon EC2 instances and Amazon EBS volumes easier by adding the ability to tag these resources upon creation. We are now extending this capability to EBS snapshots.

Earlier, you could tag your EBS snapshots only after the resource had been created and sometimes ended up with EBS snapshots in an untagged state if tagging failed. You also could not control the actions that users and groups could take over specific snapshots, or enforce tighter security policies.

To address these issues, we are making tagging for EBS snapshots more flexible and giving customers more control over EBS snapshots by introducing two new capabilities:

  • Tag on creation for EBS snapshots – You can now specify tags for EBS snapshots as part of the API call that creates the resource or via the Amazon EC2 Console when creating an EBS snapshot.
  • Resource-level permission and enforced tag usage – The CreateSnapshot, DeleteSnapshot, and ModifySnapshotAttribute API actions now support IAM resource-level permissions. You can now write IAM policies that mandate the use of specific tags when taking actions on EBS snapshots.

Tag on creation

You can now specify tags for EBS snapshots as part of the API call that creates the resources. The resource creation and the tagging are performed atomically; both must succeed in order for the operation CreateSnapshot to succeed. You no longer need to build tagging scripts that run after EBS snapshots have been created.

Here’s how you specify tags when you create an EBS snapshot, using the console:

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
  2. In the navigation pane, choose Snapshots, Create Snapshot.
  3. On the Create Snapshot page, select the volume for which to create a snapshot.
  4. (Optional) Choose Add tags to your snapshot. For each tag, provide a tag key and a tag value.
  5. Choose Create Snapshot.

Using the AWS CLI:

aws ec2 create-snapshot --volume-id vol-0c0e757e277111f3c --description 'Prod_Backup' --tag-specifications \
'ResourceType=snapshot,Tags=[{Key=costcenter,Value=115},{Key=IsProd,Value=Yes}]'

To learn more, see Using Tags.

Resource-level permissions and enforced tag usage

CreateSnapshot, DeleteSnapshot, and ModifySnapshotAttribute now support resource-level permissions, which allow you to exercise more control over EBS snapshots. You can write IAM policies that give you precise control over access to resources and let you specify which users are able to create snapshots for a given set of volumes. You can also enforce the use of specific tags to help track resources and achieve more accurate cost allocation reporting.
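
Enforced tags also make programmatic reporting straightforward. For example, here is a boto3 sketch (my own, not from the announcement) that lists this account’s snapshots carrying a costcenter tag of 115:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch: find snapshots owned by this account that carry the enforced costcenter tag
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"],
    Filters=[{"Name": "tag:costcenter", "Values": ["115"]}]
)["Snapshots"]

for snap in snapshots:
    print(snap["SnapshotId"], snap["StartTime"], snap["VolumeSize"])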

For example, here’s a statement that requires that the costcenter tag (with a value of “115”) be present on the volume from which snapshots are being created. It requires that this tag be applied to all newly created snapshots. In addition, it requires that the created snapshots are tagged with a User tag whose value is the IAM user name making the request.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ec2:CreateSnapshot",
         "Resource":"arn:aws:ec2:us-east-1:123456789012:volume/*",
         "Condition":{
            "StringEquals":{
               "ec2:ResourceTag/costcenter":"115"
            }
         }
      },
      {
         "Sid":"AllowCreateTaggedSnapshots",
         "Effect":"Allow",
         "Action":"ec2:CreateSnapshot",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "aws:RequestTag/costcenter":"115",
               "aws:RequestTag/User":"${aws:username}"
            },
            "ForAllValues:StringEquals":{
               "aws:TagKeys":[
                  "costcenter",
                  "User"
               ]
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":"ec2:CreateTags",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "ec2:CreateAction":"CreateSnapshot"
            }
         }
      }
   ]
}
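
For instance, under that policy a snapshot request along the following lines would be allowed, because it carries both required tags. This is a sketch, and the User value shown is a hypothetical IAM user name.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch: the source volume is assumed to be tagged costcenter=115, as the first statement requires
ec2.create_snapshot(
    VolumeId="vol-0c0e757e277111f3c",
    Description="Prod_Backup",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [
            {"Key": "costcenter", "Value": "115"},   # required by aws:RequestTag/costcenter
            {"Key": "User", "Value": "alice"}        # must match ${aws:username}; "alice" is hypothetical
        ]
    }]
)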

To implement stronger compliance and security policies, you could also restrict access to DeleteSnapshot if the resource is not tagged with the user’s name. Here’s a statement that allows the deletion of a snapshot only if the snapshot is tagged with User:username for the customer.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"ec2:DeleteSnapshot",
         "Resource":"arn:aws:ec2:us-east-1::snapshot/*",
         "Condition":{
            "StringEquals":{
               "ec2:ResourceTag/User":"${aws:username}"
            }
         }
      }
   ]
}

To learn more and to see some sample policies, see IAM Policies for Amazon EC2 and Working with Snapshots.

Available Now

These new features are available now in all AWS Regions. You can start using them today from the Amazon EC2 Console, AWS Command Line Interface (CLI), or the AWS APIs.