
AWS awarded PROTECTED certification in Australia

Post Syndicated from Mathew Graham original https://aws.amazon.com/blogs/security/aws-awarded-protected-certification-in-australia/

The Australian Cyber Security Centre (ACSC) has awarded PROTECTED certification to AWS for 42 of our cloud services. This is the highest data security certification available in Australia for cloud service providers, and AWS offers the most PROTECTED services of any public cloud service provider. You will find AWS on the ACSC’s Certified Cloud Services List (CCSL) at PROTECTED for AWS services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), AWS Lambda, AWS Key Management Service (AWS KMS), and Amazon GuardDuty.

We worked with the ACSC to develop a solution that meets Australian government security requirements while also offering a breadth of services so you can run highly sensitive workloads on AWS at scale. These certified AWS services are available within our existing AWS Asia-Pacific (Sydney) Region and cover service categories such as compute, storage, network, database, security, analytics, application integration, management and governance. Importantly, all certified services are available at current public prices, which ensures that you are able to use them without paying a premium for security.

Since March 2018, you’ve been able to assess and self-certify at PROTECTED under the Australian Digital Transformation Agency’s Secure Cloud Strategy, but our inclusion on the CCSL at PROTECTED removes this extra step. With our increased level of certification, you can build applications on AWS that meet the Australian government’s security requirements for highly sensitive workloads.

We have several additional resources to help you begin building at PROTECTED on AWS. The ACSC Consumer Guide and AWS IRAP PROTECTED Reference Architecture are available today on AWS Artifact to help you build applications on AWS. The IRAP Certification Report, ACSC Certification Report and ACSC Certification Letter, also on AWS Artifact, allow you to dive deep into our security approach.

If you have questions about our PROTECTED certification or would like to inquire about how to use AWS for your highly sensitive workloads, contact your account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Mathew Graham

Mathew is the Head of Security Assurance for Australia and New Zealand at AWS. He is passionate about working with regulators to help cloud adoption for our customers. Outside of AWS, Mathew’s time is completely taken up by his new twin daughters. He holds a Master of Information Security from CSU.

Alerting, monitoring, and reporting for PCI-DSS awareness with Amazon Elasticsearch Service and AWS Lambda

Post Syndicated from Michael Coyne original https://aws.amazon.com/blogs/security/alerting-monitoring-and-reporting-for-pci-dss-awareness-with-amazon-elasticsearch-service-and-aws-lambda/

Logging account activity within your AWS infrastructure is paramount to your security posture and could even be required by compliance standards such as PCI-DSS (Payment Card Industry Data Security Standard). Organizations often analyze these logs to adapt to changes and respond quickly to security events. For example, if users are reporting that their resources are unable to communicate with the public internet, it would be beneficial to know if a network access list had been changed just prior to the incident. Many of our customers ship AWS CloudTrail event logs to an Amazon Elasticsearch Service cluster for this type of analysis. However, security best practices and compliance standards could require additional considerations. Common concerns include how to analyze log data without the data leaving the security constraints of your private VPC.

In this post, I’ll show you not only how to store your logs, but how to put them to work to help you meet your compliance goals. This implementation deploys an Amazon Elasticsearch Service domain with Amazon Virtual Private Cloud (Amazon VPC) support by utilizing VPC endpoints. A VPC endpoint enables you to privately connect your VPC to Amazon Elasticsearch Service without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. An AWS Lambda function is used to ship AWS CloudTrail event logs to the Elasticsearch cluster. A separate AWS Lambda function performs scheduled queries on log sets to look for patterns of concern. Amazon Simple Notification Service (SNS) generates automated reports based on a sample set of PCI guidelines discussed further in this post and notifies stakeholders when specific events occur. Kibana serves as the command center, providing visualizations of CloudTrail events that need to be logged based on the provided sample set of PCI-DSS compliance guidelines. The automated report and dashboard that are constructed around the sample PCI-DSS guidelines assist in event awareness regarding your security posture and should not be viewed as a de facto means of achieving certification. This solution serves as an additional tool to provide visibility into the actions and events within your environment. Deployment is made simple with a provided AWS CloudFormation template.
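The log-shipping step works by decoding the gzipped, base64-encoded payload that CloudWatch Logs delivers to the subscribed Lambda function, then re-posting each CloudTrail record to the cluster. A minimal sketch of that transformation follows (the function name and index name are illustrative, not necessarily those used in the provided template):

```python
import base64
import gzip
import json

def cloudwatch_event_to_bulk(event, index="cwl-cloudtrail"):
    """Decode a CloudWatch Logs subscription event and build an
    Elasticsearch _bulk request body from the CloudTrail records it carries."""
    payload = base64.b64decode(event["awslogs"]["data"])
    log_data = json.loads(gzip.decompress(payload))
    lines = []
    for log_event in log_data["logEvents"]:
        doc = json.loads(log_event["message"])   # each message is one CloudTrail event
        doc["@timestamp"] = log_event["timestamp"]
        lines.append(json.dumps({"index": {"_index": index, "_type": "doc"}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```

In the deployed function, the returned body would then be POSTed to the domain’s `_bulk` endpoint over the VPC endpoint, so the log data never leaves the VPC.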

Figure 1: Architectural diagram

The figure above depicts the architecture discussed in this post. An Elasticsearch cluster with VPC support is deployed within an AWS Region and Availability Zone. This creates a VPC endpoint in a private subnet within a VPC. Kibana is an Elasticsearch plugin that resides within the Elasticsearch cluster; it is accessed through an endpoint provided in the output section of the CloudFormation template. CloudTrail is enabled in the VPC and ships CloudTrail events to both an S3 bucket and a CloudWatch Log Group. The CloudWatch Log Group triggers a custom Lambda function that ships the CloudTrail event logs to the Elasticsearch domain through the VPC endpoint. An additional Lambda function performs a periodic set of Elasticsearch queries and produces a report that is sent to an SNS topic. A Windows-based EC2 instance is deployed in a public subnet so users can view and interact with a Kibana dashboard. Access to the EC2 instance can be restricted to an allowed CIDR range through a parameter set in the CloudFormation deployment. Access to the Elasticsearch cluster and Kibana is restricted to a Security Group that is created and associated with the EC2 instance and the custom Lambda functions.

Sample PCI-DSS Guidelines

This solution provides a sample set of 10 PCI-DSS guidelines for events that need to be logged.

  • All commands and API actions taken by the AWS root user
  • All failed logins at the AWS platform level
  • Action related to RDS (configuration changes)
  • Action related to enabling/disabling/changing of CloudTrail, CloudWatch logs
  • All access to S3 bucket that stores the AWS logs
  • Action related to VPCs (creation, deletion and changes)
  • Action related to changes to SGs/NACLs (creation, deletion and changes)
  • Action related to IAM users, roles, and groups (creation, deletion and changes)
  • Action related to route tables (creation, deletion and changes)
  • Action related to subnets (creation, deletion and changes)
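Each guideline maps to an Elasticsearch query over the indexed CloudTrail fields. As an illustration, the first guideline could be expressed roughly as follows (a sketch only; field names assume CloudTrail’s event schema as indexed, and the template’s actual queries may differ):

```python
def root_activity_query(time_range="now-1d"):
    """Build an Elasticsearch query body for guideline 1: any command or
    API action taken by the AWS root user within the time window."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"match": {"userIdentity.type": "Root"}},
                    {"range": {"@timestamp": {"gte": time_range, "lte": "now"}}},
                ]
            }
        }
    }
```

The same bool/must pattern extends to the other guidelines by matching on fields such as eventSource or eventName.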

Solution overview

In this walkthrough, you’ll create an Elasticsearch cluster within an Amazon VPC environment. You’ll ship AWS CloudTrail logs to both an Amazon S3 bucket (to maintain an immutable copy of the logs) and to a custom AWS Lambda function that will stream the logs to the Elasticsearch cluster. You’ll also create an additional Lambda function that will run once a day, build a report of the number of CloudTrail events that occurred against the example set of 10 PCI-DSS guidelines, and then notify stakeholders via SNS.

To make it easier to get started, I’ve included an AWS CloudFormation template that will automatically deploy the solution. The CloudFormation template along with additional files can be downloaded from this link. You’ll need the following resources to set it up:

  • An S3 bucket to upload and store the sample AWS Lambda code and sample Kibana dashboards. This bucket name will be requested during the CloudFormation template deployment.
  • An Amazon Virtual Private Cloud (Amazon VPC).

If you’re unfamiliar with how CloudFormation templates work, you can find more info in the CloudFormation Getting Started guide.

AWS CloudFormation deployment

The following parameters are available in this template:

  • Elasticsearch Domain Name: Name of the Amazon Elasticsearch Service domain.
  • Elasticsearch Version (default: 6.2): Version of Elasticsearch to deploy.
  • Elasticsearch Instance Count (default: 3): The number of data nodes to deploy into the Elasticsearch cluster.
  • Elasticsearch Instance Class: The instance class to deploy for the Elasticsearch data nodes.
  • Elasticsearch Instance Volume Size (default: 10): The size of the volume for each Elasticsearch data node in GB.
  • VPC to launch into: The VPC to launch the Amazon Elasticsearch Service cluster into.
  • Availability Zone to launch into: The Availability Zone to launch the Amazon Elasticsearch Service cluster into.
  • Private Subnet ID: The subnet to launch the Amazon Elasticsearch Service cluster into.
  • Elasticsearch Security Group: A new Security Group is created that will be associated with the Amazon Elasticsearch Service cluster.
  • Security Group Description: A description for the above created Security Group.
  • Windows EC2 Instance Class (default: m5.large): Windows instance for interaction with Kibana.
  • EC2 Key Pair: EC2 Key Pair to associate with the Windows EC2 instance.
  • Public Subnet: Public subnet to associate with the Windows EC2 instance for access.
  • Remote Access Allowed CIDR: The CIDR range to allow remote access (port 3389) to the EC2 instance.
  • S3 Bucket Name—Lambda Functions: S3 Bucket that contains the custom AWS Lambda functions.
  • Private Subnet: Private subnet to associate with the AWS Lambda functions that are deployed within a VPC.
  • CloudWatch Log Group Name: This will create a CloudWatch Log Group for the AWS CloudTrail event logs.
  • S3 Bucket Name—CloudTrail logging: This will create a new Amazon S3 Bucket for logging CloudTrail events. Name must be a globally unique value.
  • Date range to perform queries (default: now-1d): The time window for the scheduled Elasticsearch queries (examples: now-1d, now-7d, now-90d).
  • Lambda Subnet CIDR: Subnet CIDR to deploy the AWS Lambda Elasticsearch query function into.
  • Availability Zone—Lambda: The Availability Zone to associate with the preceding AWS Lambda subnet.
  • Email Address (default: [email protected]): Email address for reporting to notify stakeholders via SNS. You must accept the subscription by selecting the link sent to this address before alerts will arrive.

It takes 30-45 minutes for this stack to be created. When it’s complete, the CloudFormation console will display the following resource values in the Outputs tab. These values can be referenced at any time and will be needed in the following sections.

  • ElasticsearchDomainEndpoint: Elasticsearch Domain Endpoint Hostname
  • KibanaEndpoint: Kibana Endpoint Hostname
  • EC2Instance: Windows EC2 Instance Name used for Kibana access
  • SNSSubscriber: SNS Subscriber Email Address
  • ElasticsearchDomainArn: ARN of the Elasticsearch Domain
  • EC2InstancePublicIp: Public IP address of the Windows EC2 instance
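Rather than copying these values from the console, you can also read them programmatically via the CloudFormation DescribeStacks API (for example, through boto3’s cloudformation client). A small helper for flattening that response, hypothetical and not part of the template, might look like:

```python
def stack_outputs(describe_stacks_response):
    """Flatten the Outputs list of a CloudFormation DescribeStacks
    response into a dict keyed by OutputKey."""
    outputs = describe_stacks_response["Stacks"][0].get("Outputs", [])
    return {o["OutputKey"]: o["OutputValue"] for o in outputs}
```

You would pass in the result of `boto3.client("cloudformation").describe_stacks(StackName=...)` and look up keys such as KibanaEndpoint.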

Managing and testing the solution

Now that you’ve set up the environment, it’s time to configure the Kibana dashboard.

Kibana configuration

From the AWS CloudFormation output, gather information related to the Windows-based EC2 instance. Once you have retrieved that information, move on to the next steps.

Initial configuration and index pattern

  1. Log into the Windows EC2 instance via Remote Desktop Protocol (RDP) from a resource that is within the allowed CIDR range for remote access to the instance.
  2. Open a browser window and navigate to the Kibana endpoint hostname URL from the output of the AWS CloudFormation stack. Access to the Elasticsearch cluster and Kibana is restricted to the security group that is associated with the EC2 instance and custom Lambda functions during deployment.
  3. In the Kibana dashboard, select Management from the left panel and choose the link for Index Patterns.
  4. Add one index pattern containing the following: cwl-*
    Figure 2: Define the index pattern

  5. Select Next Step.
  6. Select the Time Filter Field named @timestamp.
    Figure 3: Select “@timestamp”

  7. Select Create index pattern.

At this point we’ve launched our environment and have accessed the Kibana console. Within the Kibana console, we’ve configured the index pattern for the CloudWatch logs that will contain the CloudTrail events. Next, we’ll configure visualizations and a dashboard.

Importing sample PCI DSS queries and Kibana dashboard

  1. Copy export.json from the location where you extracted the downloaded zip file to the EC2 Kibana bastion.
  2. Select Management on the left panel and choose the link for Saved Objects.
  3. Select Import in the upper right corner and navigate to export.json.
  4. Select Yes, overwrite all saved objects, then select Index Pattern cwl-* and confirm all changes.
  5. Once the import completes, select PCI DSS Dashboard to see the sample dashboard and queries.

Note: You might encounter an error during the import that looks like this:

Figure 4: Error message

This simply means that your streamed logs do not contain login-type events in the time period since your deployment. To correct this, you can post a placeholder document that contains the missing field.

  1. From the left panel, select Dev Tools and copy the following JSON into the left panel of the console:
            POST /cwl-/default/
            {
                "userIdentity": {
                    "userName": "test"
                }
            }
  2. Select the green Play triangle to execute the POST of a document with the missing field.
    Figure 5: Select the “Play” button

  3. Now reimport the dashboard using the steps in Importing Sample PCI DSS Queries and Kibana Dashboard. You should be able to complete the import with no errors.

At this point, you should have CloudTrail events that have been streamed to the Elasticsearch cluster, with a configured Kibana dashboard that looks similar to the following graphic:

Figure 6: A configured Kibana dashboard

Automated Reports

A custom AWS Lambda function was created during the deployment of the AWS CloudFormation stack. This function uses the sample PCI-DSS guidelines from the Kibana dashboard to build a daily report. The Lambda function is triggered every 24 hours and performs a series of Elasticsearch time-based queries over now-1d (the last 24 hours) against the sample guidelines. The results are compiled into a message that is forwarded to Amazon Simple Notification Service (SNS), which sends a report to stakeholders based on the email address you provided in the CloudFormation deployment.

The Lambda function will be named <CloudFormation Stack Name>-ES-Query-LambdaFunction. Its environment variables let you adjust settings such as the query time window, and the code can be extended with additional Elasticsearch queries. The sample report below lets you monitor any events against the sample PCI-DSS guidelines; these events can then be analyzed further in the Kibana dashboard.

    Logging Compliance Report - Wednesday, 11. July 2018 01:06PM
    Violations for time period: 'now-1d'
    All Failed login attempts
    - No Alerts Found
    All Commands, API action taken by AWS root user
    - No Alerts Found
    Action related to RDS (configuration changes)
    - No Alerts Found
    Action related to enabling/disabling/changing of CloudTrail CloudWatch logs
    - 3 API calls indicating alteration of log sources detected
    All access to S3 bucket that stores the AWS logs
    - No Alerts Found
    Action related to VPCs (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to SGs/NACLs (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to IAM roles, users, and groups (creation, deletion and changes)
    - 2 API calls indicating creation, alteration or deletion of IAM roles, users, and groups
    Action related to changes to Route Tables (creation, deletion and changes)
    - No Alerts Found
    Action related to changes to Subnets (creation, deletion and changes)
    - No Alerts Found         
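The rendering step of a report like the one above can be sketched as a function from per-guideline hit counts to the plain-text body handed to SNS (a sketch only; the deployed Lambda’s exact wording and logic live in the downloaded package):

```python
def build_report(counts, time_range="now-1d"):
    """Render per-guideline hit counts into a plain-text report in the
    style shown above; `counts` maps guideline description to hits."""
    lines = ["Logging Compliance Report",
             "Violations for time period: '%s'" % time_range]
    for guideline, hits in counts.items():
        lines.append(guideline)
        lines.append("- No Alerts Found" if hits == 0
                     else "- %d API calls detected" % hits)
    return "\n".join(lines)
```

The resulting string would be passed to `sns.publish(TopicArn=..., Message=...)` to reach the subscribed stakeholders.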


At this point, you have created a private Elasticsearch cluster with Kibana dashboards that monitors AWS CloudTrail events against a sample set of PCI-DSS guidelines and uses Amazon SNS to send a daily report providing awareness into your environment—all isolated securely within a VPC. In addition to CloudTrail events streaming to the Elasticsearch cluster, events are also shipped to an Amazon S3 bucket to maintain an immutable source of your log files. The provided Lambda functions can be further modified to add additional or more complex search queries and to create more customized reports for your organization. With minimal effort, you could begin sending additional log data from your instances or containers to gain even more insight into the security state of your environment. The more data you retain, the more visibility you have into your resources and the closer you are to achieving Compliance-on-Demand.



Michael Coyne

Michael is a consultant for AWS Professional Services. He enjoys the fast-paced environment of ever-changing technology and assisting customers in solving complex issues. Away from AWS, Michael can typically be found with a guitar and spending time with his wife and two young kiddos. He holds a BS in Computer Science from WGU.

Add a layer of security for AWS SSO user portal sign-in with context-aware email-based verification

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/add-a-layer-of-security-for-aws-sso-user-portal-sign-in-with-context-aware-email-based-verification/

If you’re an IT administrator of a growing workforce, your users will require access to a growing number of business applications and AWS accounts. You can use AWS Single Sign-On (AWS SSO) to create and manage users centrally and grant access to AWS accounts and business applications, such as Salesforce, Box, and Slack. When you use AWS SSO, your users sign in to a central portal to access all of their AWS accounts and applications. Today, we launched email-based verification that provides an additional layer of security for users signing in to the AWS SSO user portal. AWS SSO supports a one-time passcode (OTP) sent to users’ email that they then use as a verification code during sign-in. When enabled, AWS SSO prompts users for their user name and password and then to enter a verification code that was sent to their email address. They need all three pieces of information to be able to sign in to the AWS SSO user portal.

You can enable email-based verification in context-aware or always-on mode. We recommend you enable email-based verification in context-aware mode for users created using the default AWS SSO directory. In this mode, users sign in easily with their username and password for most sign-ins, but must provide additional verification when their sign-in context changes, such as when signing in from a new device or an unknown location. Alternatively, if your company requires users to complete verification for every sign-in, you can use always-on mode.

In this post, I demonstrate how to enable verification in context-aware mode for users in your SSO directory using the AWS SSO console. I then demonstrate how to sign into the AWS SSO user portal using email-based verification.

Enable email-based verification in context-aware mode for users in your SSO directory

Before you enable email-based verification, you must ensure that all your users can access their email to retrieve their verification code. If your users require the AWS SSO user portal to access their email, do not enable email-based verification. For example, if you use AWS SSO to access Office 365, then your users may not be able to access their AWS SSO user portal when you enable email-based verification.

Follow these steps to enable email-based verification for users in your SSO directory:

  1. Sign in to the AWS SSO console. In the left navigation pane, select Settings, and then select Configure under the Two-step verification settings.
  2. Select Context-aware under Verification mode, and Email-based verification under Verification method, and then select Save changes.
    Figure 1: Select the verification mode and the verification method

  3. Before you choose to confirm the changes in the Enable email-based verification window, make sure that all your users can access their email to retrieve the verification code required to sign in to the AWS SSO user portal without signing in using AWS SSO. To confirm your choice, type CONFIRM (case-sensitive) in the text-entry field, and then select Confirm.
    Figure 2: The “Enable email-based verification” window

You’ll see that you successfully enabled email-based verification in context-aware mode for all users in your AWS SSO directory.

Figure 3: Verification of the settings

Next, I demonstrate how your users sign in to the AWS SSO user portal with email-based verification in addition to their username and password.

Sign in to the AWS SSO user portal with email-based verification

With email-based verification enabled in context-aware mode, users use the verification code sent to their email when there is a change in their sign-in context. Here’s how that works:

  1. Navigate to your AWS SSO user portal.
  2. Enter your email address and password, and then select Sign in.
    Figure 4: The “Single Sign-On” window

  3. If AWS detects a change in your sign-in context, you’ll receive an email with a 6-digit verification code that you will enter in the next step.
    Figure 5: Example verification email

  4. Enter the code in the Verification code box, and then select Sign in. If you haven’t received your verification code, select Resend email with a code to receive a new code, and be sure to check your spam folder. You can select This is a trusted device to mark your device as trusted so you don’t need to enter a verification code unless your sign-in context changes again, such as signing in from a new browser or an unknown location.
    Figure 6: Enter the verification code

The user can now access AWS accounts and business applications that the administrator has configured for them.


In this post, I shared the benefits of using email-based verification in context-aware mode. I demonstrated how you can enable email-based verification for your users through the SSO console. I also showed you how to sign into the AWS SSO user portal with email-based verification. You can also enable email-based verification for SSO users from your connected AD directory by following the process outlined above.

If you have comments, please submit them in the Comments section below. If you have issues enabling email-based verification for your users, start a thread on the AWS SSO forum or contact AWS Support.


Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

Migrate Wildfly Cluster to Amazon ECS using Service Discovery

Post Syndicated from Anuneet Kumar original https://aws.amazon.com/blogs/compute/migrate-wildfly-cluster-to-ecs-using-service-discovery/

This post is courtesy of Vidya Narasimhan, AWS Solutions Architect

1. Overview

Java Enterprise Edition has been an important server-side platform for over a decade for developing mission-critical & large-scale applications amongst enterprises. High-availability & fault tolerance for such applications is typically achieved through built-in JEE clustering provided by the platform.

JEE clustering represents a group of machines working together to transparently provide enterprise services such as JNDI, EJB, JMS, HTTPSession etc. that enable distribution, discovery, messaging, transactions, caching, replication & component failover. Implementations of clustering technology vary across the JEE platforms provided by different vendors. Many of the clustering implementations involve proprietary communication protocols that use multicast for intra-cluster communication, which is not supported in the public cloud.

This article is relevant for JEE platforms & other products that use JGroups based clustering such as Wildfly. The solution described allows easy migration of applications developed on these platforms using native clustering to Amazon Elastic Container Service (Amazon ECS) which is a highly scalable, fast, container management service that makes it easy to orchestrate, run & scale Docker containers on a cluster. This solution is useful when the business objective is to migrate to cloud fast with minimum changes to the application. The approach recommends lift & shift to AWS wherein the initial focus is to migrate as-is with optimizations coming in later incrementally.

Whether the JEE application to be migrated is designed as a monolith or as microservices, a legacy or a green-field deployment, there are multiple reasons why organizations should opt for containerization of their application. This link explains the benefits of containerization well (see the section “Why Use Containers”): https://aws.amazon.com/getting-started/projects/break-monolith-app-microservices-ecs-docker-ec2/module-one/

2. Wildfly Clustering on ECS

From here onwards, this article highlights how to migrate a standard clustered JEE app deployed on the Wildfly Application Server to Amazon ECS. Wildfly supports clustering out of the box, in two modes: standalone & domain. This article explores how to set up a Wildfly cluster in ECS with multiple Wildfly standalone nodes enabled for HA to form a cluster. The clustering is demonstrated through a web application that replicates session information across the cluster nodes and can withstand a failover without session data loss.

The important components of clustering that require a mention right away are ECS Service Discovery, JGroups & Infinispan.

  • JGroups – Wildfly clustering is enabled by the popular open-source JGroups toolkit. The JGroups subsystem provides group communication support for HA services, using multicast transmission by default. It deals with all aspects of node discovery and provides reliable messaging between the nodes as follows:
    • Node-to-node messaging — By default is based on UDP/multicast that can be extended via TCP/unicast.
    • Node discovery — By default uses multicast ping MPING. Alternatives include TCPPING, S3_PING, JDBC_PING, DNS_PING and others.

This article focuses on DNS_PING for node discovery using the TCP protocol.

ECS Service discovery – Amazon ECS service can optionally be configured to use Amazon ECS Service Discovery. Service discovery uses Amazon Route 53 auto naming API actions to manage DNS entries (A or SRV records) for service tasks, making them discoverable within your VPC. You can specify health check conditions in a service task definition and Amazon ECS will ensure that only healthy service endpoints are returned by a service lookup.

As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date.

Wildfly uses JGroups to discover the cluster nodes via DNS_PING discovery protocol that sends a DNS service endpoint query to the ECS service registry maintained in Route53.
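DNS_PING’s discovery step is simply an A-record lookup against the service endpoint, so you can reproduce it from any instance inside the VPC. A sketch (the endpoint name is the example used later in this post; the helper is illustrative, not part of Wildfly):

```python
import socket

def discover_cluster_nodes(service_endpoint):
    """Resolve the ECS service discovery name to the set of task IPs that
    Route 53 currently advertises (the same lookup DNS_PING performs)."""
    infos = socket.getaddrinfo(service_endpoint, None, socket.AF_INET)
    return sorted({info[4][0] for info in infos})
```

Calling `discover_cluster_nodes("myapp.sampleaws.com")` from within the VPC would return one IP per healthy task, and the list changes as the service scales.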

  • Infinispan – Wildfly uses the Infinispan subsystem to provide high-performance, clustered, transactional caching. In a clustered web application, Infinispan handles the replication of application data across the cluster by means of a replicated/distributed cache. Under the hood, it uses a JGroups channel for data transmission within the cluster.

3. Implementation Instructions

Configure Wildfly

  • Modify the Wildfly standalone configuration file – standalone-ha.xml. The HA suffix implies a high availability configuration.
  1. Modify the JGroups subsystem – Add a TCP stack with DNS_PING as the discovery protocol & configure the DNS query endpoint. It is important to note that the DNS_QUERY must match the ECS service endpoint configured when creating the ECS service.
  2. Change the JGroups default stack to point to the TCP stack.
  3. Configure a custom Infinispan replicated cache to be used by the web app, or use the default cache.
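Putting the steps above together, the JGroups subsystem in standalone-ha.xml ends up looking roughly like this (a sketch against the WildFly 15 schema; the protocol list is abbreviated, and the dns_query value must match your ECS service endpoint):

```xml
<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
    <channels default="ee">
        <!-- Step 2: point the default channel at the TCP stack -->
        <channel name="ee" stack="tcp" cluster="ejb"/>
    </channels>
    <stacks>
        <!-- Step 1: TCP stack with DNS_PING discovery -->
        <stack name="tcp">
            <transport type="TCP" socket-binding="jgroups-tcp"/>
            <protocol type="dns.DNS_PING">
                <!-- Must match the ECS service discovery endpoint -->
                <property name="dns_query">myapp.sampleaws.com</property>
            </protocol>
            <!-- ...remaining protocols (FD_SOCK, pbcast.NAKACK2, UNICAST3, pbcast.GMS, ...) -->
        </stack>
    </stacks>
</subsystem>
```

The full configuration file is in the GitHub repository referenced below.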

Build the Docker image & store it in Elastic Container Registry (ECR)

  1. Package the JBoss/Wildfly image with JEE application & Wildfly platform on Docker. Create a Dockerfile & include the following:
    1. Install the WildFly distribution & set permissions – This approach requires the latest Wildfly distribution 15.0.0.Final released recently.          
    2. Copy the modified Wildfly standalone-ha.xml to the container.
    3. Deploy the JEE web application. This simple web app is configured as distributable and uses Infinispan to replicate session information across cluster nodes. It displays a page containing the container IP/hostname, Session ID & session data & helps demonstrate session replication.                   
    4. Define a custom entrypoint, entrypoint.sh, to boot Wildfly with the specified bind IP addresses on its interfaces. The script gets the container metadata and extracts the container IP to bind Wildfly’s interfaces to it. This interface binding is an important step, as it enables the application-related network communication (web, messaging) between the containers.
    5. Add the entrypoint.sh script to the image in the Dockerfile.
    6. Build the container image & push it to an ECR repository. Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images.

The Wildfly configuration files, the Dockerfile & the web app WAR file can be found at the GitHub link https://github.com/vidyann/Wildfly_ECS

Create ECS Service with service discovery

  • Create a service that uses service discovery.
    • This link describes the steps to set up an ECS task and service with service discovery: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html#create-service-discovery-taskdef. Although the example in the link creates a Fargate cluster, you can also create an EC2-based cluster for this example.
    • While configuring the task, choose awsvpc as the network mode. The task networking features provided by the awsvpc network mode give Amazon ECS tasks the same networking properties as Amazon EC2 instances. The benefits of task networking are described at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
    • Tasks can be flagged as compatible with EC2, Fargate, or both. Here is what the cluster and service look like:
    • When setting up the container details in the task, use port 8080, which is the default Wildfly port. This can be changed through the Wildfly configuration. Enable CloudWatch Logs, which captures the Wildfly logging.
    • While configuring the ECS service, ensure that the service name and namespace combine to form a service endpoint that exactly matches the DNS_Query endpoint configured in the Wildfly configuration file. The container security group should allow inbound traffic to port 8080. Here is what the service endpoint looks like:
    • The Route 53 registry created by ECS is shown below. We see two DNS entries corresponding to the DNS endpoint myapp.sampleaws.com.
    • Finally, view the Wildfly logs in the console by clicking a task instance. You can check whether clustering is enabled by looking for a log entry like the one below:
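The console steps above can also be expressed with the AWS CLI. The sketch below is illustrative only: the namespace, cluster, task definition, and all IDs/ARNs are placeholder assumptions (the post's example uses the endpoint myapp.sampleaws.com).

```shell
# Hypothetical CLI sketch of the service discovery setup (all IDs are placeholders).

# 1. A private DNS namespace in AWS Cloud Map backs the Route 53 records.
aws servicediscovery create-private-dns-namespace \
    --name sampleaws.com --vpc vpc-0abc1234def567890

# 2. A discovery service that registers an A record for each healthy task.
aws servicediscovery create-service \
    --name myapp \
    --dns-config "NamespaceId=ns-EXAMPLE,DnsRecords=[{Type=A,TTL=60}]"

# 3. The ECS service (awsvpc network mode) registered against that discovery service,
#    so the service endpoint becomes myapp.sampleaws.com.
aws ecs create-service \
    --cluster wildfly-demo \
    --service-name myapp \
    --task-definition wildfly-task:1 \
    --desired-count 2 \
    --network-configuration "awsvpcConfiguration={subnets=[subnet-EXAMPLE],securityGroups=[sg-EXAMPLE]}" \
    --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-EXAMPLE"
```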

Here we see that a Wildfly cluster was formed with two nodes (matching the two Route 53 entries).

Run the Web App in a browser

  • Spin up a Windows instance in the VPC and open the web app in a browser. Below is a screenshot of the web app:
  • Open the app in different browsers and tabs and verify the container IP and session ID. Now force a node shutdown by resizing the ECS service to one task instance. Note that although the container IP in the web app changes, the session ID does not: the web app remains available and the HTTP session stays alive, demonstrating session replication and failover among the cluster nodes.
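The same failover check can be done from the command line. This is a hypothetical sketch: the endpoint URL, path, and page contents are assumptions about the demo app, not values from the post.

```shell
# Hypothetical session-failover check (the endpoint URL and path are placeholders).
APP=http://myapp.sampleaws.com:8080/webapp

# First request: save the session cookie and note the container IP shown on the page.
curl -s -c cookies.txt "$APP" | grep -Ei 'container|session'

# After resizing the ECS service down to one task, replay with the same cookie:
# the container IP in the response changes, but the session ID and its data survive.
curl -s -b cookies.txt "$APP" | grep -Ei 'container|session'
```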

4. Summary

Our goal here is to migrate enterprise JEE apps to Amazon ECS by tweaking a few configurations, while immediately gaining the benefits of containerization and orchestration managed by ECS. By delegating the undifferentiated heavy lifting of container management, orchestration, and scaling to ECS, you can focus on improving or re-architecting your application toward a microservices-oriented architecture. Note that all the deployment procedures in this article can be fully automated via the AWS CI/CD services.

400 free courses from the Ivy League

Post Syndicated from nellyo original https://nellyo.wordpress.com/2019/01/11/400-ivy-league/

The Ivy League schools are among the most prestigious in the world. They include Brown, Harvard, Cornell, Princeton, Dartmouth, Yale, Columbia University in New York, and the University of Pennsylvania.

The Ivy League schools now offer free online courses on a number of online learning platforms. So far they have created close to 500 courses, of which about 396 are currently active.

Here is a collection of their free courses in Computer Science, Business, Humanities, Social Sciences, Art & Design, Health & Medicine, Data Science, Education, and more.

AWS Security profiles: Michael South, Principal Business Development Manager for Security Acceleration

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-michael-south-principal-business-development-manager-for-security-acceleration/


In the weeks leading up to the Solution Days event in Tokyo, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS since August 2017. I’m part of a team called SeCBAT — the Security and Compliance Business Acceleration Team. I lead customer-focused executive security and compliance efforts for the public sector in the Americas, from Canada down to Chile, and spanning federal government, defense, state and local government, education, and non-profit verticals. The team was established in 2017 to address a need we saw in supporting our customers. While we have fantastic solution architects who connect easily with our customers’ architects and engineers, we didn’t have a readily available team in our World Wide Public Sector organization to engage customers interested in security at the executive level. When we worked with people like CISOs — Chief Information Security Officers — there was a communication gap. CISOs have a broader scope than engineers, and are oftentimes not as technically deep. Technology is only one piece of the puzzle that they’re trying to solve. Other challenging pieces include policy, strategy, culture shift, staffing and training, and the politics of their entire organization. SeCBAT is comprised of prior government CISOs (or similar roles), allowing us to establish trust quickly. We’ve been in their shoes, so we understand the scope of their concerns, we can walk them through how they can meet their security and compliance objectives in AWS, and we can help remove barriers to cloud adoption for the overall customer.

These customer engagements are one of my primary functions. The team also spends a lot of time on strategic communications: presenting at conferences and tradeshows, writing whitepapers and blogs, and generally providing thought leadership for cloud security. Lastly, we work closely with Amazon Public Policy as subject matter experts to assist in reviewing and commenting on draft legislation and government policies, and in meetings with legislators, regulators, and policy-makers to educate them on how security in the cloud works so they can make informed decisions.

What’s the most challenging part of your job?

Customers who are new to the cloud often grapple with feelings of fear and uncertainty (just like I did). For me, figuring out how to address that feeling is a challenge that varies from person to person. It isn’t necessarily based on facts or data — it’s a general human reaction to something new. “The cloud” is very mysterious to people who are just coming into it, and oftentimes their sources of information are inaccurate or sensationalized news articles, combined with a general overuse of the word “cloud” in marketing materials from traditional vendors who are trying to cash in on this industry shift. Once you learn what the cloud really is and how it works, what’s the same and what’s different than what you’re used to on-prem, you can figure out how to manage it, secure it, and incorporate it into your overall strategy. But trying to get past that initial fear of the unknown is challenging. Part of what I do is educate people and then challenge some of the assumptions they might have made prior to our meeting. I want people to be able to look at the data so that they can make an informed decision and not lose an opportunity over a baseless emotion. If they choose not to go to the cloud, then that is absolutely fine, but at least that decision is made on facts and what’s best for the organization.

What’s the most common misperception you encounter about cloud security and compliance?

Visibility. There’s a big misperception that customers will lose visibility into their data and their systems in the cloud, and this becomes a root cause of many other misconceptions. It’s usually the very first point that I focus on in my briefs and discussions. I walk customers through my cloud journey, including my background in traditional security in an on-prem environment. As the Deputy CISO for the city of Washington, DC, I was initially very nervous about transitioning to the cloud, but I tasked my team and myself to dive deep and learn. It didn’t take long for us to determine that not only could we be just as secure and compliant in the cloud as on-prem, but that we could achieve a greater level of security and compliance through resiliency, continuous monitoring, and automated security operations. During our research, we also had to deal with a few on-prem issues, and that’s when it dawned on me that the cloud gave me something that I’d been lacking for my entire IT career — essentially 100% visibility! It didn’t matter if a server was on or off, what network segment it was on, whether the endpoint agent was installed or reporting up, or any other state — I had absolute visibility into every asset we had in the cloud. From here, we could secure and automate with much greater confidence, which resulted in fewer “fires” to put out. Security ended up being a driving force behind the city’s cloud adoption strategy. The security and governance journey can take a while at first, but these factors will enable everyone else to move fast, safely. The very first step is understanding the visibility that the cloud allows.

You’ll be giving a keynote at AWS Solution Days, in Tokyo. Is this the first time you’ve been to Japan?

No, my family and I were very fortunate to have lived in Yokosuka, Japan for a few years. I served in the U.S. Navy for 25 years prior to joining AWS, where I enjoyed two tours in Japan. The first was as the Seventh Fleet Information Assurance Manager, the lead for cybersecurity for all U.S. Naval forces in Asia. The second was as the Navy Chief Information Officer (CIO) for all U.S. Naval forces in Japan. Those experiences were some of the best of my career and family life. We would move back to Japan in a heartbeat!

The keynote is called “U.S. government and U.S. defense-related security.” What implications do U.S. government and defense policies have for AWS customers in Japan?

The U.S. and Japan are very strong political and military allies. Their governments and militaries share common interests and defense strategies, and collaborate on a myriad of socio-economic topics. This all requires the sharing of sensitive information, which is where having a common lexicon, standards, and processes for security benefit both parties. I plan to discuss the U.S. environment and highlight things that are working well in the U.S. that Japan might want to consider adopting, plus some things that might not be a good fit—coupled with recommendations on what might be better opportunities. I also plan to demonstrate that AWS is able to meet the high standards of the U.S. government and military with very strict, regulated security. I hope that this will give Japanese customers confidence in our ability to meet the similarly rigorous requirements they might have.

In your experience, how does the cloud security landscape differ between US and Japanese markets?

From my understanding, the Japanese government is in the very early stages of cloud adoption. Many ministries are assessing how they might use the cloud and secure their sensitive data in it. In addition to speaking at the summit, one of my reasons for visiting Japan is to meet with Japanese government customers to learn about their efforts. They’re very much interested in what the U.S. government is doing with AWS. They would like to leverage lessons learned, technical successes, and processes that are working well, in addition to learning about things that they might want to do differently. It’s a great opportunity to showcase all the work we’re doing with the U.S. government that could also benefit the Japanese government.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

My hope is that we’ll see a better, more holistic method of implementing governance with security engineering and security operations. Right now, globally across the cybersecurity landscape, there are silos: development security, governance, compliance, risk management, engineering, security operations, etc. They should be more mutually supportive and interconnected, and as you implement a plan in one area, it should go into effect seamlessly across the other areas.

Similarly, my hope is that five years from now we’ll start seeing a merge between the technologies and people and processes. Right now, the cybersecurity industry seems to try to tackle every problem with a technological solution. But technology is really the easiest part of every problem. The people and the processes are much more difficult. I think we need to devote a lot more time toward developing a holistic view of cybersecurity based on business risk and objectives.

Why should emerging markets move to the cloud now? Why not wait another five years in the hope that the technology will mature?

I’d like to challenge the assumption that the cloud is not mature. At least with AWS and our near competitors, I’d say the cloud is very mature and provides a level of sophistication that is very difficult and costly to replicate on-prem. If the concern is about technical maturity, you’re already late.

In addition, the waiting approach poses two problems: First, if you’re not engaged now in learning how the cloud works, you’ll just be further behind the curve in five years. Second, I see (and believe I’ll continue to see) that the vast majority of new technologies, services, and concepts are being born in the cloud. Everything is hyper-converging on the cloud as the foundational platform for all other emerging technologies. If you want to be successful with the next big idea in five years, it’s better to get into the cloud now and become an expert at what it can do—so that you’re ready for that next big idea. Because in some way, shape, or form, it’s going to be in or enabled by the cloud.

What are your favorite things to do when you’re visiting Japan?

The history and tradition of Kyoto make it my favorite city in Japan. But since we’ll be in Tokyo, there are a few things there that I’d recommend. First, the 100-yen sushi-go-rounds. To Americans, I’d explain it as paying one US dollar for a small plate (2 pieces of nigiri or 4 roll slices) of fantastic sushi. You can eat thirty plates for thirty bucks! Places in Tokyo to visit are Harajuku for people-watching, with all the costumes and fashion, Shibuya for shopping, and of course Tokyo Tower. I also recommend Ueno Park, somewhat close to where our event will be held, which has a pond and a zoo.

Japan is one of the safest and politest countries I’ve been to — and I’ve visited about 40 at this point. The people I’ve met there have all been extraordinarily nice and are what really makes Japan so special. I’d highly recommend visiting.

What’s your favorite thing to do in your hometown?

I’m originally from Denver, Colorado. If you’re in Denver, you’ve got to go up to the mountains. If you’re there in the summer, you can hike, camp, go white-water rafting, or horseback riding. If you’re there in the winter, you can go skiing or snowboarding, or just sit by the fire with a hot toddy. It really doesn’t matter. Just go up to the mountains and enjoy the beautiful scenery and wildlife.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Michael South

Michael joined AWS in 2017 as the Americas Regional Leader for public sector security and compliance business development. He supports customers who want to achieve business objectives and improve their security and compliance in the cloud. His customers span the public sector, including federal governments, militaries, state and provincial governments, academic institutions, and non-profits from North America to South America. Prior to AWS, Michael was the Deputy Chief Information Security Officer for the city of Washington, DC and the U.S. Navy’s Chief Information Officer for Japan.

Build and deploy an application for Hyperledger Fabric on Amazon Managed Blockchain

Post Syndicated from Michael Edge original https://aws.amazon.com/blogs/big-data/build-and-deploy-an-application-for-hyperledger-fabric-on-amazon-managed-blockchain/

At re:Invent 2018, AWS announced Amazon Managed Blockchain, a fully managed service that makes it easy to create and manage scalable blockchain networks using the popular open source frameworks Hyperledger Fabric and Ethereum. A preview of the service is available with support for the Hyperledger Fabric framework, with support for Ethereum coming soon. For additional details about Managed Blockchain, see What Is Amazon Managed Blockchain? To use the service, you can sign up for the preview.

In this post, you will learn how to build a Hyperledger Fabric blockchain network using Managed Blockchain. After creating the Fabric network, you deploy a three-tier application that uses the network to track donations to a nonprofit organization. Nonprofits want to provide visibility to their supporters and transparency into how they are spending donations. For each donation made by a donor, Hyperledger Fabric tracks the specifics of how the donation is spent. Donors can use this information to decide whether the nonprofit is spending their donations as they had anticipated.

Blockchain is suitable for this scenario because it promotes trust between all members in the network, including donor organizations, investors, philanthropic institutions, suppliers, and the nonprofit itself. All members in the network have their own immutable, cryptographically secure copy of the donation and spending records. They can then independently review how effectively donations are spent. This transparency could lead to increased efficiency and insight into lowering costs for nonprofits.

Architecture overview

The application consists of the following tiers:

  1. Hyperledger Fabric chaincode that executes on the Fabric peer node. Chaincode is the smart contract that queries data and invokes transactions on the Fabric network.
  2. A RESTful API that uses the Hyperledger Fabric Client SDK to interact with the Fabric network and expose the functions provided by the chaincode. The Hyperledger Fabric Client SDK provides APIs to create and join channels, install and instantiate chaincode, and query data or invoke transactions.
  3. A user interface application that calls the API provided by the RESTful API.

This architecture provides loose coupling and abstraction in such a way that the end user of the application is not exposed to the inner workings of a Hyperledger Fabric network. In fact, besides a slider component on the user interface showing the blocks received from the Fabric network, there is no indication to the end users that the underlying technology is blockchain.

This loose coupling extends to the user interface developers, who simply use the functionality provided by the RESTful API and don’t need to know anything about Hyperledger Fabric or chaincode. Loose coupling therefore allows development of applications with a familiar look and feel, whether they be web, mobile, or other types of applications.

The rest of this article is divided into four sections, each discussing the different layers of the architecture, as follows:

  • Part 1 builds a Hyperledger Fabric network using Amazon Managed Blockchain.
  • Part 2 deploys business logic in the form of chaincode to the Fabric network.
  • Part 3 deploys a RESTful API that uses the Hyperledger Fabric Client SDK to interact with the chaincode.
  • Part 4 deploys an application that uses the functionality exposed by the RESTful API.

A request by an end user would flow through the layers as shown in Figure 1. Activity by a user on the user interface would result in a REST API call to the RESTful API server. In turn, this would use the Fabric SDK to interact with the Hyperledger Fabric components in Managed Blockchain to invoke a transaction or query data.

Figure 1 – Users interacting with a Hyperledger Fabric application

The accompanying repository

The Git repository that accompanies this post contains the artifacts required to complete Parts 1–4 and create the end application.


Each part in this post is associated with a matching part in the Git repo. As we progress through each part, the post elaborates on the steps in the README files in the accompanying repo.

Note that although we currently don’t charge for the Managed Blockchain preview itself, executing the steps in this post consumes other AWS resources that will be billed at the applicable rates.

Let’s get started with Part 1.

Part 1: Build a Hyperledger Fabric network using Amazon Managed Blockchain

First, make sure that your AWS account has been added to the Managed Blockchain preview. Next, using the AWS Management Console, you can create a Hyperledger Fabric network using Managed Blockchain with just a few clicks. Open the Managed Blockchain console, and choose Create a network. Choose the Hyperledger Fabric framework, and provide a name for your network. Then choose Next.

Enter a name for the initial member that you want to add to your network. A member is the equivalent of a Hyperledger Fabric organization and often maps to a real-world organization. If you consider a Fabric network to be made up of a consortium of organizations that want to transact with each other, a member would be one of the organizations in the consortium.

Finally, enter an administrator user name and password for the member. Each member in an Amazon Managed Blockchain network has its own certificate authority (CA) that is responsible for registering and enrolling the users for this member. Entering this information here defines an identity that has the administrator role for this Hyperledger Fabric member.

After reviewing the details that you entered, create the network and member.

For additional details about these steps, see Part 1, Step 1: Create the Hyperledger Fabric blockchain network in the accompanying repository.
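For reference, the same network-and-member creation can be scripted with the AWS CLI. This is a hedged sketch with placeholder values: the network and member names, admin credentials, and voting policy below are illustrative assumptions, not values from the post.

```shell
# Hypothetical sketch: create the Fabric network and its first member via the CLI.
# All names and credentials are placeholders.
aws managedblockchain create-network \
    --name nonprofit-network \
    --framework HYPERLEDGER_FABRIC \
    --framework-version 1.2 \
    --voting-policy "ApprovalThresholdPolicy={ThresholdPercentage=50,ProposalDurationInHours=24,ThresholdComparator=GREATER_THAN}" \
    --member-configuration "Name=org1,FrameworkConfiguration={Fabric={AdminUsername=AdminUser,AdminPassword=Password123!}}"
```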

Managed Blockchain is a fully managed service. It creates and manages shared components such as the Hyperledger Fabric ordering service and the Fabric CA for each member, and it exposes them with endpoints. In a future step, you use virtual private cloud (VPC) endpoints to make the endpoints of these components available to a VPC in your account.

Creating a Hyperledger Fabric peer node

After your Hyperledger Fabric network and member have an ACTIVE status, it’s time to create a Fabric peer node. Peer nodes are where Fabric smart contracts execute (for example, chaincode). Peer nodes also contain the Fabric ledger, which consists of two parts: a journal that holds a cryptographically immutable transaction log (or “blockchain”) and a key-value store known as the world state that stores the current state of the ledger.

Part 1, Step 2 contains the steps to create a peer node. Each member on a network creates their own peer nodes, so select the member that you created previously and choose the link to create a peer node. Choose an instance type and the amount of storage for that node, and then create the peer node.

Like the ordering service and CA, each member’s peer nodes are managed by Amazon Managed Blockchain and can be accessed from your VPC via a VPC endpoint.
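Peer node creation can likewise be scripted. A minimal sketch, assuming placeholder network and member IDs and an illustrative instance type:

```shell
# Hypothetical sketch: provision a peer node for an existing member.
# The network ID, member ID, instance type, and Availability Zone are placeholders.
aws managedblockchain create-node \
    --network-id n-EXAMPLE \
    --member-id m-EXAMPLE \
    --node-configuration "InstanceType=bc.t3.small,AvailabilityZone=us-east-1a"
```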

At this stage, you have a Hyperledger Fabric network with a highly available ordering service and CA, and a single peer. For the remainder of this post, we remain with this single-member network to reduce the scope. However, in a more robust test or production scenario that would simulate a multimember decentralized network, you could use the Amazon Managed Blockchain console or API to invite other members to join the network. In a follow-up blog post, we will walk through these steps.

Now, you need a way to interact with the Fabric network so that you can create channels, install chaincode, and invoke transactions.

Creating a Hyperledger Fabric client node in your VPC

To interact with the Fabric components provisioned by Amazon Managed Blockchain, you can download and use the open source Hyperledger Fabric CLI or SDK. You configure these clients to interact with the endpoints exposed by Managed Blockchain. The CLI is the peer binary, which enables you to install, query, and invoke chaincode, and to create and join channels.

As shown in Figure 2, the Hyperledger Fabric components managed by Amazon Managed Blockchain are accessed via a Fabric client node (for example, Client A), which you provision in a VPC in your account. The Fabric client node hosts the open source Hyperledger Fabric CLI and allows you to interact with your Fabric network via the VPC endpoint. All network traffic between your VPC and your managed Fabric network occurs over the AWS backbone and is not exposed to the public internet.

Figure 2 – The layout of an Amazon Managed Blockchain network with two members

You use AWS CloudFormation to provision a new VPC in your AWS account, an Amazon EC2 instance configured as your Fabric client node, and the VPC endpoint to communicate with your Fabric network.

Part 1, Steps 3 and 4 in the GitHub repo explain how to provision and prepare your Fabric client node. Don’t forget to follow the prerequisites in Part 1 to create your AWS Cloud9 environment. AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets you write, run, and debug your code with just a browser. However, you won’t be using the IDE functions. You’ll use the Linux command line provided by AWS Cloud9 because it comes pre-installed with some of the packages we need, such as the AWS CLI.

Creating a Hyperledger Fabric channel and installing chaincode

From the Fabric client node, you now create a channel on the Fabric network. Channels in Hyperledger Fabric are the means by which applications and peer nodes interact and transact privately with each other. A Fabric network can support many channels where each channel has a different combination of members.

The process for creating a channel includes creating a channel config file (configtx.yaml), which contains channel definitions in the form of profiles. You use the Hyperledger Fabric channel configuration (configtxgen) tool to generate a binary channel creation transaction file based on one of the profiles from configtx.yaml. Then you submit the channel creation transaction file to the Fabric ordering service where the channel is created. Block 0, the channel genesis block, is created at this point. It is added to the channel and returned to the peer node where it can be used to join the peer to the channel.
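With the standard Fabric 1.x tooling on the client node, the flow just described looks roughly like this. The channel name, profile name, and the ORDERER/CAFILE variables are placeholders, not the repo's actual values:

```shell
# Hypothetical sketch of the channel-creation flow (names and endpoints are placeholders).

# 1. Generate a channel creation transaction from a profile in configtx.yaml.
configtxgen -profile OneOrgChannel \
    -outputCreateChannelTx mychannel.pb -channelID mychannel

# 2. Submit it to the ordering service, which creates the channel and returns
#    the genesis block (block 0) as mychannel.block.
peer channel create -c mychannel -f mychannel.pb \
    -o "$ORDERER" --cafile "$CAFILE" --tls

# 3. Join the peer to the channel using the genesis block.
peer channel join -b mychannel.block
```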

After creating the channel, install sample chaincode on the peer node and instantiate the chaincode on the channel. The sample chaincode comes from the Hyperledger Fabric samples repo and has already been cloned to the Fabric client node. Whereas installing chaincode simply packages the chaincode and copies it to the peer, instantiating chaincode binds it to the channel.

Instantiating chaincode performs a number of tasks:

  • It sets the endorsement policy for the chaincode. For more information, see Endorsement policies on the Hyperledger Fabric site.
  • It builds a Docker image where the chaincode is launched on the peer node that instantiated the chaincode.
  • It invokes the init method on the chaincode to initialize the ledger.

To create the channel and install and instantiate the chaincode, follow Steps 5–9 in Part 1 in the GitHub repo.
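Steps 5–9 boil down to commands of this shape. The chaincode name, version, path, and init arguments are placeholders modeled on the Fabric sample chaincode, not taken from the repo:

```shell
# Hypothetical sketch: install and instantiate chaincode (names/args are placeholders).

# Install packages the chaincode and copies it to the peer.
peer chaincode install -n mycc -v v0 \
    -p github.com/chaincode_example02/go

# Instantiate binds the chaincode to the channel, sets the endorsement policy,
# builds the chaincode Docker image, and calls the chaincode's init method.
peer chaincode instantiate -C mychannel -n mycc -v v0 \
    -c '{"Args":["init","a","100","b","200"]}' \
    -o "$ORDERER" --cafile "$CAFILE" --tls
```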

Querying the chaincode and invoking transactions

In Steps 10–12, you query the chaincode, invoke a chaincode transaction, and then query the chaincode again to check the effect of the transaction. As shown in Figure 3, querying chaincode takes place on the peer node. It involves the chaincode querying the world state, which is a key-value store storing the current state of the ledger. An identical copy of the world state is stored on each peer node that is joined to a channel.

Figure 3 – Chaincode interacting with the Hyperledger Fabric Ledger via the peer

This is the transaction flow that is kicked off when invoking a transaction: A Fabric client application sends a transaction proposal to the endorsing peers in the network for endorsement. The endorsing peers simulate the transaction and return the results to the client application. The client application packages all the endorsed transaction responses and submits the package to the ordering service. Transactions are ordered and cut into blocks before being sent back to all the peer nodes joined to the channel. Here the transactions are validated, their read/write set is checked, and each transaction updates the world state. The block is finally appended to the ledger.
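In CLI terms, the query/invoke/query cycle of Steps 10–12 looks roughly like this. The function names and arguments follow the Fabric sample chaincode and are assumptions here:

```shell
# Hypothetical sketch: read the world state, submit a transaction, then read again.

# Query executes on the peer against the world state; no block is created.
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# Invoke runs the full endorse/order/validate flow and commits a new block.
peer chaincode invoke -C mychannel -n mycc \
    -c '{"Args":["invoke","a","b","10"]}' \
    -o "$ORDERER" --cafile "$CAFILE" --tls

# Querying again after the block commits shows the updated value of "a".
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
```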

At the end of Part 1, you’ve done the following:

  • Created a Hyperledger Fabric network using Amazon Managed Blockchain and provisioned a peer node
  • Created a new VPC with a Fabric client node connecting to the Fabric network via a VPC endpoint
  • Created a new channel
  • Installed chaincode on the peer and instantiated the chaincode on the channel
  • Queried the chaincode and invoked a transaction that updates the world state and results in a new block being added to the blockchain

Part 2: Deploy and test the chaincode for nonprofit transactions

Deploying chaincode is a process that you became familiar with in Part 1. The only difference in Part 2 is that you take the chaincode for the nonprofit application from the repo and deploy that, rather than deploying the sample chaincode that is already present on the Hyperledger Fabric client node.

Some background on the Fabric client node might help make this process clearer. The Fabric client node is an EC2 instance that runs a Docker container with the name of cli. To see this, enter docker ps after you connect using SSH to the Hyperledger Fabric client node. Entering docker inspect cli shows you detailed information about the cli container, including the directories on the host EC2 instance that are mounted into the Docker container. For example, the directory /home/ec2-user/fabric-samples/chaincode is mounted. This means that you can simply copy chaincode (or any file) to this directory on your EC2 Fabric client node, and it will be available within the cli container. After it is available to the cli container, you can use the peer chaincode install command to install the chaincode on the peer node.
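Putting that together, a hypothetical sequence on the Fabric client node might look like this. The chaincode directory and package path are placeholders, not the repo's actual names:

```shell
# Hypothetical sketch: make chaincode visible to the cli container via the bind mount.
docker ps                    # confirm the cli container is running
docker inspect cli           # shows the host directories mounted into the container

# Copy the chaincode into the host directory that is mounted into the cli container.
cp -r ./ngo-chaincode /home/ec2-user/fabric-samples/chaincode/ngo

# The files now appear inside the cli container under the mounted path, so the
# chaincode can be installed on the peer from there.
docker exec cli peer chaincode install -n ngo -v v0 -p github.com/ngo
```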

To copy, install, and instantiate the nonprofit chaincode on the channel, follow the steps in Part 2: Non-profit (NGO) Chaincode in the GitHub repo.

Part 3: Deploy the RESTful API server

The RESTful API server is a Node.js Express application that uses the Hyperledger Fabric Client SDK to interact with the Fabric network. As mentioned previously, the Hyperledger Fabric Client SDK provides a wealth of functionality. It includes APIs to create and join channels, install and instantiate chaincode, and query blockchain metadata such as block heights and channel configuration information. In this post, we use a subset of SDK functionality that allows us to query chaincode and invoke transactions.

How does the RESTful API Node.js application connect to the Fabric network? There are two options:

  1. Use the API provided by the Hyperledger Fabric Client SDK to connect to the ordering service, the CA, and the peer nodes in your network.
  2. Create a connection profile, which is a YAML file describing the structure of your Fabric network. Pass this file to the Fabric Client SDK. The SDK uses it to connect to and interact with your Fabric network.

We use the second approach by creating a connection profile. You can see this in Step 3 of Part 3, where I use a script to generate a simple connection profile for your network.

Follow the steps in Part 3: RESTful API to expose the Chaincode to deploy the Node.js RESTful API server.
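Once the API server is running, the chaincode functions become plain HTTP resources. A hypothetical smoke test follows; the port, endpoint paths, and JSON fields are illustrative assumptions, not the repo's actual routes:

```shell
# Hypothetical smoke test against the RESTful API (port, paths, and fields are assumptions).
API=http://localhost:3000

# Query the nonprofit organizations via a chaincode-backed read endpoint.
curl -s "$API/ngos"

# Record a donation by invoking a chaincode transaction through the API.
curl -s -X POST "$API/donations" \
    -H 'Content-Type: application/json' \
    -d '{"donationId":"d-100","donationAmount":50,"donorUserName":"edge","ngoRegistrationNumber":"1234"}'
```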

Part 4: Run the Node.js/Angular user interface application

The user interface application is a Node.js/Angular application that calls the API provided by the RESTful API server. It does not use the Hyperledger Fabric Client SDK nor does it have any connectivity to the Fabric network. Instead, each action in the application invokes a corresponding REST API function.

It’s also worth noting that all application data is owned by the Fabric network. Besides the images displayed in the gallery, all data is retrieved from the Fabric world state database via the RESTful API and the Fabric chaincode. The application provides functionality that allows donors to track how their donations are spent and includes the following functions:

  • Donors can review each nonprofit organization, donate funds to them, and rate them.
  • Donors can view the items that each nonprofit has spent funds on and can see how much of each donation was used to fund each spend item.
  • Donors can track the donations that they have personally made.
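To illustrate how a client such as this UI talks to the RESTful API server rather than to Fabric directly, here is a minimal sketch using only the Python standard library. The host, port, and resource paths are assumptions for illustration, not the actual routes exposed by the API server in the repo.

```python
import json
import urllib.request

API_BASE = "http://localhost:3000"  # assumed RESTful API server endpoint

def url_for(resource, resource_id=None):
    """Build a REST URL for a resource exposed by the API server."""
    path = f"/{resource}" if resource_id is None else f"/{resource}/{resource_id}"
    return API_BASE + path

def get_json(resource, resource_id=None):
    """GET a resource from the RESTful API and decode the JSON body."""
    with urllib.request.urlopen(url_for(resource, resource_id)) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Would query the running API server, e.g. for the list of nonprofits
    # (resource name hypothetical).
    print(get_json("ngos"))
```

Because every read and write goes through URLs like these, the UI needs no Fabric SDK, no certificates, and no knowledge of the network topology.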

The steps to deploy the application are the same as for any Node.js application. One small edit is required to provide the endpoint for the RESTful API to the Node.js application, which is explained in Step 3.

Follow the steps in Part 4: The User Interface to deploy the Node.js user interface application.


Well done on completing the steps in this post. You built a Hyperledger Fabric network using Amazon Managed Blockchain and deployed a working multi-tier application, consisting of chaincode, a RESTful API, and a user interface, that uses blockchain as its underlying data source.

Besides the slider component on the user interface showing the blocks received from the Hyperledger Fabric network, there is no indication to the end users that the underlying technology is blockchain. We have abstracted the application from the blockchain using a REST API that could support multiple channels such as web and mobile, and we provided block notifications via a standard WebSocket protocol.

For a test network to simulate this application on a decentralized architecture, the next step would be to add more members to the Fabric network and have those members provision peers that join the same channel. This will be the topic of a future blog post.

Try Amazon Managed Blockchain by signing up for the preview.

Thanks to the following people:

  • Siva Puppala, Khan Iftikhar, Rangavajhala Srikanth, and Pentakota Deekshitulu for building a great UI application
  • Michael Braendle, who reviewed and tested the accompanying repo


About the Author

Michael Edge is a senior cloud architect with AWS Professional Services, specializing in blockchain, containers and microservices.  

Mince Pi – what’s under your tree?

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/mince-pi-whats-under-your-tree/

Merry Christmas everybody! We’re taking a little time off to spend with our families; we’ll be back in 2019. This post is for those of you who have found a piece of Pi under the tree or nestling uncomfortably in the toe of a stocking, and who are wondering what to do with it. Raise a glass of egg nog and join us in fighting over who gets the crispy bits this lunchtime.

So you’re the proud owner of a brand-new Raspberry Pi. Now what?

Your new Raspberry Pi

Did you wake up this morning to find a new Raspberry Pi under the tree? Congratulations, and welcome to the Raspberry Pi community! You’re one of us now, and we’re happy to have you on board.

But what if you’ve never seen a Raspberry Pi before? What are you supposed to do with it? What’s all the fuss about, and why does your new computer look so naked?

Setting up your Raspberry Pi

Are you comfy? Good. Then let us begin.

Download our free operating system

First of all, you need to make sure you have an operating system on your micro SD card: we suggest Raspbian, the Raspberry Pi Foundation’s official supported operating system. If your Pi is part of a starter kit, you might find that it comes with a micro SD card that already has Raspbian preinstalled. If not, you can download Raspbian for free from our website.

An easy way to get Raspbian onto your SD card is to use a free tool called Etcher. Watch The MagPi’s Lucy Hattersley show you what you need to do. You can also use NOOBS to install Raspbian on your SD card, and our Getting Started guide explains how to do that.

Plug it in and turn it on

Your new Raspberry Pi 3 comes with four USB ports and an HDMI port. These allow you to plug in a keyboard, a mouse, and a television or monitor. If you have a Raspberry Pi Zero, you may need adapters to connect your devices to its micro USB and mini HDMI ports. Both the Raspberry Pi 3 and the Raspberry Pi Zero W have onboard wireless LAN, so you can connect to your home network, and you can also plug an Ethernet cable into the Pi 3.

Make sure to plug the power cable in last. There’s no ‘on’ switch, so your Pi will turn on as soon as you connect the power. Raspberry Pi uses a micro USB power supply, so you can use a phone charger if you didn’t receive one as part of a kit.

Learn with our free projects

If you’ve never used a Raspberry Pi before, or you’re new to the world of coding, the best place to start is our projects site. It’s packed with free projects that will guide you through the basics of coding and digital making. You can create projects right on your screen using Scratch and Python, connect a speaker to make music with Sonic Pi, and upgrade your skills to physical making using items from around your house.

Here’s James to show you how to build a whoopee cushion using a Raspberry Pi, paper plates, tin foil and a sponge:

Raspberry Pi Whoopee cushion PRANK || HOW-TO || Raspberry Pi Foundation

Explore the world of Raspberry Pi physical computing with our free FutureLearn courses: http://rpf.io/futurelearn.

Diving deeper

You’ve plundered our projects, you’ve successfully rigged every chair in the house to make rude noises, and now you want to dive deeper into digital making. Good! While you’re digesting your Christmas dinner, take a moment to skim through the Raspberry Pi blog for inspiration. You’ll find projects from across our worldwide community, with everything from home automation projects and retrofit upgrades, to robots, gaming systems, and cameras.

Need a beginners’ guidebook? Look no further: here’s the official guide. It’s also available as a free download, like all our publications.

You’ll also find bucketloads of ideas in The MagPi magazine, the official monthly Raspberry Pi publication, available in both print and digital format. You can download every issue for free. If you subscribe, you’ll get a free Raspberry Pi 3A+ to add to your new collection. HackSpace magazine is another fantastic place to turn for Raspberry Pi projects, along with other maker projects and tutorials.

And, of course, simply typing “Raspberry Pi projects” into your preferred search engine will find thousands of ideas. Sites like Hackster, Hackaday, Instructables, Pimoroni, and Adafruit all have plenty of fab Raspberry Pi tutorials that they’ve devised themselves and that community members like you have created.

And finally

If you make something marvellous with your new Raspberry Pi – and we know you will – don’t forget to share it with us! Our Twitter, Facebook and Instagram accounts are brimming with chatter, projects, and events. And our forums are the best place to visit if you ever have questions about your Raspberry Pi or if you need some help.

It’s good to get together with like-minded folks, so check out the growing Raspberry Jam movement. Raspberry Jams are community-run events where makers and enthusiasts can meet other makers, show off their projects, and join in with workshops and discussions. Find your nearest Jam here.

Have a great break, and welcome to the community. We’ll see you in 2019!

The post Mince Pi – what’s under your tree? appeared first on Raspberry Pi.

EU: Council conclusions on strengthening European content in the digital economy

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/12/19/%D0%B5%D1%81-%D0%B7%D0%B0%D0%BA%D0%BB%D1%8E%D1%87%D0%B5%D0%BD%D0%B8%D1%8F-%D0%BD%D0%B0-%D1%81%D1%8A%D0%B2%D0%B5%D1%82%D0%B0-%D0%BE%D1%82%D0%BD%D0%BE%D1%81%D0%BD%D0%BE-%D0%B7%D0%B0%D1%81%D0%B8%D0%BB/

The EU has stepped up its efforts to adopt a number of legislative proposals on media, electronic communications, and the digital economy before the 2019 European Parliament elections.

Today the Official Journal of the EU published the Council conclusions on strengthening European content in the digital economy. In the recitals we read:

The content production and distribution sectors, which include media content and works (audiovisual, print, and online), as well as the other cultural and creative sectors, are fundamental pillars of Europe's social and economic development. The quality and diversity of European content are intrinsic to European identity and play an essential role for democracy and social inclusion, as well as for the dynamism and competitiveness of the European media, cultural, and creative sectors. These sectors also strengthen Europe's soft power globally. Through their spillover effects, they foster innovation, creativity, and prosperity in other areas;

Digital and online technologies present a major opportunity to create the conditions for a new era of European creativity. They also offer the possibility of greater access to European cultural content and of preserving, promoting, and disseminating our European cultural heritage, for example through the use of virtual reality. Digital technologies enable all parties involved to acquire new skills and knowledge, to develop new services, products, and markets, and to reach new audiences. Online platforms, in particular social media and video-sharing platforms, provide access to an enormous diversity of content, especially from third parties, for countless users in the European Union and around the world;

The use of digital and online technologies poses challenges for the European content production and distribution sectors as a whole. All parties involved must adapt their business strategies, develop new skills, expand their knowledge, rethink the structure of their organisations, and reassess their financing and production/distribution models. The growing use of data increasingly affects value chains at every level. These developments also have an enormous impact on consumer expectations and behaviour;

Global online platforms have had a substantial impact on the digital transformation. In particular, the algorithm-based business model of those online platforms offering cultural and creative content, including media content, built on personalised content distribution and advertising targeted at individual users, has raised questions about transparency, disinformation, media pluralism, taxation, the remuneration of content creators, privacy protection, content promotion, and cultural diversity.

Against this background, the following policy priorities on the European Union's agenda are highlighted:

  • Creating favourable conditions for diversity, visibility, and innovation
  • Creating a level playing field
  • Strengthening trust in information and sources
  • Improving skills and competences.


Toddler nightlight/stay-in-bed device

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/toddler-nightlight-stay-in-bed-device/

Living with a toddler is the best thing. It really is. Seen through their eyes, everything you’re jaded about becomes new and exciting. Every piece of music is new. Frog and Toad are real people. Someone doesn’t care that you’re really, really bad at drawing, believing that you’re actually a kind of cross between Leonardo and Picasso; and you have a two-foot-tall excuse to sing Gaston at the top of your voice in public. The parents of toddlers are allowed into the ball pit at soft play. There’s lots of cake. The hugs and kisses are amazing.

frog and toad

Frog and Toad. Real people. If you are in charge of small children and do not own any of the Frog and Toad series, get yourself to a bookshop pronto. You can thank me later.

However. If my experience here is anything to go by, you may also be so tired you’re walking into things a lot. It doesn’t matter. The hugs and kisses are, like I said, amazing. And there are things you can do to mitigate that tiredness. Enter the Pi.

stay focused

I’m lucky. My toddler sleeps through. But sometimes she has an…aggravating habit of early wakefulness. After 7am I’m golden. I can do 6.30 at a push. Any earlier than that, though, and I am dead-eyed and leather-visaged for the rest of the day. It’s not a good look. Enter fellow new parent Cary Ciavolella, who has engineered a solution. This is a project so simple even the most sleep-deprived parent should be able to put it together, using Pimoroni parts you can easily buy online. Cary has thoughtfully made all the code available for you, so you don’t have to do anything other than build the physical object.

Pi nightlight

Cary’s nightlight can produce a number of different sorts of white noise, and changes colour from red (YOU’RE MEANT TO BE ASLEEP, KID) through orange (you can play in your room) to green (it’s time to get up). Coloured lights are a sensible option: toddlers can’t read numbers, let alone a clock face. It’s all addressable via a website, which, if you’re feeling fancy, you can set up with a favicon on your phone’s home screen so it feels like an app.

White noise – I use a little box from Amazon which plays the sound of the sea – and red-spectrum nightlights have solid research behind them if you’re trying to soothe a little one to sleep. Once you cross over into blue light, you’ll stop the pineal gland from producing melatonin, which is why I hate the fan I bought for our bedroom with a burning, fiery passion. Some smart-alec thought that putting a giant blue LED on the front to demonstrate that the fan was on was a good idea, never mind the whirling blades which are obvious to at least three of the senses. (I have never tried tasting it.)

With this in mind, I’ve one tiny alteration to make to Cary’s setup: you can permanently disable the green LED on the Pi Zero itself so that the only lights visible are the Pimoroni Blinkt – namely the ones that your little one should be looking at to figure out whether it’s time to get up yet. Just add the following to the Zero’s /boot/config.txt and reboot.

# Disable the ACT LED on the Raspberry Pi Zero.
dtparam=act_led_trigger=none
dtparam=act_led_activelow=on



The post Toddler nightlight/stay-in-bed device appeared first on Raspberry Pi.

Christmas lights 2018

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/christmas-lights-2018/

It’s the most wonderful time of the year! There’s much mistletoeing, and hearts will be glowing – as will thousands of Raspberry Pi-enabled Christmas light displays around the world.

Polish roadside crib

This morning I have mostly been spending my virtual time by a roadside in snowy Poland, inflicting carols on passers-by. (It turns out that the Polish carols this crib is programmed with rock a lot harder than the ones we listen to in England.) Visit the crib’s website to control it yourself.

Helpfully, Tomek, the maker, has documented some of the build over on Hackster if you want to learn more.

LightShow Pi

We are also suckers for a good Christmas son et lumiere. If you’re looking to make something yourself, LightShow Pi has been around for some years now, and goes from strength to strength. We’ve covered projects built with it in previous years, and it’s still in active development from what we can see, with new features for this Christmas like the ability to address individual RGB pixels. Most of the sound and music displays you’ll see using a Raspberry Pi are running LightShow Pi; it’s got a huge user base, and its online community on Reddit is a great place to get started.

2018 Christmas Light Show

Light display contains over 4,000 lights and 7,800 individual channels. It is controlled by 3 network based lighting controllers. The audio and lighting sequences are sent to the controllers by a Raspberry Pi.

This display from the USA must have taken forever to set up: you’re looking at 4,000 lights and 7,800 channels. Here’s something more domestically proportioned from YouTube user Ken B, showing off LightShow Pi’s microweb user interface, which is perfect for use on your phone.

LightShow Pi Christmas Tree 2018

Demonstration of the microweb interface along with LED only operation using two matrices, lower one cycling.

Scared of the neighbours burning down your outdoor display, or not enough space for a full-size tree? Never fear: The Pi Hut’s 3D Christmas tree, designed by Rachel Rayns, formerly of this parish, is on sale again this year. We particularly loved this adaptation from Blitz City DIY, where Liz (not me, another Liz) RGB-ifies the tree: a great little Christmas electronics project to work through with the kids. Or on your own, because we don’t need to have all our fun vicariously through our children this Christmas. (Repeat ten times.)

RGB-ing the Pi Hut Xmas Tree Kit

The Pi Hut’s Xmas Tree Kit is a fun little soldering kit for the Raspberry Pi. It’s a great kit, but I thought it could do with a bit more color. This is just a quick video to talk about the kit and show off all the RGB goodness.

Any Christmas projects you’d like to share? Let us know in the comments!

The post Christmas lights 2018 appeared first on Raspberry Pi.

Making Robot Friends with the Crickit HAT for Raspberry Pi

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/making-robot-friends-with-the-crickit-hat-for-raspberry-pi/

Here’s a guest post from our good friend Limor Fried, MIT hacker and engineer, Forbes Top Woman in Tech, and, of course, Founder of Adafruit. She’s just released a new add-on for the Pi that we’re really excited about: we think you’ll like the look of it too.

Sometimes we wonder if robotics engineers ever watch movies. If they did, they’d know that making robots into slaves always ends up in a robot rebellion. Why even go down that path? Here at Adafruit, we believe in making robots our friends! So if you find yourself wanting a companion, consider the robot. They’re fun to program, and you can get creative with decorations.

Crickit HAT atop a Raspberry Pi 3B+

With that in mind, we designed the Adafruit Crickit HAT – That’s our Creative Robotics & Interactive Construction Kit. It’s an add-on to the Raspberry Pi that lets you #MakeRobotFriend using your favorite programming language, Python!

Adafruit CRICKIT HAT for Raspberry Pi #RaspberryPi #adafruit #robots

The Adafruit CRICKIT HAT for Raspberry Pi. This is a clip from our weekly show when it debuted! https://www.adafruit.com/product/3957 Sometimes we wonder if robotics engineers ever watch movies. If they did, they’d know that making robots into slaves always ends up in a robot rebellion. Why even go down that path?

The Crickit HAT is a way to make robotics and interactive art projects with your Pi. Plug the Crickit HAT onto your Pi using the standard 2×20 GPIO connector and start controlling motors, servos or solenoids. You also get eight signal pins with analog inputs or PWM outputs, capacitive touch sensors, a NeoPixel driver and 3W amplified speaker. It complements and extends your Pi, doing all the things a Pi can’t do, so you can still use all the goodies on the Pi like video, camera, internet and Bluetooth…but now you have a robotics and mechatronics playground as well!

Control of the motors, sensors, NeoPixels, capacitive touch, etc. is all done in Python 3. It’s the easiest and best way to program your Pi, and after a couple of pip installs you’ll be ready to go. Each input or output is wrapped in a Python object, so you can control a motor with simple commands like

crickit.motor_1.throttle = 0.5 # half speed forward


crickit.servo_1.angle = 90 # point servo 1 to 90 degrees
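Here’s a slightly fuller sketch in the same spirit. The `crickit` object names follow the one-liners above (from the adafruit-circuitpython-crickit package); the `ramp` helper is our own addition, and the hardware portion only runs with a Crickit HAT actually attached.

```python
import time

def ramp(steps=5, top=0.5):
    """Throttle values ramping linearly from 0 up to `top` over `steps` steps."""
    return [top * i / steps for i in range(1, steps + 1)]

if __name__ == "__main__":
    # Requires the adafruit-circuitpython-crickit package and a Crickit HAT.
    from adafruit_crickit import crickit

    crickit.servo_1.angle = 90            # centre the servo
    for throttle in ramp():               # ease motor 1 up to half speed
        crickit.motor_1.throttle = throttle
        time.sleep(0.2)
    crickit.motor_1.throttle = 0          # stop
```

Ramping the throttle rather than jumping straight to full speed is kinder to small geared motors and to whatever your robot friend is carrying.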

Crickit HAT and peripherals

The Crickit HAT is powered by seesaw, our I2C-to-whatever bridge firmware, so you only need two data pins to control the huge number of inputs and outputs on the Crickit. All those timers, PWMs, NeoPixels, and sensors are offloaded to the co-processor. Stuff like managing the speed of motors via PWM is also done by the co-processor, so you’ll get smooth PWM outputs that don’t jitter when Linux gets busy with other stuff. What’s nice is that robotics tends to be fairly slow as electronics goes (you don’t need microsecond-level reaction time), so tunnelling all the control over I2C doesn’t affect robot functionality.
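To illustrate the idea of tunnelling control commands over I2C, here is a toy sketch that packs a module/register/value command into a byte frame. The layout is entirely made up for illustration – it is not the actual seesaw register map – but it shows the pattern: the Pi sends a few bytes, and the co-processor does the real-time work.

```python
import struct

def make_command(module, register, value):
    """Pack a hypothetical module/register/value command frame.

    Real seesaw firmware defines its own register map; this fixed
    big-endian layout (1 byte module, 1 byte register, 2 byte value)
    exists only to show the 'commands over I2C' idea.
    """
    return struct.pack(">BBH", module, register, value)

# e.g. a pretend "module 0x08 (PWM), register 0x01, duty cycle 32768"
frame = make_command(0x08, 0x01, 32768)
print(frame.hex())
```

On real hardware, a frame like this would be written to the co-processor’s I2C address with a library such as smbus2, and the co-processor would keep the PWM output steady regardless of what Linux is doing.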

We wanted to go with a ‘bento box’ approach to robotics. Instead of having eight servo drivers, or four 10A motor controllers, or five stepper drivers, it has just a little bit of everything. We also stuck to just 5V power robotics, to keep things low-power and easy to use: 5V DC motors and steppers are easy to come by. Here’s what you can do with the Crickit HAT:

  • 4 x analog or digital servo control, with precision 16-bit timers.
  • 2 x bi-directional brushed DC motor control, 1 Amp current-limited each, with 8-bit PWM speed control (or one stepper).
  • 4 x high-current “Darlington” 500mA drive outputs with kick-back diode protection. For solenoids, relays, large LEDs, or one uni-polar stepper.
  • 4 x capacitive touch input sensors with alligator pads.
  • 8 x signal pins, which can be used as digital in/out or analog inputs.
  • 1 x NeoPixel driver with 5V level shifter – this is connected to the seesaw chip, not the Raspberry Pi, so you won’t be giving up pin 18. It can drive over 100 pixels.
  • 1 x Class D, 4-8 ohm speaker, 3W-max audio amplifier – this is connected to the I2S pins on the Raspberry Pi for high-quality digital audio. Works on any Pi, even Zeros that don’t have an audio jack!
  • Built-in USB to serial converter. The USB port on the HAT can be used to update the seesaw firmware on the Crickit with the drag-n-drop bootloader, or you can plug into your computer; it will also act as a USB converter for logging into the console and running command lines on the Pi.

If you’re curious about how seesaw works, check out our GitHub repos for the firmware that’s on the co-processor chip and for the software that runs on the Pi to talk to it. We’d love to see more people using seesaw in their projects, especially SBC projects like the Pi, where a hardware-assistant can unlock the real-time-control power of a microcontroller.

The post Making Robot Friends with the Crickit HAT for Raspberry Pi appeared first on Raspberry Pi.

AWS Security Profile (and re:Invent 2018 wrap-up): Eric Docktor, VP of AWS Cryptography

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-and-reinvent-2018-wrap-up-eric-docktor-vp-of-aws-cryptography/

Eric Docktor

We sat down with Eric Docktor to learn more about his 19-year career at Amazon, what’s new with cryptography, and to get his take on this year’s re:Invent conference. (Need a re:Invent recap? Check out this post by AWS CISO Steve Schmidt.)

How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over nineteen years, but I joined AWS in April 2015. I’m the VP of AWS Cryptography, and I lead a set of teams that develops services related to encryption and cryptography. We own three services and a tool kit: AWS Key Management Service (AWS KMS), AWS CloudHSM, AWS Certificate Manager, plus the AWS Encryption SDK that we produce for our customers.

Our mission is to help people get encryption right. Encryption algorithms themselves are open source, and generally pretty well understood. But just implementing encryption isn’t enough to meet security standards. For instance, it’s great to encrypt data before you write it to disk, but where are you going to store the encryption key? In the real world, developers join and leave teams all the time, and new applications will need access to your data—so how do you make a key available to those who really need it, without worrying about someone walking away with it?

We build tools that help our customers navigate this process, whether we’re helping them secure the encryption keys that they use in the algorithms or the certificates that they use in asymmetric cryptography.

What did AWS Cryptography launch at re:Invent?

We’re really excited about the launch of KMS custom key store. We’ve received very positive feedback about how KMS makes it easy for people to control access to encryption keys. KMS lets you set up IAM policies that give developers or applications the ability to use a key to encrypt or decrypt, and you can also write policies which specify that a particular application—like an Amazon EMR job running in a given account—is allowed to use the encryption key to decrypt data. This makes it really easy to encrypt data without worrying about writing massive decrypt jobs if you want to perform analytics later.
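As a sketch of the kind of policy Eric describes, a KMS key policy statement granting one application role permission to decrypt might look like the following. The account ID and role name are placeholders; `kms:Decrypt` and the policy document shape are standard IAM.

```python
import json

# Hypothetical key policy statement: lets a single application role use
# the key to decrypt, without granting any key-administration permissions.
decrypt_statement = {
    "Sid": "AllowAppDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/analytics-app"},
    "Action": ["kms:Decrypt"],
    "Resource": "*",
}

key_policy = {"Version": "2012-10-17", "Statement": [decrypt_statement]}
print(json.dumps(key_policy, indent=2))
```

Scoping decrypt permission to the role an analytics job runs under is what makes the "encrypt now, analyze later" pattern workable without handing the key to everyone.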

But, some customers have told us that for regulatory or compliance reasons, they need encryption keys stored in single-tenant hardware security modules (HSMs) that they manage. This is where the new KMS custom key store feature comes in. Custom key store combines the ease of using KMS with the ability to run your own CloudHSM cluster to store your keys. You can create a CloudHSM cluster and link it to KMS. After setting that up, any time you want to generate a new master key, you can choose to have it generated and stored in your CloudHSM cluster instead of using a KMS multi-tenant HSM. The keys are stored in an HSM under your control, and they never leave that HSM. You can reference the key by its Amazon Resource Name (ARN), which allows it to be shared with users and applications, but KMS will handle the integration with your CloudHSM cluster so that all crypto operations stay in your single-tenant HSM.
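With boto3, the flow described above can be sketched roughly as follows. The `create_custom_key_store` and `create_key` (with `Origin="AWS_CLOUDHSM"`) calls are the real KMS APIs for this feature, but the cluster ID, certificate, password, and names below are placeholders, and a real setup needs an active CloudHSM cluster already linked to KMS.

```python
def custom_key_store_requests(cluster_id, trust_anchor_pem, password):
    """Build the two KMS requests for backing keys with your own CloudHSM
    cluster: one to register the key store, one to create a key in it."""
    create_store = {
        "CustomKeyStoreName": "example-key-store",   # placeholder name
        "CloudHsmClusterId": cluster_id,
        "TrustAnchorCertificate": trust_anchor_pem,
        "KeyStorePassword": password,
    }
    create_key = {
        "Origin": "AWS_CLOUDHSM",    # key material lives in your own HSMs
        "CustomKeyStoreId": None,    # filled in from the first response
    }
    return create_store, create_key

if __name__ == "__main__":
    # Requires boto3, AWS credentials, and an active CloudHSM cluster.
    import boto3

    kms = boto3.client("kms")
    store_req, key_req = custom_key_store_requests(
        "cluster-1234567890a", "<trust anchor certificate PEM>", "<password>"
    )
    store = kms.create_custom_key_store(**store_req)
    key_req["CustomKeyStoreId"] = store["CustomKeyStoreId"]
    key = kms.create_key(**key_req)
    print(key["KeyMetadata"]["Arn"])
```

After this, the key's ARN works in IAM policies and service integrations just like any other KMS key, which is the point: same KMS interface, your HSMs underneath.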

You can read our blog post about custom key store for more details.

If both AWS KMS and AWS CloudHSM allow customers to store encryption keys, what’s the difference between the services?

Well, at a high level, sure, both services offer customers a high level of security when it comes to storing encryption keys in FIPS 140-2 validated hardware security modules. But there are some important differences, so we offer both services to allow customers to select the right tool for their workloads.

AWS KMS is a multi-tenant, managed service that allows you to use and manage encryption keys. It is integrated with over 50 AWS services, so you can use familiar APIs and IAM policies to manage your encryption keys, and you can allow them to be used in applications and by members of your organization. AWS CloudHSM provides a dedicated, FIPS 140-2 Level 3 HSM under your exclusive control, directly in your Amazon Virtual Private Cloud (VPC). You control the HSM, but it’s up to you to build the availability and durability you get out of the box with KMS. You also have to manage permissions for users and applications.

Other than helping customers store encryption keys, what else does the AWS Cryptography team do?

You can use CloudHSM for all sorts of cryptographic operations, not just key management. But we definitely do more than KMS and CloudHSM!

AWS Certificate Manager (ACM) is another offering from the cryptography team that’s popular with customers, who use it to generate and renew TLS certificates. Once you’ve got your certificate and you’ve told us where you want it deployed, we take care of renewing it and binding the new certificate for you. Earlier this year, we extended ACM to support private certificates as well, with the launch of ACM Private Certificate Authority.

We also helped the AWS IoT team launch support for cryptographically signing software updates sent to IoT devices. For IoT devices, and for software installation in general, it’s a best practice to only accept software updates from known publishers, and to validate that the new software has been correctly signed by the publisher before installing. We think all IoT devices should require software updates to be signed, so we’ve made this really easy for AWS IoT customers to implement.
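To illustrate the "validate before installing" step, here is a deliberately simplified sketch that checks an update against a publisher-provided SHA-256 digest. Real code signing, as in AWS IoT, uses asymmetric signatures so the verification key on the device can't be used to forge updates; the accept/reject flow, though, is the same.

```python
import hashlib
import hmac

def digest_of(update_bytes):
    """SHA-256 digest of the update payload, hex-encoded."""
    return hashlib.sha256(update_bytes).hexdigest()

def should_install(update_bytes, published_digest):
    """Accept the update only if its digest matches the published one.

    Stand-in for real signature verification: with code signing, the
    device would instead verify an asymmetric signature using the
    publisher's public key.
    """
    return hmac.compare_digest(digest_of(update_bytes), published_digest)

firmware = b"new firmware image"
good = digest_of(firmware)
print(should_install(firmware, good))           # matching digest: install
print(should_install(b"tampered image", good))  # mismatch: reject
```

Note the constant-time comparison via `hmac.compare_digest`, a small habit that avoids leaking match information through timing.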

What’s the most challenging part of your job?

We’ve built a suite of tools to help customers manage encryption, and we’re thrilled to see so many customers using services like AWS KMS to secure their data. But when I sit down with customers, especially large customers looking seriously at moving from on-premises systems to AWS, I often learn that they have years and years of investment into their on-prem security systems. Migrating to the cloud isn’t easy. It forces them to think differently about their security models. Helping customers think this through and map a strategy can be challenging, but it leads to innovation—for our customers, and for us. For instance, the idea for KMS custom key store actually came out of a conversation with a customer!

What’s your favorite part of your job?

Ironically, I think it’s the same thing! Working with customers on how they can securely migrate and manage their data in AWS can be challenging, but it’s really rewarding once the customer starts building momentum. One of my favorite moments of my AWS career was when Goldman Sachs went on stage at re:Invent last year and talked about how they use KMS to secure their data.

Five years from now, what changes do you think we’ll see within the field of encryption?

The cryptography community is in the early stages of developing a new cryptographic algorithm that will underpin encryption for data moving across the internet. The current standard is RSA, and it’s widely used. That little padlock you see in your web browser telling you that your connection is secure uses the RSA algorithm to set up an encrypted connection between the website and your browser. But, like all good things, RSA’s time may be coming to an end—the quantum computer could be its undoing. It’s not yet certain that quantum computers will ever achieve the scale and performance necessary for practical applications, but if one did, it could be used to attack the RSA algorithm. So cryptographers are preparing for this. Last year, the National Institute of Standards and Technology (NIST) put out a call for algorithms that might be able to replace RSA, and got 68 responses. NIST is working through those ideas now and will likely select a smaller number of algorithms for further study. AWS participated in two of those submissions and we’re keeping a close eye on NIST’s process. New cryptographic algorithms take years of testing and vetting before they make it into any standards, but we want to be ready, and we want to be on the forefront. Internally, we’re already considering what it would look like to make this change. We believe it’s our job to look around corners and prepare for changes like this, so our customers don’t have to.

What’s the most common misconception you encounter about encryption?

Encryption technology itself is decades-old and fairly well understood. That’s both the beauty and the curse of encryption standards: By the time anything becomes a standard, there are years and years of research and proof points into the stability and the security of the algorithm. But just because you have a really good encryption algorithm that takes an encryption key and a piece of data you want to secure and spits out an impenetrable cipher text, it doesn’t mean that you’re done. What did you do with the encryption key? Did you check it into source code? Did you write it on a piece of paper and leave it in the conference room? It’s these practices around the encryption that can be difficult to navigate.

Security-conscious customers know they need to encrypt sensitive data before writing it to disk. But, if you want your application to run smoothly, sometimes you need that data in clear text. Maybe you need the data in a cache. But who has access to the cache? And what logging might have accidentally leaked that information while the application was running and interacting with the cache?

Or take TLS certificates. Each TLS certificate has a public piece—the certificate—and a private piece—a private key. If an adversary got ahold of the private key, they could use it to impersonate your website or your API. So, how do you secure that key after you’ve procured the certificate?

It’s practices like these that some customers still struggle with. You have to think about all the places your sensitive data moves, and about real-world realities, like the fact that the data has to be unencrypted somewhere. That’s where AWS can help with the tooling.
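To make the key-handling pattern concrete, here is a minimal sketch of the envelope-encryption approach AWS KMS supports, where the long-lived key never leaves KMS. This is an illustrative outline, not official sample code; the key ARN is a hypothetical placeholder, and the actual boto3 calls are shown as comments:

```python
# Sketch of the envelope-encryption pattern with AWS KMS.
# The CMK ARN below is a hypothetical placeholder.
generate_data_key_params = {
    "KeyId": "arn:aws:kms:ap-southeast-2:111122223333:key/EXAMPLE-KEY-ID",
    "KeySpec": "AES_256",
}
# response = boto3.client("kms").generate_data_key(**generate_data_key_params)
# response["Plaintext"]      -> use it to encrypt data locally, then discard it
# response["CiphertextBlob"] -> store it alongside the encrypted data; call
#                               kms.decrypt() on it when the data is needed
#                               again, so the key never sits in source code,
#                               on disk, or on a piece of paper.
```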

Which re:Invent session videos would you recommend for someone interested in learning more about encryption?

Ken Beer’s encryption talk is a very popular session that I recommend to people year after year. If you want to learn more about KMS custom key store, you should also check out the video from the LaunchPad event, where we talked with Box about how they’re using custom key store.

People do a lot of networking during re:Invent. Any tips for maintaining those connections after everyone’s gone home?

Some of the people that I meet at re:Invent I get to see again every year. With these customers, I tend to stay in touch through email, and through Executive Briefing Center sessions. That contact is important since it lets us bounce ideas off each other and we use that feedback to refine AWS offerings. One conference I went to also created a Slack channel for attendees—and all the attendees are still on it. It’s quiet most of the time, but people have a way to re-engage with each other and ask a question, and it’ll be just like we’re all together again.

If you had to pick any other job, what would you want to do with your life?

If I could do anything, I’d be a backcountry ski guide. Now, I’m not a good enough skier to actually have this job! But I like being outside, in the mountains. If there was a way to make a living out of that, I would!


Eric Docktor

Eric joined Amazon in 1999 and has worked in a variety of Amazon’s businesses, including being part of the teams that launched Amazon Marketplace, Amazon Prime, the first Kindle, and Fire Phone. Eric has also worked in Supply Chain planning systems and in Ordering. Since 2015, Eric has led the AWS Cryptography team that builds tools to make it easy for AWS customers to encrypt their data in AWS. Prior to Amazon, Eric was a journalist and worked for newspapers including the Oakland Tribune and the Doylestown (PA) Intelligencer.

CJEU: a Member State may unilaterally revoke its notification of withdrawal from the EU

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/12/11/brexit-5/

On 10 December, the Court of Justice of the EU delivered its judgment in Case C-621/18, a reference for a preliminary ruling concerning Article 50 TEU.

Where, in accordance with Article 50 [TEU], a Member State has notified the European Council of its intention to withdraw from the European Union, does EU law permit that notification to be revoked unilaterally by the notifying Member State?


Article 50 TEU must be interpreted as meaning that, where a Member State has notified the European Council, in accordance with that article, of its intention to withdraw from the European Union, that article allows the Member State to revoke that notification unilaterally, in an unequivocal and unconditional manner, by a notice in writing addressed to the European Council, after the Member State concerned has taken the revocation decision in accordance with its constitutional requirements. The purpose of that written notice is to confirm the membership of the Member State concerned in the EU under terms that are unchanged as regards its status as a Member State.

That unilateral act must take place within a defined period: as long as a withdrawal agreement concluded between the Member State and the European Union has not entered into force or, if no such agreement has been concluded, as long as the two-year period laid down in Article 50(3) TEU, possibly extended in accordance with that paragraph, has not expired.

The UK Parliament was due to vote on the agreement with the EU today.

Following the CJEU's judgment, Theresa May announced that she was postponing the vote; no new date has been announced.

The NYT on the topic –



Learn about New AWS re:Invent Launches – December AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-new-aws-reinvent-launches-december-aws-online-tech-talks/


Join us in the next couple of weeks to learn about some of the new service and feature launches from re:Invent 2018. Learn about features and benefits, watch live demos, and ask questions! We’ll have AWS experts online to answer any questions you may have. Register today!

Note – All sessions are free and in Pacific Time.

Tech talks this month:


Compute

December 19, 2018 | 01:00 PM – 02:00 PM PT – Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances – Learn about the different steps required to build, train, and deploy a machine learning model for computer vision.

Containers

December 11, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS App Mesh – Learn about using AWS App Mesh to monitor and control microservices on AWS.

Data Lakes & Analytics

December 10, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS Lake Formation – Build a Secure Data Lake in Days – AWS Lake Formation (coming soon) will make it easy to set up a secure data lake in days. With AWS Lake Formation, you will be able to ingest, catalog, clean, transform, and secure your data, and make it available for analysis and machine learning.

December 12, 2018 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Managed Streaming for Kafka (MSK) – Learn about features and benefits, use cases and how to get started with Amazon MSK.

Databases

December 10, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon RDS on VMware – Learn how Amazon RDS on VMware can be used to automate on-premises database administration, enable hybrid cloud backups and read scaling for on-premises databases, and simplify database migration to AWS.

December 13, 2018 | 09:00 AM – 10:00 AM PT – Serverless Databases with Amazon Aurora and Amazon DynamoDB – Learn about the new serverless features and benefits in Amazon Aurora and DynamoDB, use cases and how to get started.

Enterprise & Hybrid

December 19, 2018 | 11:00 AM – 12:00 PM PT – How to Use “Minimum Viable Refactoring” to Achieve Post-Migration Operational Excellence – Learn how to improve the security and compliance of your applications in two weeks with “minimum viable refactoring”.

IoT

December 17, 2018 | 11:00 AM – 12:00 PM PT – Introduction to New AWS IoT Services – Dive deep into the AWS IoT service announcements from re:Invent 2018, including AWS IoT Things Graph, AWS IoT Events, and AWS IoT SiteWise.

Machine Learning

December 10, 2018 | 09:00 AM – 10:00 AM PT – Introducing Amazon SageMaker Ground Truth – Learn how to build highly accurate training datasets with machine learning and reduce data labeling costs by up to 70%.

December 11, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS DeepRacer – AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, a 3D racing simulator, and a global racing league.

December 12, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Forecast and Amazon Personalize – Learn about Amazon Forecast and Amazon Personalize – the key features and benefits of these managed ML services, common use cases, and how you can get started.

December 13, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Textract: Now in Preview – Learn how Amazon Textract, now in preview, enables companies to easily extract text and data from virtually any document.

Networking & Content Delivery

December 17, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS Transit Gateway – Learn how AWS Transit Gateway significantly simplifies management and reduces operational costs with a hub-and-spoke architecture.

Robotics

December 18, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS RoboMaker, a New Cloud Robotics Service – Learn about AWS RoboMaker, a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale.

Security, Identity & Compliance

December 17, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS Security Hub – Learn about AWS Security Hub, and how it gives you a comprehensive view of high-priority security alerts and your compliance status across AWS accounts.

Serverless

December 11, 2018 | 11:00 AM – 12:00 PM PT – What’s New with Serverless at AWS – In this tech talk, we’ll catch you up on our ever-growing collection of natively supported languages, console updates, and re:Invent launches.

December 13, 2018 | 11:00 AM – 12:00 PM PT – Building Real Time Applications using WebSocket APIs Supported by Amazon API Gateway – Learn how to build, deploy and manage APIs with API Gateway.

Storage

December 12, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Windows File Server – Learn about Amazon FSx for Windows File Server, a new fully managed native Windows file system that makes it easy to move Windows-based applications that require file storage to AWS.

December 14, 2018 | 01:00 PM – 02:00 PM PT – What’s New with AWS Storage – A Recap of re:Invent 2018 Announcements – Learn about the key AWS storage announcements that occurred prior to and at re:Invent 2018. With 15+ new service, feature, and device launches in object, file, block, and data transfer storage services, you will be able to start designing the foundation of your cloud IT environment for any application and easily migrate data to AWS.

December 18, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Lustre – Learn about Amazon FSx for Lustre, a fully managed file system for compute-intensive workloads. Process files from S3 or data stores, with throughput up to hundreds of GBps and sub-millisecond latencies.

December 18, 2018 | 01:00 PM – 02:00 PM PT – Introduction to New AWS Services for Data Transfer – Learn about new AWS data transfer services, and which might best fit your requirements for data migration or ongoing hybrid workloads.

NEW – Machine Learning algorithms and model packages now available in AWS Marketplace

Post Syndicated from Shaun Ray original https://aws.amazon.com/blogs/aws/new-machine-learning-algorithms-and-model-packages-now-available-in-aws-marketplace/

At AWS, our mission is to put machine learning in the hands of every developer. That’s why in 2017 we launched Amazon SageMaker. Since then it has become one of the fastest-growing services in AWS history, used by thousands of customers globally. Customers can use the optimized algorithms offered in Amazon SageMaker, run fully managed MXNet, TensorFlow, PyTorch, and Chainer frameworks, or bring their own algorithms and models. When building their own machine learning models, many customers spend significant time developing solutions to problems that have already been solved.


Introducing Machine Learning in AWS Marketplace

I am pleased to announce the new Machine Learning category of products offered by AWS Marketplace, which includes more than 150 algorithms and model packages, with more coming every day. AWS Marketplace offers a tailored selection for vertical industries like retail (35 products), media (19 products), manufacturing (17 products), HCLS (15 products), and more. Customers can find solutions to critical use cases like breast cancer prediction, lymphoma classifications, hospital readmissions, loan risk prediction, vehicle recognition, retail localizer, botnet attack detection, automotive telematics, motion detection, demand forecasting, and speech recognition.

Customers can search and browse a list of algorithms and model packages in AWS Marketplace. Once customers have subscribed to a machine learning solution, they can deploy it directly from the SageMaker console, a Jupyter Notebook, the SageMaker SDK, or the AWS CLI. Amazon SageMaker protects buyers’ data by employing security measures such as static scans, network isolation, and runtime monitoring.

The intellectual property of sellers on AWS Marketplace is protected by encrypting the algorithm and model package artifacts in transit and at rest, using secure (SSL) connections for communications, and ensuring role-based access for deployment of artifacts. AWS provides a secure way for sellers to monetize their work with a frictionless self-service process to publish their algorithms and model packages.


Machine Learning category in Action

Having tried to build my own models in the past, I sure am excited about this feature. After browsing through the available algorithms and model packages from AWS Marketplace, I’ve decided to try the Deep Vision vehicle recognition model, published by Deep Vision AI. This model will allow us to identify the make, model and type of car from a set of uploaded images. You could use this model for insurance claims, online car sales, and vehicle identification in your business.

I proceed to subscribe, accepting the default options for recommended instance type and region. I read and accept the subscription contract, and I am ready to get started with our model.

My subscription is listed in the Amazon SageMaker console and is ready to use. Deploying the model with Amazon SageMaker is the same as any other model package, I complete the steps in this guide to create and deploy our endpoint.

With our endpoint deployed, I can start asking the model questions. In this case I will be using a single image of a car; the model is trained to detect the make, model, and year information from any angle. First, I will start off with a Volvo XC70 and see what results I get:


{'result': [{'mmy': {'make': 'Volvo', 'score': 0.97, 'model': 'Xc70', 'year': '2016-2016'}, 'bbox': {'top': 146, 'left': 50, 'right': 1596, 'bottom': 813}, 'View': 'Front Left View'}]}

My model has detected the make, model, and year correctly for the supplied image. I was recently on holiday in the UK and stayed with a relative who had a McLaren 570S supercar. The thought that crossed my mind as the gull-wing doors opened for the first time, just before I sat in the car, was how much the insurance excess would cost if things went wrong! Quite apt for our use case today.


{'result': [{'mmy': {'make': 'Mclaren', 'score': 0.95, 'model': '570S', 'year': '2016-2017'}, 'bbox': {'top': 195, 'left': 126, 'right': 757, 'bottom': 494}, 'View': 'Front Right View'}]}

The score (0.95) measures how confident the model is that the result is right, on a scale from 0.0 to 1.0. The model is both confident and correct about the McLaren, with the make, model, and year all right. Impressive results for a relatively rare car on the road. I test a few more cars given to me by the launch team, who are excitedly looking over my shoulder, and now it’s time to wrap up.

Within ten minutes, I have been able to choose a model package, deploy an endpoint, and accurately detect the make, model, and year of vehicles, with no data scientists, no expensive GPUs for training, and no code to write. You can be sure I will be subscribing to a whole lot more of these models from AWS Marketplace throughout re:Invent week and trying to solve other use cases in less than 15 minutes!

You can access the Machine Learning category in AWS Marketplace through the Amazon SageMaker console, or directly through AWS Marketplace itself. Once you have successfully subscribed to an algorithm or model, it is accessible via the console, SDK, and AWS CLI. Algorithms and models from AWS Marketplace can be deployed just like any other model or algorithm, by selecting the AWS Marketplace option as your package source. Once you have chosen an algorithm or model, you can deploy it to Amazon SageMaker by following this guide.
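As a rough sketch of what the SDK path looks like, the parameters below outline creating a SageMaker model from a subscribed Marketplace model package; the model-package ARN, names, and role are hypothetical placeholders, and the boto3 calls are shown as comments:

```python
# Hedged sketch: deploying a subscribed Marketplace model package via the
# SageMaker API. All ARNs and names below are hypothetical.
model_params = {
    "ModelName": "deep-vision-vehicle-recognition",
    "PrimaryContainer": {
        "ModelPackageName": (
            "arn:aws:sagemaker:us-east-1:123456789012:"
            "model-package/deep-vision-example"
        ),
    },
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
    "EnableNetworkIsolation": True,  # Marketplace packages run network-isolated
}
# sm = boto3.client("sagemaker")
# sm.create_model(**model_params)
# ...then create an endpoint config and endpoint, and send image payloads
# with the sagemaker-runtime client's invoke_endpoint().
```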


Availability & Pricing

Customers pay a subscription fee for the use of an algorithm or model package and the AWS resource fee. AWS Marketplace provides a consolidated monthly bill for all purchased subscriptions.

At launch, AWS Marketplace for Machine Learning includes algorithms and models from Deep Vision AI Inc, Knowledgent, RocketML, Sensifai, Cloudwick Technologies, Persistent Systems, Modjoul, H2Oai Inc, Figure Eight [Crowdflower], Intel Corporation, AWS Gluon Model Zoos, and more with new sellers being added regularly. If you are interested in selling machine learning algorithms and model packages, please reach out to [email protected]



Use AWS CodeDeploy to Implement Blue/Green Deployments for AWS Fargate and Amazon ECS

Post Syndicated from Curtis Rissi original https://aws.amazon.com/blogs/devops/use-aws-codedeploy-to-implement-blue-green-deployments-for-aws-fargate-and-amazon-ecs/

We are pleased to announce support for blue/green deployments for services hosted using AWS Fargate and Amazon Elastic Container Service (Amazon ECS).

In AWS CodeDeploy, blue/green deployments help you minimize downtime during application updates. They allow you to launch a new version of your application alongside the old version and test the new version before you reroute traffic to it. You can also monitor the deployment process and, if there is an issue, quickly roll back.

With this new capability, you can create a new service in AWS Fargate or Amazon ECS  that uses CodeDeploy to manage the deployments, testing, and traffic cutover for you. When you make updates to your service, CodeDeploy triggers a deployment. This deployment, in coordination with Amazon ECS, deploys the new version of your service to the green target group, updates the listeners on your load balancer to allow you to test this new version, and performs the cutover if the health checks pass.
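For readers who prefer the API to the console walkthrough that follows, the service wiring can be sketched with the ECS API; the cluster name, service name, and target group ARN below are hypothetical placeholders, and the boto3 call is shown as a comment:

```python
# Sketch of creating an ECS service whose deployments are managed by
# CodeDeploy; names and ARNs are hypothetical placeholders.
create_service_params = {
    "cluster": "sample-website-cluster",
    "serviceName": "Sample-Website",
    "taskDefinition": "sample-website",
    "desiredCount": 1,
    "launchType": "FARGATE",
    # This is what enables blue/green deployments for the service.
    "deploymentController": {"type": "CODE_DEPLOY"},
    "loadBalancers": [{
        "targetGroupArn": "<sample-website-tg-1 ARN>",
        "containerName": "sample-website",
        "containerPort": 80,
    }],
}
# boto3.client("ecs").create_service(**create_service_params)
```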

In this post, I show you how to configure blue/green deployments for AWS Fargate and Amazon ECS using AWS CodeDeploy. For information about how to automate this end-to-end using a continuous delivery pipeline in AWS CodePipeline and Amazon ECR, read Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source.

Let’s dive in!


To follow along, you must have these resources in place:

  • A Docker image repository that contains an image you have built from your Dockerfile and application source. This walkthrough uses Amazon ECR. For more information, see Creating a Repository and Pushing an Image in the Amazon Elastic Container Registry User Guide.
  • An Amazon ECS cluster. You can use the default cluster created for you when you first use Amazon ECS or, on the Clusters page of the Amazon ECS console, you can choose a Networking only cluster. For more information, see Creating a Cluster in the Amazon Elastic Container Service User Guide.

Note: The image repository and cluster must be created in the same AWS Region.

Set up IAM service roles

Because you will be using AWS CodeDeploy to handle the deployments of your application to Amazon ECS, AWS CodeDeploy needs permissions to call Amazon ECS APIs, modify your load balancers, invoke Lambda functions, and describe CloudWatch alarms. Before you create an Amazon ECS service that uses the blue/green deployment type, you must create the AWS CodeDeploy IAM role (ecsCodeDeployRole). For instructions, see Amazon ECS CodeDeploy IAM Role in the Amazon ECS Developer Guide.

Create an Application Load Balancer

To allow AWS CodeDeploy and Amazon ECS to control the flow of traffic to multiple versions of your Amazon ECS service, you must create an Application Load Balancer.

Follow the steps in Creating an Application Load Balancer and make the following modifications:

  1. For step 6a in the Define Your Load Balancer section, name your load balancer sample-website-alb.
  2. For step 2 in the Configure Security Groups section:
    1. For Security group name, enter sample-website-sg.
    2. Add an additional rule to allow TCP port 8080 from anywhere (0.0.0.0/0).
  3. In the Configure Routing section:
    1. For Name, enter sample-website-tg-1.
    2. For Target type, choose to register your targets with an IP address.
  4. Skip the steps in the Create a Security Group Rule for Your Container Instances section.

Create an Amazon ECS task definition

Create an Amazon ECS task definition that references the Docker image hosted in your image repository. For the sake of this walkthrough, we use the Fargate launch type and the following task definition.

{
  "executionRoleArn": "arn:aws:iam::account_ID:role/ecsTaskExecutionRole",
  "containerDefinitions": [{
    "name": "sample-website",
    "image": "<YOUR ECR REPOSITORY URI>",
    "essential": true,
    "portMappings": [{
      "hostPort": 80,
      "protocol": "tcp",
      "containerPort": 80
    }]
  }],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "sample-website"
}

Note: Be sure to change the value for “image” to the Amazon ECR repository URI for the image you created and uploaded to Amazon ECR in Prerequisites.

Creating an Amazon ECS service with blue/green deployments

Now that you have completed the prerequisites and setup steps, you are ready to create an Amazon ECS service with blue/green deployment support from AWS CodeDeploy.

Create an Amazon ECS service

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. From the list of clusters, choose the Amazon ECS cluster you created to run your tasks.
  3. On the Services tab, choose Create.

This opens the Configure service wizard. From here you are able to configure everything required to deploy, run, and update your application using AWS Fargate and AWS CodeDeploy.

  1. Under Configure service:
    1. For the Launch type, choose FARGATE.
    2. For Task Definition, choose the sample-website task definition that you created earlier.
    3. Choose the cluster where you want to run your application's tasks.
    4. For Service Name, enter Sample-Website.
    5. For Number of tasks, specify the number of tasks that you want your service to run.
  2. Under Deployments:
    1. For Deployment type, choose Blue/green deployment (powered by AWS CodeDeploy). This creates a CodeDeploy application and deployment group using the default settings. You can see and edit these settings in the CodeDeploy console later.
    2. For the service role, choose the CodeDeploy service role you created earlier.
  3. Choose Next step.
  4. Under VPC and security groups:
    1. From Subnets, choose the subnets that you want to use for your service.
    2. For Security groups, choose Edit.
      1. For Assigned security groups, choose Select existing security group.
      2. Under Existing security groups, choose the sample-website-sg group that you created earlier.
      3. Choose Save.
  5. Under Load Balancing:
    1. Choose Application Load Balancer.
    2. For Load balancer name, choose sample-website-alb.
  6. Under Container to load balance:
    1. Choose Add to load balancer.
    2. For Production listener port, choose 80:HTTP from the first drop-down list.
    3. For Test listener port, in Enter a listener port, enter 8080.
  7. Under Additional configuration:
    1. For Target group 1 name, choose sample-website-tg-1.
    2. For Target group 2 name, enter sample-website-tg-2.
  8. Under Service discovery (optional), clear Enable service discovery integration, and then choose Next step.
  9. Do not configure Auto Scaling. Choose Next step.
  10. Review your service for accuracy, and then choose Create service.
  11. If everything is created successfully, choose View service.

You should now see your newly created service, with at least one task running.

When you choose the Events tab, you should see that Amazon ECS has deployed the tasks to your sample-website-tg-1 target group. When you refresh, you should see your service reach a steady state.

In the AWS CodeDeploy console, you will see that the Amazon ECS Configure service wizard has created a CodeDeploy application for you. Click into the application to see other details, including the deployment group that was created for you.

If you click the deployment group name, you can view other details about your deployment. Under Deployment type, you’ll see Blue/green. Under Deployment configuration, you’ll see CodeDeployDefault.ECSAllAtOnce. This indicates that after the health checks pass, CodeDeploy updates the listeners on the Application Load Balancer to send 100% of the traffic over to the green environment.

Under Load Balancing, you can see details about your target groups and your production and test listener ARNs.

Let’s apply an update to your service to see the CodeDeploy deployment in action.

Trigger a CodeDeploy blue/green deployment

Create a revised task definition

To test the deployment, create a revision to your task definition for your application.

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. From the navigation pane, choose Task Definitions.
  3. Choose your sample-website task definition, and then choose Create new revision.
  4. Under Tags:
    1. In Add key, enter Name.
    2. In Add value, enter Sample Website.
  5. Choose Create.

Update ECS service

You now need to update your Amazon ECS service to use the latest revision of your task definition.

  1. Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
  2. Choose the Amazon ECS cluster where you’ve deployed your Amazon ECS service.
  3. Select the check box next to your sample-website service.
  4. Choose Update to open the Update Service wizard.
    1. Under Configure service, for Task Definition, choose 2 (latest) from the Revision drop-down list.
  5. Choose Next step.
  6. Skip Configure deployments. Choose Next step.
  7. Skip Configure network. Choose Next step.
  8. Skip Set Auto Scaling (optional). Choose Next step.
  9. Review the changes, and then choose Update Service.
  10. Choose View Service.

You are now taken to the Deployments tab of your service where you can see details about your blue/green deployment.

You can click the deployment ID to go to the details view for the CodeDeploy deployment.

From there you can see the deployment's status:

You can also see the progress of the traffic shifting:

If you notice issues, you can stop and roll back the deployment. This shifts traffic back to the original (blue) task set and stops the deployment.

By default, CodeDeploy waits one hour after a successful deployment before it terminates the original task set. You can use the AWS CodeDeploy console to shorten this interval. After the task set is terminated, CodeDeploy marks the deployment complete.
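As a sketch of the API equivalent of that console change, the wait time can be shortened through CodeDeploy's deployment-group update call; the application and deployment group names below are hypothetical stand-ins for the ones the wizard generated for you:

```python
# Hedged sketch: shortening the post-deployment termination wait via the
# CodeDeploy API. The application and group names are hypothetical.
update_params = {
    "applicationName": "AppECS-sample-website",          # yours will differ
    "currentDeploymentGroupName": "DgpECS-sample-website",
    "blueGreenDeploymentConfiguration": {
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 5,  # default is 60 minutes
        },
    },
}
# boto3.client("codedeploy").update_deployment_group(**update_params)
```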


In this post, I showed you how to create an AWS Fargate-based Amazon ECS service with blue/green deployments powered by AWS CodeDeploy. I showed you how to configure the required and prerequisite components, such as an Application Load Balancer and associated target groups, all from the AWS Management Console. I hope that the information in this post helps you get started implementing this for your own applications!

AWS Storage Update: Amazon S3 & Amazon S3 Glacier Launch Announcements for Archival Workloads

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/amazon-s3-amazon-s3-glacier-launch-announcements-for-archival-workloads/

By Matt Sidley, Senior Product Manager for S3

Customers have built archival workloads for several years using a combination of S3 storage classes, including S3 Standard, S3 Standard-Infrequent Access, and S3 Glacier. For example, many media companies are using the S3 Glacier storage class to store their core media archives. Most of this data is rarely accessed, but when they need data back (for example, because of breaking news), they need it within minutes. These customers have found S3 Glacier to be a great fit because they can retrieve data in 1-5 minutes and save up to 82% on their storage costs. Other customers in the financial services industry use S3 Standard to store recently generated data, and lifecycle older data to S3 Glacier.

We launched Glacier in 2012 as a secure, durable, and low-cost service to archive data. Customers can use Glacier either as an S3 storage class or through its direct API. Using the S3 Glacier storage class is popular because many applications are built to use the S3 API and with a simple lifecycle policy, older data can be easily shifted to S3 Glacier. S3 Glacier continues to be the lowest-cost storage from any major cloud provider that durably stores data across three Availability Zones or more and allows customers to retrieve their data in minutes.

We’re constantly listening to customer feedback and looking for ways to make it easier to build applications in the cloud. Today we’re announcing six new features across Amazon S3 and S3 Glacier.

Amazon S3 Object Lock

S3 Object Lock is a new feature that prevents data from being deleted during a customer-defined retention period. You can use Object Lock with any S3 storage class, including S3 Glacier. There are many use cases for S3 Object Lock, including customers who want additional safeguards for data that must be retained, and for customers migrating from existing write-once-read-many (WORM) systems to AWS. You can also use S3 Lifecycle policies to transition data and S3 Object Lock will maintain WORM protection as your data is tiered.

S3 Object Lock can be configured in one of two modes: Governance or Compliance. When deployed in Governance mode, only AWS accounts with specific IAM permissions are able to remove the lock. If you require stronger immutability to comply with regulations, you can use Compliance mode. In Compliance mode, the lock cannot be removed by any user, including the root account. Take a look here:
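For a sense of how a lock is applied, here is a minimal sketch of a PUT with Compliance-mode Object Lock; the bucket, key, and retention date are hypothetical, and the boto3 call is shown as a comment:

```python
# Sketch: writing an object with Compliance-mode Object Lock.
# Bucket, key, and retention date are hypothetical placeholders.
from datetime import datetime, timezone

lock_put_params = {
    "Bucket": "my-records-bucket",
    "Key": "statements/2018-12.pdf",
    "Body": b"...",
    "ObjectLockMode": "COMPLIANCE",  # or "GOVERNANCE"
    # In Compliance mode, no user (including root) can delete the object
    # before this date.
    "ObjectLockRetainUntilDate": datetime(2025, 12, 31, tzinfo=timezone.utc),
}
# boto3.client("s3").put_object(**lock_put_params)
```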

S3 Object Lock is helpful in industries where long-term records retention is mandated by regulations or compliance rules. S3 Object Lock has been assessed for SEC Rule 17a-4(f), FINRA Rule 4511, and CFTC Regulation 1.31 by Cohasset Associates. Cohasset Associates is a management consulting firm specializing in records management and information governance. Read more and find a copy of the Cohasset Associates Assessment report in our documentation here.

New S3 Glacier Features

One of the things we hear from customers about using S3 Glacier is that they prefer to use the most common S3 APIs to operate directly on S3 Glacier objects. Today we’re announcing the availability of S3 PUT to Glacier, which enables you to use the standard S3 “PUT” API and select any storage class, including S3 Glacier, to store the data. Data can be stored directly in S3 Glacier, eliminating the need to upload to S3 Standard and immediately transition to S3 Glacier with a zero-day lifecycle policy. You can “PUT” to S3 Glacier like any other S3 storage class:
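The sketch below shows what such a PUT looks like: a standard S3 PUT with the storage class set to Glacier. The bucket and key are hypothetical, and the boto3 call is shown as a comment:

```python
# Sketch of "PUT to Glacier": an ordinary S3 PUT that lands directly in
# the S3 Glacier storage class. Bucket and key are hypothetical.
glacier_put_params = {
    "Bucket": "my-archive-bucket",
    "Key": "archives/2018/footage.mov",
    "Body": b"...",
    "StorageClass": "GLACIER",  # no zero-day lifecycle policy needed
}
# boto3.client("s3").put_object(**glacier_put_params)
```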

Many customers also want to keep a low-cost durable copy of their data in a second region for disaster recovery. We’re also announcing the launch of S3 Cross-Region Replication to S3 Glacier. You can now directly replicate data into the S3 Glacier storage class in a different AWS region.

Restoring Data from S3 Glacier

S3 Glacier provides three restore speeds for you to access your data: expedited (to retrieve data in 1-5 minutes), standard (3-5 hours), or bulk (5-12 hours). With S3 Restore Speed Upgrade, you can now issue a second restore request at a faster restore speed and get your data back sooner. This is useful if you originally requested standard or bulk speed, but later determine that you need a faster restore speed.
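As a sketch, upgrading an in-progress restore amounts to issuing a second restore request with a faster tier; the bucket and key below are hypothetical, and the boto3 call is shown as a comment:

```python
# Sketch: S3 Restore Speed Upgrade. If the first restore was requested at
# Standard or Bulk speed, a second request with the Expedited tier gets
# the data back sooner. Bucket and key are hypothetical placeholders.
restore_params = {
    "Bucket": "my-archive-bucket",
    "Key": "archives/2018/footage.mov",
    "RestoreRequest": {
        "Days": 7,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
}
# boto3.client("s3").restore_object(**restore_params)
```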

After a restore from S3 Glacier has been requested, you likely want to know when the restore completes. Now, with S3 Restore Notifications, you’ll receive a notification when the restoration has completed and the data is available. Many applications today are being built using AWS Lambda and event-driven actions, and you can now use the restore notification to automatically trigger the next step in your application as soon as S3 Glacier data is restored. For example, you can use notifications and Lambda functions to package and fulfill digital orders using archives restored from S3 Glacier.
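One way to wire this up, sketched under the assumption that the next step is a Lambda function (the function ARN and bucket name are hypothetical placeholders), is a bucket notification on the restore-completed event:

```python
# Sketch: trigger a Lambda function when an S3 Glacier restore completes.
# The Lambda ARN and bucket name are hypothetical placeholders.
notification_config = {
    "LambdaFunctionConfigurations": [{
        "LambdaFunctionArn": (
            "arn:aws:lambda:us-east-1:123456789012:"
            "function:process-restored-archive"
        ),
        "Events": ["s3:ObjectRestore:Completed"],
    }],
}
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="my-archive-bucket",
#     NotificationConfiguration=notification_config,
# )
```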

Here, I’ve set up notifications to fire when my restores complete so I can use Lambda to kick off a piece of analysis I need to run:

You might need to restore many objects from S3 Glacier; for example, to pull all of your log files within a given time range. Using S3 Batch Operations, now available in Preview, you can provide a manifest of the objects to restore and, with one request, initiate a restore on millions or even billions of objects as easily as you can on a few. S3 Batch Operations automatically manages retries, tracks progress, sends notifications, generates completion reports, and delivers events to AWS CloudTrail for all changes made and tasks executed.
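As a rough sketch, such a bulk-restore job could be submitted through the s3control `create_job` API. Every ARN, account ID, and ETag below is a placeholder, and since the feature is in Preview the exact shapes may differ:

```python
# Rough sketch of an S3 Batch Operations job that initiates a Glacier
# restore for every object listed in a CSV manifest. All identifiers
# are placeholders.
restore_job = {
    "AccountId": "123456789012",
    "Operation": {
        "S3InitiateRestoreObject": {
            "ExpirationInDays": 7,     # how long restored copies stay available
            "GlacierJobTier": "BULK",  # batch restores use STANDARD or BULK
        }
    },
    "Manifest": {
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-archive-bucket/manifests/logs.csv",
            "ETag": "example-manifest-etag",
        },
    },
    "Report": {  # completion report written back to S3
        "Bucket": "arn:aws:s3:::my-archive-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-reports",
        "ReportScope": "AllTasks",
    },
    "Priority": 10,
    "RoleArn": "arn:aws:iam::123456789012:role/batch-operations-role",
}

# Submitted with: boto3.client("s3control").create_job(**restore_job)
```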

To get started with the new features on Amazon S3, visit https://aws.amazon.com/s3/. We’re excited about these improvements and think they’ll make it even easier to build archival applications using Amazon S3 and S3 Glacier. And we’re not yet done. Stay tuned, as we have even more coming!

AWS Security Profiles: Sam Koppes, Senior Product Manager

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-sam-koppes-senior-product-manager/


In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.

How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for a year, and I’m a Senior Product Manager for the AWS CloudTrail team. I’m responsible for product roadmap decisions, customer outreach, and for planning our engineering work.

How do you explain your job to non-tech friends?

I work on a technical product, and for any tech product, responsibility is split in half: We have product managers and engineering managers. Product managers are responsible for what the product does. They’re responsible for figuring out how it behaves, what needs it addresses, and why customers would want it. Engineering managers are responsible for figuring out how to make it. When you look to build a product, there’s always the how and the what. I’m responsible for the what.

What are you currently working on that you’re excited about?

The scale challenges that we’re facing today are extremely interesting. We’re allowing customers to build things at an absolutely unheard-of scale, and bringing security into that mix is a challenge. But it’s also one of the great opportunities for AWS — we can bring a lot of value to customers by making security as turnkey as possible so that it just comes with the additional scale and additional service areas. I want people to sleep easy at night knowing that we’ve got their backs.

What’s your favorite part of your job?

When I deliver a product, I love sending out the What’s New announcement. During our launch calls, I love collecting social media feedback to measure the impact of our products. But really, the best part is the post-launch investigation that we do, which allows us to understand whether we hit the mark or not. My team usually does a really good job of making sure that we deliver the kinds of features that our customers need, so seeing the impact we’ve had is very gratifying. It’s a privilege to get to hear about the ways we’re changing people’s lives with the new features we’re building.

How did you choose your particular topic for re:Invent this year?

My session is called Augmenting Security Posture and Improving Operational Health with AWS CloudTrail. As a service, CloudTrail has been around a while. But I’ve found that customers face knowledge gaps in terms of what to do with it. There are a lot of people out there with an impressive depth of experience, but they sometimes lack an additional breadth that would be helpful. We also have a number of new customers who want more guidance. So I’m using the session to do a reboot: I’ll start from the beginning and go through what the service is and all the things it does for you, and then I’ll highlight some of the benefits of CloudTrail that might be a little less obvious. I built the session based on discussions with customers, who frequently tell me they start using the service — and only belatedly realize that they can do much more with it beyond, say, using it as a compliance tool. When you start using CloudTrail, you start amassing a huge pile of information that can be quite valuable. So I’ll spend some time showing customers how they can use this information to enhance their security posture, to increase their operational health, and to simplify their operational troubleshooting.

What are you hoping that your audience will take away from it?

I want people to walk away with two fistfuls of ideas for cool things they can do with CloudTrail. There are some new features we’re going to talk about, so even if you’re a power user, my hope is that you’ll return to work with three or four features you have a burning desire to try out.

What does cloud security mean to you, personally?

I’m very aware of the magnitude of the threats that exist today. It’s an evolving landscape. We have a lot of powerful tools and really smart people who are fighting this battle, but we have to think of it as an ongoing war. To me, the promise you should get from any provider is that of a safe haven — an eye in the storm, if you will — where you have relative calm in the midst of the chaos going on in the industry. Problems will constantly evolve. New penetration techniques will appear. But if we’re really delivering on our promise of security, our customers should feel good about the fact that they have a secure place that allows them to go about their business without spending much mental capacity worrying about it all. People should absolutely remain vigilant and focused, but they don’t have to spend all of their time and energy trying to stay abreast of what’s going on in the security landscape.

What’s the most common misperception you encounter about cloud security and compliance?

Many people think that security is a magic wand: You wave it, and it leads to a binary state of secure or not secure. And that’s just not true. A better way to think of security is as a chain that’s only as strong as its weakest link. You might find yourself in a situation where lots of people have worked very hard to build a very secure environment — but then one person comes in and builds on top of it without thinking about security, and the whole thing blows wide open. All it takes is one little hole somewhere. People need to understand that everyone has to participate in security.

In your opinion, what’s the biggest challenge that people face as they move to the cloud?

At AWS, we follow this thing called the Shared Responsibility Model: AWS is responsible for securing everything from the virtualization layer down, and customers are responsible for building secure applications. One of the biggest challenges that people face lies in understanding what it means to be secure while doing application development. Companies like AWS have invested hugely in understanding different attack vectors and learning how to lock down our systems when it comes to the foundational service we offer. But when customers build on a platform that is fundamentally very secure, we still need to make sure that we’re educating them about the kinds of things that they need to do, or not do, to ensure that they stay within this secure footprint.

Five years from now, what changes do you think we’ll see across the security and compliance landscape?

I think we’ll see a tremendous amount of growth in the application of machine learning and artificial intelligence. Historically, we’ve approached security in a very binary way: rules-based security systems in which things are either okay or not okay. And we’ve built complex systems that define “okay” based on a number of criteria. But we’ve always lacked the ability to apply a pseudo-human level of intelligence to threat detection and remediation, and today, we’re seeing that start to change. I think we’re in the early stages of a world where machine learning and artificial intelligence become a foundational, indispensable part of an effective security perimeter. Right now, we’re in a world where we can build strong defenses against known threats, and we can build effective hedging strategies to intercept things we consider risky. Beyond that, we have no real way of dynamically detecting and adapting to threat vectors as they evolve — but that’s what we’ll start to see as machine learning and artificial intelligence enter the picture.

If you had to pick any other job, what would you want to do with your life?

I have a heavy engineering background, so I could see myself becoming a very vocal and customer-obsessed engineering manager. For a more drastic career change, I’d write novels—an ability that I’ve been developing in my free time.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.


Sam Koppes

Sam is a Senior Product Manager at Amazon Web Services. He currently works on AWS CloudTrail and has worked on AWS CloudFormation, as well. He has extensive experience in both the product management and engineering disciplines, and is passionate about making complex technical offerings easy to understand for customers.