Tag Archives: Uncategorized

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks on the VPC console or via AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints


Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in a VPC named vpc-5ad42b3c. This application requires the database password for an RDS instance running in the same VPC. I have stored the database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secretsmanager.
     
    Figure 2: Options to select when creating an endpoint


  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination helps improve fault tolerance and make the endpoint accessible from a different AZ in case of an AZ failure. Here, I specify subnet IDs for availability zones us-west-2a, us-west-2b, and us-west-2c:
     
    Figure 3: Specifying subnet IDs


  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname https://secretsmanager.<region>.amazonaws.com to the private IP addresses associated with the VPC-endpoint-specific DNS hostname. As a result, you can access the Secrets Manager VPC endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.
     
    Figure 4: The "Enable Private DNS Name" checkbox


  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.
     
    Figure 5: Associate a security group and create the endpoint


  7. To view the details of the endpoint you created, select the link on the console.
     
    Figure 6: Viewing the endpoint details


  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c, where my RDS instance and application also reside. (An equivalent AWS CLI command for creating the endpoint is shown after this list.)
     
    Figure 7: The "Details" tab

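If you prefer the AWS CLI to the console, the same endpoint can be created with a single create-vpc-endpoint call. This is a minimal sketch of Step 1: the subnet IDs are placeholders, and the security group ID matches the example above.

# Create an interface VPC endpoint for Secrets Manager (subnet IDs are placeholders)
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-5ad42b3c \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-west-2.secretsmanager \
    --subnet-ids subnet-0a1b2c3d subnet-4e5f6a7b subnet-8c9d0e1f \
    --security-group-ids sg-07e4197d \
    --private-dns-enabled \
    --region us-west-2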

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance hosted within VPC named vpc-5ad42b3c and Secrets Manager will be within the AWS network. This connection will use the VPC endpoint and I can use it to retrieve my RDS database secret stored in Secrets Manager. I can retrieve the secret via the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Since my AWS CLI is configured for the us-west-2 Region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint because I enabled support for Private DNS hostnames while creating the VPC endpoint. The above command results in the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 
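Optionally, you can verify from an instance inside the VPC that Private DNS is resolving the standard hostname to private IP addresses from your subnets rather than public ones. This is just a sanity check and is not required:

# Run from an EC2 instance in vpc-5ad42b3c; the answers should be private IPs
nslookup secretsmanager.us-west-2.amazonaws.com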

Summary

I’ve shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. Secrets Manager VPC endpoints help you meet compliance and regulatory requirements to limit public internet connectivity within your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or via the AWS CLI. Once created, your applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Zelda casemod with levitating Triforce

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/zelda-casemod-with-levitating-triforce/

I know: you’ve seen a bajillion RetroPie implementations before, and a bajillion casemods to go with them. But this one’s so hopelessly, magnificently splendid that we felt we had to share. Magnetic levitation. It’s not just for trains and frogs.

This Zelda casemod, covered with engraved pine from the forests of Hyrule and shiny brass mouldings hammered by…dwarves or something, would be gorgeous as-is. The levitating, mirrored Triforce twizzling away on top is the icing on the cake; and a very lovely cake it is too. Here’s some video (in Spanish, with English subtitles) from Tuberviejuner in Spain, walking you through the build.

Raspberry Pi Zelda mod: MagicBerry WindWaker by MakoMod & Tuberviejuner.

This magical piece of work is by MakoMod, a case modder who splits his time between Barcelona and Texas. There’s a Pi inside running RetroPie, and a separate electromagnetic device levitating the Triforce up top. If you’re interested in incorporating something like this into one of your own builds, there are two ways to go: make your own from scratch, as DrewPaul Designs has done here, or buy a pre-built kit.

If you get in there quickly, you’ve a chance to own this one-off case: MakoMod is auctioning it on eBay. You’ve got until July 14 2018 to bid – good luck!

The post Zelda casemod with levitating Triforce appeared first on Raspberry Pi.

The EP votes on the Copyright Directive on 5 July 2018

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/29/%D0%B5%D0%BF-%D0%B3%D0%BB%D0%B0%D1%81%D1%83%D0%B2%D0%B0-%D0%B4%D0%B8%D1%80%D0%B5%D0%BA%D1%82%D0%B8%D0%B2%D0%B0%D1%82%D0%B0-%D0%B7%D0%B0-%D0%B0%D0%B2%D1%82%D0%BE%D1%80%D1%81%D0%BA%D0%BE-%D0%BF%D1%80/

 

*

How will the Bulgarian Members of the European Parliament vote?

#Summit #JAPSEN #WorldCup

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/25/nocomment/

Mehreen Khan, Brussels correspondent @FT.

Boyko Borissov – Bulgarian PM, whose country is still holding the EU presidency

 

Query for the latest Amazon Linux AMI IDs using AWS Systems Manager Parameter Store

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/query-for-the-latest-amazon-linux-ami-ids-using-aws-systems-manager-parameter-store/

Want a simpler way to query for the latest Amazon Linux AMI? AWS Systems Manager Parameter Store already allows for querying the latest Windows AMI. Now, support has been expanded to include the latest Amazon Linux AMI. Each Amazon Linux AMI now has its own Parameter Store namespace that is public and describable. Upon querying, an AMI namespace returns only its regional ImageID value.

The namespace is made up of two parts:

  • Parameter Store Prefix (tree): /aws/service/ami-amazon-linux-latest/
  • AMI name alias: (example) amzn-ami-hvm-x86_64-gp2

You can determine an Amazon Linux AMI alias by taking the full AMI name property of an Amazon Linux public AMI and removing the date-based version identifier. A list of these AMI name properties can be seen by running one of the following Amazon EC2 queries.

Using the AWS CLI:

aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn*" --query 'sort_by(Images, &CreationDate)[].Name'

Using PowerShell:

Get-EC2ImageByName -Name amzn* | Sort-Object CreationDate | Select-Object Name

For example, amzn2-ami-hvm-2017.12.0.20171208-x86_64-gp2 without the date-based version becomes amzn2-ami-hvm-x86_64-gp2.

When you add the public Parameter Store prefix namespace to the AMI alias, you have the Parameter Store name of “/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2”.

Each unique AMI namespace always remains the same. You no longer need to pattern match on name filters, and you no longer need to sort through CreationDate AMI properties. As Amazon Linux AMIs are patched and new versions are released to the public, AWS updates the Parameter Store value with the latest ImageID value for each AMI namespace in all supported Regions.

Before this release, finding the latest regional ImageID for an Amazon Linux AMI involved a three-step process: first, calling an API to search the list of available public AMIs; second, filtering the results by a partial name string; third, sorting the matches by the CreationDate property and selecting the newest ImageID. Querying AWS Systems Manager greatly simplifies this process.
Querying for the latest AMI using public parameters

After you have your target namespace, your query can be created to retrieve the latest Amazon Linux AMI ImageID value. Each Region has an exact replica namespace containing its Region-specific ImageID value.

Using the AWS CLI:

aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --region us-east-1 

Using PowerShell:

Get-SSMParameter -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 -region us-east-1

Always launch new instances with the latest ImageID

After you have created the query, you can embed the command as a command substitution into your new instance launches.

Using the AWS CLI:

 aws ec2 run-instances --image-id $(aws ssm get-parameters --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 --query 'Parameters[0].[Value]' --output text) --count 1 --instance-type m4.large

Using PowerShell:

New-EC2Instance -ImageId ((Get-SSMParameterValue -Name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2).Parameters[0].Value) -InstanceType m4.large -AssociatePublicIp $true

This new instance launch always results in the latest publicly available Amazon Linux AMI for amzn2-ami-hvm-x86_64-gp2. Similar embedding can be used in a number of automation processes, documents, and programming languages.
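As another illustration (a sketch, not part of the original walkthrough), you can capture the resolved ImageID in a shell variable once and reuse it across several commands or scripts:

# Resolve the latest Amazon Linux 2 AMI ID once and reuse it
AMI_ID=$(aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
    --query 'Parameters[0].Value' --output text)
echo "Latest Amazon Linux 2 AMI: ${AMI_ID}"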

Display a complete list of all available Public Parameter Amazon Linux AMIs

You can also query for the complete list of AWS Amazon Linux Parameter Store namespaces available.

Using the AWS CLI:

aws ssm get-parameters-by-path --path "/aws/service/ami-amazon-linux-latest" --region us-east-1

Using PowerShell:

Get-SSMParametersByPath -Path "/aws/service/ami-amazon-linux-latest" -region us-east-1

Here’s an example list retrieved from a get-parameters-by-path call:

 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn2-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-gp2
 /aws/service/ami-amazon-linux-latest/amzn-ami-hvm-x86_64-s3
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-ebs
 /aws/service/ami-amazon-linux-latest/amzn-ami-minimal-hvm-x86_64-s3
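A name-only list like the one above can be produced directly by adding a --query filter to the same AWS CLI call (a small convenience, not required):

# Return only the parameter names instead of the full parameter objects
aws ssm get-parameters-by-path \
    --path "/aws/service/ami-amazon-linux-latest" \
    --query 'Parameters[].Name' --region us-east-1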

Launching latest Amazon Linux AMI in an AWS CloudFormation stack

AWS CloudFormation also supports Parameter Store. For more information, see Integrating AWS CloudFormation with AWS Systems Manager Parameter Store. Here’s an example of how you would reference the latest Amazon Linux AMI in a CloudFormation template.

# Use public Systems Manager Parameter
Parameters:
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'

Resources:
  Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: !Ref LatestAmiId
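To try this out, you could save the snippet above as a complete template and create a stack from the AWS CLI. The file and stack names below are arbitrary placeholders:

# Create a stack from the template above (file and stack names are illustrative)
aws cloudformation deploy \
    --template-file latest-ami-instance.yaml \
    --stack-name latest-ami-demo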

 

About the Author

Arend Castelein is a software development engineer on the Amazon Linux team. Most of his work relates to making Amazon Linux updates available sooner while also reducing the workload for his teammates. Outside of work, he enjoys rock climbing and playing indie games.

Create Dynamic Contact Forms for S3 Static Websites Using AWS Lambda, Amazon API Gateway, and Amazon SES

Post Syndicated from Saurabh Shrivastava original https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/

In the era of the cloud, hosting a static website is cheaper, faster, and simpler than traditional on-premises hosting, where you always have to maintain a running server. Still, hardly any static website is truly static. I can promise you will find at least a “contact us” page on most static websites, and handling that form is, by its very nature, dynamic. Every business needs a “contact us” page to help customers connect with the business owner for services, inquiries, or feedback. In its simplest form, a “contact us” page collects a user’s basic information (name, email address, phone number, and a short message) and shares it with the business via email when submitted.

AWS provides a simplified way to host your static website in an Amazon S3 bucket using your own custom domain. You can either choose to register a new domain with AWS Route 53 or transfer your domain to Route 53 for hosting in five simple steps.

Obviously, you don’t want to spin-up a server to handle a simple “contact us” form, but it’s a critical element of your website. Luckily, in this post-cloud world, AWS delivers a serverless option. You can use AWS Lambda with Amazon API Gateway to create a serverless backend and use Amazon Simple Email Service to send an e-mail to the business owner whenever a customer submits any inquiry or feedback. Let’s learn how to do it.

Architecture Flow

Here, we assume a common website-to-cloud migration scenario: you registered your domain name with a third-party domain registrar, migrated your website to Amazon S3, and then switched to Amazon Route 53 as your DNS provider. You contacted your DNS provider and updated the name server (NS) record to use the name servers in the delegation that you set in Amazon Route 53 (find step-by-step details in the Amazon S3 developer guide). Your email server still belongs to your DNS provider, because you bought it as part of the package when you registered your domain on a multi-year contract.

Following is the architecture flow with detailed guidance.

Figure: Architecture flow — “contact us” form hosted on Amazon S3, with Amazon API Gateway, AWS Lambda, and Amazon SES

In the above diagram, the customer is submitting their inquiry through a “contact us” form, which is hosted in an Amazon S3 bucket as a static website. Information will flow in three simple steps:

  • Your “contact us” form collects all user information and posts it to an Amazon API Gateway RESTful endpoint.
  • Amazon API Gateway passes the collected user information to an AWS Lambda function.
  • The AWS Lambda function auto-generates an e-mail and forwards it to your mail server using Amazon SES.

Your “Contact Us” Form

Let’s start with a simple “contact us” form html code snippet:

<form id="contact-form" method="post">
      <h4>Name:</h4>
      <input type="text" style="height:35px;" id="name-input" placeholder="Enter name here…" class="form-control" style="width:100%;" /><br/>
      <h4>Phone:</h4>
      <input type="phone" style="height:35px;" id="phone-input" placeholder="Enter phone number" class="form-control" style="width:100%;"/><br/>
      <h4>Email:</h4>
      <input type="email" style="height:35px;" id="email-input" placeholder="Enter email here…" class="form-control" style="width:100%;"/><br/>
      <h4>How can we help you?</h4>
      <textarea id="description-input" rows="3" placeholder="Enter your message…" class="form-control" style="width:100%;"></textarea><br/>
      <div class="g-recaptcha" data-sitekey="6Lc7cVMUAAAAAM1yxf64wrmO8gvi8A1oQ_ead1ys" class="form-control" style="width:100%;"></div>
      <button type="button" onClick="submitToAPI(event)" class="btn btn-lg" style="margin-top:20px;">Submit</button>
</form>

The above form will ask the user to enter their name, phone, e-mail, and provide a free-form text box to write inquiry/feedback details and includes a submit button.

Later in the post, I’ll share the JQuery code for field validation and the variables to collect values.

Defining AWS Lambda Function

The next step is to create a lambda function, which will get all user information through the API Gateway. The lambda function will look something like this:

The AWS Lambda function mailfwd is triggered by the API Gateway POST method (which we will create in the next section) and sends the information to Amazon SES for mail forwarding.

If you are new to AWS Lambda then follow these simple steps to Create a Simple Lambda Function and get yourself familiar.

  1. Go to the console, choose Create Function, and select the hello-world blueprint for the nodejs6.10 runtime, as shown in the screenshot below. Then choose the Configure button at the bottom.
  2. To create your AWS Lambda function, select the “edit code inline” setting, which will have an editor box with the code in it, and replace that code (making sure to change [email protected] to your real e-mail address and update your actual domain in the response variable):

    var AWS = require('aws-sdk');
    var ses = new AWS.SES();
     
    var RECEIVER = '[email protected]';
    var SENDER = '[email protected]';
    
    var response = {
     "isBase64Encoded": false,
     "headers": { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': 'example.com'},
     "statusCode": 200,
     "body": "{\"result\": \"Success.\"}"
     };
    
    exports.handler = function (event, context) {
        console.log('Received event:', event);
        sendEmail(event, function (err, data) {
            context.done(err, null);
        });
    };
     
    function sendEmail (event, done) {
        var params = {
            Destination: {
                ToAddresses: [
                    RECEIVER
                ]
            },
            Message: {
                Body: {
                    Text: {
                        Data: 'name: ' + event.name + '\nphone: ' + event.phone + '\nemail: ' + event.email + '\ndesc: ' + event.desc,
                        Charset: 'UTF-8'
                    }
                },
                Subject: {
                    Data: 'Website Referral Form: ' + event.name,
                    Charset: 'UTF-8'
                }
            },
            Source: SENDER
        };
        ses.sendEmail(params, done);
    }
    

Now you can execute and test your AWS lambda function as directed in the AWS developer guide. Make sure to update the Lambda execution role and follow the steps provided in the Lambda developer guide to create a basic execution role.

Add the following code to the execution role's policy to allow your AWS Lambda function to send email through Amazon SES:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ses:SendEmail",
            "Resource": "*"
        }
    ]
}
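One way to apply this policy is to attach it as an inline policy on the Lambda execution role. The role and policy names below are placeholders, and the JSON above is assumed to be saved locally as ses-policy.json:

# Attach the SES policy as an inline policy on the Lambda execution role
aws iam put-role-policy \
    --role-name my-lambda-execution-role \
    --policy-name allow-ses-sendemail \
    --policy-document file://ses-policy.json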

Creating the API Gateway

Now, let’s create the API Gateway API that provides a RESTful endpoint for the AWS Lambda function we created in the previous section. We will use this API endpoint to post the user-submitted information from the “Contact Us” form, which then gets passed to the AWS Lambda function.

If you are new to API Gateway, follow these simple steps to create and test an API from the example in the API Gateway Console to familiarize yourself.

  1. Login to AWS console and select API Gateway.  Click on create new API and fill your API name.
  2. Now go to your API name — listed in the left-hand navigation — click on the “actions” drop down, and select “create resource.”
  3. Select your newly-created resource and choose “create method.” Choose POST. Here, you will choose our AWS Lambda function: select “mailfwd” from the drop down.
  4. After saving the form above, click the “actions” menu and choose “deploy API.” You will see the final resources and methods, something like the screenshot below:
  5. Now get your Restful API URL from the “stages” tab as shown in the screenshot below. We will use this URL on our “contact us” HTML page to send the request with all user information.
  6. Make sure to Enable CORS in the API Gateway or you’ll get an error:”Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://abc1234.execute-api.us-east-1.amazonaws.com/02/mailme. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).”

Setup Amazon SES

Amazon SES requires that you verify your identities (the domains or email addresses that you send email from) to confirm that you own them, and to prevent unauthorized use. Follow the steps outlined in the Amazon SES user guide to verify your sender e-mail.
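If you prefer the CLI, the same verification email can be triggered with a single command; the address shown is a placeholder for your real sender address:

# Ask SES to send a verification email to your sender address (placeholder shown)
aws ses verify-email-identity --email-address sender@example.com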

Connecting it all Together

Since we created our AWS Lambda function and provided API endpoint access using API Gateway, it’s time to connect all the pieces together and test them. Put the following jQuery code in your ContactUs HTML page’s <head> section. Replace the URL variable with your API Gateway URL. You can change the field validation as needed.

function submitToAPI(e) {
       e.preventDefault();
       var URL = "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact";

            var Namere = /[A-Za-z]{1}[A-Za-z]/;
            if (!Namere.test($("#name-input").val())) {
                         alert ("Name can not less than 2 char");
                return;
            }
            var mobilere = /[0-9]{10}/;
            if (!mobilere.test($("#phone-input").val())) {
                alert ("Please enter valid mobile number");
                return;
            }
            if ($("#email-input").val()=="") {
                alert ("Please enter your email id");
                return;
            }

            var reeamil = /^([\w-\.]+@([\w-]+\.)+[\w-]{2,6})?$/;
            if (!reeamil.test($("#email-input").val())) {
                alert ("Please enter valid email address");
                return;
            }

       var name = $("#name-input").val();
       var phone = $("#phone-input").val();
       var email = $("#email-input").val();
       var desc = $("#description-input").val();
       var data = {
          name : name,
          phone : phone,
          email : email,
          desc : desc
        };

       $.ajax({
         type: "POST",
         url : "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact",
         dataType: "json",
         crossDomain: "true",
         contentType: "application/json; charset=utf-8",
         data: JSON.stringify(data),

         
         success: function () {
           // clear form and show a success message
           alert("Successfull");
           document.getElementById("contact-form").reset();
       location.reload();
         },
         error: function () {
           // show an error message
           alert("UnSuccessfull");
         }});
     }

Now you should be able to submit your contact form and start receiving email notifications when a form is completed and submitted.
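You can also exercise the backend directly with curl to confirm the API, Lambda function, and SES wiring independently of the HTML page. The endpoint URL below is the same placeholder used throughout this post:

# Post a test payload to the API Gateway endpoint (URL is a placeholder)
curl -X POST "https://abc1234.execute-api.us-east-1.amazonaws.com/01/contact" \
    -H "Content-Type: application/json" \
    -d '{"name":"Test User","phone":"1234567890","email":"test@example.com","desc":"Hello"}'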

Conclusion

Here we are addressing a common use case — a simple contact form — which is important for any small business hosting their website on Amazon S3. This post should help make your static website more dynamic without spinning up any server.

Have you had challenges adding a “contact us” form to your small business website?

About the author

Saurabh Shrivastava is a Solutions Architect working with global systems integrators. He works with our partners and customers to provide them with architectural guidance for building scalable architecture in hybrid and AWS environments. In his spare time, he enjoys spending time with his family, hiking, and biking.

Deploying a 4K, GPU-backed Linux desktop instance on AWS

Post Syndicated from Roshni Pary original https://aws.amazon.com/blogs/compute/deploying-4k-gpu-backed-linux-desktop-instance-on-aws/

Contributed by Amr Ragab, HPC Application Consultant, AWS Professional Services

AWS currently supports many managed desktop delivery mechanisms. Amazon WorkSpaces and Amazon AppStream 2.0 both deliver managed Windows-based machine images with GPU-backed instances. However, many desktop services and applications are better served through a Linux-backed instance. Given the variety of Linux distributions as well as desktop managers, it can be valuable to have a generic solution for provisioning a Linux desktop on Amazon EC2.

A GPU-backed instance reduces the computational requirements from the client (local) machine, eliminating the need for a local discrete GPU to run graphical workloads. The framebuffer objects generated by the GPU are compressed when sent over the network, and decompressed by the local CPU resources. This allows clients to take advantage of the server GPU and display the high-resolution content on local thin clients, mobile devices, and low-powered desktops and laptops. Such GPU-backed Linux instances have been used for VFX rendering, computational drug discovery, and computational fluid dynamics (CFD) simulation use cases. An upcoming followup post details enabling this technology on the Windows platform.

Configuration

In this configuration, a client machine connects to the provisioned desktop (server) in the cloud. The server captures the framebuffer, which is sent in real time to the client machine over the network. Thus latency is an important metric to consider when provisioning this solution. I recommend choosing the nearest AWS Region (under 100 ms). Some customers may even prefer to install AWS Direct Connect.

Region                      Latency
US East (Virginia)          18 ms
US East (Ohio)              31 ms
US West (California)        77 ms
US West (Oregon)            97 ms
Canada (Central)            29 ms
Europe (Ireland)            89 ms
Europe (London)             90 ms
Europe (Frankfurt)          108 ms
Asia Pacific (Mumbai)       197 ms
Asia Pacific (Seoul)        198 ms
Asia Pacific (Singapore)    288 ms
Asia Pacific (Sydney)       218 ms
Asia Pacific (Tokyo)        188 ms
South America (São Paulo)   138 ms
China (Beijing)             267 ms
AWS GovCloud (US)           97 ms

Source: http://www.cloudping.info/ from the Amazon offices located in Herndon, VA

Bandwidth requirements depend on the quality of the desktop experience as well as the desired resolution. Provision the backend Linux desktop instance with a 4096×2160 (4K) resolution. Depending on the specific G3 instance type selected, multi-GPU managed desktops give additional performance benefits. Each instance can also host multiple users, either in collaborative sessions, or with up to four independent 4K monitors. The GPU framebuffer memory used per session generally limits the number of sessions per managed desktop.

A smooth reliable experience depends on a low latency and high-bandwidth connection to the EC2 instance hosting the desktop. One of the benefits of using a multithreaded framebuffer reader is that only the defined block of the rendered desktop that is changing needs to be sent over the network. Full-screen redraws may be necessary only in rare cases. The minimum requirements for this 4K (3840×2160) configuration are as follows:

  • Bandwidth: 50 Mbps
  • Latency: < 30 ms
  • Jitter: < 5 ms

Deployment

Use RHEL/CentOS for the deployment. Except for DCV, this stack is compatible with Debian/Ubuntu distributions. Use the CentOS 7.5 Server AMI and install the NVIDIA/Xorg/KDE stack  to create a fully functioning desktop environment with a max resolution of 16384 x 8640 (that is, 4x4K) at 60 Hz.

This stack contains the following software:

  • CentOS 7.5 Base
  • Xorg 1.19
  • NVIDIA Grid Driver 6.1 (for the G3 instance family)
  • KDE Desktop environment
  • VirtualGL
  • TurboVNC
  • NICE DCV

To make the most efficient use of the NVIDIA Tesla M60 framebuffer memory, disable the compositing features of the desktop manager. Other non-compositing desktop managers (such as XFCE, MATE, etc.) are supported as well. This ensures that the GPU is reserved for specific OpenGL API tasks for the application, and that the performance is not impacted by the desktop environment decorations.

Start up a CentOS 7.5 server desktop based on the latest AMI available in the closest Region:

Distributor ID:    CentOS
Description:       CentOS Linux release 7.5.1804 (Core)
Release:           7.5.1804
Codename:          Core
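As a sketch (not part of the original walkthrough), the instance itself can be launched from the AWS CLI once you have identified the CentOS 7.5 AMI ID for your Region. Every ID below is a placeholder:

# Launch a G3 instance for the desktop (all IDs are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type g3.4xlarge \
    --key-name my-key-pair \
    --subnet-id subnet-0a1b2c3d \
    --security-group-ids sg-0123456789abcdef0 \
    --count 1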

Now install the Xorg stack with the KDE desktop manager:

sudo yum install epel-release
sudo yum update
sudo yum groupinstall "Development Tools"
sudo yum install xorg-* kernel-devel dkms python-pip lsb
sudo pip install awscli
sudo yum groupinstall "KDE Plasma Workspaces"
sudo systemctl disable firewalld #AWS security groups will provide our firewall rules
# if there is a kernel update
sudo reboot

Download the NVIDIA Grid driver (6.1). For more information, see Installing the NVIDIA Driver on Linux Instances.

aws s3 cp --recursive s3://ec2-linux-nvidia-drivers/ .
chmod +x latest/NVIDIA-Linux-x86_64-390.57-grid.run
sudo ./latest/NVIDIA-Linux-x86_64-390.57-grid.run
# register the driver with dkms, ignore errors associated with 32bit compatible libraries

Place the following xorg.conf file at /etc/X11/xorg.conf:

Section "ServerLayout"
        Identifier     "X.org Configured"
        Screen      0  "Screen0" 0 0
        InputDevice    "Mouse0" "CorePointer"
        InputDevice    "Keyboard0" "CoreKeyboard"
EndSection
 
Section "Files"
        ModulePath   "/usr/lib64/xorg/modules"
        FontPath     "catalogue:/etc/X11/fontpath.d"
        FontPath     "built-ins"
EndSection
 
Section "Module"
        Load  "glx"
EndSection
 
Section "InputDevice"
        Identifier  "Keyboard0"
        Driver      "kbd"
EndSection
 
Section "InputDevice"
        Identifier  "Mouse0"
        Driver      "mouse"
        Option      "Protocol" "auto"
        Option      "Device" "/dev/input/mice"
        Option      "ZAxisMapping" "4 5 6 7"
EndSection
 
Section "Monitor"
        Identifier   "Monitor0"
        VendorName   "Monitor Vendor"
        ModelName    "Monitor Model"
        Modeline "3840x2160_60.00"  712.34  3840 4152 4576 5312  2160 2161 2164 2235  -HSync +Vsync
EndSection

 
Section "Device"
        Identifier  "Card0"
        Driver      "nvidia"
        Option "ConnectToAcpid" "0"
        BusID       "PCI:0:30:0"
EndSection
 
Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
        SubSection "Display"
                Viewport   0 0
                Depth     24
        Modes    "4096x2160" "3840x2160"
        EndSubSection
EndSection

Reboot again and check that the nvidia-gridd service is running. You may notice errors. They can be safely ignored after the nvidia-gridd service successfully acquires a license.

[root@ip-10-0-125-164 ~]# systemctl status nvidia-gridd.service
● nvidia-gridd.service - NVIDIA Grid Daemon
   Loaded: loaded (/usr/lib/systemd/system/nvidia-gridd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-05-29 18:37:35 UTC; 39s ago
  Process: 863 ExecStart=/usr/bin/nvidia-gridd (code=exited, status=0/SUCCESS)
 Main PID: 881 (nvidia-gridd)
   CGroup: /system.slice/nvidia-gridd.service
           └─881 /usr/bin/nvidia-gridd
May 29 18:37:35 ip-10-0-125-164.ec2.internal systemd[1]: Starting NVIDIA Grid Daemon...
May 29 18:37:35 ip-10-0-125-164.ec2.internal nvidia-gridd[881]: Started (881)
May 29 18:37:35 ip-10-0-125-164.ec2.internal systemd[1]: Started NVIDIA Grid Daemon.
May 29 18:37:36 ip-10-0-125-164.ec2.internal nvidia-gridd[881]: Configuration parameter ( ServerAddress  FeatureType) not set
May 29 18:37:40 ip-10-0-125-164.ec2.internal nvidia-gridd[881]: Calling load_byte_array(tra)
May 29 18:37:41 ip-10-0-125-164.ec2.internal nvidia-gridd[881]: License acquired successfully (2)

You can confirm that 4K resolution is enabled by running the following command:

DISPLAY=:0 xrandr -q
Screen 0: minimum 8 x 8, current 4096 x 2160, maximum 16384 x 8640
DVI-D-0 connected primary 4096x2160+0+0 (normal left inverted right x axis y axis) 641mm x 400mm
2560x1600 59.86+
4096x2160 60.03*
3840x2160 60.00 

Finally, check that your underlying GL renderer is using the NVIDIA driver by querying glxinfo:

DISPLAY=:0 glxinfo

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Quadro FX Tesla M60/PCIe/SSE2
OpenGL core profile version string: 4.5.0 NVIDIA 390.57
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.6.0 NVIDIA 390.57
OpenGL shading language version string: 4.60 NVIDIA

At the time of publication, OpenGL 4.5 is enabled. Your applications can take advantage of that API for rendering.

To interact with the instance, install server-side desktop remote display software that can specifically take advantage of the 3D hardware acceleration. For example, AWS provides the NICE DCV platform.

DCV is an accelerated remote desktop framework that provides in-web browser desktop connections. DCV is supported in both Windows and Linux (RHEL/CentOS). In the Windows platform, OpenGL and DirectX are fully supported. DCV entitlement is free when provisioning on AWS. NICE DCV is also provided as a component to the AWS EnginFrame and myHPC solutions.

To install DCV, download the NICE DCV 2017 EL7 archive and Administrative Guide. After you extract the archive in the instance, you see a list of nice-* RPMS. You don’t have to worry about licensing, as the installer captures that the instance is running in AWS.

sudo yum localinstall nice-*
sudo systemctl enable dcvserver
sudo systemctl start dcvserver

When the DCV server starts, you have the option to create a single console session or multiple virtual sessions. You must assign a password for the centos user by running the following command:

sudo passwd centos

Start the console session:

sudo dcv create-session --type=console --owner centos session1
sudo dcv list-sessions

The AWS security groups must allow TCP 8443 traffic to the instance; an example ingress rule is shown below. With that in place, you see the DCV login portal and can interact with the instance. Other popular remote display frameworks, such as VirtualGL with TurboVNC from the stack installed earlier, can also be used.
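As an illustration, an ingress rule like the following opens the DCV port to a trusted network range; the security group ID and CIDR are placeholders:

# Allow inbound DCV traffic on TCP 8443 from a trusted network
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8443 \
    --cidr 203.0.113.0/24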

You can also find plug and play images for managed desktops in the AWS Marketplace.

Optimization

Implement the changes outlined in the Optimizing GPU Settings (P2, P3, and G3 Instances) topic. You can turn off the autoboost feature and set the maximum graphics and memory clocks manually.

sudo nvidia-smi --auto-boost-default=0
sudo nvidia-smi -ac 2505,1177

Application testing

For testing, look at PyMOL (PyMOL Molecular Graphics System, Version 2.0, Schrödinger, LLC). PyMOL is a standard commercial drug discovery application used for processing and visualizing biochemical structures. I used the open-source fork.

With the NVIDIA GRID licensing enabled earlier, PyMOL can take advantage of the Quadro features supplied by the Tesla M60. After it’s installed and loaded, you can confirm the functionality of the entire G3 instance software stack installed earlier:

PyMOL(TM) Molecular Graphics System, Version 2.1.0.
 Copyright (c) Schrodinger, LLC.
 All Rights Reserved.
 
    Created by Warren L. DeLano, Ph.D. 
 
    PyMOL is user-supported open-source software.  Although some versions
    are freely available, PyMOL is not in the public domain.
 
    If PyMOL is helpful in your work or study, then please volunteer 
    support for our ongoing efforts to create open and affordable scientific
    software by purchasing a PyMOL Maintenance and/or Support subscription.

    More information can be found at "http://www.pymol.org".
 
    Enter "help" for a list of commands.
    Enter "help <command-name>" for information on a specific command.

 Hit ESC anytime to toggle between text and graphics.

 Detected OpenGL version 2.0 or greater. Shaders available.
 Detected GLSL version 4.60.
 OpenGL graphics engine:
  GL_VENDOR:   NVIDIA Corporation
  GL_RENDERER: Quadro FX Tesla M60/PCIe/SSE2
  GL_VERSION:  4.6.0 NVIDIA 390.57
 Adapting to Quadro hardware.
 Detected 16 CPU cores.  Enabled multithreaded rendering.

In the PyMOL window, run “fetch 5ta3”, which is a 39k amino acid protein, under the 4K desktop environment. Rotating and translating the protein should be smooth and respond quickly to pointer events.

The PyMOL Gallery contains other representative examples that take advantage of various visualization and processing workflows. Also, you can find many demos (choose Wizard, Demo).

Under the Sculpting demo, you can show the pointer latency between the client and server.

Finally, look at ray tracing. From the PyMOL wiki, take a chemical structure and render each frame with ray tracing to produce a video. On the Tesla M60 with Quadro features enabled, the total render time was approximately 1 minute.

Scalability

As I mentioned previously, the framebuffer redirection protocols have a feature set to create multiple virtual sessions per node. A virtual session is not necessarily tied to a single user either. In other words, the number of independent virtual sessions is limited by the total amount of GPU frame buffer memory used in all sessions per GPU. Thus, it’s possible to scale horizontally by increasing the number of G3 instances, or vertically by using larger instance types in the G3 family.

Summary

The G3 instance type is purpose-built to provide a managed, high-end professional graphics infrastructure for visual computing needs. With NICE DCV, you can take advantage of NVIDIA Quadro software features for a range of applications including drug discovery and VFX rendering. Connected with the AWS high-performance network backbone, the instance can become an integral part of your graphics workload pipeline. Now, you can power up and deliver your applications to teams working anywhere in the world.

One in three people in Bulgaria never reads

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/12/reading/

One in three adults in Bulgaria (36%) says they never read, while one in four (24%) reads almost daily. This is shown by data from a nationally representative public opinion survey conducted by the Open Society Institute – Sofia in April 2018.

The share of people who say they set aside time to read books almost every day is about three times larger in the capital (33%) and in the regional cities (30%) than in the villages (12%).

Among those who believe that the media provide accurate information and are independent, over 40% say they do not read at all. Among those who do not trust the media, 30% say they do not read – readers are considerably more sceptical.

Cultural heritage in EU policies

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/12/cult_her/

In the Official Journal of the EU:

Council conclusions on the need to bring cultural heritage to the fore across policies in the EU

In its conclusions of 14 December 2017, the European Council called on the Member States, the Council and the Commission, in line with their respective competences, to continue work with a view to seizing the opportunity offered by the European Year of Cultural Heritage 2018 to raise awareness of the social and economic importance of culture and cultural heritage.

The Member States are invited to:

  • Recognise the role of cultural heritage in the relevant national sectoral programmes co-financed by the EU, with a view to preserving the value and significance of cultural heritage for local people and future generations, and to fully developing the potential of cultural heritage as a resource for economic development, social cohesion and cultural identity;
  • Continue their cooperation, taking into account the priorities and activities of the new Work Plan for Culture for the period after 2019 that relate to mainstreaming cultural heritage into other EU policies.

The Member States and the Commission are invited to:

  • Bring cultural heritage to the fore in the relevant EU policies and raise awareness among stakeholders of the mutual benefits of mainstreaming it into other sectoral policies, as well as of the funding opportunities available for cultural heritage, including by providing stakeholders with timely information on the EU funds dedicated to cultural heritage.
  • Without prejudging the negotiations on the next multiannual financial framework, explore options for placing, where appropriate, a clearer emphasis on the preservation and promotion of the common European cultural heritage in the relevant EU programmes. This can be done by taking cultural heritage into account when designing and implementing the programmes, but also by including cultural heritage as a strategic objective among their main priorities.
  • Stimulate innovation, sustainability and social inclusion through specific heritage-oriented projects with a European dimension and social added value, also taking the gender equality dimension into account.
  • Encourage cooperation between European researchers, professionals and education and training institutions with a view to promoting high-quality skills, training and knowledge transfer in traditional and new professions related to cultural heritage.
  • Further strengthen the principle of participatory governance of cultural heritage by analysing current practices in the governance of culture, identifying actions, where appropriate, to make cultural governance more open, participatory, effective and coherent, and by sharing best practices.
  • Identify good national and international practices and facilitate their exchange by encouraging the mobility of heritage professionals in Europe.
  • Deepen and broaden the dialogue with civil society organisations, European citizens and, in particular, European youth, with the aim of reaching a deeper understanding of the contribution of European cultural heritage to reinforcing a common European identity in all its diversity of cultures, languages and heritages.
  • Continue to support cultural heritage as an important element of the EU's strategic approach to international cultural relations, as well as in promoting intercultural dialogue.
  • Implement, together with international organisations, common and coordinated transnational actions aimed at safeguarding and preserving cultural heritage in a sustainable way and in line with the 2030 Agenda for Sustainable Development.
  • Promote support for the digitisation of cultural heritage as a tool for open access to culture and knowledge, thereby stimulating innovation, creativity and participatory governance of cultural heritage.
  • Make the results, reports and evaluations of EU-funded cultural heritage initiatives and projects available online in a more systematic and easily searchable way.
  • Use the opportunity provided by the European Year of Cultural Heritage 2018 to build a common and comprehensive strategic vision for cultural heritage and to ensure its legacy by developing concrete actions. Where possible, synergies should be sought with the Council of Europe's European Cultural Heritage Strategy for the 21st Century.
  • Support the development of evidence-based policies by continuing to work with Eurostat and the national statistical institutes on collecting reliable data on the social and economic contribution of cultural heritage, and contribute to similar efforts at international level by organisations such as UNESCO and the Council of Europe.

How to create custom alerts with Amazon Macie

Post Syndicated from Jeremy Haynes original https://aws.amazon.com/blogs/security/how-to-create-custom-alerts-with-amazon-macie/

Amazon Macie is a security service that makes it easy for you to discover, classify, and protect sensitive data in Amazon Simple Storage Service (Amazon S3). Macie collects AWS CloudTrail events and Amazon S3 metadata such as permissions and content classification. In this post, I’ll show you how to use Amazon Macie to create custom alerts for those data sets to notify you of events and objects of interest. I’ll go through the various types of data you can find in Macie, and talk about how to identify fields that are relevant to a given security use case. What you’ll learn is a method of investigation and alerting that you’ll be able to apply to your own situation.

If you’re totally new to Macie, make sure you first head over to the AWS Blog launch post for instructions on how to get started.

Understand the types of data in Macie

There are three sources of data found in Macie: CloudTrail events, S3 Bucket metadata, and S3 Object metadata. Each data source is stored separately in its own index. Therefore, when writing queries it’s important to keep fields from different sources in separate queries.

Within each data source there are two types of fields: Extracted and Generated. Extracted fields come directly from pre-existing AWS fields associated with that data source. For example, Extracted fields from the CloudTrail data source come directly from the events recorded in the CloudTrail logs that correspond to the actions taken by an IAM user, role, or an AWS service. See these CloudTrail log file examples for more details.

Generated fields, on the other hand, are created by Macie. These fields provide additional security value and context for creating more powerful queries. For example, the Internet Service Provider (ISP) field is populated by taking the IP addresses from a CloudTrail event and matching them against a global service provider list. This enables searching for the ISP associated with an API call.

In the sections below, I’ll explore each of the three data sources in more detail. There are reference guides in the Macie documentation dedicated to each source that provide an extensive list of fields and descriptions. I’ll use these lists to discover what fields are appropriate for investigating a particular security use case.

Choose a security use case to instrument alerts

For the purposes of this post, I’ll choose one particular security use case to focus on. The process of discovering relevant data fields collected by Macie and turning those into custom alerts will be the same for any use case you wish to investigate. With that in mind, let’s choose the theme of sensitive or critical data stored in S3 to explore because it can affect many AWS customers.

When beginning to design alerts, the first step is to think about all of the resources, attributes, actions, and identities related to the subject. In this case, I’m looking at sensitive or critical data stored in S3, so the following are some potentially useful fields of data to consider:

  • S3 bucket and object resources
  • S3 configuration and security attributes
  • Read, write, and delete actions on S3
  • IAM users, roles, and access policies associated with the S3 resources

I’ll use this list to guide my search for relevant fields in the Macie reference documentation. First, I’ll dive into the CloudTrail data.

Explore CloudTrail data

The best way to build an understanding of what activities are happening in your AWS environment is by using the CloudTrail data source because it contains your AWS API calls and the identities that made them. If you’re unfamiliar with CloudTrail, head over to the AWS CloudTrail documentation for more information.

Ok, I have a critical S3 bucket and I want to be notified each time an object write is attempted to it outside of the typical access pattern used in my corporate environment. In this case, write actions to the bucket are normally controlled by my organization’s own identity system via federated access. So that means I want to write a query that searches for object write attempts by a non-federated principal. If you’re unfamiliar with what federation is, that’s ok, you can still follow along. If you’re curious about federation, see the IAM Identity Providers and Federation documentation.

I begin by opening the Macie CloudTrail Reference documentation and looking at the section titled “CloudTrail Data Fields Extracted by Macie.” Skimming over the list, I see that the objectsWritten.key description matches my criteria for investigating actions on an S3 resource: “A list of S3 objects’ ARNs that were part of a PutObject, CopyObject, or CompleteMultipartUpload API calls.”

Next, I take a look at the CloudTrail field names that are extracted from the userIdentity object in an event. The userIdentity.type field looks promising, but I’m unsure what values this field can accept. Using the CloudTrail userIdentity Element documentation as a reference, I look up the type field and see that FederatedUser is listed as one of the values.

Great! I have all of the information necessary to write my first custom query. I’ll search for all “user sessions” (a 5-minute aggregation of CloudTrail events corresponding to a user) that have both an attempted write to my S3 bucket and any API call with a userIdentity type which is not FederatedUser. It’s important to note that I’m searching groups of events and not individual events because we’re looking for events related to a specific API call rather than all events related to all API calls. To match values which do not equal FederatedUser, I use a regular expression and place FederatedUser inside parenthesis with a tilde character (~) at the beginning. Here’s what I came up with using the corresponding Macie field names and example search queries as a formatting guide:
 
objectsWritten.key:/arn:aws:s3:::my_sensitive_bucket.*/ AND userIdentityType.key:/(~FederatedUser)/
 
Now, it’s time to test this query on the Research page and turn it into a custom alert.

Create custom alerts

The first step to create a custom alert is to run the proposed query from the Research page. I do this so I can verify that the results match my expectations and to get an idea of how often the alert will occur. Once I’m happy with the query, setting up a custom alert is only a few clicks away.

  1. In the Macie navigation pane, select Research.
  2. Select the data source matching the query from the options in the drop-down menu. In this case, I select CloudTrail data.
     
    Figure 1: Select the data source on the Research page


  3. To run the search, I copy my custom query, paste it in the query box, and then select Enter.
     
    objectsWritten.key:/arn:aws:s3:::my_sensitive_bucket.*/ AND userIdentityType.key:/(~FederatedUser)/
     
  4. At this point, I verify that the results match my expectations and make any necessary modifications to the query. To learn more about how the features of the Research page can help with verifying and modifying queries, see Using the Macie Research Tab.
  5. Once I’m confident in the results, I select the Save query as alert icon.
     
    Figure 2: Select the "Save query as alert" icon


  6. Now, I fill in the remaining fields: Alert title, Description, Min number of matches, and Severity. For more information about each of these fields, see Macie Adding New Custom Basic Alerts.
     
    Figure 3: Fill in the remaining "Alert title," "Description," "Min number of matches," and "Severity" fields


  7. In the Whitelisted users field, I add any users which appeared in my Research page results that I would like to exclude from the alert. For more details on this feature, see Whitelisting Users or Buckets for Basic Alerts.
  8. Finally, I save the alert.

That’s it! I now have a custom basic alert that will alert me every time there’s an attempt to write to my bucket in the same user session as an API call with a userIdentityType other than FederatedUser. Now, I’ll use the S3 object and S3 bucket data sources to look for some more useful fields.

Use S3 object data

The S3 object data source contains fields extracted from S3 API metadata, as well as fields populated by Macie relating to content classification. To expand my search and alerts for sensitive or critical data stored in S3, I’ll look at the S3 Object Data Reference documentation and look for fields that are promising candidates.

I see that S3 Server Side Encryption Settings metadata could be useful because I know that all of the objects in a certain bucket should be encrypted using AES256, and I’d like to be notified every time an object is uploaded that doesn’t match that attribute.

To create the query, I combine the server_encryption field and the bucket field to match on all S3 objects within the specified bucket. Note the forward slashes and “.*” that make this a regular expression search. This allows me to match all buckets that share my project name, even when the full bucket name is different.
 
filesystem_metadata.bucket:/.*my_sensitive_project.*/ AND NOT filesystem_metadata.server_encryption:"AES256"
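
Macie computes these fields for me, but the same encryption metadata is also visible directly through the S3 API, which is handy for a quick spot check while tuning the query. Here’s a minimal boto3 sketch; the bucket and key names are hypothetical placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and key; substitute an object from your own project.
    response = s3.head_object(Bucket="my-sensitive-project-reports", Key="2018/06/report.csv")

    # S3 returns ServerSideEncryption only when the object is encrypted at rest.
    encryption = response.get("ServerSideEncryption", "none")
    if encryption != "AES256":
        print("Object is not AES256-encrypted (found: {})".format(encryption))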
 
Next, I have a different bucket that I know should never have any objects which contain personally identifiable information (PII). This includes such information as names, email addresses, and credit card numbers. For the full list of what Macie considers PII, see Classifying Data with Amazon Macie. I’d like to set up an alert that notifies me every time an object containing any type of PII is added to this bucket. Since this is a data field that’s provided by Macie, I look under the Generated Fields heading and find the field pii_impact. I’m looking for all levels of PII impact, so my query will search for any value which isn’t equal to none.

As before, I’ll combine this with the bucket field to include all S3 objects matching the bucket name.
 
filesystem_metadata.bucket:"my_logs_bucket" AND NOT pii_impact:"none"
 

Use S3 bucket data

The S3 bucket data source contains information extracted from S3 bucket APIs, as well as fields that Macie generates by processing bucket metadata and access control lists (ACLs). Following the same method as before, I head over to the S3 Bucket Data Reference documentation and look for fields that will help me create useful alerts.

There are plenty of fields that could be useful here, depending on how much I know in advance about my bucket and which threats I want to protect against. To narrow my search, I decide to add some protection against accidental or unauthorized data destruction.

In the S3 Versioning section, I see that the Multi-Factor Authentication Delete settings are one of the available fields. Since I have this bucket locked down to only allow MFA delete actions, I can create an alert to notify me every time this delete action is disabled.
 
bucket:"my_critical_bucket" AND versioning.MFADelete:"disabled"
 
Another potential path to data destruction is a bucket lifecycle expiration rule that automatically removes data after a set number of days. I’d like to know if someone changes this to a low number so I can intervene and avoid losing recent data.
 
bucket:"my_critical_bucket" AND lifecycle_configuration.Rules.Expiration.Days:<3
 
Now that I have gathered another set of potential alert queries, I can walk through the same steps I used for CloudTrail data to turn them into custom alerts. Once saved, I’ll begin to receive notifications in the Macie console whenever a match is found. I can view the alerts I’ve received by selecting Alerts from the Macie navigation pane.

Summary

I described the various types and sources of data available in Macie. After demonstrating how to take a security use case and discover relevant fields, I stepped through the process of creating queries and turning them into custom alerts. My goal has been to show you how to build alerts that are tailored to your specific environment and solve your own individual needs.

If you have comments about this post, submit them in the Comments section below. If you have questions about how to use Macie, or you’d like to request new fields and data sources, start a new thread on the Macie forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Streaming Events from Amazon Pinpoint to Redshift

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/streaming-events-from-amazon-pinpoint-to-redshift/

Note: This post was originally written by Ryan Idrigo-Lam, one of the founding members of the Amazon Pinpoint team.


You can use Amazon Pinpoint to segment, target, and engage with your customers directly from the console. The Pinpoint console also includes a variety of dashboards that you can use to keep track of how your customers use your applications, and measure how likely your customers are to engage with the messages you send them.

Some Pinpoint customers, however, have use cases that require a bit more than what these dashboards have to offer. For example, some customers want to join their Pinpoint data to external data sets, or to collect historical data beyond the six-month window that Pinpoint retains. To help customers meet these needs, and many more, Amazon Pinpoint includes a feature called Event Streams.

This article provides information about using Event Streams to export your data from Amazon Pinpoint and into a high-performance Amazon Redshift database. Once your data is in Redshift, you can run queries against it, join it with other data sets, use it as a data source for analytics and data visualization tools, and much more.

Step 1: Create a Redshift Cluster

The first step in this process involves creating a new Redshift cluster to store your data. You can complete this step in a few clicks by using the Amazon Redshift console. For more information, see Managing Clusters Using the Console in the Amazon Redshift Cluster Management Guide.

When you create the new cluster, make a note of the values you specify for the Cluster Identifier, Database Name, Master User Name, and Master User Password. You’ll use all of these values when you set up Amazon Kinesis Firehose in the next section.
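
If you prefer to script this step instead of using the console, you can also create the cluster with the AWS SDK. Here’s a minimal boto3 sketch; the identifier, database name, node type, and credentials are placeholder values that you should replace with your own:

    import boto3

    redshift = boto3.client("redshift")

    # Placeholder values; note them down for the Firehose setup in the next section.
    redshift.create_cluster(
        ClusterIdentifier="pinpoint-events",
        DBName="pinpointdb",
        NodeType="dc2.large",
        ClusterType="single-node",
        MasterUsername="awsuser",
        MasterUserPassword="ChangeMe1",  # use a strong password of your own
    )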

Step 2: Create a Firehose Delivery Stream with a Redshift Destination

After you create your Redshift cluster, you can create the Amazon Kinesis Data Firehose delivery stream that will deliver your Pinpoint data to the Redshift cluster.

To create the Kinesis Data Firehose delivery stream

  1. Open the Amazon Kinesis Data Firehose console at https://console.aws.amazon.com/firehose/home.
  2. Choose Create delivery stream.
  3. For Delivery stream name, type a name.
  4. Under Choose source, for Source, choose Direct PUT or other sources. Choose Next.
  5. On the Process records page, do the following:
    1. Under Transform source records with AWS Lambda, choose Enabled if you want to use a Lambda function to transform the data before Firehose loads it into Redshift. Otherwise, choose Disabled.
    2. Under Convert record format, choose Disabled, and then choose Next.
  6. On the Choose destination page, do the following:
    1. For Destination, choose Amazon Redshift.
    2. Under Amazon Redshift destination, specify the Cluster name, User name, Password, and Database for the Redshift database you created earlier. Also specify a name for the Table.
    3. Under Intermediate S3 destination, choose an S3 bucket to store data in. Alternatively, choose Create new to create a new bucket. Choose Next.
  7. On the Configure settings page, do the following:
    1. Under IAM role, choose an IAM role that Firehose can use to access your S3 bucket and KMS key. Alternatively, you can have the Firehose console create a new role. Choose Next.
    2. On the Review page, confirm the settings you specified on the previous pages. If the settings are correct, choose Create delivery stream.
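
The same delivery stream can be created programmatically. The boto3 sketch below only outlines the console steps above; every ARN, name, and credential in it is a placeholder, and the column list and COPY options are filled in later, in Step 5:

    import boto3

    firehose = boto3.client("firehose")

    # All names, ARNs, and credentials below are placeholders.
    firehose.create_delivery_stream(
        DeliveryStreamName="pinpoint-to-redshift",
        DeliveryStreamType="DirectPut",
        RedshiftDestinationConfiguration={
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
            "ClusterJDBCURL": "jdbc:redshift://pinpoint-events.example.us-west-2.redshift.amazonaws.com:5439/pinpointdb",
            "Username": "awsuser",
            "Password": "ChangeMe1",
            "CopyCommand": {
                "DataTableName": "awsma.event",  # the table created in Step 4
            },
            "S3Configuration": {
                "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",
                "BucketARN": "arn:aws:s3:::my-intermediate-bucket",
                "CompressionFormat": "UNCOMPRESSED",
            },
        },
    )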

Step 3: Create a JSONPaths file

The next step in this process is to create a JSONPaths file and upload it to an Amazon S3 bucket. You use the JSONPaths file to tell Amazon Redshift how to interpret the unstructured JSON that Amazon Pinpoint provides.

To create a JSONPaths file and upload it to Amazon S3

  1. In a text editor, create a new file.
  2. Paste the following code into the text file:
    {
      "jsonpaths": [
        "$['event_type']",
        "$['event_timestamp']",
        "$['arrival_timestamp']",
        "$['event_version']",
        "$['application']['app_id']",
        "$['application']['package_name']",
        "$['application']['version_name']",
        "$['application']['version_code']",
        "$['application']['title']",
        "$['application']['cognito_identity_pool_id']",
        "$['application']['sdk']['name']",
        "$['application']['sdk']['version']",
        "$['client']['client_id']",
        "$['client']['cognito_id']",
        "$['device']['model']",
        "$['device']['make']",
        "$['device']['platform']['name']",
        "$['device']['platform']['version']",
        "$['device']['locale']['code']",
        "$['device']['locale']['language']",
        "$['device']['locale']['country']",
        "$['session']['session_id']",
        "$['session']['start_timestamp']",
        "$['session']['stop_timestamp']",
        "$['monetization']['transaction']['transaction_id']",
        "$['monetization']['transaction']['store']",
        "$['monetization']['transaction']['item_id']",
        "$['monetization']['transaction']['quantity']",
        "$['monetization']['transaction']['price']['reported_price']",
        "$['monetization']['transaction']['price']['amount']",
        "$['monetization']['transaction']['price']['currency']['code']",
        "$['monetization']['transaction']['price']['currency']['symbol']",
        "$['attributes']['campaign_id']",
        "$['attributes']['campaign_activity_id']",
        "$['attributes']['my_custom_attribute']",
        "$['metrics']['my_custom_metric']"
      ]
    }

  3. Modify the preceding code example to include the fields that you want to import into Redshift.
    Note: You can specify custom attributes or metrics by replacing my_custom_attribute or my_custom_metric in the example above with your custom attributes or metrics, respectively.
  4. When you finish modifying the code example, remove all whitespace, including spaces and line breaks, from the file. Save the file as json-paths.json.
  5. Open the Amazon S3 console at https://s3.console.aws.amazon.com/s3/home.
  6. Choose the S3 bucket you created when you set up the Firehose stream. Upload json-paths.json into the bucket.
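
If you script your setup, the upload in steps 5 and 6 is a single boto3 call; the bucket name below is a placeholder for the intermediate S3 bucket you chose when you created the Firehose stream:

    import boto3

    # Upload the JSONPaths file to the same bucket that Firehose uses for staging.
    boto3.client("s3").upload_file("json-paths.json", "my-intermediate-bucket", "json-paths.json")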

Step 4: Configure the table in Redshift

At this point, it’s time to finish setting up your Redshift database. In this section, you’ll create a table in the Redshift cluster you created earlier. The columns in this table mirror the values you specified in the JSONPaths file in the previous section.

  1. Connect to your Redshift cluster by using a database tool such as SQL Workbench/J. For more information about connecting to a cluster, see Connect to the Cluster in the Amazon Redshift Getting Started Guide.
  2. Create a new table that contains a column for each field in the JSONPaths file you created in the preceding section. You can use the following example as a template.
    CREATE schema AWSMA;
    CREATE TABLE AWSMA.event(
      event_type VARCHAR(256) NOT NULL ENCODE LZO,
      event_timestamp TIMESTAMP NOT NULL ENCODE LZO,
      arrival_timestamp TIMESTAMP NULL ENCODE LZO,
      event_version CHAR(12) NULL ENCODE LZO,
      application_app_id VARCHAR(64) NOT NULL ENCODE LZO,
      application_package_name VARCHAR(256) NULL ENCODE LZO,
      application_version_name VARCHAR(256) NULL ENCODE LZO,
      application_version_code VARCHAR(256) NULL ENCODE LZO,
      application_title VARCHAR(256) NULL ENCODE LZO,
      application_cognito_identity_pool_id VARCHAR(64) NULL ENCODE LZO,
      application_sdk_name VARCHAR(256) NULL ENCODE LZO,
      application_sdk_version VARCHAR(256) NULL ENCODE LZO,
      client_id VARCHAR(64) NULL DISTKEY ENCODE LZO,
      client_cognito_id VARCHAR(64) NULL ENCODE LZO,
      device_model VARCHAR(256) NULL ENCODE LZO,
      device_make VARCHAR(256) NULL ENCODE LZO,
      device_platform_name VARCHAR(256) NULL ENCODE LZO,
      device_platform_version VARCHAR(256) NULL ENCODE LZO,
      device_locale_code VARCHAR(256) NULL ENCODE LZO,
      device_locale_language VARCHAR(64) NULL ENCODE LZO,
      device_locale_country VARCHAR(64) NULL ENCODE LZO,
      session_id VARCHAR(64) NULL ENCODE LZO,
      session_start_timestamp TIMESTAMP NULL ENCODE LZO,
      session_stop_timestamp TIMESTAMP NULL ENCODE LZO,
      monetization_transaction_id VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_store VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_item_id VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_quantity FLOAT8 NULL,
      monetization_transaction_price_reported VARCHAR(64) NULL ENCODE LZO,
      monetization_transaction_price_amount FLOAT8 NULL,
      monetization_transaction_price_currency_code VARCHAR(16) NULL ENCODE LZO,
      monetization_transaction_price_currency_symbol VARCHAR(32) NULL ENCODE LZO,
      -- Custom Attributes
      a_campaign_id VARCHAR(4000),
      a_campaign_activity_id VARCHAR(4000),
      a_my_custom_attribute VARCHAR(4000),
      -- Custom Metrics
      m_my_custom_metric float8
    )
    SORTKEY ( application_app_id, event_timestamp, event_type);

Step 5: Configure the Firehose Stream

You’re getting close! At this point, you’re ready to point the Kinesis Data Firehose stream to your JSONPaths file so that Redshift parses the incoming data properly. You also need to list the columns of the table that your data will be copied into.

To configure the Firehose Stream

  1. Open the Amazon Kinesis Data Firehose console at https://console.aws.amazon.com/firehose/home.
  2. In the list of delivery streams, choose the delivery stream you created earlier.
  3. On the Details tab, choose Edit.
  4. Under Amazon Redshift destination, for COPY options, paste the following:
    JSON 's3://s3-bucket/json-paths.json'
    TRUNCATECOLUMNS
    TIMEFORMAT 'epochmillisecs'

  5. Replace s3-bucket in the preceding code example with the path to the S3 bucket that contains json-paths.json.
  6. For Columns, list all of the columns that are present in the JSONPaths file you created earlier. Specify the column names in the same order as they’re listed in the json-paths.json file, using commas to separate the column names. When you finish, choose Save.
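
The same edit can be made through the Firehose API. The sketch below is an illustration only; the stream name is a placeholder, and the column list shown here is truncated and must contain every column, in the same order as the JSONPaths file:

    import boto3

    firehose = boto3.client("firehose")
    stream_name = "pinpoint-to-redshift"  # placeholder

    # Look up the current version and destination ID, which the update call must reference.
    description = firehose.describe_delivery_stream(DeliveryStreamName=stream_name)["DeliveryStreamDescription"]

    firehose.update_destination(
        DeliveryStreamName=stream_name,
        CurrentDeliveryStreamVersionId=description["VersionId"],
        DestinationId=description["Destinations"][0]["DestinationId"],
        RedshiftDestinationUpdate={
            "CopyCommand": {
                "DataTableName": "awsma.event",
                "DataTableColumns": "event_type,event_timestamp,arrival_timestamp",  # truncated; list every column
                "CopyOptions": "JSON 's3://s3-bucket/json-paths.json' TRUNCATECOLUMNS TIMEFORMAT 'epochmillisecs'",
            }
        },
    )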

Step 6: Enable Event Streams in Amazon Pinpoint

The only thing left to do now is to tell Amazon Pinpoint to start sending data to Amazon Kinesis.

To enable Event Streaming in Amazon Pinpoint

  1. Open the Amazon Pinpoint console at https://console.aws.amazon.com/pinpoint/home.
  2. Choose the application or project that you want to enable event streams for.
  3. In the navigation pane, choose Settings.
  4. On the Event stream tab, choose Enable streaming of events to Amazon Kinesis.
  5. Under Stream to Amazon Kinesis, select Send events to an Amazon Kinesis Firehose delivery stream.
  6. For Amazon Kinesis Firehose delivery stream, choose the stream you created earlier.
  7. For IAM role, choose an existing role that allows the firehose:PutRecordBatch action, or choose Automatically create a role to have Amazon Pinpoint create a role with the appropriate permissions. If you choose to have Amazon Pinpoint create a role for you, type a name for the role. Choose Save.
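
If you automate your deployments, the equivalent API operation is PutEventStream. Here’s a minimal boto3 sketch; the application ID, delivery stream ARN, and role ARN are placeholders, and the role must allow the firehose:PutRecordBatch action:

    import boto3

    pinpoint = boto3.client("pinpoint")

    # Placeholder application ID and ARNs.
    pinpoint.put_event_stream(
        ApplicationId="1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d",
        WriteEventStream={
            "DestinationStreamArn": "arn:aws:firehose:us-west-2:111122223333:deliverystream/pinpoint-to-redshift",
            "RoleArn": "arn:aws:iam::111122223333:role/pinpoint-event-stream-role",
        },
    )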

That’s it! Once you complete this final step, Amazon Pinpoint starts exporting the data you specified into your Redshift cluster.

I hope this walk through was helpful. If you have any questions, please let us know in the comments or in the Amazon Pinpoint forum.

Austria: liability of YouTube

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/06/youtube-2/

The Vienna Commercial Court has ruled that YouTube is not a mere internet intermediary, but is liable for the content it carries. It follows that the site must check content before it is published.

YouTube sees itself as an intermediary that bears no responsibility for the content, and that is how it would like others to see it. This is the standard defense.

The Austrian broadcaster Puls 4 took action against content on YouTube for which no rights had been cleared. “We object to YouTube making it possible to upload content produced by us without asking us and without paying compensation,” the broadcaster says.

Platform liability. It begins.

The first-instance decision is not final.

SAP on AWS – Past, Present, and Future

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/sap-on-aws-past-present-and-future/

While many of my AWS colleagues are preparing for SAPPHIRE NOW, I thought this would be a good time to bring you up to date on what we have already done to make AWS a great home for SAP’s products and to share our plans to make it even better.

The Story So Far
Our enterprise customers want to bring gigantic, memory-intensive workloads to the AWS Cloud, with a special focus on large-scale production deployments of SAP HANA. Here’s what we have done so far to meet this important requirement:

May 2016 – We announced the x1.32xlarge instance type with 2 TB of memory, purpose-built for running SAP HANA in the cloud.

August 2016 – We announced SAP certification and support for scale-out clusters of up to 7 nodes and 14 TB of memory.

October 2016 – We announced the x1.16xlarge instance type with 1 TB of memory, perfect for testing and for smaller SAP HANA deployments, along with increased regional availability for both of the X1 instances.

May 2017 – We announced the x1e.32xlarge instance type with 4 TB of memory and SAP support for very large scale-out SAP HANA clusters of up to 17 nodes (34 TB of memory).

November 2017 – We announced SAP support for even larger on-demand SAP HANA clusters with up to 25 x1.32xlarge nodes (50 TB of memory).

Along the way, we have been working with customers like Brooks Brothers, Visy, Sumitomo Chemicals, and Kellogg’s to build business-critical HANA implementations on AWS. These customers (and many others) have improved their agility, realized cost savings, and increased performance as part of their move to the cloud.

Right Here, Right Now
As you may know, the C5 and M5 instances are powered by the latest Intel® Xeon® Scalable (Skylake) processors, and make use of our new lightweight, hardware-accelerated Nitro hypervisor. Both types of instances are fully certified by SAP, and deliver a measurable performance increase with respect to their predecessors. The Nitro Hypervisor provides consistent performance and increased compute and memory resources for virtualized EC2 instances by removing host system software components. It allows us to offer larger instance sizes (like c5.18xlarge) that make just about all of the server’s resources available to customers.

As an indication of our progress over the last couple of years, our first SAP certified NetWeaver installations on m2.4xlarge instances delivered 7400 SAPS (925 per vCPU). Today, the m5.24xlarge instances can deliver 135,230 SAPS (1409 per vCPU), our best performance to date. You can read the new SAP benchmarks for C5 and M5 instances, along with Measuring in SAPS, to learn more.

In the Works – Instances with More Memory
Our collaboration with SAP began in 2008 with the goal of providing our customers with options for running their mission-critical SAP systems in the cloud. We worked side-by-side with SAP to enable production deployments of HANA in 2014, and now offer a wide range of EC2 instances that are certified by SAP to run HANA.

Our goal is to make it as easy as possible to run HANA and to provide you with instance sizes that are a great fit for many different applications and installations. At the last SAPPHIRE NOW conference, we announced our plans to launch EC2 instances with 8 TB to 16 TB of memory. Today I would like to tell you a bit more about the specs and sizes for these instances.

We are planning to launch high-memory EC2 Bare Metal instances with 6 TB, 9 TB, and 12 TB of memory, designed from the ground up to run mission-critical deployments of SAP HANA. Like the existing Bare Metal instances, these instances allow the operating system to run directly on the underlying hardware while still providing access to all of the benefits of the cloud as full-fledged members of the EC2 family.

The instances run on an 8-socket platform built with Intel Xeon Scalable (Skylake) processors. They can be launched in a VPC, offer ENA-based Enhanced Networking and EBS-optimization by default, and are available on EC2 Dedicated Hosts. You will be able to launch them in all of the usual ways, and to use IAM to control authentication, authorization, and auditing. Instances will be able to make use of multiple EBS volumes, each storing up to 16 TB of data, for elastic capacity.

I did not have the opportunity to go hands-on with the new instances, but my colleagues shared a few screen shots with me! Here’s some of the output from dmesg on an instance with 6 TB of memory:

And here’s what lscpu displays:

We plan to make these instances available in private preview this summer, and to move them to general availability this fall. While 12 TB instances are certainly a big step forward, we don’t plan to stop there, and are working on even bigger ones — instances with more than 16 TB of memory are in the works as well!

If you would like to join the private preview for these new instances, please contact us.

Amazon AppStream 2.0 with SAP GUI
In other AWS / SAP news, you can now use Amazon AppStream 2.0 to visualize the SAP GUI in any browser that is HTML5-compatible.

This is a clean, simple, and efficient alternative to installing the SAP GUI on every desktop. Response time improves, as does user productivity, because less data moves between client and server. Replacing hundreds or thousands of installed copies of SAP GUI with a centrally managed image also reduces the overall management effort.

To learn more about this cool new way to make the SAP GUI available to your users, read Deploying SAP GUI on Amazon AppStream 2.0.

Say Hello at SAPPHIRE NOW
The AWS team will be in booth 642 at SAPPHIRE this week with a full set of sessions from our team, our customers, and our partners in our in-booth theater. Many of our customers will also be telling their stories during sessions throughout the event. A listing of available sessions and activities can be found here.

Jeff;

AWS Online Tech Talks – June 2018

Post Syndicated from Devin Watson original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-june-2018/

AWS Online Tech Talks – June 2018

Join us this month to learn about AWS services and solutions. New this month, we have a fireside chat with the GM of Amazon WorkSpaces and our 2nd episode of the “How to re:Invent” series. We’ll also cover best practices, deep dives, use cases and more! Join us and register today!

Note – All sessions are free and in Pacific Time.

Tech talks featured this month:

 

Analytics & Big Data

June 18, 2018 | 11:00 AM – 11:45 AM PT – Get Started with Real-Time Streaming Data in Under 5 Minutes – Learn how to use Amazon Kinesis to capture, store, and analyze streaming data in real-time including IoT device data, VPC flow logs, and clickstream data.
June 20, 2018 | 11:00 AM – 11:45 AM PT – Insights For Everyone – Deploying Data across your Organization – Learn how to deploy data at scale using AWS Analytics and QuickSight’s new reader role and usage based pricing.

 

AWS re:Invent
June 13, 2018 | 05:00 PM – 05:30 PM PT – Episode 2: AWS re:Invent Breakout Content Secret Sauce – Hear from one of our own AWS content experts as we dive deep into the re:Invent content strategy and how we maintain a high bar.
Compute

June 25, 2018 | 01:00 PM – 01:45 PM PT – Accelerating Containerized Workloads with Amazon EC2 Spot Instances – Learn how to efficiently deploy containerized workloads and easily manage clusters at any scale at a fraction of the cost with Spot Instances.

June 26, 2018 | 01:00 PM – 01:45 PM PT – Ensuring Your Windows Server Workloads Are Well-Architected – Get the benefits, best practices and tools on running your Microsoft Workloads on AWS leveraging a well-architected approach.

 

Containers
June 25, 2018 | 09:00 AM – 09:45 AM PT – Running Kubernetes on AWS – Learn about the basics of running Kubernetes on AWS, including how to set up masters, configure networking and security, and add auto-scaling to your cluster.

 

Databases

June 18, 2018 | 01:00 PM – 01:45 PM PT – Oracle to Amazon Aurora Migration, Step by Step – Learn how to migrate your Oracle database to Amazon Aurora.
DevOps

June 20, 2018 | 09:00 AM – 09:45 AM PT – Set Up a CI/CD Pipeline for Deploying Containers Using the AWS Developer Tools – Learn how to set up a CI/CD pipeline for deploying containers using the AWS Developer Tools.

 

Enterprise & Hybrid
June 18, 2018 | 09:00 AM – 09:45 AM PT – De-risking Enterprise Migration with AWS Managed Services – Learn how enterprise customers are de-risking cloud adoption with AWS Managed Services.

June 19, 2018 | 11:00 AM – 11:45 AM PT – Launch AWS Faster using Automated Landing Zones – Learn how the AWS Landing Zone can automate the set up of best practice baselines when setting up new AWS environments.

June 21, 2018 | 11:00 AM – 11:45 AM PT – Leading Your Team Through a Cloud Transformation – Learn how you can help lead your organization through a cloud transformation.

June 21, 2018 | 01:00 PM – 01:45 PM PT – Enabling New Retail Customer Experiences with Big Data – Learn how AWS can help retailers realize actual value from their big data and deliver on differentiated retail customer experiences.

June 28, 2018 | 01:00 PM – 01:45 PM PT – Fireside Chat: End User Collaboration on AWS – Learn how End User Compute services can help you deliver access to desktops and applications anywhere, anytime, using any device.
IoT

June 27, 2018 | 11:00 AM – 11:45 AM PT – AWS IoT in the Connected Home – Learn how to use AWS IoT to build innovative Connected Home products.

 

Machine Learning

June 19, 2018 | 09:00 AM – 09:45 AM PT – Integrating Amazon SageMaker into your Enterprise – Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment.

June 21, 2018 | 09:00 AM – 09:45 AM PT – Building Text Analytics Applications on AWS using Amazon Comprehend – Learn how you can unlock the value of your unstructured data with NLP-based text analytics.

 

Management Tools

June 20, 2018 | 01:00 PM – 01:45 PM PT – Optimizing Application Performance and Costs with Auto Scaling – Learn how selecting the right scaling option can help optimize application performance and costs.

 

Mobile
June 25, 2018 | 11:00 AM – 11:45 AM PT – Drive User Engagement with Amazon Pinpoint – Learn how Amazon Pinpoint simplifies and streamlines effective user engagement.

 

Security, Identity & Compliance

June 26, 2018 | 09:00 AM – 09:45 AM PT – Understanding AWS Secrets Manager – Learn how AWS Secrets Manager helps you rotate and manage access to secrets centrally.
June 28, 2018 | 09:00 AM – 09:45 AM PT – Using Amazon Inspector to Discover Potential Security Issues – See how Amazon Inspector can be used to discover security issues of your instances.

 

Serverless

June 19, 2018 | 01:00 PM – 01:45 PM PT – Productionize Serverless Application Building and Deployments with AWS SAM – Learn expert tips and techniques for building and deploying serverless applications at scale with AWS SAM.

 

Storage

June 26, 2018 | 11:00 AM – 11:45 AM PT – Deep Dive: Hybrid Cloud Storage with AWS Storage Gateway – Learn how you can reduce your on-premises infrastructure by using the AWS Storage Gateway to connect your applications to the scalable and reliable AWS storage services.
June 27, 2018 | 01:00 PM – 01:45 PM PT – Changing the Game: Extending Compute Capabilities to the Edge – Discover how to change the game for IIoT and edge analytics applications with AWS Snowball Edge plus enhanced Compute instances.
June 28, 2018 | 11:00 AM – 11:45 AM PT – Big Data and Analytics Workloads on Amazon EFS – Get best practices and deployment advice for running big data and analytics workloads on Amazon EFS.

Monitoring your Amazon SNS message filtering activity with Amazon CloudWatch

Post Syndicated from Rachel Richardson original https://aws.amazon.com/blogs/compute/monitoring-your-amazon-sns-message-filtering-activity-with-amazon-cloudwatch/

This post is courtesy of Otavio Ferreira, Manager, Amazon SNS, AWS Messaging.

Amazon SNS message filtering provides a set of string and numeric matching operators that allow each subscription to receive only the messages of interest. Hence, SNS message filtering can simplify your pub/sub messaging architecture by offloading the message filtering logic from your subscriber systems, as well as the message routing logic from your publisher systems.

After you set the subscription attribute that defines a filter policy, the subscribing endpoint receives only the messages that carry attributes matching this filter policy. Other messages published to the topic are filtered out for this subscription. The native integration between SNS and Amazon CloudWatch then gives you visibility into the number of messages delivered, as well as the number of messages filtered out.

CloudWatch metrics are captured automatically for you. To get started with SNS message filtering, see Filtering Messages with Amazon SNS.
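
Because a filter policy is just a subscription attribute, you can also set it with a single API call. The boto3 sketch below is illustrative; the subscription ARN and the message attribute used in the policy are placeholders for your own messaging schema:

    import boto3
    import json

    sns = boto3.client("sns")

    # Placeholder subscription ARN; only messages whose "event_type" attribute
    # matches one of the listed values will be delivered to this endpoint.
    sns.set_subscription_attributes(
        SubscriptionArn="arn:aws:sns:us-west-2:111122223333:my-topic:1a2b3c4d-5678-90ab-cdef-example11111",
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps({"event_type": ["order_placed", "order_cancelled"]}),
    )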

Message Filtering Metrics

The following six CloudWatch metrics are relevant to understanding your SNS message filtering activity:

  • NumberOfMessagesPublished – Inbound traffic to SNS. This metric tracks all the messages that have been published to the topic.
  • NumberOfNotificationsDelivered – Outbound traffic from SNS. This metric tracks all the messages that have been successfully delivered to endpoints subscribed to the topic. A delivery takes place either when the incoming message attributes match a subscription filter policy, or when the subscription has no filter policy at all, which results in a catch-all behavior.
  • NumberOfNotificationsFilteredOut – This metric tracks all the messages that were filtered out because they carried attributes that didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-NoMessageAttributes – This metric tracks all the messages that were filtered out because they didn’t carry any attributes at all and, consequently, didn’t match the subscription filter policy.
  • NumberOfNotificationsFilteredOut-InvalidAttributes – This metric keeps track of messages that were filtered out because they carried invalid or malformed attributes and, thus, didn’t match the subscription filter policy.
  • NumberOfNotificationsFailed – This last metric tracks all the messages that failed to be delivered to subscribing endpoints, regardless of whether a filter policy had been set for the endpoint. This metric is emitted after the message delivery retry policy is exhausted, and SNS stops attempting to deliver the message. At that moment, the subscribing endpoint is likely no longer reachable. For example, the subscribing SQS queue or Lambda function has been deleted by its owner. You may want to closely monitor this metric to address message delivery issues quickly.
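
You can also pull these metrics programmatically, which is useful for alarms or scheduled reports. Here’s a minimal boto3 sketch that sums one of the metrics above over the past day; the topic name is a placeholder:

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Placeholder topic name; sums the filtered-out notifications per hour for the last day.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/SNS",
        MetricName="NumberOfNotificationsFilteredOut",
        Dimensions=[{"Name": "TopicName", "Value": "my-topic"}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])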

Message filtering graphs

Through the AWS Management Console, you can compose graphs to display your SNS message filtering activity. The graph shows the number of messages published, delivered, and filtered out within the timeframe you specify (1h, 3h, 12h, 1d, 3d, 1w, or custom).

SNS message filtering for CloudWatch Metrics

To compose an SNS message filtering graph with CloudWatch:

  1. Open the CloudWatch console.
  2. Choose Metrics, SNS, All Metrics, and Topic Metrics.
  3. Select all metrics to add to the graph, such as:
    • NumberOfMessagesPublished
    • NumberOfNotificationsDelivered
    • NumberOfNotificationsFilteredOut
  4. Choose Graphed metrics.
  5. In the Statistic column, switch from Average to Sum.
  6. Title your graph with a descriptive name, such as “SNS Message Filtering”

After you have your graph set up, you may want to copy the graph link for bookmarking, emailing, or sharing with co-workers. You may also want to add your graph to a CloudWatch dashboard for easy access in the future. Both actions are available to you on the Actions menu, which is found above the graph.

Summary

SNS message filtering defines how SNS topics behave in terms of message delivery. By using CloudWatch metrics, you gain visibility into the number of messages published, delivered, and filtered out. This enables you to validate the operation of filter policies and more easily troubleshoot during development phases.

SNS message filtering can be implemented easily with existing AWS SDKs by applying message and subscription attributes across all SNS supported protocols (Amazon SQS, AWS Lambda, HTTP, SMS, email, and mobile push). CloudWatch metrics for SNS message filtering is available now, in all AWS Regions.

For information about pricing, see the CloudWatch pricing page.


Connect, collaborate, and learn at AWS Global Summits in 2018

Post Syndicated from Tina Kelleher original https://aws.amazon.com/blogs/big-data/connect-collaborate-and-learn-at-aws-global-summits-in-2018/

Regardless of your career path, there’s no denying that attending industry events can provide helpful career development opportunities: not only for improving and expanding your skill sets, but for networking as well. According to this article from PayScale.com, experts estimate that somewhere between 70% and 85% of new positions are landed through networking.

If you want to network with cloud computing professionals who are tackling some of today’s most innovative and exciting big data solutions, the big data-focused sessions at an AWS Global Summit are a great place to start.

AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. As the name suggests, these summits are held in major cities around the world, and attract technologists from all industries and skill levels who’re interested in hearing from AWS leaders, experts, partners, and customers.

In addition to networking opportunities with top cloud technology providers, consultants and your peers in our Partner and Solutions Expo, you’ll also hone your AWS skills by attending and participating in a multitude of education and training opportunities.

Here’s a brief sampling of some of the upcoming sessions relevant to big data professionals:

May 31st: Big Data Architectural Patterns and Best Practices on AWS | AWS Summit – Mexico City

June 6th-7th: Various (click on the “Big Data & Analytics” header) | AWS Summit – Berlin

June 20th-21st: [email protected] | Public Sector Summit – Washington DC

June 21st: Enabling Self Service for Data Scientists with AWS Service Catalog | AWS Summit – Sao Paulo

Be sure to check out the main page for AWS Global Summits, where you can see which cities have AWS Summits planned for 2018, register to attend an upcoming event, or provide your information to be notified when registration opens for a future event.