Tag Archives: Web app

NoSQLMap – Automated NoSQL Exploitation Tool

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/Y4RGC1J9G-U/

NoSQLMap is an open source Python-based automated NoSQL exploitation tool designed to audit for, and automate, injection attacks and to exploit default configuration weaknesses in NoSQL databases. It is also intended to attack web applications using NoSQL in order to disclose data from the database. Presently the tool’s exploits are focused…
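To make the idea concrete, here is the classic MongoDB operator-injection pattern that tools in this class probe for. This is a hand-written Python illustration, not NoSQLMap’s own code:

# What the developer expects the login check to receive:
query = {"username": "alice", "password": "secret123"}

# What an attacker submits instead. The {"$ne": None} operator matches
# any stored password, so db.users.find_one(query) returns the record
# without knowing the credential at all.
query = {"username": "alice", "password": {"$ne": None}}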

Read the full post at darknet.org.uk

What You Need To Know About Server Side Request Forgery (SSRF)

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/jiE0TjlsGI4/

SSRF or Server Side Request Forgery is an attack vector that has been around for a long time, but do you actually know what it is? Server Side Request Forgery (SSRF) refers to an attack wherein an attacker is able to send a crafted request from a vulnerable web application. SSRF is usually used […]
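As a concrete illustration, the vulnerable pattern usually boils down to a server fetching a URL that the client controls. Here is a minimal sketch as a hypothetical Flask handler, not code from the original post:

from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/fetch")
def fetch():
    # The URL comes straight from the client, so the server can be
    # steered at internal-only targets, e.g. http://169.254.169.254/
    url = request.args.get("url")
    return requests.get(url).text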


Read the full post at darknet.org.uk

Popcorn Time Devs Help Streaming Aggregator Reelgood to ‘Fix Piracy’

Post Syndicated from Ernesto original https://torrentfreak.com/popcorn-time-devs-help-streaming-aggregator-reelgood-to-fix-piracy-170812/

During the fall of 2015, the MPAA shut down one of the most prominent pirate streaming services, Popcorn Time fork PopcornTime.io.

While the service was found to be clearly infringing, many of the developers didn’t set out to break the law. Most of all, they wanted to provide the public with easy access to their favorite movies and TV-shows.

Fast forward nearly two years and several of these Popcorn Time developers are still on the same quest. The main difference is that they now operate on the safe side of the law.

The startup they’re working with is called Reelgood, which can best be described as a streaming service aggregator. The San Francisco-based company, founded by ex-Facebook employee David Sanderson, recently raised $3.5 million and has opened its doors to the public.

The goal of Reelgood is similar to Popcorn Time in the way that it aims to be the go-to tool for people to access their entertainment. Instead of using pirate sources, however, Reelgood stitches together content from various legal platforms, both paid and free.

Reelgood

TorrentFreak spoke to former Popcorn Time developer Luigi Poole, who’s leading the charge on the development of Reelgood’s web app. He stresses that the increasing fragmentation of streaming services, which drives some people to pirate sites, is one of the problems Reelgood hopes to fix.

“There’s a misconception that torrenting is done by bad people who don’t want to pay for content. I’d say, in the vast majority of cases, torrenting is a symptom of the massive fragmentation that’s been given as the only legal option to the consumer,” Poole says.

While people have many reasons to pirate, some stick to unauthorized services simply because it’s too cumbersome to dig through all the legal options. Pirate sites offer a single interface to all popular movies and TV-shows; legal platforms don’t.

“The modern TV/movie ecosystem is made up of an increasing number of different services. This makes finding content like changing channels, only more complicated. Is that movie you’re about to buy or rent on a service you already pay for? Right now there’s no way to do this other than a cumbersome search using each service’s individual search. Time to go digging,” Poole says.

“We believe this is the main reason people torrent — it’s just easier, given that the legal options presented to us are essentially a ‘go fetch’ treasure hunt,” he adds.

Flipping the channel on an old-school television often beats the online streaming experience. That is, for those who want more than Netflix alone.

And the problem isn’t going away anytime soon. As we reported earlier this week, there’s a trend toward more fragmentation, instead of less. Disney is pulling some of its most popular content from Netflix in the US in 2019, keeping piracy relevant.

“The untold story is that consumers are throwing up their hands with all this fragmentation, and turning to torrenting not because it’s free, but because it’s intuitive and easy,” Poole says.

“Reelgood fixes this problem by acting as a pirate site interface for every legal option, sort of like a TV guide to anything streaming, also giving you notifications anytime something is new, letting you track when certain content becomes available, and not only telling you where it’s available but taking you straight there with one click to play.”

Reelgood can be seen as a defragmentation tool, creating a uniform interface for all the legal platforms people have access to. In addition to paid services such as Netflix and HBO, it also lists free content from Fox, CBS, Crackle, and many other providers.

TorrentFreak took it for a spin and it indeed works as advertised. Simply add your streaming service accounts and all will be bundled into an elegant and uniform interface that allows you to watch and track everything with a single click.

The service is still limited to US libraries but there are already plans to expand it to other countries, which is promising. While it may not eradicate piracy anytime soon, it does a good job of trying to organize the increasingly complex streaming landscape.

Unfortunately, it’s still not cheap to use more than a handful of paid services, but that’s a problem even Reelgood can’t fix. Not even with help from seven former Popcorn Time developers.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Automating Blue/Green Deployments of Infrastructure and Application Code using AMIs, AWS Developer Tools, & Amazon EC2 Systems Manager

Post Syndicated from Ramesh Adabala original https://aws.amazon.com/blogs/devops/bluegreen-infrastructure-application-deployment-blog/

Previous DevOps blog posts have covered the following use cases for infrastructure and application deployment automation:

An AMI provides the information required to launch an instance, which is a virtual server in the cloud. You can use one AMI to launch as many instances as you need. It is a security best practice to customize and harden your base AMI with required operating system updates and, if you are using AWS native services for continuous security monitoring and operations, to bake agents such as those for Amazon EC2 Systems Manager (SSM), Amazon Inspector, CodeDeploy, and CloudWatch Logs into the base AMI. A customized and hardened AMI is often referred to as a “golden AMI.” Using golden AMIs to create EC2 instances in your AWS environment allows for fast and stable application deployment and scaling, secure application stack upgrades, and versioning.

In this post, using the DevOps automation capabilities of Systems Manager and the AWS developer tools (CodePipeline, CodeDeploy, CodeCommit, and CodeBuild), I will show you how to use AWS CodePipeline to orchestrate end-to-end blue/green deployments of a golden AMI and application code. Systems Manager Automation is a powerful security feature for enterprises that want to mature their DevSecOps practices.

Here are the high-level phases and primary services covered in this use case:

 

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/Bluegreen-AMI-Application-Deployment-blog.

This sample creates a pipeline in AWS CodePipeline with the building blocks to support blue/green deployments of infrastructure and application code. The sample includes a custom Lambda step in the pipeline that executes Systems Manager Automation to build a golden AMI and updates the Auto Scaling group with the new golden AMI ID for every rollout of new application code. This guarantees that every new application deployment runs on a fully patched and customized AMI, automating hardened AMI rollout with each new application version in a continuous integration and delivery model.
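For illustration, the custom Lambda step might look roughly like the sketch below: it starts the Systems Manager Automation document and reports the outcome back to CodePipeline. The document name and parameters are hypothetical placeholders, and a production version would also poll the automation execution for completion before signaling success:

import boto3

ssm = boto3.client('ssm')
codepipeline = boto3.client('codepipeline')

def handler(event, context):
    job_id = event['CodePipeline.job']['id']
    try:
        # Document name and parameters are hypothetical placeholders.
        execution = ssm.start_automation_execution(
            DocumentName='GoldenAMIAutomationDoc',
            Parameters={'sourceAMIid': ['ami-xxxxxxxx']})
        print('Automation started:', execution['AutomationExecutionId'])
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as e:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(e)})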

 

 

We will build and run this sample in three parts.

Part 1: Setting up the AWS developer tools and deploying a base web application

Part 1 of the AWS CloudFormation template creates the initial Java-based web application environment in a VPC. It also creates all the required components of Systems Manager Automation, CodeCommit, CodeBuild, and CodeDeploy to support the blue/green deployments of the infrastructure and application resulting from ongoing code releases.

Part 1 of the AWS CloudFormation stack creates these resources:

After Part 1 of the AWS CloudFormation stack creation is complete, go to the Outputs tab and click the Elastic Load Balancing link. You will see the following home page for the base web application:

Make sure you have all the outputs from the Part 1 stack handy. You need to supply them as parameters in Part 3 of the stack.

Part 2: Setting up your CodeCommit repository

In this part, you will commit and push your sample application code into the CodeCommit repository created in Part 1. To get the initial Git commands for cloning the empty repository to your local machine, click Connect in the AWS CodeCommit console. Make sure you have the IAM permissions required to access AWS CodeCommit from the command line interface (CLI).

After you’ve cloned the repository locally, download the sample application files from the part2 folder of the Git repository and place the files directly into your local repository. Do not include the aws-codedeploy-sample-tomcat folder. Go to the local directory and type the following commands to commit and push the files to the CodeCommit repository:

git add .
git commit -a -m "add all files from the AWS Java Tomcat CodeDeploy application"
git push

After all the files are pushed successfully, the repository should look like this:

 

Part 3: Setting up CodePipeline to enable blue/green deployments     

Part 3 of the AWS CloudFormation template creates the pipeline in AWS CodePipeline and all the required components.

a) Source: The pipeline is triggered by any change to the CodeCommit repository.

b) BuildGoldenAMI: This Lambda step executes the Systems Manager Automation document to build the golden AMI. After the golden AMI is successfully created, a new launch configuration referencing the new AMI is created and attached to the Auto Scaling group of the application deployment group (a rough sketch of this swap appears after this list). You can watch the progress of the automation in the EC2 console under the Systems Manager > Automations menu.

c) Build: This step uses the application build spec file to build the application build artifact. Here are the CodeBuild execution steps and their status:

d) Deploy: This step clones the Auto Scaling group, launches the new instances with the new AMI, deploys the application changes, reroutes the traffic from the Elastic Load Balancer to the new instances, and terminates the old Auto Scaling group. You can see the execution steps and their status in the CodeDeploy console.
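As referenced in step b), here is a rough boto3 sketch of the launch-configuration swap on the Auto Scaling group. All names are hypothetical placeholders; because launch configurations are immutable, a new one is created from the golden AMI and the group is pointed at it:

import boto3

autoscaling = boto3.client('autoscaling')

def roll_ami(asg_name, new_ami_id, old_lc_name):
    # Create a new launch configuration from the golden AMI...
    new_lc_name = '{}-{}'.format(old_lc_name, new_ami_id)
    autoscaling.create_launch_configuration(
        LaunchConfigurationName=new_lc_name,
        ImageId=new_ami_id,
        InstanceType='t2.micro')
    # ...and attach it to the Auto Scaling group so new instances use it.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        LaunchConfigurationName=new_lc_name)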

After the CodePipeline execution is complete, you can access the application by clicking the Elastic Load Balancing link. You can find it in the output of Part 1 of the AWS CloudFormation template. Any subsequent commits to the application code in the CodeCommit repository trigger the pipeline and deploy the infrastructure and code with an updated AMI.

 

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Developer Tools forum.


About the author

 

Ramesh Adabala is a Solutions Architect on the Southeast Enterprise Solution Architecture team at Amazon Web Services.

All You Need To Know About Cross-Site Request Forgery (CSRF)

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/nBF_Xjl7rQw/

Cross-Site Request Forgery is a term you’ve probably heard in the context of web security or web hacking, but do you really know what it means? The OWASP definition is as follows: Cross-Site Request Forgery (CSRF) is an attack that forces an end user to execute unwanted actions on a web application in which they’re […]


Read the full post at darknet.org.uk

CyberChef – Cyber Swiss Army Knife

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/SOhld_nebGs/

CyberChef is a simple, intuitive web app for carrying out all manner of “cyber” operations within a web browser. These operations include simple encoding like XOR or Base64, more complex encryption like AES, DES and Blowfish, creating binary and hexdumps, compression and decompression of data, calculating hashes and checksums, IPv6 and X.509…
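For a flavor of what those operations do, here are rough Python equivalents of a few of them (illustrative only; CyberChef itself runs entirely in the browser):

import base64, hashlib

data = b"attack at dawn"
encoded = base64.b64encode(data)             # To Base64
xored = bytes(b ^ 0x2A for b in data)        # XOR with single-byte key 0x2A
digest = hashlib.sha256(data).hexdigest()    # SHA-256 checksum

print(encoded, xored.hex(), digest)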

Read the full post at darknet.org.uk

Will Magento’s Progressive Web Apps drive more mobile revenue for merchants?

Post Syndicated from chris desantis original https://www.anchor.com.au/blog/2017/07/magentos-progressive-web-apps/

Magento Live always has a few big announcements, and this year’s London edition did not disappoint. With the promise of more traffic and higher conversions, Magento, in collaboration with Google, will offer native Progressive Web Apps (PWAs) to the Magento ecommerce community in 2018.

Even though that’s at LEAST five months away, you might already be pondering ‘What does that mean for my ecommerce site?’. Has responsive web design gone out of fashion like man buns and paleo diets? And how do I explain Progressive Web Apps to the boss?

What are Progressive Web Apps?

Progressive Web Apps are technically standard web pages that can ‘live’ on a user’s home screen and behave like a native app, without the need for downloading and installing an app. PWAs promise a faster, more reliable, and more engaging user experience.

Google Developers state that:

“Progressive Web Apps are experiences that combine the best of the web and the best of apps.”

However, for developers, creating Progressive Web Apps requires the use of Service Workers, a Manifest, an Application Shell architecture, and more, to ensure the promised user experience is delivered. It’s an evolution of the way developers currently work, but it is yet another thing for developers to master.

Google Developers has a huge rundown on all things PWA. Check it out.

But, back to the Magento announcement. It’s actually a big deal.

Magento is one of the most widely adopted ecommerce platforms in use today. By partnering with Google to bring PWAs into the Magento platform, PWAs are poised to become the new norm for forward-thinking ecommerce brands.

And what’s not to love? Productivity and cost benefits for e-commerce stores could be huge—instead of maintaining separate native app and web properties, a single Progressive Web App could be a new reality. For developers already using the Magento platform, the friction to get a PWA live is likely to be significantly reduced if this partnership lives up to the hype.

Mark Lavelle, CEO of Magento, stated:

“We see PWAs as a natural evolution of the mobile web, and by working with industry leaders such as Google to develop PWAs, we plan to keep merchants ahead of the curve.”

So, don’t throw away your responsive site and native apps just yet. But keep an eye on Magento and Google, as we think this is going to be a huge benefit to developers and businesses in the ecommerce space. Roll on 2018!

Read the Magento press release here.

Check out pwa.rocks for some interesting examples of Progressive Web Apps in action.


The post Will Magento’s Progressive Web Apps drive more mobile revenue for merchants? appeared first on AWS Managed Services by Anchor.

AWS HIPAA Eligibility Update (July 2017) – Eight Additional Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-hipaa-eligibility-update-july-2017-eight-additional-services/

It is time for an update on our ongoing effort to make AWS a great host for healthcare and life sciences applications. As you can see from our Health Customer Stories page, Philips, VergeHealth, and Cambia (to choose a few) trust AWS with Protected Health Information (PHI) and Personally Identifying Information (PII) as part of their efforts to comply with HIPAA and HITECH.

In May we announced that we added Amazon API Gateway, AWS Direct Connect, AWS Database Migration Service, and Amazon Simple Queue Service (SQS) to our list of HIPAA eligible services and discussed how customers and partners are putting them to use.

Eight More Eligible Services
Today I am happy to share the news that we are adding another eight services to the list:

Amazon CloudFront can now be utilized to enhance the delivery and transfer of Protected Health Information data to applications on the Internet. By providing a completely secure and encryptable pathway, CloudFront can now be used as a part of applications that need to cache PHI. This includes applications for viewing lab results or imaging data, and those that transfer PHI from Healthcare Information Exchanges (HIEs).

AWS WAF can now be used to protect applications running on AWS which operate on PHI such as patient care portals, patient scheduling systems, and HIEs. Requests and responses containing encrypted PHI and PII can now pass through AWS WAF.

AWS Shield can now be used to protect web applications such as patient care portals and scheduling systems that operate on encrypted PHI from DDoS attacks.

Amazon S3 Transfer Acceleration can now be used to accelerate the bulk transfer of large amounts of research, genetics, informatics, insurance, or payer/payment data containing PHI/PII information. Transfers can take place between a pair of AWS Regions or from an on-premises system and an AWS Region.
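Switching an SDK client to the accelerate endpoint is a one-line configuration change. A minimal boto3 sketch, assuming Transfer Acceleration is already enabled on the bucket (the bucket and file names are placeholders):

import boto3
from botocore.config import Config

# Route uploads through the S3 Transfer Acceleration endpoint.
s3 = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))
s3.upload_file('genomics-batch-001.vcf', 'example-phi-bucket',
               'incoming/batch-001.vcf')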

Amazon WorkSpaces can now be used by researchers, informaticists, hospital administrators and other users to analyze, visualize or process PHI/PII data using on-demand Windows virtual desktops.

AWS Directory Service can now be used to connect the authentication and authorization systems of organizations that use or process PHI/PII to their resources in the AWS Cloud. For example, healthcare providers operating hybrid cloud environments can now use AWS Directory Service to allow their users to easily transition between cloud and on-premises resources.

Amazon Simple Notification Service (SNS) can now be used to send notifications containing encrypted PHI/PII as part of patient care, payment processing, and mobile applications.

Amazon Cognito can now be used to authenticate users into mobile patient portal and payment processing applications that use PHI/PII identifiers for accounts.

Additional HIPAA Resources
Here are some additional resources that will help you to build applications that comply with HIPAA and HITECH:

Keep in Touch
In order to make use of any AWS service in any manner that involves PHI, you must first enter into an AWS Business Associate Addendum (BAA). You can contact us to start the process.

Jeff;

Launch – .NET Core Support In AWS CodeStar and AWS Codebuild

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-net-core-support-in-aws-codestar-and-aws-codebuild/

A few months ago, I introduced the AWS CodeStar service, which allows you to quickly develop, build, and deploy applications on AWS. AWS CodeStar helps development teams to increase the pace of releasing applications and solutions while reducing some of the challenges of building great software.

When the CodeStar service launched in April, it was released with several project templates for Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda using five different programming languages: JavaScript, Java, Python, Ruby, and PHP. Each template provisions the underlying AWS Code Services and configures an end-to-end continuous delivery pipeline for the targeted application using AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CodeDeploy.

As I have participated in some of the AWS Summits around the world discussing AWS CodeStar, many of you have shown curiosity in learning about the availability of .NET templates in CodeStar and utilizing CodeStar to deploy .NET applications. Therefore, it is with great pleasure and excitement that I announce that you can now develop, build, and deploy cross-platform .NET Core applications with the AWS CodeStar and AWS CodeBuild services.

AWS CodeBuild has added the ability to build and deploy .NET Core application code to both Amazon EC2 and AWS Lambda. This new CodeBuild capability has enabled the addition of two new project templates in AWS CodeStar for .NET Core applications. These new project templates enable you to deploy .NET Core applications to Amazon EC2 Linux instances and provide everything you need to get started quickly, including .NET Core sample code and a full software development toolchain.

Of course, I can’t wait to try out the new addition to the project templates within CodeStar and the updated .NET application build options with CodeBuild. For my test scenario, I will use CodeStar to create, build, and deploy my .NET Core ASP.NET web application on EC2. Then, I will extend my ASP.NET application by creating a .NET Lambda function to be compiled and deployed with CodeBuild as a part of my application’s pipeline. This Lambda function can then be called and used within my ASP.NET application to extend the functionality of my web application.

So, let’s get started!

First, I’ll log into the CodeStar console and start a new CodeStar project. I am presented with the option to select a project template.


Right now, I would like to focus on building .NET Core projects; therefore, I’ll filter the project templates by selecting C# in the Programming Languages section. Now, CodeStar shows only the new .NET Core project templates that I can use to build web applications and services with ASP.NET Core.

I think I’ll use the ASP.NET Core web application project template for my first CodeStar .NET Core application. As you can see by the project template information display, my web application will be deployed on Amazon EC2, which signifies to me that my .NET Core code will be compiled and packaged using AWS CodeBuild and deployed to EC2 using the AWS CodeDeploy service.


My hunch about the services is confirmed on the next screen when CodeStar shows the AWS CodePipeline and the AWS services that will be configured for my new project. I’ll name this web application project, ASPNetCore4Tara, and leave the default Project ID that CodeStar generates from the project name. Yes, I know that this is one of the goofiest names I could ever come up with, but, hey, it will do for this test project so I’ll go ahead and click the Next button. I should mention that you have the option to edit your Amazon EC2 configuration for your project on this screen before CodeStar starts configuring and provisioning the services needed to run your application.

Since my ASP.NET Core web application will be deployed to an Amazon EC2 instance, I will need to choose an Amazon EC2 key pair to secure the SSH login for this instance. For my ASPNetCore4Tara project, I will use an existing Amazon EC2 key pair I have previously used for launching my other EC2 instances. However, if I were creating this project and did not have an EC2 key pair, or didn’t have access to the .pem file (private key file) for an existing EC2 key pair, I would first have to visit the EC2 console and create a new EC2 key pair to use for my project. This is important because, without the EC2 key pair and the associated .pem file, I would not be able to log in to my EC2 instance.

With my EC2 key pair selected and the confirmation that I have access to the related private key file checked, I am ready to click the Create Project button.


After CodeStar completes the creation of the project and the provisioning of the project-related AWS services, I am ready to view the CodeStar sample application from the application endpoint displayed in the CodeStar dashboard. This sample application should be familiar to you if you have been working with the CodeStar service or if you had an opportunity to read the blog post about the AWS CodeStar service launch. I’ll click the link underneath Application Endpoints to view the sample ASP.NET Core web application.

Now I’ll go ahead and clone the generated project and connect my Visual Studio IDE to the project repository. I am going to make some changes to the application, and since AWS CodeBuild now supports .NET Core builds and deployments to both Amazon EC2 and AWS Lambda, I will alter my build specification file appropriately for the changes to my web application, which will include the use of the Lambda function. Don’t worry if you are not familiar with how to clone the project and connect it to the Visual Studio IDE; CodeStar provides in-console step-by-step instructions to assist you.

First things first, I will open up the Visual Studio IDE and connect to AWS CodeCommit repository provisioned for my ASPNetCore4Tara project. It is important to note that the Visual Studio 2017 IDE is required for .NET Core projects in AWS CodeStar and the AWS Toolkit for Visual Studio 2017 will need to be installed prior to connecting your project repository to the IDE.

In order to connect to my repo within Visual Studio, I will open up Team Explorer and select the Connect link under the AWS CodeCommit option under Hosted Service Providers. I will click Ok to keep my default AWS profile toolkit credentials.

I’ll then click Clone under the Manage Connections and AWS CodeCommit hosted provider section.

Once I select my aspnetcore4tara repository in the Clone AWS CodeCommit Repository dialog, I only have to enter my IAM role’s HTTPS Git credentials in the Git Credentials for AWS CodeCommit dialog and my process is complete. If you’re following along and receive a dialog for Git Credential Manager login, don’t worry; just enter the same IAM role’s Git credentials.


My project is now connected to the aspnetcore4tara CodeCommit repository and my web application is loaded for editing. As you will notice in the screenshot below, the sample project is structured as a standard ASP.NET Core MVC web application.

With the project created, I can make changes and updates. Since I want to update this project with a .NET Lambda function, I’ll quickly start a new project in Visual Studio to author a very simple C# Lambda function to be compiled with the CodeStar project. This AWS Lambda function will be included in the CodeStar ASP.NET Core web application project.

The Lambda function I’ve created makes a call to the REST API of NASA’s popular Astronomy Picture of the Day website. The API sends back the latest planetary image and related information in JSON format. You can see the Lambda function code below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

using System.Net.Http;
using Amazon.Lambda.Core;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace NASAPicOfTheDay
{
    public class SpacePic
    {
        HttpClient httpClient = new HttpClient();
        string nasaRestApi = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY";

        /// <summary>
        /// A simple function that retrieves NASA Planetary Info and 
        /// Picture of the Day
        /// </summary>
        /// <param name="context"></param>
        /// <returns>nasaResponse-JSON String</returns>
        public async Task<string> GetNASAPicInfo(ILambdaContext context)
        {
            string nasaResponse;
            
            //Call NASA Picture of the Day API
            nasaResponse = await httpClient.GetStringAsync(nasaRestApi);
            Console.WriteLine("NASA API Response");
            Console.WriteLine(nasaResponse);
            
            //Return NASA response - JSON format
            return nasaResponse; 
        }
    }
}

I’ll now publish and test this C# Lambda function by using the Publish to AWS Lambda option provided by the AWS Toolkit for Visual Studio with the NASAPicOfTheDay project. After publishing the function, I can verify that it is working correctly within Visual Studio and/or the AWS Lambda console. You can learn more about building AWS Lambda functions with C# and .NET at: http://docs.aws.amazon.com/lambda/latest/dg/dotnet-programming-model.html
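Outside of Visual Studio, a quick smoke test of the deployed function is also possible from Python. This is a hypothetical check, assuming the function was published under the name NASAPicAPI (the name used in the deploy step later in this post):

import boto3, json

lambda_client = boto3.client('lambda')
response = lambda_client.invoke(FunctionName='NASAPicAPI')
# The function returns the NASA API response as a JSON string.
print(json.loads(response['Payload'].read()))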

 

Now that I have my Lambda function completed and tested, all that is left is to update the CodeBuild buildspec.yml file within my aspnetcore4tara CodeStar project to include publishing and deploying of the Lambda function.

To accomplish this, I will create a new folder named functions and copy the folder that contains my Lambda function .NET project to my aspnetcore4tara web application project directory.

 

 

To build and publish my AWS Lambda function, I will use commands in the buildspec.yml file from the aws-lambda-dotnet tools library, which helps .NET Core developers develop AWS Lambda functions. I add a file, funcprof, that contains customized profile information for use with the aws-lambda-dotnet tools, to the NASAPicOfTheDay folder. All that is left is to update the buildspec.yml file used by CodeBuild for the ASPNetCore4Tara project build to include the packaging and deployment of the NASAPicOfTheDay AWS Lambda function. The updated buildspec.yml is as follows:

version: 0.2
env:
  variables:
    basePath: 'hold'
phases:
  install:
    commands:
      - echo set basePath for project
      - basePath=$(pwd)
      - echo $basePath
      - echo Build restore and package Lambda function using AWS .NET Tools...
      - dotnet restore functions/*/NASAPicOfTheDay.csproj
      - cd functions/NASAPicOfTheDay
      - dotnet lambda package -c Release -f netcoreapp1.0 -o ../lambda_build/nasa-lambda-function.zip
  pre_build:
    commands:
      - echo Deploy Lambda function used in ASPNET application using AWS .NET Tools. Must be in path of Lambda function build 
      - cd $basePath
      - cd functions/NASAPicOfTheDay
      - dotnet lambda deploy-function NASAPicAPI -c Release -pac ../lambda_build/nasa-lambda-function.zip --profile-location funcprof -fd 'NASA API for Picture of the Day' -fn NASAPicAPI -fh NASAPicOfTheDay::NASAPicOfTheDay.SpacePic::GetNASAPicInfo -frun dotnetcore1.0 -frole arn:aws:iam::xxxxxxxxxxxx:role/lambda_exec_role -framework netcoreapp1.0 -fms 256 -ft 30  
      - echo Lambda function is now deployed - Now change directory back to Base path
      - cd $basePath
      - echo Restore started on `date`
      - dotnet restore AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
  build:
    commands:
      - echo Build started on `date`
      - dotnet publish -c release -o ./build_output AspNetCoreWebApplication/AspNetCoreWebApplication.csproj
artifacts:
  files:
    - AspNetCoreWebApplication/build_output/**/*
    - scripts/**/*
    - appspec.yml
    

That’s it! All that is left is for me to add and commit all my file additions and updates to the AWS CodeCommit Git repository provisioned for my ASPNetCore4Tara project. This kicks off the AWS CodePipeline for the project, which will now use AWS CodeBuild’s new support for .NET Core to build and deploy both the ASP.NET Core web application and the .NET AWS Lambda function.

 

Summary

The support for .NET Core in AWS CodeStar and AWS CodeBuild opens the door for .NET developers to take advantage of the benefits of Continuous Integration and Delivery when building .NET based solutions on AWS. Read more about .NET Core support in AWS CodeStar and AWS CodeBuild here, or review the product pages for AWS CodeStar and/or AWS CodeBuild for more information on using the services.

Enjoy building .NET projects more efficiently with Amazon Web Services using .NET Core with AWS CodeStar and AWS CodeBuild.

Tara

 

Take the Journey: Build Your First Serverless Web Application

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/build-your-first-serverless-application/

I realized at a young age that I really liked writing those special statements that would control the computer and make it work in the manner in which I desired. This technique of controlling the computer and building things on the machine, I learned from my teachers was called writing code, and it fascinated me. Even now, what seems like centuries later, I still get the thrill of writing code, building cool solutions, and tackling all the associated challenges of this craft. It is no wonder then, that I am a huge fan of serverless computing and serverless architectures.

Serverless Computing allows me to do what I enjoy, which is write code, without having to provision and/or configure servers. Using the AWS Serverless Platform means that all the heavy lifting of server management is handled by AWS, allowing you to focus on building your application.

If you enjoy coding like I do and have yet to dive into building serverless applications, boy do I have some sensational news for you. You can build your own serverless web application with our new Serverless Web Application Guide, which provides step-by-step instructions for you to create and deploy your serverless web application on AWS.

 

The Serverless Web Application Guide is a hands-on tutorial that will assist you in building a fully scalable, serverless web application using the following AWS Services:

  • AWS Lambda: a managed service for serverless compute that allows you to run code without provisioning or managing servers
  • Amazon S3: a managed service that provides simple, durable, scalable object storage
  • Amazon Cognito: a managed service that allows you to add user sign-up, sign-in, and data synchronization to your application
  • Amazon API Gateway: a managed service with which you can create, publish, and maintain secure APIs
  • Amazon DynamoDB: a fast and flexible NoSQL managed cloud database with support for various document and key-value storage models

The application you will build is a simple web application designed for a fictional transportation service. The application will enable users to register and log in to the website to request rides from a very unique transportation fleet. You will accomplish this by using the aforementioned AWS services with the serverless application architecture shown in the diagram below.

 
The guide breaks up the steps to build your serverless web application into five separate modules.

 

  1. Static Web Hosting: Amazon S3 hosts static web resources including HTML, CSS, JavaScript, and image files that are loaded in the user’s browser.
  2. User Management: Amazon Cognito provides user management and authentication functions to secure the backend API.
  3. Serverless Backend: Amazon DynamoDB provides a persistence layer where data can be stored by the API’s Lambda function.
  4. RESTful APIs: JavaScript executed in the browser sends and receives data from a public backend API built using AWS Lambda and API Gateway.
  5. Resource Cleanup: All the resources created throughout the tutorial will be terminated.

To be successful in building the application, you must remember to complete each module in sequential order, as the modules are dependent on resources created in the previous one. Some of the guide’s modules provide CloudFormation templates to aid you in generating the necessary resources to build the application if you do not wish to create them manually.
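To give a flavor of the Serverless Backend module, a Lambda handler of this kind typically just persists each request into DynamoDB. The following is a hedged sketch of the idea, not the guide's actual code; the table name and fields are hypothetical:

import boto3, json, uuid
from datetime import datetime

table = boto3.resource('dynamodb').Table('Rides')

def handler(event, context):
    # API Gateway proxy integration delivers the request body as a string;
    # the Cognito authorizer supplies the caller's identity claims.
    body = json.loads(event['body'])
    ride = {
        'RideId': str(uuid.uuid4()),
        'User': event['requestContext']['authorizer']['claims']['cognito:username'],
        'PickupLocation': body['PickupLocation'],
        'RequestTime': datetime.utcnow().isoformat(),
    }
    table.put_item(Item=ride)
    return {'statusCode': 201, 'body': json.dumps({'RideId': ride['RideId']})}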

 

Summary

Now that you know all about this fantastic new guide for building a serverless web application, you are ready to journey into the world of AWS serverless computing and have some fun writing the code to build the application. The guide is great for beginners and yet still has cool features that even seasoned serverless computing developers will enjoy building. And to top it off, you don’t have to worry about the cost. Each service used is eligible for the AWS Free Tier and is only estimated to cost less than $0.25 if you are outside of Free Tier usage limits.

Take the plunge today and dive into building serverless applications on the AWS serverless platform with this new and exciting Serverless Web Application Guide.

 

Tara

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomaly data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the results.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code provided below simulates a streaming application, such as an IoT device, and generates random data and anomalies into a Kinesis stream. The code generates two temperature ranges, where the first range is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name on lines 16 and 20 and the Region on line 6 to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
    import json
    import datetime
    import random
    import testdata
    from boto import kinesis
    
    kinesis = kinesis.connect_to_region("us-east-1")
    
    def getData(iotName, lowVal, highVal):
       data = {}
       data["iotName"] = iotName
       data["iotValue"] = random.randint(lowVal, highVal) 
       return data
    
    while 1:
       rnd = random.random()
       if (rnd < 0.01):
          data = json.dumps(getData("DemoSensor", 100, 120))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print '***************************** anomaly ************************* ' + data
       else:
          data = json.dumps(getData("DemoSensor", 10, 20))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print data

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role, and eventually the size of the attached policies causes IAM to reject the role in a non-obvious way: the console simply stops functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;

 

  1. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot. Note that you are selecting data from the source stream name Source_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the RANDOM_CUT_FOREST function calculates based on the temperature ranges provided by the Python script. Higher (anomaly) temperature ranges have a higher score.
    Looking at the SQL code, note that the first two blocks of code create two new streams to store temporary data and the final result. The third block of code analyzes the raw source data using the RANDOM_CUT_FOREST function via the pump STREAM_PUMP_1. It calculates an anomaly score (ANOMALY_SCORE) and inserts it into the TEMP_STREAM stream. The final code block loads the result stored in TEMP_STREAM into DESTINATION_SQL_STREAM.
  2. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query.
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.

 


About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

 

 

 

 

Prepare for the OWASP Top 10 Web Application Vulnerabilities Using AWS WAF and Our New White Paper

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/prepare-for-the-owasp-top-10-web-application-vulnerabilities-using-aws-waf-and-our-new-white-paper/

Are you aware of the Open Web Application Security Project (OWASP) and the work that they do to improve the security of web applications? Among many other things, they publish a list of the 10 most critical application security flaws, known as the OWASP Top 10. The release candidate for the 2017 version contains a consensus view of common vulnerabilities often found in web sites and web applications.

AWS WAF, as I described in my blog post, New – AWS WAF, helps to protect your application from application-layer attacks such as SQL injection and cross-site scripting. You can create custom rules to define the types of traffic that are accepted or rejected.

Our new white paper, Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities, shows you how to put AWS WAF to use. Going far beyond a simple recommendation to “use WAF,” it includes detailed, concrete mitigation strategies and implementation details for the most important items in the OWASP Top 10 (formally known as A1 through A10):

Download Today
The white paper provides background and context for each vulnerability, and then shows you how to create WAF rules to identify and block them. It also provides some defense-in-depth recommendations, including a very cool suggestion to use Lambda@Edge to prevalidate the parameters supplied to HTTP requests.

The white paper links to a companion AWS CloudFormation template that creates a Web ACL, along with the recommended condition types and rules. You can use this template as a starting point for your own work, adding more condition types and rules as desired.

AWSTemplateFormatVersion: '2010-09-09'
Description: AWS WAF Basic OWASP Example Rule Set

## ::PARAMETERS::
## Template parameters to be configured by user
Parameters:
  stackPrefix:
    Type: String
    Description: The prefix to use when naming resources in this stack. Normally we would use the stack name, but since this template can be used as a resource in other stacks we want to keep the naming consistent. No symbols allowed.
    ConstraintDescription: Alphanumeric characters only, maximum 10 characters
    AllowedPattern: ^[a-zA-Z0-9]+$
    MaxLength: 10
    Default: generic
  stackScope:
    Type: String
    Description: You can deploy this stack at a regional level, for regional WAF targets like Application Load Balancers, or for global targets, such as Amazon CloudFront distributions.
    AllowedValues:
      - Global
      - Regional
    Default: Regional
...
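If you prefer to launch the companion template programmatically rather than through the console, a minimal boto3 sketch looks like this (the file and stack names are placeholders):

import boto3

cfn = boto3.client('cloudformation')
with open('owasp-top10-waf.yaml') as f:
    cfn.create_stack(
        StackName='owasp-waf-example',
        TemplateBody=f.read(),
        Parameters=[
            {'ParameterKey': 'stackPrefix', 'ParameterValue': 'demo'},
            {'ParameterKey': 'stackScope', 'ParameterValue': 'Regional'},
        ])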

Attend our Webinar
If you would like to learn more about the topics discussed in this new white paper, please plan to attend our upcoming webinar, Secure Your Applications with AWS Web Application Firewall (WAF) and AWS Shield. On July 12, 2017, my colleagues Jeffrey Lyon and Sundar Jayashekar will show you how to secure your web applications and how to defend against the most common Layer 7 attacks.

Jeff;

 

 

 

New Security Whitepaper Now Available: Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities

Post Syndicated from Vlad Vlasceanu original https://aws.amazon.com/blogs/security/new-security-whitepaper-now-available-use-aws-waf-to-mitigate-owasps-top-10-web-application-vulnerabilities/

Whitepaper image

Today, we released a new security whitepaper: Use AWS WAF to Mitigate OWASP’s Top 10 Web Application Vulnerabilities. This whitepaper describes how you can use AWS WAF, a web application firewall, to address the top application security flaws as named by the Open Web Application Security Project (OWASP). Using AWS WAF, you can write rules to match patterns of exploitation attempts in HTTP requests and block requests from reaching your web servers. This whitepaper discusses manifestations of these security vulnerabilities, AWS WAF–based mitigation strategies, and other AWS services or solutions that can help address these threats.

– Vlad

Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

By Suhas Sadanandan, Director of Engineering 

When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.


New Yahoo Mail Tech Stack

In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:

Performance

A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.

Reliability

With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

Product agility and launch velocity

We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

Developer effectiveness and satisfaction

In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasize predictable flows and easy debugging.

Accessibility

The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

Open source 

We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5-minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accordance with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
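
If you prefer to script this setup instead of clicking through the console, the same rule can be created through the API. Below is a minimal sketch using the AWS SDK for .NET; the rule name, metric name, and 2,000-request threshold are illustrative assumptions rather than values taken from the walkthrough above.

// Sketch: create the rate-based rule programmatically
// (requires the AWSSDK.WAF package; Amazon.WAF and Amazon.WAF.Model namespaces).
public static async Task CreateLoginRateRuleAsync()
{
    // Classic (global) WAF client, as used with CloudFront distributions.
    using (var waf = new AmazonWAFClient())
    {
        // Every mutating WAF call requires a fresh change token.
        var changeToken = (await waf.GetChangeTokenAsync(new GetChangeTokenRequest())).ChangeToken;

        var response = await waf.CreateRateBasedRuleAsync(new CreateRateBasedRuleRequest
        {
            Name = "ProtectLogin",          // illustrative names and threshold
            MetricName = "ProtectLogin",
            RateKey = RateKey.IP,           // count requests per source IP address
            RateLimit = 2000,               // requests allowed per 5-minute window
            ChangeToken = changeToken
        });

        Console.WriteLine("Created rate-based rule {0}", response.Rule.RuleId);

        // Not shown: add the /login string match condition with UpdateRateBasedRule,
        // then reference the rule from a web ACL, as in the console walkthrough above.
    }
}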

Available Now
The new Rate-based Rules are available now, and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporally decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then the order is lost, with no recourse to restore it.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. However, this wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried, or redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 \
  --comparison-operator LessThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

In the above example, use the ApproximateNumberOfMessagesVisible metric to discover the queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);

            // Wait for the call to complete so that the client is not
            // disposed while the request is still in flight.
            receiveMessageResponse.Wait();
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible on the queue again. Other nodes may then pick it up and process the same order twice, leading to unintended consequences.
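
If you discover at runtime that a particular order needs more time than the timeout you chose, one option is to extend the visibility timeout for that specific message while you continue working on it. The following is a rough sketch of that approach; the five-minute extension is an arbitrary example, and the _queueUrl field is reused from the earlier snippets.

// Sketch: extend the visibility timeout of a message that needs more processing
// time, so that other consumers do not receive it prematurely
// (requires the Amazon.SQS and Amazon.SQS.Model namespaces).
private static async Task ExtendVisibilityAsync(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        await sqs.ChangeMessageVisibilityAsync(new ChangeMessageVisibilityRequest
        {
            QueueUrl = _queueUrl,
            ReceiptHandle = message.ReceiptHandle,
            VisibilityTimeout = 300   // keep the message hidden for another 5 minutes
        });
    }
}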

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a message log of all messages attributes processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of the message in the log before processing the message adds additional computational overhead to your processing. This can be minimized through low latency persistence solutions such as Amazon DynamoDB. Bear in mind that this solution is dependent on the successful, distributed transaction of the message and the message log.
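
As a rough sketch of the message-log strategy from the last bullet, a conditional write to DynamoDB can serve as the "have I already processed this message?" check. The ProcessedMessages table and its MessageId key below are hypothetical and would need to be created separately.

// Sketch: record a message ID before processing, using a conditional write
// (requires the Amazon.DynamoDBv2, Amazon.DynamoDBv2.Model, and
// System.Collections.Generic namespaces). Returns true if the message is new.
private static async Task<bool> TryRecordMessageAsync(string messageId)
{
    using (var dynamo = new AmazonDynamoDBClient())
    {
        try
        {
            // The condition makes the write fail if another node has
            // already recorded this message ID.
            await dynamo.PutItemAsync(new PutItemRequest
            {
                TableName = "ProcessedMessages",   // hypothetical table
                Item = new Dictionary<string, AttributeValue>
                {
                    { "MessageId", new AttributeValue { S = messageId } }
                },
                ConditionExpression = "attribute_not_exists(MessageId)"
            });
            return true;    // first time seen; safe to process
        }
        catch (ConditionalCheckFailedException)
        {
            return false;   // duplicate; skip processing
        }
    }
}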

Handling exceptions

Because of the distributed nature of SQS queues, the service does not automatically delete a message after it has been received. Therefore, you must explicitly delete the message from the queue after processing it, using the message ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends up back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The required processing rates can then be achieved by modifying the poll rates and scalability settings that I have already discussed.
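
As an illustration, here is a minimal sketch of what such a dispatcher might look like; the queue URLs are placeholders, and the Order type with its Total property is an assumption carried over from the earlier examples.

// Sketch: route an order to the priority or standard queue based on its value
// (requires the Amazon.SQS, Amazon.SQS.Model, and Newtonsoft.Json namespaces).
public class OrderDispatcher
{
    private const decimal PriorityThreshold = 5000m;
    private readonly string _priorityQueueUrl;   // placeholder queue URLs
    private readonly string _standardQueueUrl;

    public OrderDispatcher(string priorityQueueUrl, string standardQueueUrl)
    {
        _priorityQueueUrl = priorityQueueUrl;
        _standardQueueUrl = standardQueueUrl;
    }

    public async Task DispatchAsync(Order order)
    {
        // Orders over $5000 go to the priority queue; everything else
        // goes to the standard queue. The processing logic is unchanged.
        var targetQueueUrl = order.Total > PriorityThreshold
            ? _priorityQueueUrl
            : _standardQueueUrl;

        using (var sqs = new AmazonSQSClient())
        {
            await sqs.SendMessageAsync(new SendMessageRequest
            {
                QueueUrl = targetQueueUrl,
                MessageBody = JsonConvert.SerializeObject(order)
            });
        }
    }
}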

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, and AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.
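
From the web application’s point of view, the change is small: instead of sending the order to a queue, it publishes the order to the topic, and SNS takes care of delivering copies to every subscriber. The following is a minimal sketch of that publish call; the topic ARN is a placeholder, and the Order type is assumed from the earlier examples.

// Sketch: publish an order to an SNS topic for fanout to its subscribers
// (requires the Amazon.SimpleNotificationService,
// Amazon.SimpleNotificationService.Model, and Newtonsoft.Json namespaces).
private static async Task PublishOrderAsync(Order order)
{
    using (var sns = new AmazonSimpleNotificationServiceClient())
    {
        await sns.PublishAsync(new PublishRequest
        {
            // Placeholder ARN; the SQS queues and Lambda functions are
            // attached as subscriptions on the topic, not in this code.
            TopicArn = "arn:aws:sns:us-east-1:123456789012:customer-orders",
            Message = JsonConvert.SerializeObject(order)
        });
    }
}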

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You’ll find tutorials on all the concepts covered in this post, and more. To get started, use the AWS Management Console or the SDK of your choice.

Happy messaging!

Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway

Post Syndicated from Ed Lima original https://aws.amazon.com/blogs/compute/secure-api-access-with-amazon-cognito-federated-identities-amazon-cognito-user-pools-and-amazon-api-gateway/

Ed Lima, Solutions Architect

 

Our identities are what define us as human beings. Philosophical discussions aside, it also applies to our day-to-day lives. For instance, I need my work badge to get access to my office building or my passport to travel overseas. My identity in this case is attached to my work badge or passport. As part of the system that checks my access, these documents or objects help define whether I have access to get into the office building or travel internationally.

This exact same concept can also be applied to cloud applications and APIs. To provide secure access to your application users, you define who can access the application resources and what kind of access can be granted. Access is based on identity controls that can confirm authentication (AuthN) and authorization (AuthZ), which are different concepts. According to Wikipedia:

 

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that “you are who you say you are,” authorization is the process of verifying that “you are permitted to do what you are trying to do.” This does not mean authorization presupposes authentication; an anonymous agent could be authorized to a limited action set.

Amazon Cognito allows building, securing, and scaling a solution to handle user management and authentication, and to sync across platforms and devices. In this post, I discuss the different ways that you can use Amazon Cognito to authenticate API calls to Amazon API Gateway and secure access to your own API resources.

 

Amazon Cognito Concepts

 

It’s important to understand that Amazon Cognito provides three different services: Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon Cognito Sync.

Today, I discuss the use of the first two. One service doesn’t need the other to work; however, they can be configured to work together.
 

Amazon Cognito Federated Identities

 
To use Amazon Cognito Federated Identities in your application, create an identity pool. An identity pool is a store of user data specific to your account. It can be configured to require an identity provider (IdP) for user authentication, after you enter details such as app IDs or keys related to that specific provider.

After the user is validated, the provider sends an identity token to Amazon Cognito Federated Identities. In turn, Amazon Cognito Federated Identities contacts the AWS Security Token Service (AWS STS) to retrieve temporary AWS credentials based on a configured, authenticated IAM role linked to the identity pool. The role has appropriate IAM policies attached to it and uses these policies to provide access to other AWS services.
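
As an illustration, a .NET client can obtain those temporary credentials through the CognitoAWSCredentials helper in the AWS SDK for .NET. The snippet below is a minimal sketch; the identity pool ID and region are placeholders, and the Amazon S3 client at the end is only one example of a service the resulting credentials could be used with.

// Sketch: obtain temporary AWS credentials from an identity pool and use them
// with another AWS service (requires the AWSSDK.CognitoIdentity and AWSSDK.S3
// packages; Amazon, Amazon.CognitoIdentity, and Amazon.S3 namespaces).
var credentials = new CognitoAWSCredentials(
    "us-east-1:00000000-0000-0000-0000-000000000000",   // placeholder identity pool ID
    RegionEndpoint.USEast1);

// For an authenticated identity, attach the token returned by the IdP first, e.g.:
// credentials.AddLogin("graph.facebook.com", facebookAccessToken);

// The temporary credentials are scoped by the IAM role attached to the
// identity pool, so they allow only what that role's policies permit.
var s3Client = new AmazonS3Client(credentials, RegionEndpoint.USEast1);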

Amazon Cognito Federated Identities currently supports the IdPs listed in the following graphic.

 



Continue reading Secure API Access with Amazon Cognito Federated Identities, Amazon Cognito User Pools, and Amazon API Gateway