Tag Archives: Financial Services

Automate thousands of mainframe tests on AWS with the Micro Focus Enterprise Suite

Post Syndicated from Kevin Yung original https://aws.amazon.com/blogs/devops/automate-mainframe-tests-on-aws-with-micro-focus/

Micro Focus, an AWS Advanced Technology Partner, is a global infrastructure software company with 40 years of experience in delivering and supporting enterprise software.

We have seen that mainframe customers often encounter scalability constraints and can't scale their development and test capacity to meet business requirements. These constraints can lead to delays, reduce product or feature releases, and leave teams unable to respond to market requirements. Furthermore, limits in capacity and scale often affect the quality of changes deployed, and are linked to unplanned or unexpected downtime in products or services.

The conventional approach to address these constraints is to scale up, meaning to increase MIPS/MSU capacity of the mainframe hardware available for development and testing. The cost of this approach, however, is excessively high, and to ensure time to market, you may reject this approach at the expense of quality and functionality. If you’re wrestling with these challenges, this post is written specifically for you.

To accompany this post, we developed an AWS prescriptive guidance (APG) pattern for developer instances and CI/CD pipelines: Mainframe Modernization: DevOps on AWS with Micro Focus.

Overview of solution

In the APG, we introduce DevOps automation and AWS CI/CD architecture to support mainframe application development. Our solution enables you to embrace both Test Driven Development (TDD) and Behavior Driven Development (BDD). Mainframe developers and testers can automate the tests in CI/CD pipelines so they’re repeatable and scalable. To speed up automated mainframe application tests, the solution uses team pipelines to run functional and integration tests frequently, and uses systems test pipelines to run comprehensive regression tests on demand. For more information about the pipelines, see Mainframe Modernization: DevOps on AWS with Micro Focus.

In this post, we focus on how to automate and scale mainframe application tests in AWS. We show you how to use AWS services and Micro Focus products to automate mainframe application tests with best practices. The solution can scale your mainframe application CI/CD pipeline to run thousands of tests in AWS within minutes, and you only pay a fraction of your current on-premises cost.

The following diagram illustrates the solution architecture.

Mainframe DevOps On AWS Architecture Overview: on the left is the conventional mainframe development environment, and on the right are the CI/CD pipelines for mainframe tests in AWS

Figure: Mainframe DevOps On AWS Architecture Overview


Best practices

Before we get into the details of the solution, let’s recap the following mainframe application testing best practices:

  • Create a “test first” culture by writing tests for mainframe application code changes
  • Automate preparing and running tests in the CI/CD pipelines
  • Provide fast and quality feedback to project management throughout the SDLC
  • Assess and increase test coverage
  • Scale your test’s capacity and speed in line with your project schedule and requirements

Automated smoke test

In this architecture, mainframe developers can automate running functional smoke tests for new changes. This testing phase typically “smokes out” regression of core and critical business functions. You can achieve these tests using tools such as py3270 with x3270 or Robot Framework Mainframe 3270 Library.

The following code shows a feature test written in Behave and the test steps using py3270:

# home_loan_calculator.feature
Feature: calculate home loan monthly repayment
  the bankdemo application provides a monthly home loan repayment calculator
  The user inputs the home loan amount, interest rate, and loan maturity in months into the transaction.
  The user is shown the monthly home loan repayment amount

  Scenario Outline: As a customer I want to calculate my monthly home loan repayment via a transaction
      Given home loan amount is <amount>, interest rate is <interest rate> and maturity date is <maturity date in months> months 
       When the transaction is submitted to the home loan calculator
       Then it shall show the monthly repayment of <monthly repayment>

    Examples: Homeloan
      | amount  | interest rate | maturity date in months | monthly repayment |
      | 1000000 | 3.29          | 300                     | $4894.31          |


# home_loan_calculator_steps.py
import os

from behave import given, then, when
from py3270 import Emulator

@given("home loan amount is {amount}, interest rate is {rate} and maturity date is {maturity_date} months")
def step_impl(context, amount, rate, maturity_date):
    context.home_loan_amount = amount
    context.interest_rate = rate
    context.maturity_date_in_months = maturity_date

@when("the transaction is submitted to the home loan calculator")
def step_impl(context):
    # Set up connection parameters
    tn3270_host = os.getenv('TN3270_HOST')
    tn3270_port = os.getenv('TN3270_PORT')
    # Set up the TN3270 connection
    em = Emulator(visible=False, timeout=120)
    em.connect(tn3270_host + ':' + tn3270_port)
    em.wait_for_field()
    # Log in on the sign-on screen
    em.fill_field(10, 44, 'b0001', 5)
    em.send_enter()
    # Input screen fields for the home loan calculator
    em.wait_for_field()
    em.fill_field(8, 46, context.home_loan_amount, 7)
    em.fill_field(10, 46, context.interest_rate, 7)
    em.fill_field(12, 46, context.maturity_date_in_months, 7)
    em.send_enter()
    em.wait_for_field()

    # Collect the monthly repayment output from the screen
    context.monthly_repayment = em.string_get(14, 46, 9)
    em.terminate()

@then("it shall show the monthly repayment of {amount}")
def step_impl(context, amount):
    print("expected amount is " + amount.strip() + ", and the result from screen is " + context.monthly_repayment.strip())
    assert amount.strip() == context.monthly_repayment.strip()

To run this functional test in Micro Focus Enterprise Test Server (ETS), we use AWS CodeBuild.

We first need to build an Enterprise Test Server Docker image and push it to an Amazon Elastic Container Registry (Amazon ECR) registry. For instructions, see Using Enterprise Test Server with Docker.
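The push step can also be scripted. The following is a minimal sketch in Python, assuming the Docker CLI is available, the image is built from a local Dockerfile, and the systems-test/ets repository already exists in Amazon ECR:

# push_ets_image.py - minimal sketch: authenticate to Amazon ECR, then build and
# push the Enterprise Test Server image. Repository name and tag are assumptions.
import base64
import subprocess

import boto3

ecr = boto3.client('ecr', region_name='us-east-1')

# Retrieve a temporary Docker credential for the registry
auth = ecr.get_authorization_token()['authorizationData'][0]
user, password = base64.b64decode(auth['authorizationToken']).decode().split(':')
registry = auth['proxyEndpoint'].replace('https://', '')

image = registry + '/systems-test/ets:latest'
subprocess.run(['docker', 'login', '--username', user, '--password-stdin', registry],
               input=password.encode(), check=True)
subprocess.run(['docker', 'build', '-t', image, '.'], check=True)
subprocess.run(['docker', 'push', image], check=True)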

Next, we create a CodeBuild project and use the Enterprise Test Server Docker image in its configuration.

The following is an example AWS CloudFormation code snippet of a CodeBuild project that uses Windows Container and Enterprise Test Server:

  BddTestBankDemoStage:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub '${AWS::StackName}BddTestBankDemo'
      LogsConfig:
        CloudWatchLogs:
          Status: ENABLED
      Artifacts:
        Type: CODEPIPELINE
        EncryptionDisabled: true
      Environment:
        ComputeType: BUILD_GENERAL1_LARGE
        Image: !Sub "${EnterpriseTestServerDockerImage}:latest"
        ImagePullCredentialsType: SERVICE_ROLE
        Type: WINDOWS_SERVER_2019_CONTAINER
      ServiceRole: !Ref CodeBuildRole
      Source:
        Type: CODEPIPELINE
        BuildSpec: bdd-test-bankdemo-buildspec.yaml

In the CodeBuild project, we need to create a buildspec to orchestrate the commands for preparing the Micro Focus Enterprise Test Server CICS environment and issue the test command. In the buildspec, we define the location for CodeBuild to look for test reports and upload them into the CodeBuild report group. The following buildspec code uses custom scripts DeployES.ps1 and StartAndWait.ps1 to start your CICS region, and runs Python Behave BDD tests:

version: 0.2
phases:
  build:
    commands:
      - |
        # Run Command to start Enterprise Test Server
        CD C:\
        .\DeployES.ps1
        .\StartAndWait.ps1

        py -m pip install behave

        Write-Host "waiting for server to be ready ..."
        do {
          Write-Host "..."
          sleep 3  
        } until(Test-NetConnection 127.0.0.1 -Port 9270 | ? { $_.TcpTestSucceeded } )

        CD C:\tests\features
        MD C:\tests\reports
        $Env:Path += ";c:\wc3270"

        $address=(Get-NetIPAddress -AddressFamily Ipv4 | where { $_.IPAddress -Match "172\.*" })
        $Env:TN3270_HOST = $address.IPAddress
        $Env:TN3270_PORT = "9270"
        
        behave.exe --color --junit --junit-directory C:\tests\reports
reports:
  bankdemo-bdd-test-report:
    files: 
      - '**/*'
    base-directory: "C:\\tests\\reports"

In the smoke test, the team may run both unit tests and functional tests. Ideally, these tests run in parallel to speed up the pipeline. In AWS CodePipeline, we can set up a stage to run multiple actions in parallel. In our example, the pipeline runs both BDD tests and Robot Framework (RPA) tests.

The following CloudFormation code snippet runs two different tests. You use the same RunOrder value to indicate the actions run in parallel.

#...
        - Name: Tests
          Actions:
            - Name: RunBDDTest
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName: !Ref BddTestBankDemoStage
                PrimarySource: Config
              InputArtifacts:
                - Name: DemoBin
                - Name: Config
              RunOrder: 1
            - Name: RunRbTest
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName : !Ref RpaTestBankDemoStage
                PrimarySource: Config
              InputArtifacts:
                - Name: DemoBin
                - Name: Config
              RunOrder: 1  
#...

The following screenshot shows the example actions on the CodePipeline console that use the preceding code.

Screenshot of CodePipeline running tests in parallel using the same RunOrder value

Figure – Screenshot of CodePipeline parallel test execution

Both BDD and RPA tests produce JUnit format reports, which CodeBuild can ingest and show on the CodeBuild console. This is a great way for project management and business users to track the quality trend of an application. The following screenshot shows the CodeBuild report generated from the BDD tests.

CodeBuild report generated from the BDD tests showing 100% pass rate

Figure – CodeBuild report generated from the BDD tests

Automated regression tests

After you test the changes in the project team pipeline, you can automatically promote them to another stream with other team members’ changes for further testing. The scope of this testing stream is significantly more comprehensive, with a greater number and wider range of tests and higher volume of test data. The changes promoted to this stream by each team member are tested in this environment at the end of each day throughout the life of the project. This provides a high-quality delivery to production, with new code and changes to existing code tested together with hundreds or thousands of tests.

In enterprise architecture, it’s commonplace to see an application client consuming web services APIs exposed from a mainframe CICS application. One approach to do regression tests for mainframe applications is to use Micro Focus Verastream Host Integrator (VHI) to record and capture 3270 data stream processing and encapsulate these 3270 data streams as business functions, which in turn are packaged as web services. When these web services are available, they can be consumed by a test automation product, which in our environment is Micro Focus UFT One. This uses the Verastream server as the orchestration engine that translates the web service requests into 3270 data streams that integrate with the mainframe CICS application. The application is deployed in Micro Focus Enterprise Test Server.

The following diagram shows the end-to-end testing components.

Regression test end-to-end components, using ECS containers for Enterprise Test Server, Verastream Host Integrator, and UFT One; all integration points use Network Load Balancers

Figure – Regression Test Infrastructure end-to-end Setup

To ensure we have the coverage required for large mainframe applications, we sometimes need to run thousands of tests against very large production volumes of test data. We want the tests to complete as quickly as possible to reduce AWS costs, because we only pay for the infrastructure during the life of the test environment, from provisioning through test execution.

Therefore, the design of the test environment needs to scale out. The batch feature in CodeBuild allows you to run tests in batches and in parallel rather than serially. Furthermore, our solution needs to minimize interference between batches, so that a failure in one batch doesn't affect another running in parallel. The following diagram depicts the high-level design, with each batch build running in its own independent infrastructure. Each infrastructure is launched as part of test preparation, and then torn down in the post-test phase.

Regression tests in a CodeBuild project set up to use batch mode, with three batches running in independent infrastructure with containers

Figure – Regression tests in a CodeBuild project set up to use batch mode

Building and deploying regression test components

Following the design of the parallel regression test environment, let's look at how we build each component and how they are deployed. The following steps to build our regression tests use a working-backward approach, starting from deployment in the Enterprise Test Server:

  1. Create a batch build in CodeBuild.
  2. Deploy to Enterprise Test Server.
  3. Deploy the VHI model.
  4. Deploy UFT One Tests.
  5. Integrate UFT One into CodeBuild and CodePipeline and test the application.

Creating a batch build in CodeBuild

We update two components to enable a batch build. First, in the CodePipeline CloudFormation resource, we set BatchEnabled to true for the test stage. The UFT One test preparation stage uses the CloudFormation template to create the test infrastructure. The following code is an example of the AWS CloudFormation snippet with batch build enabled:

#...
        - Name: SystemsTest
          Actions:
            - Name: Uft-Tests
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: 1
              Configuration:
                ProjectName : !Ref UftTestBankDemoProject
                PrimarySource: Config
                BatchEnabled: true
                CombineArtifacts: true
              InputArtifacts:
                - Name: Config
                - Name: DemoSrc
              OutputArtifacts:
                - Name: TestReport                
              RunOrder: 1
#...

Second, in the buildspec configuration of the test stage, we provide a build matrix setting. We use the custom environment variable TEST_BATCH_NUMBER to indicate which set of tests runs in each batch. See the following code:

version: 0.2
batch:
  fast-fail: true
  build-matrix:
    static:
      ignore-failure: false
    dynamic:
      env:
        variables:
          TEST_BATCH_NUMBER:
            - 1
            - 2
            - 3 
phases:
  pre_build:
    commands:
#...

After setting up the batch build, CodeBuild creates multiple batches when the build starts. The following screenshot shows the batches on the CodeBuild console.

Regression tests CodeBuild project run in batch mode, with three batches running in parallel successfully

Figure – Regression tests CodeBuild project run in batch mode
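Batch builds can also be started and monitored programmatically. The following is a minimal boto3 sketch; the project name is a placeholder for your regression test CodeBuild project:

# Start a batch build and poll it until it reaches a terminal state - illustrative only
import time

import boto3

codebuild = boto3.client('codebuild')

batch_id = codebuild.start_build_batch(projectName='UftTestBankDemoProject')['buildBatch']['id']

while True:
    batch = codebuild.batch_get_build_batches(ids=[batch_id])['buildBatches'][0]
    status = batch['buildBatchStatus']
    print('batch ' + batch_id + ': ' + status)
    if status != 'IN_PROGRESS':
        break
    time.sleep(30)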

Deploying to Enterprise Test Server

ETS is the transaction engine that processes all the online (and batch) requests that are initiated through external clients, such as 3270 terminals, web services, and WebSphere MQ. This engine provides support for various mainframe subsystems, such as CICS, IMS TM, and JES, as well as code-level support for COBOL and PL/I. The following screenshot shows the Enterprise Test Server administration page.

Enterprise Server Administrator window showing configuration for CICS

Figure – Enterprise Server Administrator window

In this mainframe application testing use case, the regression tests are CICS transactions, initiated from 3270 requests (encapsulated in a web service). For more information about Enterprise Test Server, see the Enterprise Test Server and Micro Focus websites.

In the regression pipeline, after the mainframe artifact compilation stage, we bake the artifact into an ETS Docker image and upload the image to an Amazon ECR repository. This way, we have an immutable artifact for all the tests.

During each batch’s test preparation stage, a CloudFormation stack is deployed to create an Amazon ECS service on Windows EC2 instances. The stack uses a Network Load Balancer as the integration point for VHI.

The following code is an example of the CloudFormation snippet to create an Amazon ECS service using an Enterprise Test Server Docker image:

#...
  EtsService:
    DependsOn:
    - EtsTaskDefinition
    - EtsContainerSecurityGroup
    - EtsLoadBalancerListener
    Properties:
      Cluster: !Ref 'WindowsEcsClusterArn'
      DesiredCount: 1
      LoadBalancers:
        -
          ContainerName: !Sub "ets-${AWS::StackName}"
          ContainerPort: 9270
          TargetGroupArn: !Ref EtsPort9270TargetGroup
      HealthCheckGracePeriodSeconds: 300          
      TaskDefinition: !Ref 'EtsTaskDefinition'
    Type: "AWS::ECS::Service"

  EtsTaskDefinition:
    Properties:
      ContainerDefinitions:
        -
          Image: !Sub "${AWS::AccountId}.dkr.ecr.us-east-1.amazonaws.com/systems-test/ets:latest"
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref 'SystemsTestLogGroup'
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: ets
          Name: !Sub "ets-${AWS::StackName}"
          Cpu: 4096
          Memory: 8192
          PortMappings:
            -
              ContainerPort: 9270
          EntryPoint:
          - "powershell.exe"
          Command: 
          - '-F'
          - .\StartAndWait.ps1
          - 'bankdemo'
          - C:\bankdemo\
          - 'wait'
      Family: systems-test-ets
    Type: "AWS::ECS::TaskDefinition"
#...

Deploying the VHI model

In this architecture, VHI is a bridge between the mainframe and clients.

We use the VHI designer to capture the 3270 data streams and encapsulate the relevant data streams into a business function. We can then deliver this function as a web service that can be consumed by a test management solution, such as Micro Focus UFT One.

The following screenshot shows the setup for getCheckingDetails in VHI. Alongside this procedure, we can also see other procedures (for example, calcCostLoan) that get generated as web services. The properties associated with this procedure are available on this screen to allow defining the mapping of fields between the associated 3270 screens and the exposed web service.

example of VHI designer to capture the 3270 data streams and encapsulate the relevant data streams into a business function getCheckingDetails

Figure – Setup for getCheckingDetails in VHI

The following screenshot shows the editor for this procedure, which is opened by selecting the Procedure Editor. This screen presents the 3270 screens that are involved in the business function that will be generated as a web service.

VHI designer Procedure Editor shows the procedure

Figure – VHI designer Procedure Editor shows the procedure

After you define the required functional web services in VHI designer, the resultant model is saved and deployed into a VHI Docker image. We use this image and the associated model (from VHI designer) in the pipeline outlined in this post.

For more information about VHI, see the VHI website.

The pipeline contains two steps to deploy a VHI service. First, it installs and sets up the VHI models into a VHI Docker image, which is pushed to Amazon ECR. Second, a CloudFormation stack is deployed to create an Amazon ECS Fargate service, which uses the latest built Docker image. In AWS CloudFormation, the VHI ECS task definition defines an environment variable for the ETS Network Load Balancer's DNS name, so the VHI can bootstrap and point to an ETS service. The VHI stack uses a Network Load Balancer as the integration point for UFT One test integration.

The following code is an example of an ECS task definition CloudFormation snippet that creates a VHI service in Amazon ECS Fargate and integrates it with an ETS server:

#...
  VhiTaskDefinition:
    DependsOn:
    - EtsService
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: systems-test-vhi
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ExecutionRoleArn: !Ref FargateEcsTaskExecutionRoleArn
      Cpu: 2048
      Memory: 4096
      ContainerDefinitions:
        - Cpu: 2048
          Name: !Sub "vhi-${AWS::StackName}"
          Memory: 4096
          Environment:
            - Name: esHostName 
              Value: !GetAtt EtsInternalLoadBalancer.DNSName
            - Name: esPort
              Value: 9270
          Image: !Sub "${AWS::AccountId}.dkr.ecr.us-east-1.amazonaws.com/systems-test/vhi:latest"
          PortMappings:
            - ContainerPort: 9680
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: !Ref 'SystemsTestLogGroup'
              awslogs-region: !Ref 'AWS::Region'
              awslogs-stream-prefix: vhi

#...

Deploying UFT One Tests

UFT One is a test client that uses each of the web services created by the VHI designer to orchestrate running each of the associated business functions. Parameter data is supplied to each function, and validations are configured against the data returned. Multiple test suites are configured with different business functions with the associated data.

The following screenshot shows the test suite API_Bankdemo3, which is used in this regression test process.

the screenshot shows the test suite API_Bankdemo3 in UFT One test setup console, the API setup for getCheckingDetails

Figure – API_Bankdemo3 in UFT One Test Editor Console

For more information, see the UFT One website.

Integrating UFT One and testing the application

The last step is to integrate UFT One into CodeBuild and CodePipeline to test our mainframe application. First, we set up CodeBuild to use a UFT One container. The Docker image is available in Docker Hub. Then we author our buildspec. The buildspec has the following three phases:

  • Setting up a UFT One license and deploying the test infrastructure
  • Starting the UFT One test suite to run regression tests
  • Tearing down the test infrastructure after tests are complete

The following code is an example of a buildspec snippet in the pre_build stage. The snippet shows the command to activate the UFT One license:

version: 0.2
batch: 
# . . .
phases:
  pre_build:
    commands:
      - |
        # Activate License
        $process = Start-Process -NoNewWindow -RedirectStandardOutput LicenseInstall.log -Wait -File 'C:\Program Files (x86)\Micro Focus\Unified Functional Testing\bin\HP.UFT.LicenseInstall.exe' -ArgumentList @('concurrent', 10600, 1, ${env:AUTOPASS_LICENSE_SERVER})        
        Get-Content -Path LicenseInstall.log
        if (Select-String -Path LicenseInstall.log -Pattern 'The installation was successful.' -Quiet) {
          Write-Host 'Licensed Successfully'
        } else {
          Write-Host 'License Failed'
          exit 1
        }
#...

The following command in the buildspec deploys the test infrastructure using the AWS Command Line Interface (AWS CLI):

aws cloudformation deploy --stack-name $stack_name `
--template-file cicd-pipeline/systems-test-pipeline/systems-test-service.yaml `
--parameter-overrides EcsCluster=$cluster_arn `
--capabilities CAPABILITY_IAM

Because ETS and VHI are both deployed with a load balancer, the build detects when the load balancers become healthy before starting the tests. The following AWS CLI commands detect the load balancer’s target group health:

$vhi_health_state = (aws elbv2 describe-target-health --target-group-arn $vhi_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)
$ets_health_state = (aws elbv2 describe-target-health --target-group-arn $ets_target_group_arn --query 'TargetHealthDescriptions[0].TargetHealth.State' --output text)          
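The commands above check health a single time; the build then needs to poll until both target groups report healthy targets before proceeding. A minimal sketch of that polling logic in Python (boto3), with the target group ARNs as placeholders, could look like this:

import time

import boto3

elbv2 = boto3.client('elbv2')

def wait_until_healthy(target_group_arn, delay=15):
    # Poll the target group until at least one registered target reports 'healthy'
    while True:
        health = elbv2.describe_target_health(TargetGroupArn=target_group_arn)
        targets = health['TargetHealthDescriptions']
        if targets and targets[0]['TargetHealth']['State'] == 'healthy':
            return
        time.sleep(delay)

vhi_target_group_arn = '...'  # placeholder: read from CloudFormation stack outputs
ets_target_group_arn = '...'
wait_until_healthy(vhi_target_group_arn)
wait_until_healthy(ets_target_group_arn)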

When the targets are healthy, the build moves into the build stage, and it uses the UFT One command line to start the tests. See the following code:

$process = Start-Process -Wait  -NoNewWindow -RedirectStandardOutput UFTBatchRunnerCMD.log `
-FilePath "C:\Program Files (x86)\Micro Focus\Unified Functional Testing\bin\UFTBatchRunnerCMD.exe" `
-ArgumentList @("-source", "${env:CODEBUILD_SRC_DIR_DemoSrc}\bankdemo\tests\API_Bankdemo\API_Bankdemo${env:TEST_BATCH_NUMBER}")

The next release of Micro Focus UFT One (November or December 2020) will provide an exit status to indicate a test’s success or failure.

When the tests are complete, the post_build stage tears down the test infrastructure. The following AWS CLI command tears down the CloudFormation stack:


#...
  post_build:
    finally:
      - |
        Write-Host "Clean up ETS, VHI Stack"
        #...
        aws cloudformation delete-stack --stack-name $stack_name
        aws cloudformation wait stack-delete-complete --stack-name $stack_name

At the end of the build, the buildspec is set up to upload UFT One test reports as an artifact into Amazon Simple Storage Service (Amazon S3). The following screenshot shows an example of a test report in HTML format generated by UFT One in CodeBuild and CodePipeline.

UFT One HTML report showing the regression test result and test details

Figure – UFT One HTML report

A new release of Micro Focus UFT One will provide test report formats supported by CodeBuild test report groups.

Conclusion

In this post, we introduced the solution to use Micro Focus Enterprise Suite, Micro Focus UFT One, Micro Focus VHI, AWS developer tools, and Amazon ECS containers to automate provisioning and running mainframe application tests in AWS at scale.

The on-demand model allows you to create the same test capacity infrastructure in minutes at a fraction of your current on-premises mainframe cost. It also significantly increases your testing and delivery capacity to increase quality and reduce production downtime.

A demo of the solution is available on the AWS Partner Micro Focus website: AWS Mainframe CI/CD Enterprise Solution. If you're interested in modernizing your mainframe applications, please visit Micro Focus and contact AWS mainframe business development at [email protected].


Peter Woods

Peter has been with Micro Focus for almost 30 years, in a variety of roles and geographies including Technical Support, Channel Sales, Product Management, Strategic Alliances Management and Pre-Sales, primarily based in Europe but for the last four years in Australia and New Zealand. In his current role as Pre-Sales Manager, Peter is charged with driving and supporting sales activity within the Application Modernization and Connectivity team, based in Melbourne.

Leo Ervin

Leo Ervin is a Senior Solutions Architect working on Micro Focus Enterprise Solutions with the ANZ team. After completing a mathematics degree, Leo started as a PL/1 programmer with a local insurance company. The next step in Leo's career involved consulting work in PL/1 and COBOL before he joined a start-up company as a technical director and partner. This company became the first distributor of Micro Focus software in the ANZ region in 1986. Leo's involvement with Micro Focus technology has continued from this distributorship through to today, with his current focus on cloud strategies for both DevOps and re-platform implementations.

Kevin Yung

Kevin is a Senior Modernization Architect in the AWS Professional Services Global Mainframe and Midrange Modernization (GM3) team. Kevin currently focuses on leading and delivering mainframe and midrange application modernization for large enterprise customers.

Mercado Libre: How to Block Malicious Traffic in a Dynamic Environment

Post Syndicated from Gaston Ansaldo original https://aws.amazon.com/blogs/architecture/mercado-libre-how-to-block-malicious-traffic-in-a-dynamic-environment/

Blog post contributors: Pablo Garbossa and Federico Alliani of Mercado Libre

Introduction

Mercado Libre (MELI) is the leading e-commerce and FinTech company in Latin America. We have a presence in 18 countries across Latin America, and our mission is to democratize commerce and payments to impact the development of the region.

We manage an ecosystem of more than 8,000 custom-built applications that process an average of 2.2 million requests per second. To support this demand, we run between 50,000 and 80,000 Amazon Elastic Compute Cloud (EC2) instances, and our infrastructure scales in and out according to the time of day, thanks to the elasticity of the AWS cloud and its auto scaling features.

Mercado Libre

As a company, we expect our developers to devote their time and energy building the apps and features that our customers demand, without having to worry about the underlying infrastructure that the apps are built upon. To achieve this separation of concerns, we built Fury, our platform as a service (PaaS) that provides an abstraction layer between our developers and the infrastructure. Each time a developer deploys a brand new application or a new version of an existing one, Fury takes care of creating all the required components such as Amazon Virtual Private Cloud (VPC), Amazon Elastic Load Balancing (ELB), Amazon EC2 Auto Scaling groups (ASGs), and EC2 instances. Fury also manages a per-application Git repository, a CI/CD pipeline with different deployment strategies, such as blue-green and rolling upgrades, and transparent application logs and metrics collection.

Fury- MELI PaaS

For those of us on the Cloud Security team, Fury represents an opportunity to enforce critical security controls across our stack in a way that’s transparent to our developers. For instance, we can dictate what Amazon Machine Images (AMIs) are vetted for use in production (such as those that align with the Center for Internet Security benchmarks). If needed, we can apply security patches across all of our fleet from a centralized location in a very scalable fashion.

But there are also other attack vectors that every organization with a presence on the public internet is exposed to. The recent AWS Threat Landscape Report shows a 23% YoY increase in the total number of Denial of Service (DoS) events. It's evident that organizations need to be prepared to react quickly under these circumstances.

The variety and the number of attacks are increasing, testing the resilience of all types of organizations. This is why we started working on a solution that allows us to contain application DoS attacks, and complements our perimeter security strategy, which is based on services such as AWS Shield and AWS Web Application Firewall (WAF). In this article, we will walk you through the solution we built to automatically detect and block these events.

The strategy we implemented for our solution, Network Behavior Anomaly Detection (NBAD), consists of four stages that we repeatedly execute:

  1. Analyze the execution context of our applications, like CPU and memory usage
  2. Learn their behavior
  3. Detect anomalies, gather relevant information and process it
  4. Respond automatically

Step 1: Establish a baseline for each application

End user traffic enters through different AWS CloudFront distributions that route to multiple Elastic Load Balancers (ELBs). Behind the ELBs, we operate a fleet of NGINX servers from where we connect back to the myriad of applications that our developers create via Fury.

MELI Architecture - Anomaly detection project, step 1

Step 1: MELI Architecture – Anomaly detection project

We collect logs and metrics for each application and ship them to Amazon Simple Storage Service (S3) and Datadog. We then partition these logs using AWS Glue to make them available for consumption via Amazon Athena. On average, we send 3 terabytes (TB) of log files in Parquet format to S3.
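To give a sense of how a downstream component can consume these partitioned logs, the following is a minimal boto3 sketch that runs an Athena query and waits for it to finish; the database, table, columns, and output location are all hypothetical:

import time

import boto3

athena = boto3.client('athena')

# Illustrative query: requests per source IP for one application and hour
query = """
SELECT client_ip, count(*) AS requests
FROM access_logs
WHERE app = 'checkout' AND hour = '2020-10-01-12'
GROUP BY client_ip
ORDER BY requests DESC
LIMIT 50
"""

execution_id = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'traffic_logs'},
    ResultConfiguration={'OutputLocation': 's3://example-athena-results/'},
)['QueryExecutionId']

# Poll until the query reaches a terminal state
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(2)

results = athena.get_query_results(QueryExecutionId=execution_id)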

Based on this information, we developed processes that we complement with commercial solutions, such as Datadog’s Anomaly Detection, which allows us to learn the normal behavior or baseline of our applications and project expected adaptive growth thresholds for each one of them.

Anomaly detection

Step 2: Anomaly detection

When any of our apps receives a number of requests that fall outside the limits set by our anomaly detection algorithms, an Amazon Simple Notification Service (SNS) event is emitted, which triggers a workflow in the Anomaly Analyzer, a custom-built component of this solution.

Upon receiving such an event, the Anomaly Analyzer starts composing the so-called event context. In parallel, the Data Extractor retrieves vital insights via Athena from the log files stored in S3.

The output of this process is used as the input for the data enrichment process. This is responsible for consulting different threat intelligence sources that are used to further augment the analysis and determine if the event is an actual incident or not.

At this point, we build the context that not only gives us greater certainty in calculating the score, but also helps us validate and act more quickly. This context includes:

  • Application’s owner
  • Affected business metrics
  • Error handling statistics of our applications
  • Reputation of IP addresses and associated users
  • Use of unexpected URL parameters
  • Distribution by origin of the traffic that generated the event (cloud providers, geolocation, etc.)
  • Known behavior patterns of vulnerability discovery or exploitation

Step 2: MELI Architecture - Anomaly detection project

Step 2: MELI Architecture – Anomaly detection project

Step 3: Incident response

Once we reconstruct the context of the event, we calculate a score for each “suspicious actor” involved.

Step 3: MELI Architecture - Anomaly detection project

Step 3: MELI Architecture – Anomaly detection project

Based on these analysis results we carry out a series of verifications in order to rule out false positives. Finally, we execute different actions based on the following criteria:

Manual review

If the outcome of the automatic analysis results in a medium risk scoring, we activate a manual review process:

  1. We send a report to the application’s owners with a summary of the context. Based on their understanding of the business, they can activate the Incident Response Team (IRT) on-call and/or provide feedback that allows us to improve our automatic rules.
  2. In parallel, our threat analysis team receives and processes the event. They are equipped with tools that allow them to add IP addresses, user-agents, referrers, or regular expressions into Amazon WAF to carry out temporary blocking of “bad actors” in situations where the attack is in progress.

Automatic response

If the analysis results in a high risk score, an automatic containment process is triggered. The event is sent to our block API, which is responsible for adding a temporary rule designed to mitigate the attack in progress. Behind the scenes, our block API leverages AWS WAF to create IP sets. We reference these IP sets from our custom rule groups in our web ACLs in order to block IPs that source the malicious traffic. We found many benefits in the new release of AWS WAF, like support for AWS Managed Rules, larger capacity units per web ACL, and an easier-to-use API.
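A block API along these lines can use the AWS WAF (wafv2) API to merge offending addresses into an IP set that the rule groups reference. A minimal sketch, assuming an existing IP set whose name, ID, and scope are placeholders:

import boto3

wafv2 = boto3.client('wafv2')

def block_ips(new_cidrs, name='blocked-actors', ip_set_id='EXAMPLE-ID', scope='REGIONAL'):
    # Read the current addresses plus the lock token required by WAF's optimistic locking
    current = wafv2.get_ip_set(Name=name, Scope=scope, Id=ip_set_id)
    addresses = set(current['IPSet']['Addresses']) | set(new_cidrs)
    # Write back the merged list; the call fails if the lock token is stale
    wafv2.update_ip_set(
        Name=name,
        Scope=scope,
        Id=ip_set_id,
        Addresses=sorted(addresses),
        LockToken=current['LockToken'],
    )

block_ips(['203.0.113.7/32'])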

Conclusion

By leveraging the AWS platform and its powerful APIs, and together with the AWS WAF service team and solutions architects, we were able to build an automated incident response solution that identifies and blocks malicious actors with minimal operator intervention. Since launching the solution, we have reduced YoY application downtime by over 92%, even when the time under attack increased over 10x. This has had a positive impact on our users and therefore, on our business.

Not only was our downtime drastically reduced, but we also cut the number of manual interventions during this type of incident by 65%.

We plan to iterate over this solution to further reduce false positives in our detection mechanisms as well as the time to respond to external threats.

About the authors

Pablo Garbossa is an Information Security Manager at Mercado Libre. His main duties include ensuring security in the software development life cycle and managing security in MELI’s cloud environment. Pablo is also an active member of the Open Web Application Security Project® (OWASP) Buenos Aires chapter, a nonprofit foundation that works to improve the security of software.

Federico Alliani is a Security Engineer on the Mercado Libre Monitoring team. Federico and his team are in charge of protecting the site against different types of attacks. He loves to dive deep into big architectures to drive performance, scale operational efficiency, and increase the speed of detection and response to security events.

Fundbox: Simplifying Ways to Query and Analyze Data by Different Personas

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/fundbox-simplifying-ways-to-query-and-analyze-data-by-different-personas/

Fundbox is a leading technology platform focused on disrupting the $21 trillion B2B commerce market by building the world's first B2B payment and credit network. With Fundbox, sellers of all sizes can quickly increase average order volumes (AOV) and improve close rates by offering more competitive net terms and payment plans to their SMB buyers. With heavy investments in machine learning and the ability to quickly analyze the transactional data of SMBs, Fundbox is reimagining B2B payments and credit products in new category-defining ways.

Learn how the company simplified the way different personas in the organization query and analyze data by building a self-service data orchestration platform. The platform architecture is entirely serverless, which simplifies the ability to scale and adapt to unpredictable demand. The platform was built using AWS Step Functions, AWS Lambda, Amazon API Gateway, Amazon DynamoDB, AWS Fargate, and other AWS serverless managed services.

For more content like this, subscribe to our YouTube channels This is My Architecture, This is My Code, and This is My Model, or visit the This is My Architecture page on AWS, which has search functionality and the ability to filter by industry, language, and service.

BBVA: Architecture for Large-Scale Macie Implementation

Post Syndicated from Neel Sendas original https://aws.amazon.com/blogs/architecture/bbva-architecture-for-large-scale-macie-implementation/

This post was co-written by Andrew Alaniz, Technical Information Security Officer, and Brady Pratt, Cloud Security Engineer, both at BBVA USA.

Introduction

Data Loss Prevention (DLP) is a common topic among companies that work with any type of sensitive data. One of the challenges is that many people either don't fully understand what DLP is, or rather, have their own definition of what it is. Regardless of one's interpretation of DLP, one thing is certain: before you can control data loss, you need to locate the data sources.

If an organization can’t identify its data, it can’t protect it. BBVA USA, a bank holding company, turned to AWS for advice, and decided to use Amazon Macie to accomplish this in Amazon Simple Storage Service (Amazon S3). Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. This blog post will share some of the design and architecture we used to deploy Macie using services, such as AWS Lambda and Amazon CloudWatch.

Data challenges in Amazon S3

Although all S3 buckets are private by default, everyone is aware of the challenges of unsecured S3 buckets exposing data publicly. Amazon has provided a way to prevent that by removing the ability to make buckets public. As with other data storage mechanisms, this doesn’t stop anyone from storing sensitive data within AWS and exposing it another way. With Macie, one can classify the data stored in S3, centrally, and through AWS Organizations.

Recommended architecture

We can break the Macie architecture into two main parts: S3 discovery and evaluation, and S3 sensitive data discovery:

Macie architecture

The setup of discovery and evaluation is simple and straightforward, and should be enabled through AWS Organizations and across all accounts. The cost of this piece is minimal, and it provides valuable insights into the compliance state of S3 buckets.

Once the setup of discovery and evaluation is completed, we are ready to move to the next step and configure discovery jobs for our S3 buckets. The architecture includes the use of S3, Amazon CloudWatch Events, Amazon EventBridge, and Lambda. All of the execution should happen in a centralized account, but the event triggers should come from each individual account.

Architectural considerations

When determining the architectural design of the solution, consider a few main components:

Centralization

Utilize AWS Organizations: Macie integrates natively with AWS Organizations, which is a significant advantage for Macie. Within AWS Organizations, Macie administration can be delegated from the master account to a subordinate account. The benefit of this is that it allows centralized management while allowing for the compartmentalization of roles.

Ease of management

One of the most challenging things to manage is non-conforming configurations. It’s much easier to manage a standard way to create, name, and configure settings. Once the classification jobs were ready to be created, we had to take into consideration the following when deploying Macie for our use case:

  1. Macie classifies content in a single job across one account.
  2. If you submit multiple jobs that contain the same bucket, Macie will scan the objects multiple times.
  3. Macie jobs are immutable.

Due to these considerations, we decided to create one job per S3 bucket. This allows administrators to search more easily for jobs related to findings.

Cost considerations

Macie plays an essential role, not only in identifying data and improving data collection, but also in compliance. We needed to make a decision about how to determine if an S3 bucket would be included in a classification job. Initially, we considered including all buckets no matter what. The logic here was that even if we make an assumption that a bucket would never have sensitive data in it, an entity with the right role could always add something at a later date.

Finally, we implemented a solution to tag specific buckets that were known to have immutable properties and which would never allow sensitive data to be added. We could do this because we knew exactly what data was in the bucket, who or what created the bucket, and exactly who or what had access to the bucket.

An example of this type of bucket is the S3 bucket used to store VPC Flow Logs. We know that this bucket is only created by provisioning scripts and is only going to store VPC flow logs that contain no sensitive data based on data classification standards. Also, only VPC services and specific security services can access this bucket for anything other than READ. This is controlled organizationally and can be tagged with a simple ignore key/value pair upon creation.

Deploying Macie at BBVA USA

BBVA USA developed an approach to working within AWS that allows guardrails to be applied as accounts are created. One of those guardrails identifies if developers have stored sensitive data in an account. BBVA needed to be able to do this, and do it at scale. If there is a roadblock or a challenge with AWS services, the first place BBVA looks is to support, but the second place is the Technical Account Manager.

After initiating conversations with its account team, BBVA determined that Amazon Macie was the tool to help it with this challenge.

With the help of its technical account manager (TAM), BBVA was able to meet with the Macie Product team and discuss the best options for deploying at scale. Through these conversations, they were even able to influence the Macie product roadmap.

Getting Macie ready to deploy at scale was actually quite simple once the architectural pattern was designed.

Initial job creation

In order to set up jobs for each existing bucket in the organization, it’s a matter of scripting the job creation and adding each bucket from each account into its own job, which is pretty straightforward.
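For illustration, a minimal boto3 sketch of that script for a single account follows; in an organization you would repeat this across member accounts, and the job-naming convention here is an assumption:

import boto3

macie = boto3.client('macie2')
s3 = boto3.client('s3')
account_id = boto3.client('sts').get_caller_identity()['Account']

# Create one one-time classification job per bucket, so findings map cleanly back to a job
for bucket in s3.list_buckets()['Buckets']:
    name = bucket['Name']
    macie.create_classification_job(
        jobType='ONE_TIME',
        name='classify-' + name,  # assumed naming convention
        s3JobDefinition={
            'bucketDefinitions': [
                {'accountId': account_id, 'buckets': [name]}
            ]
        },
    )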

Job creation for new buckets

The recommended architecture and implementation for new buckets is as follows:

  1. Whenever a new S3 bucket is added to Organization accounts, trigger a CloudWatch Event in the target account.
  2. Set up a cross account EventBridge to consume the Event. Using the EventBridge allows for a simpler configuration and centralized management of both Events and Lambda.
  3. Trigger a Lambda function in a delegated Macie admin account, which creates classification jobs to apply Macie to all the newly created S3 buckets (see the sketch after this list).
  4. Repeat the same process when a bucket is deleted by triggering a cancel job.
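The following is a minimal sketch of the Lambda function from step 3, assuming it receives the CloudTrail CreateBucket event via EventBridge; the naming convention is again an assumption:

import boto3

macie = boto3.client('macie2')

def handler(event, context):
    # EventBridge delivers the CloudTrail record under the 'detail' key
    detail = event['detail']
    bucket_name = detail['requestParameters']['bucketName']
    account_id = detail['userIdentity']['accountId']

    macie.create_classification_job(
        jobType='ONE_TIME',
        name='classify-' + bucket_name,  # assumed naming convention
        s3JobDefinition={
            'bucketDefinitions': [
                {'accountId': account_id, 'buckets': [bucket_name]}
            ]
        },
    )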

Evaluate the state of S3 buckets

To evaluate the S3 buckets, turn on Macie at the organization master account and delegate administration to a subordinate account used for Macie. This enables consolidation of security feature management into a centralized security account, which helps further restrict access from those who may need access to the master billing account. Finally, enable Macie by default on all organization accounts.


Conclusion

BBVA USA worked directly with the Macie product team by leveraging its relationship with the AWS account team and Enterprise Support. This allowed the company to eventually deploy Macie quickly and at scale. Through Macie, the company is able to track any changes to configurations on buckets that allow a bucket to be public, shared, or replicated with external accounts and if the encryption policies are disabled. Using Macie, BBVA was able to identify buckets that contained sensitive information and put in another control to bolster its AWS governance profile.

Cyber hygiene and MAS Notice 655

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/cyber-hygiene-and-mas-notice-655/

In this post, I will provide guidance and resources that will help you align to the expectations of the Monetary Authority of Singapore (MAS) Notice 655 – Notice on Cyber Hygiene.

The Monetary Authority of Singapore (MAS) issued Notice 655 – Notice on Cyber Hygiene on 6 Aug 2019. This notice is applicable to all banks in Singapore and takes effect from 6 Aug 2020. The notice sets out the requirements on cyber hygiene for banks across the following six categories: administrative accounts, security patches, security standards, network perimeter defense, malware protection, and multi-factor authentication.

Whilst Notice 655 is specific to all banks in Singapore, the AWS security guidance I provide in this post is based on consistent best practices. As always, it’s important to note that security and compliance is a shared responsibility between AWS and you as our customer. AWS is responsible for the security of the cloud, but you are responsible for your security in the cloud.

To aid in your alignment to Notice 655, AWS has developed a MAS Notice 655 – Cyber Hygiene – Workbook, which is available in AWS Artifact. The workbook covers each of the six categories of cyber hygiene in Notice 655 and maps each to AWS responsibility control statements and AWS Well-Architected Framework best practices.

The downloadable workbook contains two embedded formats:

  • Microsoft Excel – coverage includes AWS responsibility control statements and Well-Architected Framework best practices.
  • Dynamic HTML – same as Microsoft Excel, with the added feature that the Well-Architected Framework best practices are mapped to AWS Config managed rules and Amazon GuardDuty findings, where available or applicable.

Administrative accounts

“4.1. A relevant entity must ensure that every administrative account in respect of any operating system, database, application, security appliance or network device, is secured to prevent any unauthorised access to or use of such account.”

For administrative accounts, it is important to follow best practices for the privileged accounts, keeping in mind both human and programmatic access.

The most privileged user account in an AWS account is the root user. When you first create an AWS account (unless you create it with AWS Organizations), this is the initial user account created. The root user is associated with the provided email address and password used to create the account. The root user account has access to every resource in the account—including the ability to close it. To align to the principle of least privilege, the root user account should not be used for everyday tasks. Instead, AWS Identity and Access Management (IAM) roles should be created and scoped to particular roles and functions within your organization. Furthermore, AWS strongly recommends that you integrate with a centralized identity provider, or a directory service, to authenticate all users in a centralized place. This reduces the requirement for multiple credentials and reduces management complexity.

There are some additional key steps that you should take to further protect your root user account:

  • Ensure that you have a very long and complex password, and if necessary change the root user password to meet this recommendation.
  • Put the root user password in a secure location, and consider a physical or virtual password vault with a strong multi-party access protocol.
  • Delete any access keys, and remove any programmatic access keys from the root user account.
  • Enable multi-factor authentication (MFA), and consider a hardware-based token that is stored in a physical vault or safe with a strong multi-party access protocol. Consider using a separate secure vault store for the password and the MFA token, with separate permissions for access.
  • A simple but hugely important step is to ensure your account information is correct, which includes the assigned root email address, so that AWS Support can contact you.

Do keep in mind that there are a few AWS tasks that require the root user.

You should use IAM roles for programmatic or system-to-system access to AWS resources that are integrated with IAM. For example, you should use roles for applications that run on Amazon Elastic Compute Cloud (Amazon EC2) instances. Ensure the principle of least privilege is being applied for the IAM policies that are attached to the roles.

For cases where credentials that are not from AWS IAM, such as database credentials, need to be used by your application, you should not hard-code these credentials in the application source code or store them in an unencrypted state. It is recommended that you use a secrets management solution. For example, AWS Secrets Manager helps you protect the secrets needed to access your applications, services, and IT resources. Secrets Manager enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications can retrieve secrets with a call to the Secrets Manager APIs, which eliminates the need to hardcode sensitive information in plain text.
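For illustration, here is a minimal boto3 sketch of retrieving a database credential at runtime; the secret name and its JSON shape are assumptions:

import json

import boto3

secrets = boto3.client('secretsmanager')

# Fetch the secret at runtime instead of hard-coding credentials
response = secrets.get_secret_value(SecretId='prod/orders/db')  # hypothetical secret name
credentials = json.loads(response['SecretString'])

# Use credentials['username'] and credentials['password'] to open the DB connection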

For more information, see 4.1 Administrative Accounts in the MAS Notice 655 workbook on AWS Artifact.

Security Patches

“4.2 (a) A relevant entity must ensure that security patches are applied to address vulnerabilities to every system, and apply such security patches within a timeframe that is commensurate with the risks posed by each vulnerability.”

Consider the various categories of security patches you need to manage, based on the AWS Shared Responsibility Model and the AWS services you are using.

Here are some common examples, although this is not an exhaustive list:

Operating system

When using services from AWS where you have control over the operating system, it is your responsibility to perform patching on these services. For example, if you use Amazon EC2 with Linux, applying security patches for Linux is your responsibility. To help you with this, AWS publishes security patches for Amazon Linux at the Amazon Linux Security Center.

Amazon Inspector allows you to run scheduled vulnerability assessments on your Amazon EC2 instances, and provides a report of findings against rules packages that include operating system configuration benchmarks for common vulnerabilities and exposures (CVEs) and Center for Internet Security (CIS) guidelines. To see if Amazon Inspector is available in your AWS Region, see the list of supported Regions for Amazon Inspector.

For managing patching activity at scale, consider AWS Systems Manager Patch Manager to automate the process of patching managed instances with both security-related patches and other types of updates.

Container orchestration and containers

If you are running and managing your own container orchestration capability, it is your responsibility to perform patching for both the master and worker nodes. If you are using Amazon Elastic Kubernetes Service (Amazon EKS), then AWS manages the patching of the control plane, and publishes EKS-optimized Amazon Machine Images (AMIs) that include the necessary worker node binaries (Docker and Kubelet). This AMI is updated regularly and includes the most up to date version of these components. You can update your EKS managed nodes to the latest versions of the EKS-optimized AMIs with a single command in the EKS console, API, or CLI. If you are building your own custom AMIs to use for EKS worker nodes, AWS also publishes Packer scripts that document the AWS build steps, to allow you to identify the binaries included in each version of the AMI.

AWS Fargate provides the option of serverless compute for containers, so you can avoid the operational overhead of scaling, patching, securing, and managing servers.

For the container images, you do need to ensure these are scanned for vulnerabilities and patched. The Amazon Elastic Container Registry (Amazon ECR) service offers the ability to scan container images for common vulnerabilities and exposures (CVEs).
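As a sketch of driving that scanning programmatically with boto3 (the repository name and tag are placeholders):

import boto3

ecr = boto3.client('ecr')
repo, tag = 'my-app', 'latest'  # hypothetical repository and tag

# Trigger a scan for the image, wait for it to finish, then read the severity counts
ecr.start_image_scan(repositoryName=repo, imageId={'imageTag': tag})
ecr.get_waiter('image_scan_complete').wait(repositoryName=repo, imageId={'imageTag': tag})

findings = ecr.describe_image_scan_findings(repositoryName=repo, imageId={'imageTag': tag})
print(findings['imageScanFindings']['findingSeverityCounts'])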

Database engines

If you are running and managing your own databases on top of an AWS service such as Amazon EC2, it is your responsibility to perform patching of the database engine.

If you are using Amazon Relational Database Service (Amazon RDS), then AWS will automatically perform the patching of the database engine. This is done within the configurable Amazon RDS maintenance window, and is your opportunity to control when DB instance modifications, database engine version upgrades, and software patching occurs.

In cases where you are using fully-managed AWS database services such as Amazon DynamoDB, AWS takes care of the underlying patching requirements.

Application

For application code and dependencies that you run on AWS services, you own and manage the patching. This applies to applications that your organization has built itself, and to applications from third-party software vendors. You should make sure that you have a mechanism for ensuring that vulnerabilities in the application code you run are regularly identified and patched.

For more information, see 4.2 Security Patches in the MAS Notice 655 workbook on AWS Artifact.

Security Standards

“4.3. (b)… a relevant entity must ensure that every system conforms to the set of security standards.”

After you have defined your organizational security standards, the next step is to verify your conformance to these standards. In my consultation with customers, I advise that it is a best practice to enforce these security standards as early in the development lifecycle as possible. For example, you may have a standard requiring that data of a specific data classification must be encrypted at rest with an AWS Key Management Service (AWS KMS) customer-managed customer master key (CMK). The way this is typically achieved is by defining your infrastructure as code (IaC), for example using AWS CloudFormation. As your projects move through the various stages of development in your pipeline, you can automatically and programmatically check your IaC templates against codified security standards that you have defined. AWS has a number of tools that assist you with defining your rules and evaluating your IaC templates.

In the case of AWS CloudFormation, you may want to consider the tools AWS CloudFormation Guard, cfn-lint, or cfn_nag. Enforcing your security standards as early in the development lifecycle as possible has some key benefits. It instills a culture and practice of creating deployments that are aligned to your standards from the outset, and allows developers to move fast by using the tools and workflows that work best for their team, while providing feedback early enough that they have time to resolve any issues and meet security standards during the development process.
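To make this concrete, the following is a minimal Python sketch, not a replacement for the tools above, that checks a JSON CloudFormation template for S3 buckets missing SSE-KMS default encryption; the function name and file handling are illustrative only:

```python
import json
import sys

def find_unencrypted_buckets(template: dict) -> list:
    """Return names of S3 buckets that don't declare SSE-KMS default encryption."""
    violations = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        rules = (
            resource.get("Properties", {})
            .get("BucketEncryption", {})
            .get("ServerSideEncryptionConfiguration", [])
        )
        if not any(
            rule.get("ServerSideEncryptionByDefault", {}).get("SSEAlgorithm") == "aws:kms"
            for rule in rules
        ):
            violations.append(name)
    return violations

if __name__ == "__main__":
    # Usage: python check_template.py template.json
    with open(sys.argv[1]) as handle:
        failed = find_unencrypted_buckets(json.load(handle))
    if failed:
        print("Buckets missing SSE-KMS default encryption:", ", ".join(failed))
        sys.exit(1)  # a non-zero exit code fails the pipeline stage
```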

It’s important to complement this IaC pipeline approach with additional controls to ensure that security standards remain in place after your infrastructure is deployed. You should consider both preventative controls and detective controls.

For preventative controls, the focus is on IAM permissions. You can use these fine-grained permissions to enforce, at the level of the IAM principal (such as user or role), what actions can or cannot be taken on AWS resources. You can make use of AWS Organizations service control policies (SCPs) to enforce permission guardrails globally across the entire organization, across an organizational unit, or across individual AWS accounts. Some example SCPs that may align to your security standards include the following:

  • Prevent any virtual private cloud (VPC) that doesn’t already have internet access from getting it
  • Prevent users from disabling Amazon GuardDuty or modifying its configuration

Additionally, you can use the SCPs described in the AWS Control Tower Guardrail Reference, which you can implement with or without using AWS Control Tower.
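As a hedged sketch of the second example, the following SCP denies API calls that would disable or reconfigure GuardDuty; it's expressed here as a Python dictionary for readability, and the exact set of denied actions should be validated against your own standards:

```python
import json
import boto3

# Illustrative SCP: deny disabling or reconfiguring GuardDuty.
guardduty_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectGuardDuty",
            "Effect": "Deny",
            "Action": [
                "guardduty:DeleteDetector",
                "guardduty:UpdateDetector",
                "guardduty:DisassociateFromMasterAccount",
            ],
            "Resource": "*",
        }
    ],
}

# Create the policy through AWS Organizations (requires suitable permissions);
# the policy name is hypothetical.
org = boto3.client("organizations")
org.create_policy(
    Name="deny-guardduty-changes",
    Description="Prevent users from disabling or modifying GuardDuty",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(guardduty_guardrail),
)
```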

For detective controls, after your infrastructure is deployed, you should make use of AWS Security Hub and/or AWS Config rules to help you meet your compliance needs. You should ensure that the findings from these services are integrated with your technology operations processes so that you can take action, or use automated remediation.

For more information, see 4.3 Security Standards in the MAS Notice 655 workbook on AWS Artifact.

Network Perimeter Defense

“4.4. A relevant entity must implement controls at its network perimeter to restrict all unauthorised network traffic.”

Having a layered security strategy is a best practice, and this applies equally to your network. AWS provides a number of complementary network configuration options that you can implement to add network protection to your resources. You should consider using all of the options I describe here for your AWS workload. You can implement multiple strategies together where possible, to provide network defense in depth.

For network layer protection, you can use security groups for your VPC. Security groups act as a virtual firewall for members of the group to control inbound and outbound traffic. For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. You can attach security groups to EC2 instances and other AWS services that use elastic network interfaces, including RDS instances, VPC endpoints, AWS Lambda functions, and Amazon SageMaker notebooks.
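As a minimal boto3 sketch (the VPC ID and CIDR range are placeholders), you can create a security group that admits only HTTPS from a trusted range; all other inbound traffic is implicitly denied:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the group in a placeholder VPC.
sg = ec2.create_security_group(
    GroupName="web-tier",
    Description="Allow HTTPS from the corporate range only",
    VpcId="vpc-0abc1234def567890",
)

# Inbound rule: HTTPS from a single trusted CIDR; no other ingress is allowed.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate range"}],
    }],
)
```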

You can also use network access control lists (ACLs) as an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets, and supports allow rules and deny rules. Network ACLs are a good option for controlling traffic at the subnet level.

For application-layer protection against common web exploits, you can use AWS WAF. You use AWS WAF on the Application Load Balancer that fronts your web servers or origin servers that are running on Amazon EC2, on Amazon API Gateway for your APIs, or you can use AWS WAF together with Amazon CloudFront. This allows you to strengthen security at the edge, filtering more of the unwanted traffic out before it reaches your critical content, data, code, and infrastructure.

For distributed denial of service (DDoS) protection, you can use AWS Shield, a managed service that provides protection against DDoS attacks for applications running on AWS. AWS Shield is available in two tiers: AWS Shield Standard and AWS Shield Advanced. All AWS customers benefit from the automatic protections of AWS Shield Standard, at no additional charge. AWS Shield Standard defends against the most common, frequently occurring network and transport layer DDoS attacks that target your website or applications. When you use AWS Shield Standard with Amazon CloudFront and Amazon Route 53, you receive comprehensive availability protection against all known infrastructure (Layer 3 and 4) attacks. AWS Shield Advanced provides advanced attack mitigation, visibility and attack notification, DDoS cost protection, and specialist support.

AWS Firewall Manager allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. With AWS Firewall Manager, you can also centrally manage and deploy security groups, AWS WAF rules, and AWS Shield Advanced protections.

There are many AWS Partner Network (APN) solutions that provide additional capabilities and protection that work alongside the AWS solutions discussed, in categories such as intrusion detection systems (IDS). For more information, find an APN Partner.

For more information, see 4.4 Network Perimeter Defence in the MAS Notice 655 workbook on AWS Artifact.

Malware protection

“4.5. A relevant entity must ensure that one or more malware protection measures are implemented on every system, to mitigate the risk of malware infection, where such malware protection measures are available and can be implemented.”

Malware protection requires a multi-faceted approach, including all of the following:

  • Training your employees in security awareness
  • Finding and patching vulnerabilities within your AWS workloads
  • Segmenting your networks
  • Limiting access to your critical systems and data
  • Having a comprehensive backup and restore strategy
  • Detecting malware
  • Creating incident response plans

In the previous sections of this post, I covered security patching, network segmentation, and limiting access. Now I’ll review the remaining elements.

Employee security awareness is crucial, because the primary vector by which malware is installed within an organization is phishing (or spear phishing), where an employee is misled into installing malware or opens an attachment that exploits a software vulnerability to install it.

For backup and restore, a comprehensive and tested strategy is crucial, especially when the motivation of the malware is deletion, modification, or mass encryption (ransomware). You can review the AWS backup and restore solutions and leverage the various high-durability storage classes provided by Amazon Simple Storage Service (Amazon S3).

For malware protection, as with other security domains, it is important to have detective controls that complement the preventative ones: systems for early detection of malware, or of activity indicative of malware presence. When you understand what typical activity looks like across your AWS account, you have a baseline of network and user activity that you can continuously monitor for anomalies.

Amazon GuardDuty is a threat detection service that continuously monitors and compares activity within your AWS environment for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. Amazon GuardDuty uses machine learning, anomaly detection, and integrated threat intelligence to identify and prioritize potential threats.
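As an illustration of consuming GuardDuty findings programmatically, the following hedged boto3 sketch lists high-severity findings in the current Region; it assumes a single detector, which is the common case:

```python
import boto3

guardduty = boto3.client("guardduty")

# Assumes exactly one detector exists in this Region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# List high-severity findings (GuardDuty severity 7.0-8.9 is "high").
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

# get_findings accepts up to 50 IDs per call; fine for a sketch.
for finding in guardduty.get_findings(
    DetectorId=detector_id, FindingIds=finding_ids
)["Findings"]:
    print(finding["Type"], finding["Severity"], finding["Title"])
```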

Continuing on the topic of malware detection, you should consider other approaches as well, including endpoint detection and response (EDR) solutions. AWS has a number of partners that specialize in this space. For more information, find an APN Partner.

Finally, you should make sure that you have a security incident response plan to help you respond to an incident, communicate during an incident, and recover from it. AWS recommends that you create these response plans in the form of playbooks. A good way to create a playbook is to start off simple and iterate to improve your plan. Before you need to respond to an actual event, you should consider the tasks that you can do ahead of time to improve your recovery timeframes. Some of the issues to consider include pre-provisioning access to your responders, and pre-deploying the tools that the responders or forensic teams will need. Importantly, do not wait for an actual incident to test your response and recovery plans. You should run game days to practice, improve, and iterate.

For more information, see 4.5 Malware protection in the MAS Notice 655 workbook on AWS Artifact.

Multi-factor authentication

“4.6. … a relevant entity must ensure that multi-factor authentication is implemented for the following:
(a)all administrative accounts in respect of any operating system, database, application, security appliance or network device that is a critical system; and
(b)all accounts on any system used by the relevant entity to access customer information through the internet.“

When using multi-factor authentication (MFA), it’s important for you to think about the various layers at which you need to implement it.

For access to the AWS API, AWS Management Console, and AWS resources that use AWS Identity and Access Management (IAM), you can configure MFA with a number of different form factors, and apply it to users within your AWS accounts. As I mentioned in the Administrative accounts section, AWS recommends that you apply MFA to the root account user. Where possible, you should not use IAM users, but instead use identity federation with IAM roles. By using identity federation with IAM roles, you can apply and enforce MFA at the level of your identity provider, for example Active Directory Federation Services (AD FS) or AWS Single Sign-On. For highly privileged actions, you may want to configure MFA-protected API access to only allow the action if MFA authentication has been performed.
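As a sketch of MFA-protected API access, the following identity-based policy (shown as a Python dictionary, with an illustrative choice of action) allows a highly privileged action only when the caller has authenticated with MFA:

```python
import json

# Illustrative policy: permit a privileged action only for MFA-authenticated callers.
mfa_protected_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",  # example of a highly privileged action
            "Resource": "*",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

print(json.dumps(mfa_protected_policy, indent=2))
```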

With regards to third-party applications, which includes software as a service (SaaS), you should consider integration with AWS services or third-party services to provide MFA protection. For example, AWS Single Sign-On (SSO) includes built-in integrations to many business applications, including Salesforce, Office 365, and others.

For your own in-house applications, you may want to consider solutions such as Amazon Cognito. Amazon Cognito goes beyond standard MFA (which uses SMS or time-based one-time passwords (TOTP)), and includes the option of adaptive authentication when you use the advanced security features. With this feature enabled, when Amazon Cognito detects unusual sign-in activity, such as attempts from new locations and unknown devices, it can challenge the user with additional verification checks.

For more information, see 4.6 Multi-Factor authentication in the MAS Notice 655 workbook on AWS Artifact.

Conclusion

AWS products and services have security features designed to help you improve the security of your workloads, and meet your compliance requirements. Many AWS services are reviewed by independent third-party auditors, and these audit reports are available on AWS Artifact. You can use AWS services, tools, and guidance to address your side of the shared responsibility model to align with the requirements stated in Notice 655 – Notice on Cyber Hygiene.

Review the MAS Notice 655 – Cyber Hygiene – Workbook on AWS Artifact to understand both the AWS control environment (the AWS side of the shared responsibility model) and the guidance AWS provides to help you with your side of the shared responsibility model. You will find AWS guidance in the AWS Well-Architected Framework best practices, and where available or applicable to detective controls, in AWS Config rules and Amazon GuardDuty findings.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that un-lock and enable our customers at scale across the financial services industry and beyond.

Over 150 AWS services now have a security chapter

Post Syndicated from Marta Taggart original https://aws.amazon.com/blogs/security/over-150-aws-services-now-have-security-chapter/

We’re happy to share an update on the service documentation initiative that we first told you about on the AWS Security Blog in June, 2019. We’re excited to announce that over 150 services now have dedicated security chapters available in the AWS security documentation.

In case you aren’t familiar with the security chapters, they were developed to provide easy-to-find, easy-to-consume security content in existing service documentation, so you don’t have to refer to multiple sources when reviewing the security capabilities of an AWS service. The chapters align with the Security Epics of the AWS Cloud Adoption Framework (CAF), including information about the security ‘of’ the cloud and security ‘in’ the cloud, as outlined in the AWS Shared Responsibility Model. The chapters cover the following security topics from the CAF, as applicable for each AWS service:

  • Data protection
  • Identity and access management
  • Logging and monitoring
  • Compliance validation
  • Resilience
  • Infrastructure security
  • Configuration and vulnerability analysis
  • Security best practices

These topics also align with the control domains of many industry-recognized standards that customers use to meet their compliance needs when using cloud services. This enables customers to evaluate the services against the frameworks they are already using.

We thought it might be helpful to share some of the ways that we’ve seen our customers and partners use the security chapters as a resource to both assess services and configure them securely. We’ve seen customers develop formal service-by-service assessment processes that include key considerations, such as achieving compliance, data protection, isolation of compute environments, automating audits with APIs, and operational access and security, when determining how cloud services can help them address their regulatory obligations.

To support their cloud journey and digital transformation, Fidelity Investments established a Cloud Center of Excellence (CCOE) to assist and enable Fidelity business units to safely and securely adopt cloud services at scale. The CCOE security team created a collaborative approach, inviting business units to partner with them to identify use cases and perform service testing in a safe environment. This ongoing process enables Fidelity business units to gain service proficiency while working directly with the security team so that risks are properly assessed, minimized, and evidenced well before use in a production environment.

Steve MacIntyre, Cloud Security Lead at Fidelity Investments, explains how the availability of the chapters assists them in this process: “As a diversified financial services organization, it is critical to have a deep understanding of the security, data protection, and compliance features for each AWS offering. The AWS security “chapters” allow us to make informed decisions about the safety of our data and the proper configuration of services within the AWS environment.”

Information found in the security chapters has also been used by customers as key inputs in refining their cloud governance, and helping customers to balance agility and innovation, while remaining secure as they adopt new services. Outlining customer responsibilities that are laid out under the AWS Shared Responsibility Model, the chapters have influenced the refinement of service assessment processes by a number of AWS customers, enabling customization to meet specific control objectives based on known use cases.

For example, when AWS Partner Network (APN) Partner Deloitte works on cloud strategies with organizations, they advise on topics that range from enterprise-wide cloud adoption to controls needed for specific AWS services.

Devendra Awasthi, Cloud Risk & Compliance Leader at Deloitte & Touche LLP, explained that, “When working with companies to help develop a secure cloud adoption framework, we don’t want them to make assumptions about shared responsibility that lead to a false sense of security. We advise clients to use the AWS service security chapters to identify their responsibilities under the AWS Shared Responsibility Model; the chapters can be key to informing their decision-making process for specific service use.”

Partners and customers, including Deloitte and Fidelity, have been helpful by providing feedback on both the content and structure of the security chapters. Service teams will continue to update the security chapters as new features are released, and in the meantime, we would appreciate your input to help us continue to expand the content. You can give us your feedback by selecting the Feedback button in the lower right corner of any documentation page. We look forward to learning how you use the security chapters within your organization.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Marta Taggart

Marta is a Seattle-native and Senior Program Manager in AWS Security, where she focuses on privacy, content development, and educational programs. Her interest in education stems from two years she spent in the education sector while serving in the Peace Corps in Romania. In her free time, she’s on a global hunt for the perfect cup of coffee.

Author

Kristen Haught

Kristen is a Security and Compliance Business Development Manager focused on strategic initiatives that enable financial services customers to adopt Amazon Web Services for regulated workloads. She cares about sharing strategies that help customers adopt a culture of innovation, while also strengthening their security posture and minimizing risk in the cloud.

AWS Well-Architected for Financial Services

Post Syndicated from Arjun Chakraborty original https://aws.amazon.com/blogs/architecture/aws-well-architected-for-financial-services/

Cloud is part of the new normal in the financial services industry. Higher customer expectations have raised the stakes, pushing institutions to streamline operations, lower costs, and accelerate innovation. The conversation is no longer about whether to embrace the cloud, but rather, how quickly it can be done. To address the need to provide tailored advice, AWS added the concept of AWS Well-Architected Lenses in 2017. AWS is now happy to announce the Financial Services lens, the first industry-specific guidance for the AWS Well-Architected Framework. This post provides an introduction to its purpose, the topics it covers, and the common scenarios it includes.

In the Financial Services lens, we focus on how to design, deploy, and architect financial services industry workloads that promote resiliency, security, and operational performance in line with the risk and control objectives that firms define for themselves. This includes companies and industries that are required to meet the regulatory and compliance requirements of supervisory authorities.

While we recommend that AWS customers start with the standard AWS Well-Architected Framework as a foundation for their architecture review, the Financial Services lens provides additional best practices for the resiliency, security, and operational performance requirements of financial institutions. These best practices are based on our experience working with financial services customers globally.

The guidance inside the lens can be grouped under the following:

Figure: Well-Architected FinServ lens

General Design principles to improve security, data privacy, and resiliency in the cloud

These principles include both general design principles for unlocking the true value of cloud and specific best practices for your AWS environments. For example, we have advocated for a Security by Design (SbD) approach, wherein firms should implement their architectures in repeatable templates that have control objectives, security baselines, and audit capabilities baked in. The lens also provides specific guidance on how to implement this approach, such as how to protect your compute infrastructure and protect your data with encryption.

Guardrails to confidently build and deploy financial applications on AWS

The lens provides best practices in a number of areas including Identity and Access Management (IAM), networking, secrets management, data privacy, resiliency, and performance. When taken together with the standard Well-Architected Framework, it will help accelerate migration or new development of applications on the cloud while making sure that security, risk, and compliance needs are met as well.

Techniques to build transparency and auditability into your AWS environment

With AWS, customers have access to higher-fidelity observability into what happens in their environments compared to traditional on-premises environments. The lens identifies a number of best practices to harness this capability for better business and operational outcomes. We’ve also provided additional guidance on how to gather, store, and secure evidence for audit requirements associated with cloud environments.

Recommendations for controls to help expedite adoption of new AWS services

“Allow listing” new AWS services or certifying that a new service implements all the internal security and control standards is a time-consuming process for many financial services customers. It also holds back teams from leveraging innovation in new AWS services and features. In this lens, we have discussed a number of best practices in the areas of IAM privileges, private connectivity, and encryption which should help expedite the allow listing of new services.

The Financial Services Lens whitepaper is intended to provide guidance for common industry workloads, such as:

  • Financial data on the cloud
  • Building data lakes for use cases such as regulatory reporting
  • Artificial intelligence (AI) and machine learning (ML)
  • Grid computing for risk and modeling
  • Building a platform for ML and AI
  • Open banking
  • Digital user engagement

If any of these scenarios fits your needs, building to the principles of the Financial Services lens in the AWS Well-Architected Framework can improve security, resiliency, and operational efficiency for all financial services customers on AWS, and can also assist in meeting regulatory and compliance obligations.

Conclusion

We strive to provide you with a consistent approach to evaluate architectures and implement designs that will scale over time. AWS is committed to maintaining the Financial Services Lens whitepaper as a living document; as the technology and regulatory landscape evolves and new AWS services come online, we’ll update the Financial Services lens appropriately. We are also actively working towards implementing the Financial Services lens in the AWS Well-Architected Tool, so that teams can run Well-Architected reviews against the best practices included in the whitepaper.

Special thanks to the following individuals who contributed to building this resource, among many others who helped with review and implementation: Ilya Epshteyn, Misha Goussev, Som Chatterjee, James Craig, Anjana Kandalam, Roberto Silva, Chris Redmond, Pawan Agnihotri, Rahul Prabhakar, Jaswinder Hayre, Jennifer Code, Igor Kleyman, John McDonald, and John Kain.

Building a Self-Service, Secure, & Continually Compliant Environment on AWS

Post Syndicated from Japjot Walia original https://aws.amazon.com/blogs/architecture/building-a-self-service-secure-continually-compliant-environment-on-aws/

Introduction

If you’re an enterprise organization, especially in a highly regulated sector, you understand the struggle to innovate and drive change while maintaining your security and compliance posture. In particular, your banking customers’ expectations and needs are changing, and there is a broad move away from traditional branch and ATM-based services towards digital engagement.

With this shift, customers now expect personalized product offerings and services tailored to their needs. To achieve this, a broad spectrum of analytics and machine learning (ML) capabilities are required. With security and compliance at the top of financial services customers’ agendas, being able to rapidly innovate and stay secure is essential. To achieve exactly that, AWS Professional Services engaged with a major global systemically important bank (G-SIB) customer to help develop ML capabilities and implement a Defense in Depth (DiD) security strategy. This blog post provides an overview of this solution.

The machine learning solution

The following architecture diagram shows the ML solution we developed for a customer. This architecture is designed to achieve innovation, operational performance, and security performance in line with customer-defined control objectives, as well as meet the regulatory and compliance requirements of supervisory authorities.

Figure: Machine learning solution developed for the customer

This solution is built and automated using AWS CloudFormation templates with pre-configured security guardrails, and is abstracted through AWS Service Catalog. AWS Service Catalog allows you to quickly let your users deploy approved IT services, ensuring that governance, compliance, and security best practices are enforced during the provisioning of resources.

Further, it leverages Amazon SageMaker, Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS) to facilitate the development of advanced ML models. As security is paramount for this workload, data in S3 is encrypted using client-side encryption, and column-level encryption is applied to columns in RDS. Our customer also codified their security controls via AWS Config rules to achieve continual compliance.

Compute and network isolation

To enable our customer to rapidly explore new ML models while achieving the highest standards of security, separate VPCs were used to isolate infrastructure, with access controlled by security groups. Core to this solution is Amazon SageMaker, a fully managed service that provides the ability to rapidly build, train, and deploy ML models. Amazon SageMaker notebooks are managed Jupyter notebooks that:

  1. Prepare and process data
  2. Write code to train models
  3. Deploy models to SageMaker hosting
  4. Test or validate models

In our solution, notebooks run in an isolated VPC with no egress connectivity other than VPC endpoints, which enable private communication with AWS services. When used in conjunction with VPC endpoint policies, you can use notebooks to control access to those services. In our solution, this is used to allow the SageMaker notebook to communicate only with resources owned by AWS Organizations through the use of the aws:PrincipalOrgID condition key. AWS Organizations helps provide governance to meet strict compliance regulations, and you can use the aws:PrincipalOrgID condition key in your resource-based policies to easily restrict access to AWS Identity and Access Management (IAM) principals from accounts within your organization.
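A minimal sketch of such a VPC endpoint policy is shown below as a Python dictionary; the organization ID is a placeholder, and a production policy would scope the allowed actions and resources more tightly than this illustration does:

```python
import json

# Illustrative VPC endpoint policy: allow S3 access only to principals
# belonging to the organization "o-exampleorgid" (a placeholder ID).
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```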

Data protection

Amazon S3 is used to store training data, model artifacts, and other data sets. Our solution uses server-side encryption with customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS) to protect data at rest. SSE-KMS leverages KMS and uses an envelope encryption strategy with CMKs. Envelope encryption is the practice of encrypting data with a data key and then encrypting that data key using another key – the CMK. CMKs are created in KMS and never leave KMS unencrypted. This approach allows fine-grained control over access to the CMK, and logs all access, and attempted access, to the key in AWS CloudTrail. In our solution, the age of the CMK is tracked by AWS Config and the key is regularly rotated. AWS Config enables you to assess, audit, and evaluate the configurations of deployed AWS resources by continuously monitoring and recording AWS resource configurations. This allows you to automate the evaluation of recorded configurations against desired configurations.
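As a brief, hedged illustration of SSE-KMS in practice (the bucket name, object key, local file, and key ARN are all placeholders), an object can be uploaded with a specific CMK as follows:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object encrypted server-side with a specific KMS CMK.
with open("train.csv", "rb") as data:
    s3.put_object(
        Bucket="example-training-data-bucket",
        Key="datasets/train.csv",
        Body=data,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    )
```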

Amazon S3 Block Public Access is also used at an account level to ensure that bucket policies or access control lists (ACLs) on existing and newly created resources don’t allow public access. Service control policies (SCPs) are used to prevent users from modifying this setting. AWS Config continually monitors S3 and remediates any attempt to make a bucket public.

Data in the solution is classified according to its sensitivity, corresponding to the customer’s data classification hierarchy. Classification in the solution is achieved through resource tagging, and tags are used in conjunction with AWS Config to ensure adherence to encryption, data retention, and archival requirements.

Continuous compliance

Our solution adopts a continuous compliance approach, whereby the compliance status of the architecture is continuously evaluated and auto-remediated if a configuration change attempts to violate the compliance posture. To achieve this, AWS Config and config rules are used to confirm that resources are configured in compliance with defined policies. AWS Lambda is used to implement a custom rule set that extends the rules included in AWS Config.
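The following is a minimal sketch of such a custom rule's Lambda handler; it assumes the rule is triggered by configuration changes, and it evaluates EBS volumes for encryption before reporting the result back to AWS Config:

```python
import json
import boto3

config = boto3.client("config")

def lambda_handler(event, context):
    """Minimal custom AWS Config rule: flag unencrypted EBS volumes."""
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    compliance = "NOT_APPLICABLE"
    if item["resourceType"] == "AWS::EC2::Volume":
        encrypted = item["configuration"].get("encrypted", False)
        compliance = "COMPLIANT" if encrypted else "NON_COMPLIANT"

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```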

Data exfiltration prevention

In our solution, VPC Flow Logs are enabled on all accounts to record information about the IP traffic going to and from network interfaces in each VPC. This allows us to watch for abnormal and unexpected outbound connection requests, which could be an indication of attempts to exfiltrate data. Amazon GuardDuty analyzes VPC Flow Logs, AWS CloudTrail event logs, and DNS logs to identify unexpected and potentially malicious activity within the AWS environment. For example, GuardDuty can detect compromised Amazon Elastic Compute Cloud (Amazon EC2) instances communicating with known command-and-control servers.

Conclusion

Financial services customers are using AWS to develop machine learning and analytics solutions that solve key business challenges while meeting security and compliance needs. This post outlined how Amazon SageMaker, along with multiple security services (AWS Config, GuardDuty, KMS), enables building a self-service, secure, and continually compliant data science environment on AWS for a financial services use case.

 

Learn and use 13 AWS security tools to implement SEC recommended protection of stored customer data in the cloud

Post Syndicated from Sireesh Pachava original https://aws.amazon.com/blogs/security/learn-and-use-13-aws-security-tools-to-implement-sec-recommended-protection-stored-customer-data-cloud/

Most businesses collect, process, and store sensitive customer data that needs to be secured to earn customer trust and protect customers against abuses. Regulated businesses must prove they meet guidelines established by regulatory bodies. As an example, in the capital markets, broker-dealers and investment advisors must demonstrate they address the guidelines proposed by the Office of Compliance Inspections and Examinations (OCIE), a division of the United States Securities and Exchange Commission (SEC).

So what do you do as a business to secure and protect customer data in the cloud, and to provide assurance to auditors and regulators on the protection of that data?

In this post, I will introduce you to 13 key AWS tools that you can use to address different facets of data protection across different types of AWS storage services. As a structure for the post, I will explain the key findings and issues the SEC OCIE found, and will explain how these tools help you meet the toughest compliance obligations and guidance. These tools and use cases apply to other industries as well.

What SEC OCIE observations mean for AWS customers

The SEC established SEC Regulation S-P (the primary rule for privacy notices and safeguard policies) and Regulation S-ID (the identity theft red flags rules) as compliance requirements for financial institutions, which include securities firms. In 2019, the OCIE examined broker-dealers’ and investment advisors’ use of network storage solutions, including cloud storage, to identify gaps in effective practices to protect stored customer information. OCIE noted gaps in security settings, configuration management, and oversight of vendor network storage solutions. OCIE also noted that firms don’t always use the available security features on storage solutions. The gaps can be summarized into the three problem areas below. These gaps are common to businesses in other industries as well.

  • Misconfiguration – Misconfigured network storage solution and missed security settings
  • Monitoring & Oversight – Inadequate oversight of vendor-provided network storage solutions
  • Data protection – Insufficient data classification policies and procedures

So how can you effectively use AWS security tools and capabilities to review and enhance your security and configuration management practices?

AWS tools and capabilities to help review, monitor and address SEC observations

I will cover the 13 key AWS tools that you can use to address different facets of data protection for storage under the same three broad headings as above: misconfiguration, monitoring and oversight, and data protection.

All of these 13 tools rely on automated monitoring alerts along with detective, preventative, and predictive controls to help enable the available security features and data controls. Effective monitoring, security analysis, and change management are key to help companies, including capital markets firms protect customers’ data and verify the effectiveness of security risk mitigation.

AWS offers a complete range of cloud storage services to help you meet your application and archival compliance requirements. Among the AWS storage services in common industry use, I use Amazon S3 and Amazon EBS for the examples in this post.

Establish control guardrails by operationalizing the shared responsibility model

Before covering the 13 tools, let me reinforce the foundational pillar of cloud security. The AWS shared responsibility model, where security and compliance is a shared responsibility between AWS and you as the AWS customer, is consistent with OCIE recommendations for ownership and accountability, and for the use of all available security features.

We start with the baseline structure for operationalizing the control guardrails. A lack of clear understanding of the shared responsibility model can result in missed controls or unused security features. Clarifying and operationalizing this model and its shared controls helps ensure that controls are applied at both the infrastructure layer and the customer layers, albeit in completely separate contexts.

Security of the cloud – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS cloud.

Security in the cloud – Your responsibility as a user of AWS is determined by the AWS cloud services that you select. This determines the amount of configuration work you must perform as part of your security responsibilities. You’re responsible for managing data in your care (including encryption options), classifying your assets, and using IAM tools to apply the appropriate permissions.

Misconfiguration – Monitor, detect, and remediate misconfiguration with AWS cloud storage services

Monitoring, detection, and remediation are the specific areas noted by the OCIE. Misconfiguration of settings results in errors such as inadvertent public access, unrestricted access permissions, and unencrypted records. Based on your use case, you can use a wide suite of AWS services to monitor, detect, and remediate misconfiguration.

Access analysis via AWS Identity and Access Management (IAM) Access Analyzer – Identifying whether anyone is accessing your resources from outside your AWS account due to misconfiguration is critical. IAM Access Analyzer identifies resources that can be accessed from outside your AWS account. For example, Access Analyzer continuously monitors for new or updated policies, and it analyzes permissions granted using policies for Amazon S3 buckets, AWS Key Management Service (AWS KMS) keys, and IAM roles. To learn more about using IAM Access Analyzer to flag unintended access to S3 buckets, see IAM Access Analyzer flags unintended access to S3 buckets shared through access points.

Actionable security checks via AWS Trusted Advisor – Unrestricted access increases opportunities for malicious activity such as hacking, denial-of-service attacks, and data theft. Trusted Advisor posts security advisories that should be regularly reviewed and acted on. Trusted Advisor can alert you to risks such as Amazon S3 buckets that aren’t secured and Amazon EBS volume snapshots that are marked as public. Bucket permissions that don’t limit who can upload or delete data create potential security vulnerabilities by allowing anyone to add, modify, or remove items in a bucket. Trusted Advisor examines explicit bucket permissions and associated bucket policies that might override the bucket permissions. It also checks security groups for rules that allow unrestricted access to a resource. To learn more about using Trusted Advisor, see How do I start using Trusted Advisor?

Encryption via AWS Key Management Service (AWS KMS) – Simplifying the process to create and manage encryption keys is critical to configuring data encryption by default. You can use AWS KMS master keys to automatically control the encryption of the data stored within services integrated with AWS KMS such as Amazon EBS and Amazon S3. AWS KMS gives you centralized control over the encryption keys used to protect your data. AWS KMS is designed so that no one, including the service operators, can retrieve plaintext master keys from the service. The service uses FIPS 140-2 validated hardware security modules (HSMs) to protect the confidentiality and integrity of keys. For example, you can specify that all newly created Amazon EBS volumes be created in encrypted form, with the option to use the default key provided by AWS KMS or a key you create. Amazon S3 inventory can be used to audit and report on the replication and encryption status of objects for business, compliance, and regulatory needs. To learn more about using KMS to enable data encryption on S3, see How to use KMS and IAM to enable independent security controls for encrypted data in S3.
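As a short boto3 illustration (the key ARN is a placeholder), you can opt a Region in to EBS encryption by default and set your own CMK as the default key:

```python
import boto3

ec2 = boto3.client("ec2")

# Opt in to encryption by default for all new EBS volumes in this Region.
ec2.enable_ebs_encryption_by_default()

# Optionally use your own CMK instead of the AWS-managed default key.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
)
```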

Monitoring & Oversight – AWS storage services provide ongoing monitoring, assessment, and auditing

Continuous monitoring and regular assessment of control environment changes and compliance are key to data storage oversight. They help you validate whether security and access settings and permissions across your organization’s cloud storage are in compliance with your security policies and flag non-compliance. For example, you can use AWS Config or AWS Security Hub to simplify auditing, security analysis, monitoring, and change management.

Configuration compliance monitoring via AWS Config – You can use AWS Config to assess how well your resource configurations align with internal practices, industry guidelines, and regulations; it provides a detailed view of the configuration of AWS resources, including current and historical configuration snapshots and changes. AWS Config managed rules are predefined, customizable rules to evaluate whether your AWS resources align with common best practices. Config rules can be used to evaluate configuration settings, detect and remediate violations of conditions in the rules, and flag non-compliance with internal practices. This helps demonstrate compliance against internal policies and best practices for data that requires frequent audits. For example, you can use a managed rule to quickly assess whether your EBS volumes are encrypted or whether specific tags are applied to your resources. Another example of AWS Config rules is ongoing detective controls that check that your S3 buckets don’t allow public read access. The rule checks the block public access setting, the bucket policy, and the bucket access control list (ACL). You can configure the logic that determines compliance with internal practices, which lets you automatically mark IAM roles in use as compliant and inactive roles as non-compliant. To learn more about using AWS Config rules, see Setting up custom AWS Config rule that checks the OS CIS compliance.
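As an illustration, the following boto3 sketch deploys two of the managed rules mentioned above; the rule names are arbitrary, while the source identifiers are published AWS Config managed rule identifiers:

```python
import boto3

config = boto3.client("config")

# Deploy two AWS managed rules: EBS volume encryption and S3 public-read checks.
for name, identifier in [
    ("ebs-volumes-encrypted", "ENCRYPTED_VOLUMES"),
    ("s3-no-public-read", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
]:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
        }
    )
```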

Automated compliance checks via AWS Security Hub – Security Hub eliminates the complexity and reduces the effort of managing and improving the security and compliance of your AWS accounts and workloads. It helps improve compliance with automated checks by running continuous and automated account and resource-level configuration checks against the rules in the supported industry best practices and standards, such as the CIS AWS Foundations Benchmarks. Security Hub insights are grouped findings that highlight emerging trends or possible issues. For example, insights help to identify Amazon S3 buckets with public read or write permissions. It also collects findings from partner security products using a standardized AWS security finding format, eliminating the need for time-consuming data parsing and normalization efforts. To learn more about Security Hub, see AWS Foundational Security Best Practices standard now available in Security Hub.

Security and compliance reports via AWS Artifact – As part of independent oversight, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year. AWS Artifact provides on-demand access to AWS security and compliance reports such as AWS Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies that validate the implementation and operating effectiveness of AWS security controls. You can access these attestations online under the artifacts section of the AWS Management Console. To learn more about accessing Artifact, see Downloading Reports in AWS Artifact.

Data Protection – Data classification policies and procedures for discovering, and protecting data

It’s important to classify institutional data to support application of the appropriate level of security. Data discovery and classification enables the implementation of the correct level of security, privacy, and access controls. Discovery and classification are highly complex given the volume of data involved and the tradeoffs between a strict security posture and the need for business agility.

Controls via S3 Block Public Access – S3 Block Public Access can help you apply controls across an entire AWS account, or at the individual S3 bucket level, to ensure that objects don’t have public permissions. Block Public Access is a good second layer of protection to ensure that you don’t inadvertently grant broader access to objects than intended. To learn more about using S3 Block Public Access, see Learn how to use two important Amazon S3 security features – Block Public Access and S3 Object Lock.
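A minimal boto3 sketch of the account-level setting follows (the account ID is a placeholder); a similar configuration can also be applied per bucket with the S3 API:

```python
import boto3

# Account-level Block Public Access via the S3 Control API.
s3control = boto3.client("s3control")
s3control.put_public_access_block(
    AccountId="111122223333",  # placeholder account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```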

S3 configuration monitoring and sensitive data discovery via Amazon Macie – You can use Macie to discover, classify, and protect sensitive data like personally identifiable information (PII) stored in Amazon S3. Macie provides visibility and continuous monitoring of S3 bucket configurations across all accounts within your AWS Organization, and alerts you to any unencrypted buckets, publicly accessible buckets, or buckets shared or replicated with AWS accounts outside your organization. For buckets you specify, Macie uses machine learning and pattern matching to identify objects that contain sensitive data. When sensitive data is located, Macie sends findings to Amazon EventBridge, allowing for automated actions or integrations with ticketing systems. To learn more about using Macie, see Enhanced Amazon Macie.

WORM data conformance via Amazon S3 Object Lock – Object Lock can help you meet the technical requirements of financial services regulations that require write once, read many (WORM) data storage for certain types of books and records information. To learn more about using S3 Object Lock, see Learn how to use two important Amazon S3 security features – Block Public Access and S3 Object Lock.

Alerts via Amazon GuardDuty – GuardDuty is designed to raise alarms when someone is scanning for potentially vulnerable systems or moving unusually large amounts of data to or from unexpected places. To learn more about GuardDuty findings, see Visualizing Amazon GuardDuty findings.

Note: AWS strongly recommends that you never put sensitive identifying information into free-form fields or metadata, such as function names or tags. This is because any data entered into metadata might be included in diagnostic logs.

Effective configuration management program features and practices

OCIE also noted effective industry practices for storage configuration, including:

  • Policies and procedures to support the initial installation and ongoing maintenance and monitoring of storage systems
  • Guidelines for security controls and baseline security configuration standards
  • Vendor management policies and procedures for security configuration assessment after software and hardware patches

In addition to the services already covered, AWS offers several other services and capabilities to help you implement effective control measures.

Security assessments using Amazon Inspector – You can use Amazon Inspector to assess your AWS resources for vulnerabilities or deviations from best practices and produce a detailed list of security findings prioritized by level of severity. For example, Amazon Inspector security assessments can help you check for unintended network accessibility of your Amazon Elastic Compute Cloud (Amazon EC2) instances and for vulnerabilities on those instances. To learn more about assessing network exposure of EC2 instances, see A simpler way to assess the network exposure of EC2 instances: AWS releases new network reachability assessments in Amazon Inspector.

Configuration compliance via AWS Config conformance packs – Conformance packs help you manage configuration compliance of your AWS resources at scale—from policy definition to auditing and aggregated reporting—using a common framework and packaging model. This helps to quickly establish a common baseline for resource configuration policies and best practices across multiple accounts in your organization in a scalable and efficient way. Sample conformance pack templates such as Operational best practices for Amazon S3 can help you to quickly get started on evaluating and configuring your AWS environment. To learn more about AWS Config conformance packs, see Manage custom AWS Config rules with remediations using conformance packs.

Logging and monitoring via AWS CloudTrail – CloudTrail lets you track and automatically respond to account activity that threatens the security of your AWS resources. With Amazon CloudWatch Events integration, you can define workflows that execute when events that can result in security vulnerabilities are detected. For example, you can create a workflow to add a specific policy to an Amazon S3 bucket when CloudTrail logs an API call that makes that bucket public. To learn more about using CloudTrail to respond to unusual API activity, see Announcing CloudTrail Insights: Identify and Respond to Unusual API Activity.

Machine learning based investigations via Amazon Detective – Detective makes it easy to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activities. Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that helps you to conduct faster, more efficient security investigations. To learn more about Amazon Detective based investigation, see Amazon Detective – Rapid Security Investigation and Analysis.

Conclusion

AWS security and compliance capabilities are well suited to help you review the SEC OCIE observations and implement effective practices to safeguard your organization’s data in AWS cloud storage. To review and enhance the security of your cloud data storage, learn about these 13 AWS tools and capabilities. Implementing this wide variety of monitoring, auditing, security analysis, and change management capabilities will help you remediate potential gaps in security settings and configurations. Many customers engage AWS Professional Services to help define and implement their security, risk, and compliance strategy, governance structures, operating controls, shared responsibility model, control mappings, and best practices.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Sireesh Pachava

Sai Sireesh is a Senior Advisor in Security, Risk, and Compliance at AWS. He specializes in solving complex strategy, business risk, security, and digital platform issues. A computer engineer with an MS and an MBA, he has held global leadership roles at Russell Investments, Microsoft, Thomson Reuters, and more. He’s a pro-bono director for the non-profit risk professional association PRMIA.

OSPAR 2020 report now available with 105 services in scope

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/ospar-2020-report-now-available-with-105-services-in-scope/

We are excited to announce the addition of 41 new services to the scope of our latest Outsourced Service Provider Audit Report (OSPAR) audit cycle, for a total of 105 services in the Asia Pacific (Singapore) Region. The newly added services include:

  • AWS Security Hub, which gives you a comprehensive view of high-priority security alerts and security posture across your AWS accounts.
  • AWS Organizations, which helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts.
  • Amazon CloudFront, which provides a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
  • AWS Backup, which is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services.

You can download our latest OSPAR report available in AWS Artifact.

An independent third-party auditor performs the OSPAR assessment. The assessment demonstrates that AWS has a system of controls in place that meet the Association of Banks in Singapore’s (ABS) Guidelines on Control Objectives and Procedures for Outsourced Service Providers. Our alignment with the ABS guidelines demonstrates to customers our commitment to meet the security expectations for cloud service providers set by the financial services industry in Singapore. You can leverage the OSPAR assessment report to conduct due diligence, minimizing the effort and costs required for compliance.

As always, we are committed to bringing new services into the scope of our OSPAR program based on your architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the OSPAR report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is the Security Audit Program Manager for the Asia Pacific and Japan Region, leading multiple security certification programs across these Regions. Throughout his career, he has helped numerous cloud service providers obtain global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

How Goldman Sachs builds cross-account connectivity to their Amazon MSK clusters with AWS PrivateLink

Post Syndicated from Robert L. Cossin original https://aws.amazon.com/blogs/big-data/how-goldman-sachs-builds-cross-account-connectivity-to-their-amazon-msk-clusters-with-aws-privatelink/

This guest post presents patterns for accessing an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster across your AWS account or Amazon Virtual Private Cloud (Amazon VPC) boundaries using AWS PrivateLink. In addition, the post discusses the pattern that the Transaction Banking team at Goldman Sachs (TxB) chose for their cross-account access, the reasons behind their decision, and how TxB satisfies its security requirements with Amazon MSK. Using Goldman Sachs’s implementation as a use case, this post aims to provide you with general guidance that you can use when implementing an Amazon MSK environment.

Overview

Amazon MSK is a fully managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. When you create an MSK cluster, the cluster resources are available to participants within the same Amazon VPC. This allows you to launch the cluster within specific subnets of the VPC, associate it with security groups, and attach IP addresses from your VPC’s address space through elastic network interfaces (ENIs). Network traffic between clients and the cluster stays within the AWS network, with internet access to the cluster not possible by default.

You may need to allow clients access to an MSK cluster in a different VPC within the same or a different AWS account. You have options such as VPC peering or a transit gateway that allow for resources in either VPC to communicate with each other as if they’re within the same network. For more information about access options, see Accessing an Amazon MSK Cluster.

Although these options are valid, this post focuses on a different approach, which uses AWS PrivateLink. Therefore, before we dive deep into the actual patterns, let’s briefly discuss when AWS PrivateLink is a more appropriate strategy for cross-account and cross-VPC access.

VPC peering, illustrated below, is a bidirectional networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses.

VPC peering is more suited for environments that have a high degree of trust between the parties that are peering their VPCs. This is because, after a VPC peering connection is established, the two VPCs can have broad access to each other, with resources in either VPC capable of initiating a connection. You’re responsible for implementing fine-grained network access controls with security groups to make sure that only specific resources intended to be reachable are accessible between the peered VPCs.

You can only establish VPC peering connections across VPCs that have non-overlapping CIDRs. This can pose a challenge when you need to peer VPCs with overlapping CIDRs, such as when peering across accounts from different organizations.

Additionally, if you’re running at scale, you can have hundreds of Amazon VPCs, and VPC peering has a limit of 125 peering connections to a single Amazon VPC. You can use a network hub like transit gateway, which, although highly scalable in enabling you to connect thousands of Amazon VPCs, requires similar bidirectional trust and non-overlapping CIDRs as VPC peering.

In contrast, AWS PrivateLink provides fine-grained network access control to specific resources in a VPC instead of all resources by default, and is therefore more suited for environments that want to follow a lower trust model approach, thus reducing their risk surface. The following diagram shows a service provider VPC that has a service running on Amazon Elastic Compute Cloud (Amazon EC2) instances, fronted by a Network Load Balancer (NLB). The service provider creates a configuration called a VPC endpoint service in the service provider VPC, pointing to the NLB. You can share this endpoint service with another Amazon VPC (service consumer VPC), which can use an interface VPC endpoint powered by AWS PrivateLink to connect to the service. The service consumers use this interface endpoint to reach the end application or service directly.

AWS PrivateLink makes sure that the connections initiated to a specific set of network resources are unidirectional: the connection can only originate from the service consumer VPC and flow into the service provider VPC, not the other way around. Outside of the network resources backed by the interface endpoint, no other resources in the service provider VPC get exposed. AWS PrivateLink allows for VPC CIDR ranges to overlap, and it scales comparatively better because thousands of Amazon VPCs can consume each service.

VPC peering and AWS PrivateLink are therefore two connectivity options suited for different trust models and use cases.

Transaction Banking’s micro-account strategy

An AWS account is a strong isolation boundary that provides both access control and reduced blast radius for issues that may occur due to deployment and configuration errors. This strong isolation is possible because you need to deliberately and proactively configure flows that cross an account boundary. TxB designed a strategy that moves each of their systems into its own AWS account, each of which is called a TxB micro-account. This strategy allows TxB to minimize the chances of a misconfiguration exposing multiple systems. For more information about TxB micro-accounts, see the video AWS re:Invent 2018: Policy Verification and Enforcement at Scale with AWS on YouTube.

To further complement the strong gains realized due to a TxB micro-account segmentation, TxB chose AWS PrivateLink for cross-account and cross-VPC access of their systems. AWS PrivateLink allows TxB service providers to expose their services as an endpoint service and use whitelisting to explicitly configure which other AWS accounts can create interface endpoints to these services. This also allows for fine-grained control of the access patterns for each service. The endpoint service definition only allows access to resources attached to the NLBs and thereby makes it easy to understand the scope of access overall. The one-way initiation of connection from a service consumer to a service provider makes sure that all connectivity is controlled on a point-to-point basis.  Furthermore, AWS PrivateLink allows the CIDR blocks of VPCs to overlap between the TxB micro-accounts. Thus the use of AWS PrivateLink sets TxB up for future growth as a part of their default setup, because thousands of TxB micro-account VPCs can consume each service if needed.

MSK broker access patterns using AWS PrivateLink

As a part of their micro-account strategy, TxB runs an MSK cluster in its own dedicated AWS account, and clients that interact with this cluster are in their respective micro-accounts. Considering this setup and the preference to use AWS PrivateLink for cross-account connectivity, TxB evaluated the following two patterns for broker access across accounts.

Pattern 1: Front each MSK broker with a unique dedicated interface endpoint

In this pattern, each MSK broker is fronted with a unique dedicated NLB in the TxB MSK account hosting the MSK cluster. The TxB MSK account contains an endpoint service for every NLB and is shared with the client account. The client account contains interface endpoints corresponding to the endpoint services. Finally, DNS entries identical to the broker DNS names point to the respective interface endpoint. The following diagram illustrates this pattern in the US East (Ohio) Region.

High-level flow

After setup, clients from their own accounts talk to the brokers using their provisioned default DNS names as follows:

  1. The client resolves the broker DNS name to the interface endpoint IP address inside the client VPC.
  2. The client initiates a TCP connection to the interface endpoint IP over port 9094.
  3. With AWS PrivateLink technology, this TCP connection is routed to the dedicated NLB setup for the respective broker listening on the same port within the TxB MSK account.
  4. The NLB routes the connection to the single broker IP registered behind it on TCP port 9094.

High-level setup

The setup steps in this section are shown for the US East (Ohio) Region; modify them if you use another Region. In the TxB MSK account, complete the following (a scripted sketch of these steps follows the list):

  1. Create a target group with target type as IP, protocol TCP, port 9094, and in the same VPC as the MSK cluster.
    • Register the MSK broker as a target by its IP address.
  2. Create an NLB with a listener of TCP port 9094 and forwarding to the target group created in the previous step.
    • Enable the NLB for the same AZ and subnet as the MSK broker it fronts.
  3. Create an endpoint service configuration for each NLB that requires acceptance and grant permissions to the client account so it can create a connection to this endpoint service.
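
The following is a minimal sketch of these provider-side steps using the AWS SDK for Python (Boto3). The VPC, subnet, broker IP, and client account values are hypothetical placeholders and error handling is omitted, so treat it as an illustration of the API calls rather than a production script:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")
    ec2 = boto3.client("ec2", region_name="us-east-2")

    # Target group for broker 1 (hypothetical VPC ID and broker IP).
    tg = elbv2.create_target_group(
        Name="txb-msk-broker-1", Protocol="TCP", Port=9094,
        VpcId="vpc-0123456789abcdef0", TargetType="ip")["TargetGroups"][0]
    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": "10.0.1.25", "Port": 9094}])  # broker 1's IP address

    # Dedicated internal NLB in the same AZ/subnet as the broker it fronts.
    nlb = elbv2.create_load_balancer(
        Name="txb-msk-broker-1-nlb", Type="network", Scheme="internal",
        Subnets=["subnet-0aaa1111bbb22222c"])["LoadBalancers"][0]
    elbv2.create_listener(
        LoadBalancerArn=nlb["LoadBalancerArn"], Protocol="TCP", Port=9094,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}])

    # Endpoint service that requires acceptance; whitelist the client account.
    svc = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[nlb["LoadBalancerArn"]],
        AcceptanceRequired=True)["ServiceConfiguration"]
    ec2.modify_vpc_endpoint_service_permissions(
        ServiceId=svc["ServiceId"],
        AddAllowedPrincipals=["arn:aws:iam::111122223333:root"])  # client account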

In the client account, complete the following (a matching consumer-side sketch follows the list):

  1. Create an interface endpoint in the same VPC the client is in (this connection request needs to be accepted within the TxB MSK account).
  2. Create a Route 53 private hosted zone, with the domain name kafka.us-east-2.amazonaws.com, and associate it with the same VPC as the clients are in.
  3. Create A-Alias records identical to the broker DNS names to avoid any TLS handshake failures and point it to the interface endpoints of the respective brokers.
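
A corresponding consumer-side sketch with Boto3, again using hypothetical IDs. The endpoint service name is the one shared from the TxB MSK account, and the endpoint's DNS entries become available once the connection is accepted there:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")
    route53 = boto3.client("route53")

    # Interface endpoint in the client VPC (hypothetical IDs); the connection
    # request must then be accepted within the TxB MSK account.
    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0fedcba9876543210",
        ServiceName="com.amazonaws.vpce.us-east-2.vpce-svc-0123456789abcdef0",
        SubnetIds=["subnet-0bbb2222ccc33333d"])["VpcEndpoint"]

    # Private hosted zone associated with the client VPC.
    zone = route53.create_hosted_zone(
        Name="kafka.us-east-2.amazonaws.com",
        CallerReference="txb-msk-pattern1",
        HostedZoneConfig={"PrivateZone": True},
        VPC={"VPCRegion": "us-east-2", "VPCId": "vpc-0fedcba9876543210"})["HostedZone"]

    # A-Alias record identical to the broker DNS name, so TLS hostname
    # verification against the broker certificate still succeeds.
    dns = endpoint["DnsEntries"][0]  # populated after the connection is accepted
    route53.change_resource_record_sets(
        HostedZoneId=zone["Id"],
        ChangeBatch={"Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "b-1.exampleClusterName.abcde.c2.kafka.us-east-2.amazonaws.com",
                "Type": "A",
                "AliasTarget": {"HostedZoneId": dns["HostedZoneId"],
                                "DNSName": dns["DnsName"],
                                "EvaluateTargetHealth": False}}}]})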

Pattern 2: Front all MSK brokers with a single shared interface endpoint

In this second pattern, all brokers in the cluster are fronted with a single NLB that has cross-zone load balancing enabled. You make this possible by modifying each MSK broker’s advertised.listeners config to advertise a unique port. You create a unique NLB listener-target group pair for each broker and a single shared listener-target group pair for all brokers. You create an endpoint service configuration for this single NLB and share it with the client account. In the client account, you create an interface endpoint corresponding to the endpoint service. Finally, you create DNS entries identical to the broker DNS names that point to this single interface endpoint. The following diagram illustrates this pattern in the US East (Ohio) Region.

High-level flow

After setup, clients from their own accounts talk to the brokers using their provisioned default DNS names as follows:

  1. The client resolves the broker DNS name to the interface endpoint IP address inside the client VPC.
  2. The client initiates a TCP connection to the interface endpoint over port 9094.
  3. The NLB listener within the TxB MSK account on port 9094 receives the connection.
  4. The NLB listener’s corresponding target group load balances the request to one of the brokers registered to it (Broker 1). In response, Broker 1 sends back the advertised DNS name and port (9001) to the client.
  5. The client resolves the broker endpoint address again to the interface endpoint IP and initiates a connection to the same interface endpoint over TCP port 9001.
  6. This connection is routed to the NLB listener for TCP port 9001.
  7. This NLB listener’s corresponding target group is configured to receive the traffic on TCP port 9094, and forwards the request on the same port to the only registered target, Broker 1.

High-level setup

The setup steps in this section are shown for the US East (Ohio) Region; modify them if you use another Region. In the TxB MSK account, complete the following (a sketch of the listener setup in step 5 follows the list):

  1. Modify the port that each MSK broker advertises by running the following command against each running broker. The following example changes the advertised port on broker b-1 to 9001. For each broker you run the command against, change the values of bootstrap-server, entity-name, CLIENT_SECURE, REPLICATION, and REPLICATION_SECURE. Note that when modifying the REPLICATION and REPLICATION_SECURE values, -internal has to be appended to the broker name, and the ports 9093 and 9095 shown below must not be changed.
    ./kafka-configs.sh \
    --bootstrap-server b-1.exampleClusterName.abcde.c2.kafka.us-east-2.amazonaws.com:9094 \
    --entity-type brokers \
    --entity-name 1 \
    --alter \
    --command-config kafka_2.12-2.2.1/bin/client.properties \
    --add-config advertised.listeners=[\
    CLIENT_SECURE://b-1.exampleClusterName.abcde.c2.kafka.us-east-2.amazonaws.com:9001,\
    REPLICATION://b-1-internal.exampleClusterName.abcde.c2.kafka.us-east-2.amazonaws.com:9093,\
    REPLICATION_SECURE://b-1-internal.exampleClusterName.abcde.c2.kafka.us-east-2.amazonaws.com:9095]

  2. Create a target group with target type as IP, protocol TCP, port 9094, and in the same VPC as the MSK cluster. The preceding diagram represents this as B-ALL.
    • Register all MSK brokers to B-ALL as targets by their IP addresses.
  3. Create target groups dedicated for each broker (B1, B2) with the same properties as B-ALL.
    • Register the respective MSK broker to each target group by its IP address.
  4. Perform the same steps for additional brokers if needed, creating a unique listener-target group pair corresponding to the advertised port of each broker.
  5. Create an NLB that is enabled for the same subnets that the MSK brokers are in and with cross-zone load balancing enabled.
    • Create a TCP listener for every broker’s advertised port (9001, 9002) that forwards to the corresponding target group you created (B1, B2).
    • Create a special TCP listener 9094 that forwards to the B-ALL target group.
  6. Create an endpoint service configuration for the NLB that requires acceptance and grant permissions to the client account to create a connection to this endpoint service.
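
As a sketch of step 5, assuming hypothetical ARNs for the shared NLB and the B1, B2, and B-ALL target groups, the per-port listeners can be created in a loop with Boto3:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-2")

    # Hypothetical placeholder ARNs.
    NLB_ARN = "arn:aws:elasticloadbalancing:us-east-2:111122223333:loadbalancer/net/txb-msk/0f0f0f0f0f0f0f0f"
    TARGET_GROUPS = {  # advertised port -> target group (B1, B2, B-ALL)
        9001: "arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/B1/1a1a1a1a1a1a1a1a",
        9002: "arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/B2/2b2b2b2b2b2b2b2b",
        9094: "arn:aws:elasticloadbalancing:us-east-2:111122223333:targetgroup/B-ALL/3c3c3c3c3c3c3c3c",
    }

    # One TCP listener per advertised port, plus the special 9094 listener
    # that forwards to the shared B-ALL target group.
    for port, tg_arn in TARGET_GROUPS.items():
        elbv2.create_listener(
            LoadBalancerArn=NLB_ARN, Protocol="TCP", Port=port,
            DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}])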

In the client account, complete the following:

  1. Create an interface endpoint in the same VPC the client is in (this connection request needs to be accepted within the TxB MSK account).
  2. Create a Route 53 private hosted zone, with the domain name kafka.us-east-2.amazonaws.com and associate it with the same VPC as the client is in.
  3. Under this hosted zone, create A-Alias records identical to the broker DNS names to avoid any TLS handshake failures and point it to the interface endpoint.

Both of these patterns, as shown in this post, use TLS on TCP port 9094 to talk to the MSK brokers. If your security posture allows plaintext communication between the clients and brokers, these patterns apply in that scenario as well, using TCP port 9092.

With both of these patterns, if Amazon MSK detects a broker failure, it mitigates the failure by replacing the unhealthy broker with a new one. In addition, the new MSK broker retains the same IP address and has the same Kafka properties, such as any modified advertised.listener configuration.

Amazon MSK allows clients to communicate with the service on TCP ports 9092, 9094, and 2181. As a byproduct of modifying advertised.listeners in Pattern 2, clients are automatically directed to speak to the brokers on the advertised port. If clients in the same account as Amazon MSK need to access the brokers, you should create a new Route 53 hosted zone in the Amazon MSK account with identical broker DNS names pointing to the NLB DNS name. The Route 53 record sets override the MSK broker DNS and route all broker traffic through the NLB.

Transaction Banking’s MSK broker access pattern

For broker access across TxB micro-accounts, TxB chose Pattern 1, where one interface endpoint per broker is exposed to the client account. TxB streamlined this overall process by automating the creation of the endpoint service within the TxB MSK account and the interface endpoints within the client accounts without any manual intervention.

At the time of cluster creation, the bootstrap broker configuration is retrieved by calling the Amazon MSK APIs and stored in AWS Systems Manager Parameter Store in the client account so that it can be retrieved on application startup. This enables clients to be agnostic of the DNS names of Kafka brokers launched in a completely different account.

A key driver for TxB choosing Pattern 1 is that it avoids having to modify a broker property like the advertised port. Pattern 2 creates the need for TxB to track which broker is advertising which port and make sure new brokers aren’t reusing the same port. This adds the overhead of having to modify and track the advertised port of new brokers being launched live and having to create a corresponding listener-target group pair for these brokers. TxB avoided this additional overhead by choosing Pattern 1.

On the other hand, Pattern 1 requires the creation of additional dedicated NLBs and interface endpoint connections when more brokers are added to the cluster. TxB limits this management overhead through automation, which requires additional engineering effort.

Also, using Pattern 1 costs more compared to Pattern 2, because each broker in the cluster has a dedicated NLB and an interface endpoint. For a single broker, it costs $37.80 per month to keep the end-to-end connectivity infrastructure up. The breakdown of the monthly connectivity costs is as follows:

  • NLB running cost – 1 NLB x $0.0225 x 720 hours/month = $16.20/month
  • 1 VPC endpoint spread across three AZs – 1 VPCE x 3 ENIs x $0.01 x 720 hours/month = $21.60/month

Additional charges for NLB capacity used and AWS PrivateLink data processed apply. For more information about pricing, see Elastic Load Balancing pricing and AWS PrivateLink pricing.

To summarize, Pattern 1 is best applicable when:

  • You want to minimize the management overhead associated with modifying broker properties, such as advertised port
  • You have automation that takes care of adding and removing infrastructure when new brokers are created or destroyed
  • Simplified and uniform deployments are primary drivers, with cost as a secondary concern

Transaction Banking’s security requirements for Amazon MSK

The TxB micro-account provides a strong application isolation boundary, and accessing MSK brokers using AWS PrivateLink using Pattern 1 allows for tightly controlled connection flows between these TxB micro-accounts. TxB further builds on this foundation through additional infrastructure and data protection controls available in Amazon MSK. For more information, see Security in Amazon Managed Streaming for Apache Kafka.

The following are the core security tenets that TxB’s internal security team requires for using Amazon MSK:

  • Encryption at rest using Customer Master Key (CMK) – TxB uses the Amazon MSK managed offering of encryption at rest. Amazon MSK integrates with AWS Key Management Service (AWS KMS) to offer transparent server-side encryption to always encrypt your data at rest. When you create an MSK cluster, you can specify the AWS KMS CMK that AWS KMS uses to generate data keys that encrypt your data at rest. For more information, see Using CMKs and data keys.
  • Encryption in transit – Amazon MSK uses TLS 1.2 for encryption in transit. TxB makes client-broker encryption and encryption between the MSK brokers mandatory.
  • Client authentication with TLS – Amazon MSK uses AWS Certificate Manager Private Certificate Authority (ACM PCA) for client authentication. The ACM PCA can either be a root Certificate Authority (CA) or a subordinate CA. If it’s a root CA, you need to install a self-signed certificate. If it’s a subordinate CA, you can choose its parent to be an ACM PCA root, a subordinate CA, or an external CA. This external CA can be your own CA that issues the certificate and becomes part of the certificate chain when installed as the ACM PCA certificate. TxB takes advantage of this capability and uses certificates signed by ACM PCA that are distributed to the client accounts.
  • Authorization using Kafka Access Control Lists (ACLs) – Amazon MSK allows you to use the Distinguished Name of a client’s TLS certificates as the principal of the Kafka ACL to authorize client requests. To enable Kafka ACLs, you must first have client authentication using TLS enabled. TxB uses the Kafka Admin API to create Kafka ACLs for each topic using the certificate names of the certificates deployed on the consumer and producer client instances. For more information, see Apache Kafka ACLs.

Conclusion

This post illustrated how the Transaction Banking team at Goldman Sachs approaches an application isolation boundary through the TxB micro-account strategy and how AWS PrivateLink complements this strategy.  Additionally, this post discussed how the TxB team builds connectivity to their MSK clusters across TxB micro-accounts and how Amazon MSK takes the undifferentiated heavy lifting away from TxB by allowing them to achieve their core security requirements. You can leverage this post as a reference to build a similar approach when implementing an Amazon MSK environment.

About the Authors

Robert L. Cossin is a Vice President at Goldman Sachs in New York. Rob joined Goldman Sachs in 2004 and has worked on many projects within the firm’s cash and securities flows. Most recently, Rob is a technical architect on the Transaction Banking team, focusing on cloud enablement and security.

Harsha W. Sharma is a Solutions Architect with AWS in New York. Harsha joined AWS in 2016 and works with Global Financial Services customers to design and develop architectures on AWS, and support their journey on the cloud.


BBVA: Helping Global Remote Working with Amazon AppStream 2.0

Post Syndicated from Joe Luis Prieto original https://aws.amazon.com/blogs/architecture/bbva-helping-global-remote-working-with-amazon-appstream-2-0/

This post was co-written with Javier Jose Pecete, Cloud Security Architect at BBVA, and Javier Sanz Enjuto, Head of Platform Protection – Security Architecture at BBVA.

Introduction

Speed and elasticity are key when you are faced with unexpected scenarios such as a massive employee workforce working from home or running more workloads on the public cloud if data centers face staffing reductions. AWS customers can instantly benefit from implementing a fully managed turnkey solution to help cope with these scenarios.

Companies not only need to use technology as the foundation to maintain business continuity and adjust their business model for the future, but they also must work to help their employees adapt to new situations.

About BBVA

BBVA is a customer-centric, global financial services group present in more than 30 countries. The group has more than 126,000 employees and serves more than 78 million customers.

Founded in 1857, BBVA is a leader in the Spanish market as well as the largest financial institution in Mexico. It has leading franchises in South America and the Sun Belt region of the United States and is the leading shareholder in Turkey’s Garanti BBVA.

The challenge

BBVA implemented a global remote working plan that protects customers and employees alike, including a significant reduction of the number of employees working in its branch offices. It also ensured continued and uninterrupted operations for both consumer and business customers by strengthening digital access to its full suite of services.

Following the company’s policies and adhering to new rules announced by national authorities in the last weeks, more than 86,000 employees from across BBVA’s international network of offices and its central service functions now work remotely.

BBVA, subject to a set of highly regulated requirements, was looking for a global architecture to accommodate remote work. The solution needed to be fast to implement, adaptable so it could scale out gradually across the countries in which the bank operates, and able to meet its operational, security, and regulatory requirements.

The architecture

BBVA selected Amazon AppStream 2.0 for particular use cases of applications that, due to their sensitivity, are not exposed to the internet (such as financial, employee, and security applications). Having had previous experience with the service, BBVA chose AppStream 2.0 to accommodate the remote work experience.

AppStream 2.0 is a fully managed application streaming service that provides users with instant access to their desktop applications from anywhere, regardless of what device they are using.

AppStream 2.0 integrates with existing IT environments, can be managed through the AWS SDK or console, automatically scales globally on demand, and is fully managed on AWS. This means there is no hardware or software to deploy, patch, or upgrade.

AppStream 2.0 can be managed through the AWS SDK (1)

  1. The streamed video and user inputs are sent over HTTPS and are SSL-encrypted between the AppStream 2.0 instance executing your applications and your end users.
  2. Security groups are used to control network access to the customer VPC.
  3. AppStream 2.0 streaming instance access to the internet is through the customer VPC.

AppStream 2.0 fleets are created per use case so that security restrictions can be applied according to data sensitivity. By setting the clipboard, file transfer, and print-to-local-device options, the fleets control data movement to and from employees’ AppStream 2.0 streaming sessions.

BBVA relies on a proprietary service called Heimdal to authenticate employees through the corporate identity provider. Heimdal calls the AppStream 2.0 CreateStreamingURL API operation to create a temporary URL that starts a streaming session for the specified user, and abstracts the service from the user by setting the following parameters (a minimal sketch of the call follows the list):

  • FleetName to connect to the most appropriate fleet based on the user’s location (BBVA has fleets deployed in Europe and America to improve the user’s experience)
  • ApplicationId to launch the proper application without having to use an intermediate portal
  • SessionContext for situations where, for instance, the authentication service generates a token that needs to be forwarded to a browser application and injected as a session cookie
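
A minimal Boto3 sketch of such a call follows. The stack, fleet, application, user, and token values are hypothetical placeholders standing in for what Heimdal’s routing logic would supply:

    import boto3

    appstream = boto3.client("appstream", region_name="eu-west-1")

    # Hypothetical names; the real values come from Heimdal, which picks
    # the fleet closest to the user's location.
    response = appstream.create_streaming_url(
        StackName="corporate-apps",
        FleetName="emea-sensitive-apps",
        UserId="jdoe",
        ApplicationId="internal-finance-app",
        SessionContext="opaque-auth-token",  # forwarded to the app, e.g. as a cookie
        Validity=60)  # seconds the URL remains valid

    print(response["StreamingURL"])  # hand this URL to the employee's browser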

BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology (2)

To simplify its overall network architecture, BBVA uses AWS Transit Gateway to build a hub-and-spoke network topology with full control over network routing and security.

There are situations where the application streamed in AppStream 2.0 needs to connect:

  1. On-premises, using AWS Direct Connect plus VPN providing an IPsec-encrypted private connection
  2. To the internet through an outbound VPC proxy, with domain whitelisting and content filtering to control the information employees can reach and the threats they are exposed to while browsing

AppStream 2.0 activity is logged into a centralized repository supported by Amazon S3, both for detecting unusual behavior patterns and for meeting regulatory requirements.

Conclusion

BBVA built a global solution reducing implementation time by 90% compared to on-premises projects, and is meeting its operational and security requirements. As well, the solution is helping with the company’s top concern: protecting the health and safety of its employees.

How financial institutions can approve AWS services for highly confidential data

Post Syndicated from Ilya Epshteyn original https://aws.amazon.com/blogs/security/how-financial-institutions-can-approve-aws-services-for-highly-confidential-data/

As a Principal Solutions Architect within the Worldwide Financial Services industry group, one of the most frequently asked questions I receive is whether a particular AWS service is financial-services-ready. In a regulated industry like financial services, moving to the cloud isn’t a simple lift-and-shift exercise. Instead, financial institutions use a formal service-by-service assessment process, often called whitelisting, to demonstrate how cloud services can help address their regulatory obligations. When this process is not well defined, it can delay efforts to migrate data to the cloud.

In this post, I will provide a framework consisting of five key considerations that financial institutions should focus on to help streamline the whitelisting of cloud services for their most confidential data. I will also outline the key AWS capabilities that can help financial services organizations during this process.

Here are the five key considerations:

  1. Achieving compliance
  2. Data protection
  3. Isolation of compute environments
  4. Automating audits with APIs
  5. Operational access and security

For many of the business and technology leaders that I work with, agility and the ability to innovate quickly are the top drivers for their cloud programs. Financial services institutions migrate to the cloud to help develop personalized digital experiences, break down data silos, develop new products, drive down margins for existing products, and proactively address global risk and compliance requirements. AWS customers who use a wide range of AWS services achieve greater agility as they move through the stages of cloud adoption. Using a wide range of services enables organizations to offload undifferentiated heavy lifting to AWS and focus on their core business and customers.

My goal is to guide financial services institutions as they move their company’s highly confidential data to the cloud — in both production environments and mission-critical workloads. The following considerations will help financial services organizations determine cloud service readiness and to achieve success in the cloud.

1. Achieving compliance

For financial institutions that use a whitelisting process, the first step is to establish that the underlying components of the cloud service provider’s (CSP’s) services can meet baseline compliance needs. A key prerequisite to gaining this confidence is to understand the AWS shared responsibility model. Shared responsibility means that the secure functioning of an application on AWS requires action on the part of both the customer and AWS as the CSP. AWS customers are responsible for their security in the cloud. They control and manage the security of their content, applications, systems, and networks. AWS manages security of the cloud, providing and maintaining proper operations of services and features, protecting AWS infrastructure and services, maintaining operational excellence, and meeting relevant legal and regulatory requirements.

In order to establish confidence in the AWS side of the shared responsibility model, customers can regularly review the AWS System and Organization Controls 2 (SOC 2) Type II report prepared by an independent, third-party auditor. The AWS SOC 2 report contains confidential information that can be obtained by customers under an AWS non-disclosure agreement (NDA) through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Key takeaway: Currently, 116 AWS services are in scope for SOC compliance, which will help organizations streamline their whitelisting process. For more information about which services are in scope, see AWS Services in Scope by Compliance Program.

2. Data protection

Financial institutions use comprehensive data loss prevention strategies to protect confidential information. Customers using AWS data services can employ encryption to mitigate the risk of disclosure, alteration of sensitive information, or unauthorized access. The AWS Key Management Service (AWS KMS) allows customers to manage the lifecycle of encryption keys and control how they are used by their applications and AWS services. Allowing encryption keys to be generated and maintained in the FIPS 140-2 validated hardware security modules (HSMs) in AWS KMS is the best practice and most cost-effective option.
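
As an illustration of this flow (a Boto3 sketch with hypothetical values, not any institution’s actual setup), creating a customer managed key and generating a data key for envelope encryption takes two calls:

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Create a customer managed key; the key material never leaves the
    # FIPS 140-2 validated HSMs inside AWS KMS.
    key_id = kms.create_key(
        Description="Key for encrypting highly confidential data at rest"
    )["KeyMetadata"]["KeyId"]

    # Envelope encryption: KMS returns a plaintext data key for local use and
    # an encrypted copy to persist alongside the data.
    data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    plaintext_key = data_key["Plaintext"]       # use for encryption, then discard
    encrypted_key = data_key["CiphertextBlob"]  # store with the encrypted data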

For AWS customers who want added flexibility for key generation and storage, AWS KMS allows them to either import their own key material into AWS KMS and keep a copy in their on-premises HSM, or generate and store keys in dedicated AWS CloudHSM instances under their control. For each of these key material generation and storage options, AWS customers can control all the permissions to use keys from any of their applications or AWS services. In addition, every use of a key or modification to its policy is logged to AWS CloudTrail for auditing purposes. This level of control and audit over key management is one of the tools organizations can use to address regulatory requirements for using encryption as a data privacy mechanism.

All AWS services offer encryption features, and most AWS services that financial institutions use integrate with AWS KMS to give organizations control over their encryption keys used to protect their data in the service. AWS offers customer-controlled key management features in twice as many services as any other CSP.

Financial institutions also encrypt data in transit to ensure that it is accessed only by the intended recipient. Encryption in transit must be considered in several areas, including API calls to AWS service endpoints, encryption of data in transit between AWS service components, and encryption in transit within applications. The first two considerations fall within the AWS scope of the shared responsibility model, whereas the latter is the responsibility of the customer.

All AWS services offer Transport Layer Security (TLS) 1.2 encrypted endpoints that can be used for all API calls. Some AWS services also offer FIPS 140-2 endpoints in selected AWS Regions. These FIPS 140-2 endpoints use a cryptographic library that has been validated under the Federal Information Processing Standards (FIPS) 140-2 standard. For financial institutions that operate workloads on behalf of the US government, using FIPS 140-2 endpoints helps them to meet their compliance requirements.

To simplify configuring encryption in transit within an application, which falls under the customer’s responsibility, customers can use the AWS Certificate Manager (ACM) service. ACM enables easy provisioning, management, and deployment of X.509 certificates used for TLS to critical application endpoints hosted in AWS. These integrations provide automatic certificate and private key deployment and automated rotation for Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, AWS CloudFormation, and AWS Elastic Beanstalk. ACM offers both publicly trusted and private certificate options to meet the trust model requirements of an application. Organizations may also import their existing public or private certificates to ACM to make use of existing public key infrastructure (PKI) investments.
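
For example, requesting a DNS-validated certificate for a hypothetical domain is a single call, after which ACM can renew the certificate automatically:

    import boto3

    acm = boto3.client("acm", region_name="us-east-1")

    # Request a public TLS certificate for a hypothetical application endpoint;
    # validation completes once the DNS record that ACM specifies is created.
    cert = acm.request_certificate(
        DomainName="api.example.com",
        ValidationMethod="DNS")
    print(cert["CertificateArn"])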

Key takeaway: AWS KMS allows organizations to manage the lifecycle of encryption keys and control how encryption keys are used for over 50 services. For more information, see AWS Services Integrated with AWS KMS. AWS ACM simplifies the deployment and management of PKI as compared to self-managing in an on-premises environment.

3. Isolation of compute environments

Financial institutions have strict requirements for isolation of compute resources and network traffic control for workloads with highly confidential data. One of the core competencies of AWS as a CSP is to protect and isolate customers’ workloads from each other. Amazon Virtual Private Cloud (Amazon VPC) allows customers to control their AWS environment and keep it separate from other customers’ environments. Amazon VPC enables customers to create a logically separate network enclave within the Amazon Elastic Compute Cloud (Amazon EC2) network to house compute and storage resources. Customers control the private environment, including IP addresses, subnets, network access control lists, security groups, operating system firewalls, route tables, virtual private networks (VPNs), and internet gateways.

Amazon VPC provides robust logical isolation of customers’ resources. For example, every packet flow on the network is individually authorized to validate the correct source and destination before it is transmitted and delivered. It is not possible for information to pass between multiple tenants without specifically being authorized by both the transmitting and receiving customers. If a packet is being routed to a destination without a rule that matches it, the packet is dropped. AWS has also developed the AWS Nitro System, a purpose-built hypervisor with associated custom hardware components that allocates central processing unit (CPU) resources for each instance and is designed to protect the security of customers’ data, even from operators of production infrastructure.

For more information about the isolation model for multi-tenant compute services, such as AWS Lambda, see the Security Overview of AWS Lambda whitepaper. When Lambda executes a function on a customer’s behalf, it manages both provisioning and the resources necessary to run code. When a Lambda function is invoked, the data plane allocates an execution environment to that function or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment. Each function runs in one or more dedicated execution environments that are used for the lifetime of the function and are then destroyed. Execution environments run on hardware-virtualized lightweight micro-virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. Execution environments are never shared across functions, and microVMs are never shared across AWS accounts. AWS continues to innovate in the area of hypervisor security, and resource isolation enables our financial services customers to run even the most sensitive workloads in the AWS Cloud with confidence.

Most financial institutions require that traffic stay private whenever possible and not leave the AWS network unless specifically required (for example, in internet-facing workloads). To keep traffic private, customers can use Amazon VPC to carve out an isolated and private portion of the cloud for their organizational needs. A VPC allows customers to define their own virtual networking environments with segmentation based on application tiers.

To connect to regional AWS services outside of the VPC, organizations may use VPC endpoints, which allow private connectivity between resources in the VPC and supported AWS services. Endpoints are managed virtual devices that are highly available, redundant, and scalable. Endpoints enable private connection between a customer’s VPC and AWS services using private IP addresses. With VPC endpoints, Amazon EC2 instances running in private subnets of a VPC have private access to regional resources without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Furthermore, when customers create an endpoint, they can attach a policy that controls the use of the endpoint to access only specific AWS resources, such as specific Amazon Simple Storage Service (Amazon S3) buckets within their AWS account. Similarly, by using resource-based policies, customers can restrict access to their resources to only allow access from VPC endpoints. For example, by using bucket policies, customers can restrict access to a given Amazon S3 bucket only through the endpoint. This ensures that traffic remains private and only flows through the endpoint without traversing public address space.
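
A sketch of such a bucket policy applied with Boto3 follows; the bucket name and endpoint ID are hypothetical placeholders:

    import boto3
    import json

    s3 = boto3.client("s3")

    # Deny any S3 access to the bucket that does not arrive through the
    # designated VPC endpoint.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AccessViaVPCEndpointOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-confidential-bucket",
                "arn:aws:s3:::example-confidential-bucket/*"],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}}}]}

    s3.put_bucket_policy(
        Bucket="example-confidential-bucket",
        Policy=json.dumps(policy))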

Key takeaway: To help customers keep traffic private, more than 45 AWS services have support for VPC Endpoints.

4. Automating audits with APIs

Visibility into user activities and resource configuration changes is a critical component of IT governance, security, and compliance. On-premises logging solutions require installing agents, setting up configuration files and log servers, and building and maintaining data stores to store the data. This complexity may result in poor visibility and fragmented monitoring stacks, which in turn takes longer to troubleshoot and resolve issues. CloudTrail provides a simple, centralized solution to record AWS API calls and resource changes in the cloud that helps alleviate this burden.

CloudTrail provides a history of activity in a customer’s AWS account to help them meet compliance requirements for their internal policies and regulatory standards. CloudTrail helps identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help customers analyze and respond to activity in their AWS account. CloudTrail management events provide insights into the management (control plane) operations performed on resources in an AWS account. For example, customers can log administrative actions, such as creation, deletion, and modification of Amazon EC2 instances. For each event, they receive details such as the AWS account, IAM user role, and IP address of the user that initiated the action as well as time of the action and which resources were affected.

CloudTrail data events provide insights into the resource (data plane) operations performed on or within the resource itself. Data events are often high-volume activities and include operations, such as Amazon S3 object-level APIs, and AWS Lambda function Invoke APIs. For example, customers can log API actions on Amazon S3 objects and receive detailed information, such as the AWS account, IAM user role, IP address of the caller, time of the API call, and other details. Customers can also record activity of their Lambda functions and receive details about Lambda function executions, such as the IAM user or service that made the Invoke API call, when the call was made, and which function was executed.
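
As an illustration, the following Boto3 sketch queries the CloudTrail event history to answer a typical audit question: who terminated EC2 instances in the last day?

    import boto3
    from datetime import datetime, timedelta

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

    # Query the last 24 hours of management events for instance terminations.
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": "TerminateInstances"}],
        StartTime=datetime.utcnow() - timedelta(days=1))

    for event in events["Events"]:
        print(event["EventTime"], event.get("Username"), event["EventName"])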

To help customers simplify continuous compliance and auditing, AWS uniquely offers the AWS Config service to help them assess, audit, and evaluate the configurations of AWS resources. AWS Config continuously monitors and records AWS resource configurations, and allows customers to automate the evaluation of recorded configurations against internal guidelines. With AWS Config, customers can review changes in configurations and relationships between AWS resources and dive into detailed resource configuration histories.
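
As a sketch, enabling one AWS managed rule that continuously checks for publicly readable S3 buckets is a single call (this assumes an AWS Config recorder is already running in the account):

    import boto3

    config = boto3.client("config", region_name="us-east-1")

    # AWS managed rule; see the AWS Config managed rules list for identifiers.
    config.put_config_rule(ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {"Owner": "AWS",
                   "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"}})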

Key takeaway: Over 160 AWS services are integrated with CloudTrail, which helps customers ensure compliance with their internal policies and regulatory standards by providing a history of activity within their AWS account. For more information about how to use CloudTrail with specific AWS services, see AWS Service Topics for CloudTrail in the CloudTrail user guide. For more information on how to enable AWS Config in an environment, see Getting Started with AWS Config.

5. Operational access and security

In our discussions with financial institutions, they’ve told AWS that they are required to have a clear understanding of access to their data. This includes knowing what controls are in place to ensure that unauthorized access does not occur. AWS has implemented layered controls that use preventative and detective measures to ensure that only authorized individuals have access to production environments where customer content resides. For more information about access and security controls, see the AWS SOC 2 report in AWS Artifact.

One of the foundational design principles of AWS security is to keep people away from data to minimize risk. As a result, AWS created an entirely new virtualization platform called the AWS Nitro System. This highly innovative system combines new hardware and software that dramatically increases both performance and security. The AWS Nitro System enables enhanced security with a minimized attack surface because virtualization and security functions are offloaded from the main system board where customer workloads run to dedicated hardware and software. Additionally, the locked-down security model of the AWS Nitro System prohibits all administrative access, including that of Amazon employees, which eliminates the possibility of human error and tampering.

Key takeaway: Review third-party auditor reports (including SOC 2 Type II) available in AWS Artifact, and learn more about the AWS Nitro System.

Conclusion

AWS can help simplify and expedite the whitelisting process for financial services institutions moving to the cloud. When organizations take advantage of a wide range of AWS services, they maximize their agility by using the security and compliance measures already built into those services to complete whitelisting, so they can focus on their core business and customers.

After organizations have completed the whitelisting process and determined which cloud services can be used as part of their architecture, the AWS Well-Architected Framework can then be implemented to help build and operate secure, resilient, performant, and cost-effective architectures on AWS.

AWS also has a dedicated team of financial services professionals to help customers navigate a complex regulatory landscape, as well as other resources to guide them in their migration to the cloud – no matter where they are in the process. For more information, see the AWS Financial Services page, or fill out this AWS Financial Services Contact form.

Additional resources

  • AWS Security Documentation
    The security documentation repository shows how to configure AWS services to help meet security and compliance objectives. Cloud security at AWS is the highest priority. AWS customers benefit from a data center and network architecture that are built to meet the requirements of the most security-sensitive organizations.
  • AWS Compliance Center
    The AWS Compliance Center is an interactive tool that provides customers with country-specific requirements and any special considerations for cloud use in the geographies in which they operate. The AWS Compliance Center has quick links to AWS resources to help with navigating cloud adoption in specific countries, and includes details about the compliance programs that are applicable in these jurisdictions. The AWS Compliance Center covers many countries, and more countries continue to be added as they update their regulatory requirements related to technology use.
  • AWS Well-Architected Framework and AWS Well-Architected Tool
    The AWS Well-Architected Framework helps customers understand the pros and cons of decisions they make while building systems on AWS. The AWS Well-Architected Tool helps customers review the state of their workloads and compares them to the latest AWS architectural best practices. For more information about the AWS Well-Architected Framework and security, see the Security Pillar – AWS Well-Architected Framework whitepaper.

If you have feedback about this blog post, submit comments in the Comments section below.

Author

Ilya Epshteyn

Ilya is a solutions architect with AWS. He helps customers to innovate on the AWS platform by building highly available, scalable, and secure architectures. He enjoys spending time outdoors and building Lego creations with his kids.

Transforming DevOps at Broadridge on AWS

Post Syndicated from Som Chatterjee original https://aws.amazon.com/blogs/devops/transforming-devops-for-a-fintech-on-aws/

by Tom Koukourdelis (Broadridge – Vice President, Head of Global Cloud Platform Development and Engineering), Sreedhar Reddy (Broadridge – Vice President, Enterprise Cloud Architecture)

We have seen large enterprises in all industry segments meaningfully utilizing AWS to build new capabilities and deliver business value. While doing so, enterprises have to balance existing systems, processes, tools, and culture while innovating at pace with industry disruptors. Broadridge Financial Solutions, Inc. (NYSE: BR) is no exception. Broadridge is a $4 billion global FinTech leader and a leading provider of investor communications and technology-driven solutions to banks, broker-dealers, asset and wealth managers, and corporate issuers.

This blog post explores how we adopted AWS at scale while staying secure and compliant, and while delivering a high degree of productivity for our builders on AWS. It also describes the steps we took to create technical (a cloud solution as a foundation based on AWS) and procedural (organizational) capabilities by leveraging AWS cloud adoption constructs. The improvement in our builder productivity and agility directly contributes to rolling out differentiated business capabilities that address our customer needs in a timely manner. In this post, we share real-life learnings and takeaways to adopt AWS at scale, transform business and application team experiences, and deliver customer delight.

Background

At Broadridge, we have a number of distributed and mainframe systems supporting multiple financial services domains and sub-domains, such as post trade, proxy communications, financial and regulatory reporting, portfolio management, and financial operations. The majority of these systems were built and deployed years ago in on-premises data centers across the US and abroad.

Builder personas at Broadridge are diverse in terms of location, culture, and the technology stack they use to build and support applications (we use a number of front-end JS frameworks; .NET, Java, and ColdFusion for web development; ORMs for data entity relational mapping; IBM MQ and Apache Camel for messaging; databases like SQL, Oracle, and Sybase, plus other open source stacks, for transaction management; and batch processing on virtualized and bare metal instances). With more than 200 on-premises distributed applications and mainframe systems across front-, mid-, and back-office ecosystems, we wanted to leverage AWS to improve efficiency, build agility, and reduce costs. The ability to reach customers in new geographies, reduced time to market, and opportunities to build new business competencies were key parameters as well.

Broadridge’s core tenets for cloud adoption

When AWS adoption within Broadridge attained critical mass (known as the Foundation stage of adoption), the business and technology leadership teams defined our cloud adoption posture and shared it with teams across the organization as the following tenets. Enterprises looking to adopt AWS at scale should define similar tenets that fit their organizations, in plain language understandable by everyone across the board.

  • Iterate: Understanding that we cannot disrupt ongoing initiatives, a small, iterative approach of moving workloads to the cloud in waves (rinse and repeat) was to be adopted. Long-drawn, capital-intensive big bangs were to be avoided.
  • Fully automate: Starting from infrastructure deployment to application build, test, and release, we decided early on that automation and no-touch deployment are the right approach both to leverage cloud capabilities and to fuel a shift toward a matured DevOps culture.
  • Trust but verify only exceptions: Security and regulatory compliance are paramount for an organization like Broadridge. Guardrails (such as service control policies, managed AWS Config rules, and a multi-account strategy; a minimal sketch follows this list) and controls (such as PCI and NIST control frameworks) are iteratively developed to baseline every AWS account and AWS resource deployed. Manual security verification of workloads isn’t needed unless an exception is raised. Defense-in-depth strategies (distancing the attack surface from sensitive data and resources using multi-layered security) were to be applied.
  • Go fast; re-hosting is acceptable: Not every workload needs to go through years of rewriting and refactoring before it is deemed suitable for the cloud. Minor tweaking (light touch re-platforming) to go fast (such as on-premises Oracle to RDS for Oracle) is acceptable.
  • Timeliness and small wins are key: Organizations spend large sums of capital to completely rewrite applications and by the time they are done, the business goal and customer expectation will have changed. That leads to material dissatisfaction with customers. We wanted to avoid that by setting small, measurable targets.
  • Cloud fluency: Investments in training and upskilling builders and leaders across the organization (developers, infra-ops, sec-ops, managers, salesforce, HR, and executive leadership) were to be made to build fluency in the cloud.
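
The guardrail sketch referenced above: a hypothetical service control policy, attached through AWS Organizations, that blocks member accounts from stopping CloudTrail or leaving the organization. The statement content is illustrative only, not Broadridge’s actual guardrail set:

    import boto3
    import json

    org = boto3.client("organizations")

    # Illustrative deny-list guardrail applied to every member account.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging",
                       "organizations:LeaveOrganization"],
            "Resource": "*"}]}

    policy = org.create_policy(
        Name="baseline-guardrails",
        Description="Baseline guardrails applied to every member account",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp))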

The first milestone

The first milestone in our adoption journey was synonymous with Project stage of adoption and had the following characteristics.

A controlled sprawl of shadow IT

We first gave small teams with little to no exposure to critical business functions (such as customer data and SLA-oriented workloads) sandboxes to test out proofs of concept (PoCs) on AWS. We created the cloud sandboxes with least privilege and added additional privileges upon request, after verification. During this time, our key AWS usage characteristics were:

  • Manual AWS account setup with least privilege
  • Manual IAM role creation with role boundaries and authentication and authorization from the existing enterprise Active Directory
  • Integration with existing Security Information and Event Management (SIEM) tools to audit role sprawl and config changes
  • Proofs of concepts only
  • Account tagging for chargeback and tracking purposes
  • No automated build, test, deploy, or integration with existing delivery pipeline
  • Small and definitive timeframes for PoCs with defined goals

A typical AWS environment at this stage will resemble that shown in the following diagram:

Representative AWS usage during first milestone

As shown above, at this time the corporate assets were connected to a highly restrictive AWS environment through VPN. Access to the AWS environment was set up based on AWS identity primitives, or IAM roles mapped to and federated with the on-premises Active Directory. There was a single VPC set up for a sandbox account with no egress to the internet. No customer data was hosted in this AWS environment, and the environment was connected to our SIEM of choice.

Early adopters became first educators and mentors

Members of the first teams to carry out proofs of concept on AWS shared learnings with each other and with the leadership team within Broadridge. This helped build communities of practices (CoPs) over time. Initial CoPs established were for networking and security, and were later extended to various practices like Terraform, Chef, and Jenkins.

Tech PMO team within Broadridge as the quasi-central cloud team

Ownership is vital no matter how small the effort and insignificant the impact of risky experimentation. The ownership of account setup, role creation, integration with on-premises AD and SIEM, and oversight to ensure that the experimentation does not pose any risk to the brand led us to build a central cloud team with experienced AWS and infrastructure practitioners. This team created a process for cloud migration with first manual guardrails of allowed and disallowed actions, manual interventions, and checkpoints built in every step.

At this stage, a representative pattern of work products across teams resembles what is shown below.

Work products across teams during initial stages of AWS usage

As the diagram suggests, individual application teams built overlapping—and, in many cases, identical—technical building blocks across the teams. This was acceptable as the teams were experimenting and running PoCs on AWS. In an actual production application delivery, the blocks marked with a * would be considered technical and functional waste—that is, undifferentiated lift which increases the cost of doing business.

The second milestone

In hindsight, this is perhaps the most important milestone in our cloud adoption journey. This step was marked with following key characteristics:

  • Every new team doing PoCs rebuilds the same building blocks: This includes networking (VPCs and security groups), identity primitives (accounts, roles, and policies), monitoring (Amazon CloudWatch setup and custom metrics), and compute (images with org-mandated security patches).
  • The teams usually ask the same fundamental questions first: These include: What is an ideal CIDR block range? How do we integrate with SIEM? How do we spin up web servers on Amazon EC2? How do we secure access to data? How do we set up workload monitoring?
  • Security reviews rarely find new security gaps but add time to the process: A central security group, part of the central cloud team, reviewed every new account request and every new service usage request without finding new security gaps when the application team used the baseline guardrails.
  • Manual effort is spent on tagging, chargeback, and other approvals: A portion of the application PoC/minimum viable product (MVP) lifecycle was spent on housekeeping. While housekeeping was necessary, the effort spent was undifferentiated.

The following diagram represents the efforts of every team during the first phase.

Team wise efforts showing duplicative work

As shown above, every application team spent effort on building nearly the same capabilities before they could begin developing their team-specific application functionality and assets. These common blocks of work are undifferentiated and lead to duplicated effort, which also varies depending on the efficiency of the team.

During this step, learnings from the PoCs led us to establish the tenets shared earlier in this post. To address the learnings, Broadridge established a cloud platform team. The cloud platform team, also referred to as the cloud enablement engine (CEE), is a team of builders who create the foundational building blocks on AWS that address common infrastructure, security, monitoring, auditing, and break-glass controls. At the same time, we established a cloud business office (CBO) as a liaison between the application and business teams and the CEE. CBO exists to manage and prioritize foundational requirements from multiple application teams as they go online on AWS and helps create the product backlog for CEE.

Cloud Enablement Engine Responsibilities:

  • Build out foundational building blocks utilizing AWS multi-account strategy
  • Build security guardrails, compliance controls, infrastructure as code automation, auditing and monitoring controls
  • Implement the cloud platform backlog that funnels from the CBO as common asks from application teams
  • Work with our AWS team to understand service roadmap, future releases, and provide feedback

Cloud Business Office Responsibilities:

  • Identify and prioritize repeating technical building blocks that cut across multiple teams
  • Establish acceptable architecture patterns based on application use cases
  • Manage cloud programs to ensure CEE deliverables and business expectations align
  • Identify skilling needs, budget, and track spend
  • Contribute to the cloud platform backlog
  • Work with AWS team to understand service roadmap, future releases, and provide feedback

These teams were set up to scale AWS adoption, put building blocks into the hands of the application teams, and ultimately deliver differentiated capabilities to Broadridge’s business teams and end customers. The following diagram illustrates the relationship and modus operandi among the teams:

CEE and CBO working model

Upon establishing the conceptual working model, the CBO and CEE teams looked at solutions from AWS to enable them to achieve the working model quickly. The starting point was AWS Landing Zone (ALZ). ALZ is an AWS solution based on the AWS multi-account strategy. It is a set of vetted constructs and best practices that we use as mechanisms to accelerate AWS adoption.

AWS multi-account strategy

The multi-account strategy employs best practices around separation of concerns, reduction of blast radius, account setup based on Software Development Life Cycle (SDLC) phases, and base operational roles for auditing, monitoring, security, and compliance, as shown in the above diagram. This strategy defines the need for centralized shared or core accounts, which work as the master accounts for monitoring, governance, security, and auditing. A number of AWS services, like Amazon GuardDuty, AWS Security Hub, and AWS Config, are configured in these centralized accounts. Spoke or child accounts are vended per a team’s requirements and are spun up with these governance, monitoring, and security defaults connected to the centralized account for log capturing, threat detection, configuration management, and security management.

The third milestone

The third milestone is synonymous with the Foundation stage of adoption.

Using the ALZ construct, our CEE team developed a core set of principles to be used by every application team. Based on our core tenets, the CEE team built out an entry point (a web-based UI workflow application). This web UI was the entry point for any application team requesting an environment within AWS for experimentation or to begin the application development life cycle. Simplistically, the web UI sat on top of an automation engine built using APIs from AWS, ALZ components (Account Vending Machine, Shared Services Account, Logging Account, Security Account, default security groups, default IAM roles, and AD groups), and Terraform-based code. The CBO team helped establish the common architecture patterns that were codified into this engine.

Team on-boarding workflow using foundational building blocks on AWS

An Angular-based web UI is the starting point for application teams to request AWS accounts. The web UI entry point asks a number of questions to validate the type of account requested, along with its intended purpose, ingress/egress requirements, high availability and disaster recovery requirements, and business unit for chargeback and ownership purposes. Once all the information is entered, it sends out a notification based on a preset organizational dispatch matrix rule. Upon receiving the request, the approver can either approve it or ask further clarifying questions. Once the questions are satisfactorily answered, the approver approves the account vending request, and Terraform code is kicked off to create the default account.
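
The core of the vending step is a call to the AWS Organizations API. Broadridge’s engine is Terraform-based, but as a rough sketch of the same flow in Python (boto3), with hypothetical account name, email, role name, and OU IDs:

```python
import time

import boto3

orgs = boto3.client("organizations")

# Kick off account creation; AWS Organizations processes this asynchronously.
response = orgs.create_account(
    AccountName="app-team-dev",                      # hypothetical account name
    Email="app-team-dev@example.com",                # hypothetical root email
    RoleName="OrganizationAccountAccessRole",        # cross-account admin role created in the new account
)
request_id = response["CreateAccountStatus"]["Id"]

# Poll until account creation completes.
while True:
    status = orgs.describe_create_account_status(
        CreateAccountRequestId=request_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(5)

# Place the new account under the OU chosen in the web UI request.
if status["State"] == "SUCCEEDED":
    orgs.move_account(
        AccountId=status["AccountId"],
        SourceParentId="r-exmp",                     # hypothetical organization root ID
        DestinationParentId="ou-exmp-12345678",      # hypothetical target OU ID
    )
```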

When an account is created through this process, the following defaults are set up to provide a secure environment for development, testing, and staging; similar guardrails are deployed in the production accounts as well. A sketch of a few of these bootstrap steps follows the list.

  • Creates a new account under an existing AWS Organizational Unit (OU) based on the input parameters. Tags the account with chargeback codes and custom tags, and integrates the resources with the existing CMDB
  • Connects the new account to the master shared services and logging account as per the AWS Landing Zone constructs
  • Integrates with the CloudWatch event bus as a sender account
  • Runs sts:AssumeRole calls into the new account to create infosec cross-account roles
  • Defines actions, conditions, role limits, and account policies
  • Creates environment variables related to the account in the parameter store within AWS Systems Manager
  • Connects the new account to TrendMicro for AV purposes
  • Attaches the default VPC of the new account to an existing AWS Transit Gateway
  • Generates a Splunk key for the account to store in the Splunk KV store
  • Uses AWS APIs to attach Enterprise support to the new account
  • Creates or amends an AD group based on the IAM role
  • Integrates as an Amazon Macie member account
  • Enables AWS Security Hub for the account by running an enable-security-hub call
  • Sets up Chef runner for the new account
  • Runs account-setting lock procedures to configure the Amazon S3 public access settings and the EBS default encryption setting
  • Enables firewall protection by setting AWS WAF rules for the account
  • Integrates the newly created account with CloudHealth and Dome9
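
To give a flavor of how a few of these bootstrap steps can be automated, the following is a hedged Python (boto3) sketch that assumes the cross-account role created during vending, enables Security Hub, locks the account-wide S3 public access and EBS default encryption settings, and writes account metadata to the Parameter Store; the account ID, role name, and parameter names are hypothetical:

```python
import boto3

ACCOUNT_ID = "111122223333"  # hypothetical new account

# Assume the cross-account role that account vending created in the new account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/OrganizationAccountAccessRole",
    RoleSessionName="account-bootstrap",
)["Credentials"]

session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Enable Security Hub (the "enable-security-hub call" from the list above).
session.client("securityhub", region_name="us-east-1").enable_security_hub()

# Lock down account-wide S3 public access and default EBS encryption.
session.client("s3control").put_public_access_block(
    AccountId=ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
session.client("ec2", region_name="us-east-1").enable_ebs_encryption_by_default()

# Store account metadata in the Parameter Store (hypothetical parameter name).
session.client("ssm", region_name="us-east-1").put_parameter(
    Name="/platform/account/chargeback-code",
    Value="BU-1234",
    Type="String",
    Overwrite=True,
)
```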

Deploying all these guardrails in new accounts removes the need for manual setup and intervention. This gives application developers the freedom to stop worrying about infrastructure and access provisioning, and a faster path to value.

Using these technical and procedural cloud adoption constructs, we have been able to reduce application onboarding time. This has led to quicker delivery of business capability, with the application teams focusing only on what differentiates their business rather than repeatedly building undifferentiated work products. It has also led to the creation of mature building blocks over time for use by the application teams. Using these building blocks, the teams are also modernizing applications by iteratively replacing old application blocks.

Conclusion

In summary, we are able to deliver better business outcomes and differentiated customer experience by:

  • Building common asks as reusable and automated enterprise assets and improving the overall enterprise-wide maturity by indexing on and growing these assets.
  • Depending on an experienced team to deliver baseline operational controls and guardrails.
  • Improving our security posture with higher-level and managed AWS security services instead of rebuilding everything from the ground up.
  • Using the Cloud Business Office to improve funneling of common asks. This helps the next team on AWS to benefit from a readily available set of approved services and application blueprints.

We will continue to build on and mature these reusable building blocks by using AWS services and new feature releases.

 

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

Deploying applications at FINRA using AWS CodeDeploy

Post Syndicated from Nikunj Vaidya original https://aws.amazon.com/blogs/devops/deploying-applications-at-finra-using-aws-codedeploy/

by Geethalaksmi Ramachandran (FINRA – Director, Application Engineering), Avinash Chukka (FINRA – Senior Application Engineer)

At FINRA, a financial regulatory organization that oversees the broker-dealer industry with market intelligence, we have been using AWS CodeDeploy to deploy applications in the cloud as well as on on-premises production servers. This blog post provides insight into our operations and experience with CodeDeploy.

Migration overview

Since 2014, we have gone through a systematic effort to migrate over 100 applications from on-premises resources to the AWS Cloud, covering everything from case management to data ingestion and processing. The applications comprise either multiple components or microservices deployed as individually deployable units, or a single monolithic application with multiple shared components.

Most of the application components, running on Linux and Windows, were entirely redesigned, containerized, and gradually deployed from on-premises to cloud-based Amazon ECS clusters. Fifteen applications were deferred from migration or slated for retirement due to other dependencies. Those deferred applications currently run on on-premises bare-metal servers hosted across 38 Windows servers and 15 Linux servers per environment, for a total of 150 Windows servers and 60 Linux servers across all environments.

The applications were deployed to the on-premises servers earlier using the XL Deploy tool from XebiaLabs. The tool has now been decommissioned and replaced by CodeDeploy to attain more reliability and consistency in deployments across various applications.

Infrastructure and Workflow overview

FINRA’s AWS Cloud infrastructure consists of Amazon EC2 instances, ECS, Amazon EMR clusters, and many resources from other AWS services. We host web applications in ECS clusters, running approximately 200 clusters in each environment, as well as on EC2 instances. The infrastructure uses AWS CloudFormation and the AWS Java SDK as Infrastructure-as-Code (IaC).

The CI/CD pipeline consists of:

  • A source stage (per branch) stored in a BitBucket repository
  • A build stage executed on Jenkins build slaves
  • A deploy stage involving deployment from:
    • AWS CloudFormation and SDK to ECS clusters
    • CodeDeploy to EC2 instances
    • CodeDeploy Service to on-premises servers across various development, quality assurance (QA), staging, and production environments.

Jenkins masters running on EC2 instances within an Auto Scaling group orchestrate the CI/CD pipeline. The master spawns the build slaves as ECS tasks to execute a build job. Once the image is built and containerized in the build stage, the build artifact is stored in Artifactory repos for shared common libraries or staged in S3 to be used in deployments. The Jenkins slave invokes the appropriate deployment service – AWS CloudFormation and SDK or CodeDeploy depending on the target server environment, as detailed in the preceding paragraphs. On completion of the deployment, the automated smoke tests are launched.
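
The handoff from the pipeline to CodeDeploy comes down to a single API call. As an illustrative sketch (the application, deployment group, bucket, and key names are hypothetical), a Python (boto3) pipeline step might trigger a deployment from the staged S3 artifact and wait for it to finish before launching the smoke tests:

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Trigger a CodeDeploy deployment from the artifact staged in S3 by the build stage.
response = codedeploy.create_deployment(
    applicationName="trade-surveillance-app",        # hypothetical application
    deploymentGroupName="trade-surveillance-qa",     # hypothetical deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "build-artifacts-bucket",      # hypothetical artifact bucket
            "key": "trade-surveillance/build-42.zip",
            "bundleType": "zip",
        },
    },
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    description="Deploy build 42 to QA",
)

# Block until the deployment succeeds, then the smoke tests can start.
waiter = codedeploy.get_waiter("deployment_successful")
waiter.wait(deploymentId=response["deploymentId"])
```

The same call works for on-premises deployment groups; only the deployment group name and its registered targets differ.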

The following diagram depicts the CD workflow for the on-premises instances:

The Delivery pipeline deploys across various environments such as development, QA, user acceptance testing (UAT), and production. Approval gates control deployments to UAT and production.

CodeDeploy Operations

Our experience of utilizing CodeDeploy services has been “very smooth” since we moved from the XebiaLabs XL Deploy tool three months ago. The main factors that led FINRA to select CodeDeploy for our organization were:

  • Being able to use the same set of deployment tools between on-premises and cloud-based instances
  • Easy portability and reuse of scripts
  • Shared community knowledge rather than isolated expertise

The default deployment parameters were well-suited for our environments and didn’t require altering values or customization. Depending on the application being deployed, deployments can be carried out on cloud-based instances or on-premises instances. Cloud-based instances use AWS CloudFormation templates to trigger CodeDeploy; on-premises instances use AWS CLI-based scripts to trigger CodeDeploy.

The cloud-based deployments in production follow a “blue-green” strategy for some of the applications, in which rollback is a critical requirement for minimal disruption. Other applications in the cloud follow the “rolling updates” method, whereas the on-premises production servers are upgraded using the “in-place” deployment method. The CodeDeploy agents running on on-premises servers are configured with roles to query specific S3 buckets for the required artifacts when deploying the package.

The applications’ deployment mappings to instances are configured based on EC2 Auto Scaling groups in the cloud and based on tags for on-premises resources. Each component is logically mapped to a CodeDeploy deployment group. However, at one point the CodeDeploy on-premises instances were limited to a maximum of 10 tags, while an instance needed 13 tags corresponding to 13 deployment groups.

We overcame this limitation by adding a common tenth tag to the on-premises instance and to the remaining deployment groups (10–13), and by storing the mapping of instances to deployment groups externally. The deployment script first looks up the mapping, validates that the entry matches the target server name, then runs deployments only on the matching target servers and skips deployments on the unmatched servers, as shown in the following diagram.
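
In code, the lookup-and-skip logic might look like the following sketch; the S3 location, mapping format, and deployment group name are assumptions for illustration:

```python
import json
import socket

import boto3

# The instance-to-deployment-group mapping is stored externally; reading it
# from an S3 object is an assumption for this sketch, as are the names below.
s3 = boto3.client("s3")
mapping = json.loads(
    s3.get_object(
        Bucket="deploy-config-bucket", Key="onprem-deployment-map.json"
    )["Body"].read()
)

def should_deploy(deployment_group: str) -> bool:
    """Deploy only if this server is mapped to the given deployment group."""
    return socket.gethostname() in mapping.get(deployment_group, [])

if should_deploy("component-11-deployment-group"):
    print("Target server matched; proceeding with deployment steps")
else:
    print("Server not mapped to this deployment group; skipping deployment")
```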

CodeDeploy offers the following benefits to FINRA:

  • Deployment configurations written as code: CodeDeploy configuration uses CloudFormation templates as Infrastructure-as-Code, which makes it easier to create and maintain.
  • Version controlled deployment code: AWS CloudFormation templates, deployment configuration, and deployment scripts are maintained in the source code repository and version-controlled.
  • Reusability: Most CodeDeploy resource provisioning code is reusable across all the on-premises instances and on different platforms, such as Linux (RHEL) and Windows.
  • Zero maintenance of deployment tool: As a managed service, CodeDeploy does not require maintenance and upgrade.
  • Secrets Management: CodeDeploy integrates with central secrets management systems and externalizes environment configurations (see the sketch after this list).
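
As one hedged illustration of that externalization, a CodeDeploy lifecycle hook script (for example, one run during the AfterInstall event) can pull environment configuration from the AWS Systems Manager Parameter Store at deploy time; the parameter names and file path below are hypothetical:

```python
#!/usr/bin/env python3
"""Hypothetical CodeDeploy AfterInstall hook that externalizes configuration."""
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Fetch environment-specific settings at deploy time rather than baking them
# into the build artifact; parameter names and the file path are hypothetical.
db_endpoint = ssm.get_parameter(Name="/myapp/qa/db-endpoint")["Parameter"]["Value"]
db_password = ssm.get_parameter(
    Name="/myapp/qa/db-password", WithDecryption=True  # SecureString parameter
)["Parameter"]["Value"]

with open("/opt/myapp/conf/app.properties", "w") as config_file:
    config_file.write(f"db.endpoint={db_endpoint}\n")
    config_file.write(f"db.password={db_password}\n")
```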

Monitoring

FINRA uses the in-house developed DevOps Dashboard to monitor the build and deploy stages, based upon a Grafana UI extracting CI and CD data from Jenkins.

The cloud instances and on-premises servers are configured with agents that stream real-time logs to a central Splunk server, where the logs are analyzed and health-monitored. Optionally, the deployment logs are forwarded to the functional owners via email attachments. These logs become critical for troubleshooting during post-mortems of past events. Because access to the base instances is restricted across the various functional teams, this mechanism enables us to gain visibility into the health of the CI/CD infrastructure.

Looking Forward

We plan to migrate the remaining on-premises applications to the cloud after necessary refactoring and retiring of the application dependencies over the next few years.

Our eventual goal is to move towards serverless technologies to eliminate server infrastructure management.

Conclusion

This post reviewed FINRA’s infrastructure on both the AWS Cloud and on-premises, the CI/CD pipeline, and the CodeDeploy workflow integration, and shared insights from our use of CodeDeploy.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

 

Architecture Monthly Magazine: Architecting for Financial Services

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-architecting-for-financial-services/

This month’s Architecture Monthly magazine delves into the high-stakes world of banking, insurance, and securities. From capital markets and insurance, to global investment banks, payments, and emerging fintech startups, AWS helps customers innovate, modernize, and transform.

We’re featuring two field experts in October’s issue. First, we interviewed Ed Pozarycki, a Solutions Architect manager in the AWS Financial Services vertical, who spoke to us about patterns, trends, and the special challenges architects face when building systems for financial organizations. And this month we’re rolling out a new feature: Ask an Expert, where we’ll ask AWS professionals three questions about the current magazine’s theme. In this issue, Lana Kalashnyk, Principal Blockchain Architect, told us three things to know about blockchain and cryptocurrencies.

In October’s Issue

For October’s magazine, we’ve assembled architectural best practices about financial services from all over AWS, and we’ve made sure that a broad audience can appreciate it.

  • Interview: Ed Pozarycki, Solutions Architecture Manager, Financial Services
  • Blog post: Tips For Building a Cloud Security Operating Model in the Financial Services Industry
  • Case study: Aon Securities, Inc.
  • Ask an Expert: 3 Things to Know About Blockchain & Cryptocurrencies
  • On-demand webinar: The New Age of Banking & Transforming Customer Experiences
  • Whitepaper: Financial Services Grid Computing on AWS

How to Access the Magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle page or contact us anytime at [email protected].

Financial Services at re:Invent

We have a full re:Invent program planned for the Financial Services industry in December, including leadership, breakout, and builder sessions, plus chalk talks and workshops. Register today.

Cloud-Powered, Next-Generation Banking

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloud-powered-next-generation-banking/

Traditional banks make extensive use of labor-intensive, human-centric control structures such as Production Support groups, Security Response teams, and Contingency Planning organizations. These control structures were deemed necessary in order to segment responsibilities and to maintain a security posture that is risk averse. Unfortunately, this traditional model tends to keep the subject matter experts in these organizations at a distance from the development teams, reducing efficiency and getting in the way of innovation.

Banks and other financial technology (fintech) companies have realized that they need to move faster in order to meet the needs of the newest generation of customers. These customers, some in markets that have not been well-served by the traditional banks, expect a rich, mobile-first experience, top-notch customer service, and access to a broad array of services and products. They prefer devices to retail outlets, and want to patronize a bank that is responsive to their needs.

AWS-Powered Banking
Today I would like to tell you about a couple of AWS-powered banks that are addressing these needs. Both of these banks are born-in-the-cloud endeavors, and take advantage of the scale, power, and flexibility of AWS in new and interesting ways. For example, they make extensive use of microservices, deploy fresh code dozens or hundreds of times per day, and use analytics & big data to better understand their customers. They also apply automation to their compliance and control tasks, scanning code for vulnerabilities as it is committed, and also creating systems that systematically grant and enforce use of least-privilege IAM roles.

NuBank – Headquartered in Brazil and serving over 10 million customers, NuBank has been recognized by Fast Company as one of the most innovative companies in the world. Founded in 2013, they reached unicorn status (a valuation of one billion dollars) just four years later. After their most recent round of funding, their valuation has jumped to ten billion dollars. Here are some resources to help you learn more about how they use AWS:

Starling – Headquartered in London and founded in 2014, Starling is backed by over $300M in funding. Their mobile apps provide instant notification of transactions, support freezing and unfreezing of cards, and provide in-app chat with customer service representatives. Here are some resources to help you learn more about how they use AWS:

Both banks are strong supporters of open banking, with support for APIs that allow third-party developers to build applications and services (read more about the NuBank API and the Starling API).

I found two of the videos (How the Cloud… and Automated Privilege Management…) particularly interesting. The two videos detail how NuBank and Starling have implemented Compliance as Code, with an eye toward simplifying permissions management and increasing the overall security profile of their respective banks.

I hope that you have enjoyed this quick look at how two next-generation banks are making use of AWS. The videos that I linked above contain tons of great technical information that you should also find of interest!

Jeff;


Tips for building a cloud security operating model in the financial services industry

Post Syndicated from Stephen Quigg original https://aws.amazon.com/blogs/security/tips-for-building-a-cloud-security-operating-model-in-the-financial-services-industry/

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their Amazon Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that’s good; it’s traditional network security that remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include the following (a short example of putting them to work appears after the list):

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which is the log of AWS API calls.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.
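
For example, here is a minimal Python (boto3) sketch that pulls high-severity GuardDuty findings for triage; the Region and severity threshold are illustrative assumptions:

```python
import boto3

guardduty = boto3.client("guardduty", region_name="eu-west-1")

# Assumes GuardDuty is already enabled; a detector ID identifies it per Region.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Pull only high-severity findings (severity >= 7 on GuardDuty's 0-10 scale).
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},
)["FindingIds"]

if finding_ids:
    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        print(finding["Severity"], finding["Type"], finding["Title"])
```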

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, there is a real advantage if you have coding skills. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.
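
For example, a few lines of Python (boto3) are enough to flag a common misconfiguration; this is a minimal sketch, not a prescribed pattern:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Flag security groups with inbound rules open to the whole internet.
for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
            print(f"{sg['GroupId']} ({sg['GroupName']}) allows inbound traffic from 0.0.0.0/0")
            break
```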

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions and find a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence of AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

AWS and the European Banking Authority Guidelines on Outsourcing

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-european-banking-authority-guidelines-on-outsourcing/

Financial institutions across the globe use AWS to transform the way they do business. It’s exciting to watch our customers in the financial services industry innovate on AWS in unique ways, across all geos and use cases. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it easier than ever before for customers to comply with different regulations and frameworks around the world.

The European Banking Authority (EBA), an EU financial supervisory authority, recently provided EU financial institutions (which includes credit institutions, certain investment firms, and payment institutions) with new outsourcing guidelines (PDF), which also apply to the use of cloud services. We’re ready and able to support our customers’ compliance with their obligations under the EBA Guidelines and to help meet and exceed their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with the new guidelines, which take effect on September 30, 2019.

What do the EBA Guidelines mean for AWS customers?

The EBA Guidelines establish technology-neutral outsourcing requirements for EU financial institutions, and there is a particular focus on the outsourcing of “critical or important functions.” For AWS and our customers, the key takeaway is that the EBA Guidelines allow for EU financial institutions to use cloud services for material, regulated workloads. When considering or using third-party services, many EU financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to those processes laid out in the EBA Guidelines. To meet and exceed the EBA Guidelines’ requirements on security, resiliency, and assurance, EU financial institutions can use a variety of AWS security and compliance services.

Risk-based approach

The EBA Guidelines incorporate a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with any outsourcing arrangement. The risk-based approach outlined in the EBA Guidelines is consistent with the long-standing AWS shared responsibility model. This approach applies throughout the EBA Guidelines, including the areas of risk assessment, contractual and audit requirements, data location and transfer, and security implementation.

  • Risk assessment: The EBA Guidelines emphasize the need for EU financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach because it illustrates how their security and management responsibilities change depending on the AWS services they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS services help customers assess and improve their risk profile relative to traditional, on-premises environments.
  • Contractual and audit requirements: The EBA Guidelines lay out requirements for the written agreement between an EU financial institution and its service provider, including access and audit rights. For EU financial institutions running regulated workloads on AWS services, we offer the EBA Financial Services Addendum to address the EBA Guidelines’ contractual requirements. We also provide these institutions the ability to comply with the audit requirements in the EBA Guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To align with regulatory requirements and expectations, our EBA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. EU financial services customers interested in learning more about the addendum or about the audit engagements offered by AWS can reach out to their AWS account teams.
  • Data location and transfer: The EBA Guidelines do not put restrictions on where an EU financial institution can store and process its data, but rather state that EU financial institutions should “adopt a risk-based approach to data storage and data processing location(s) (i.e. country or region) and information security considerations.” Our customers can choose which AWS Regions they store their content in, and we will not move or replicate customer content outside of the chosen Regions unless instructed to do so. Customers can replicate and back up their content in more than one AWS Region to meet a variety of objectives, such as availability goals and geographic requirements (see the sketch after this list).
  • Security implementation: The EBA Guidelines require EU financial institutions to consider, implement, and monitor various security measures. Using AWS services, customers can meet this requirement in a scalable and cost-effective way while improving their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting. As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity. Customers can also enhance their security by using AWS Key Management Service (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (filtering of malicious web traffic). These are just a few of the 500+ services and features we offer that enable strong availability, security, and compliance for our customers.
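
As a hedged illustration of the data location point above, pinning content to a chosen Region and encrypting it by default takes only a couple of API calls in Python (boto3); the bucket name below is a hypothetical placeholder:

```python
import boto3

# Bucket name and Region are hypothetical; eu-west-1 pins content to Ireland.
s3 = boto3.client("s3", region_name="eu-west-1")

s3.create_bucket(
    Bucket="example-regulated-content-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Encrypt all new objects by default with a KMS-managed key.
s3.put_bucket_encryption(
    Bucket="example-regulated-content-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```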

As reflected in the EBA Guidelines, it’s important to take a balanced approach when evaluating responsibilities in a cloud implementation. We are responsible for the security of the AWS Global Infrastructure. In the EU, we currently operate AWS Regions in Ireland, Frankfurt, London, Paris, and Stockholm, with our new Milan Region opening soon. For all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year.

Conclusion

We encourage customers to learn about how the EBA Guidelines apply to their organization. Our teams of security, compliance, and legal experts continue to work with our EU financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how regulatory authorities apply the EBA Guidelines locally and will provide further updates as needed. If you have any questions about compliance with the EBA Guidelines and their application to your use of AWS, or if you require the EBA Financial Services Addendum, please reach out to your account representative or request to be contacted.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Master’s in Information Systems Management and a Bachelor’s in Accounting from Brigham Young University, Utah. Follow Chad on Twitter.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.