Tag Archives: Financial Services

Transforming DevOps at Broadridge on AWS

Post Syndicated from Som Chatterjee original https://aws.amazon.com/blogs/devops/transforming-devops-for-a-fintech-on-aws/

by Tom Koukourdelis (Broadridge – Vice President, Head of Global Cloud Platform Development and Engineering), Sreedhar Reddy (Broadridge – Vice President, Enterprise Cloud Architecture)

We have seen large enterprises in all industry segments meaningfully utilizing AWS to build new capabilities and deliver business value. While doing so, enterprises have to balance existing systems, processes, tools, and culture while innovating at pace with industry disruptors. Broadridge Financial Solutions, Inc. (NYSE: BR) is no exception. Broadridge is a $4 billion global FinTech leader and a leading provider of investor communications and technology-driven solutions to banks, broker-dealers, asset and wealth managers, and corporate issuers.

This blog post explores how we adopted AWS at scale while remaining secure and compliant, and while delivering a high degree of productivity for our builders on AWS. It also describes the steps we took to create technical (a cloud solution as a foundation based on AWS) and procedural (organizational) capabilities by leveraging AWS cloud adoption constructs. The improvement in our builder productivity and agility directly contributes to rolling out differentiated business capabilities that address our customer needs in a timely manner. In this post, we share real-life learnings and takeaways to adopt AWS at scale, transform business and application team experiences, and deliver customer delight.

Background

At Broadridge, we have a number of distributed and mainframe systems supporting multiple financial services domains and sub-domains such as post trade, proxy communications, financial and regulatory reporting, portfolio management, and financial operations. The majority of these systems were built and deployed years ago at on-premises data centers all over the US and abroad.

Builder personas at Broadridge are diverse in terms of location, culture, and the technology stack they use to build and support applications (we use a number of front-end JS frameworks, .NET, Java, and ColdFusion for web development; ORMs for data entity relational mapping; IBM MQ and Apache Camel for messaging; databases such as SQL, Oracle, Sybase, and other open-source stacks for transaction management; and batch processing on virtualized and bare-metal instances). With more than 200 on-premises distributed applications and mainframe systems across front-, mid-, and back-office ecosystems, we wanted to leverage AWS to improve efficiency, build agility, and reduce costs. The ability to reach customers in new geographies, reduced time to market, and opportunities to build new business competencies were key parameters as well.

Broadridge’s core tenets for cloud adoption

When AWS adoption within Broadridge attained a critical mass (known as the Foundation stage of adoption), the business and technology leadership teams defined our cloud adoption posture and shared it with teams across the organization through the following tenets. Enterprises looking to adopt AWS at scale should define similar tenets, fit for their organizations, in plain language understandable by everyone across the board.

  • Iterate: Understanding that we cannot disrupt ongoing initiatives, we adopted a small, iterative approach of moving workloads to the cloud in waves (rinse and repeat), and avoided long-drawn, capital-intensive big bangs.
  • Fully automate: Starting from infrastructure deployment to application build, test, and release, we decided early on that automation and no-touch deployment are the right approach both to leverage cloud capabilities and to fuel a shift toward a matured DevOps culture.
  • Trust but verify only exceptions: Security and regulatory compliance are paramount for an organization like Broadridge. Guardrails (such as service control policies, managed AWS Config rules, multi-account strategy) and controls (such as PCI, NIST control frameworks) are iteratively developed to baseline every AWS account and AWS resource deployed. Manual security verification of workloads isn’t needed unless an exception is raised. Defense in depth (distancing attack surface from sensitive data and resources using multi-layered security) strategies were to be applied.
  • Go fast; re-hosting is acceptable: Not every workload needs to go through years of rewriting and refactoring before it is deemed suitable for the cloud. Minor tweaking (light touch re-platforming) to go fast (such as on-premises Oracle to RDS for Oracle) is acceptable.
  • Timeliness and small wins are key: Organizations spend large sums of capital to completely rewrite applications, and by the time they are done, business goals and customer expectations have changed, which leads to material customer dissatisfaction. We wanted to avoid that by setting small, measurable targets.
  • Cloud fluency: Investment in training and upskilling builders and leaders across the organization (developers, infra-ops, sec-ops, managers, sales force, HR, and executive leadership) was to be made to build fluency on the cloud.

The first milestone

The first milestone in our adoption journey was synonymous with the Project stage of adoption and had the following characteristics.

A controlled sprawl of shadow IT

We first gave small teams with little to no exposure to critical business functions (such as customer data and SLA-oriented workloads) sandboxes to test out proofs of concept (PoCs) on AWS. We created the cloud sandboxes with least privilege, and added additional privileges upon request after verification. During this time, our key AWS usage characteristics were:

  • Manual AWS account setup with least privilege
  • Manual IAM role creation with role boundaries and authentication and authorization from the existing enterprise Active Directory
  • Integration with existing Security Information and Event Management (SIEM) tools to audit role sprawl and config changes
  • Proofs of concept only
  • Account tagging for chargeback and tracking purposes
  • No automated build, test, deploy, or integration with existing delivery pipeline
  • Small and definitive timeframes for PoCs with defined goals

A typical AWS environment at this stage resembled the one shown in the following diagram:

Representative AWS usage during first milestone

As shown above, at this time the corporate assets were connected to a highly restrictive AWS environment through VPN. Access to the AWS environment was set up based on AWS identity primitives, or IAM roles mapped to and federated with the on-premises Active Directory. There was a single VPC set up for a sandbox account with no egress to the internet. No customer data was hosted in this AWS environment, and the AWS environment was connected to our SIEM of choice.

Early adopters became first educators and mentors

Members of the first teams to carry out proofs of concept on AWS shared learnings with each other and with the leadership team within Broadridge. This helped build communities of practice (CoPs) over time. The initial CoPs established were for networking and security, and these were later extended to practices like Terraform, Chef, and Jenkins.

Tech PMO team within Broadridge as the quasi-central cloud team

Ownership is vital, no matter how small the effort or how insignificant the impact of risky experimentation may seem. The need to own account setup, role creation, integration with on-premises AD and SIEM, and oversight to ensure that the experimentation did not pose any risk to the brand led us to build a central cloud team with experienced AWS and infrastructure practitioners. This team created a process for cloud migration with initial manual guardrails of allowed and disallowed actions, manual interventions, and checkpoints built into every step.

At this stage, a representative pattern of work products across teams resembles what is shown below.

Work products across teams during initial stages of AWS usage

As the diagram suggests, individual application teams built overlapping—and, in many cases, identical—technical building blocks across the teams. This was acceptable as the teams were experimenting and running PoCs on AWS. In an actual production application delivery, the blocks marked with a * would be considered technical and functional waste—that is, undifferentiated lift which increases the cost of doing business.

The second milestone

In hindsight, this is perhaps the most important milestone in our cloud adoption journey. This step was marked by the following key characteristics:

  • Every new team doing PoCs was rebuilding the same building blocks: This includes networking (VPCs and security groups), identity primitives (accounts, roles, and policies), monitoring (Amazon CloudWatch setup and custom metrics), and compute (images with org-mandated security patches).
  • Teams usually asked the same fundamental questions first: These include questions such as: What is an ideal CIDR block range? How do we integrate with SIEM? How do we spin up web servers on Amazon EC2? How do we secure access to data? How do we set up workload monitoring?
  • Security reviews rarely found new security gaps but added time to the process: A central security group within the central cloud team reviewed every new account request and every new service usage request without finding new security gaps when the application teams used the baseline guardrails.
  • Manual effort was spent on tagging, chargeback, and other approvals: A portion of the application PoC/minimum viable product (MVP) lifecycle was spent on housekeeping. While housekeeping was necessary, the effort spent was undifferentiated.

The following diagram represents the efforts for every team during the first phase.

Team wise efforts showing duplicative work

As shown above, every application team spent effort building nearly the same capabilities before they could begin developing their team-specific application functionalities and assets. These common blocks of work are undifferentiated, and the effort spent on them varies with the efficiency of each team.

During this step, learnings from the PoCs led us to establish the tenets shared earlier in this post. To address the learnings, Broadridge established a cloud platform team. The cloud platform team, also referred to as the cloud enablement engine (CEE), is a team of builders who create the foundational building blocks on AWS that address common infrastructure, security, monitoring, auditing, and break-glass controls. At the same time, we established a cloud business office (CBO) as a liaison between the application and business teams and the CEE. CBO exists to manage and prioritize foundational requirements from multiple application teams as they go online on AWS and helps create the product backlog for CEE.

Cloud Enablement Engine Responsibilities:

  • Build out foundational building blocks utilizing AWS multi-account strategy
  • Build security guardrails, compliance controls, infrastructure as code automation, auditing and monitoring controls
  • Implement the cloud platform backlog that funnels from the CBO as common asks from application teams
  • Work with our AWS team to understand service roadmap, future releases, and provide feedback

Cloud Business Office Responsibilities:

  • Identify and prioritize repeating technical building blocks that cut across multiple teams
  • Establish acceptable architecture patterns based on application use cases
  • Manage cloud programs to ensure CEE deliverables and business expectations align
  • Identify skilling needs, budget, and track spend
  • Contribute to the cloud platform backlog
  • Work with AWS team to understand service roadmap, future releases, and provide feedback

These teams were set up to scale AWS adoption, put building blocks into the hands of the applications teams, and ultimately deliver differentiated capabilities to Broadridge’s business teams and end customers. The following diagram translates the relationship and modus operandi among the teams:

CEE and CBO working model

Upon establishing the conceptual working model, the CBO and CEE teams looked at solutions from AWS to enable them to achieve the working model quickly. The starting point was AWS Landing Zone (ALZ). ALZ is an AWS solution based on the AWS multi-account strategy. It is a set of vetted constructs and best practices that we use as mechanisms to accelerate AWS adoption.

AWS multi-account strategy

The multi-account strategy employs best practices around separation of concerns, reduction of blast radius, account setup based on Software Development Life Cycle (SDLC) phases, and base operational roles for auditing, monitoring, security, and compliance, as shown in the above diagram. This strategy defines the need for centralized shared or core accounts, which work as master accounts for monitoring, governance, security, and auditing. A number of AWS services, such as Amazon GuardDuty, AWS Security Hub, and AWS Config, are configured in these centralized accounts. Spoke or child accounts are vended per a team's requirements and are spun up with these governance, monitoring, and security defaults connected to the centralized accounts for log capturing, threat detection, configuration management, and security management.
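
To make the hub-and-spoke arrangement concrete, the following minimal sketch uses the AWS SDK for Python (boto3) to enroll a newly vended child account as an Amazon GuardDuty member from the centralized security account so that its findings aggregate in the hub. The account ID and email address are placeholders, and the actual Landing Zone automation may differ.

```python
import boto3

# Run with credentials for the centralized security (hub) account.
guardduty = boto3.client("guardduty", region_name="us-east-1")

# Placeholder values for the freshly vended child account.
CHILD_ACCOUNT_ID = "111122223333"
CHILD_ACCOUNT_EMAIL = "app-team@example.com"  # hypothetical

# Reuse the hub account's existing GuardDuty detector (one per Region).
detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Register the child (spoke) account as a GuardDuty member...
guardduty.create_members(
    DetectorId=detector_id,
    AccountDetails=[{"AccountId": CHILD_ACCOUNT_ID, "Email": CHILD_ACCOUNT_EMAIL}],
)

# ...and invite it so that its findings are aggregated in the hub account.
guardduty.invite_members(
    DetectorId=detector_id,
    AccountIds=[CHILD_ACCOUNT_ID],
    Message="Joining the centralized GuardDuty hub account",
)
```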

The third milestone

The third milestone is synonymous with the Foundation stage of adoption.

Using the ALZ construct, our CEE team developed a core set of principles to be used by every application team. Based on our core tenets, the CEE team built out an entry point (a web-based UI workflow application). This web UI was the entry point for any application team requesting an environment within AWS for experimentation or to begin the application development life cycle. Simplistically, the web UI sat on top of an automation engine built using APIs from AWS, ALZ components (Account Vending Machine, Shared Services Account, Logging Account, Security Account, default security groups, default IAM roles, and AD groups), and Terraform-based code. The CBO team helped establish the common architecture patterns that were codified into this engine.

Team on-boarding workflow using foundational building blocks on AWS

An Angular-based web UI is the starting point for application teams to request AWS accounts. The web UI entry point asks a number of questions validating the type of account requested, along with its intended purpose, ingress/egress requirements, high availability and disaster recovery requirements, and the business unit for chargeback and ownership purposes. Once all information is entered, it sends out a notification based on a preset organization dispatch matrix rule. Upon receiving the request, the approver has the option to approve it or ask further clarifying questions. Once the questions are answered satisfactorily, the approver approves the account vending request, and Terraform code kicks in to create the default account.
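
As described above, Terraform performs the actual account vending in this workflow. Purely to illustrate the underlying account-creation step, the following minimal sketch uses boto3 and AWS Organizations to create a member account, wait for the asynchronous request to finish, move the account into an OU, and tag it for chargeback. The account name, email, OU ID, and tag values are hypothetical placeholders.

```python
import time
import boto3

org = boto3.client("organizations")

# Hypothetical request values captured by the web UI.
ACCOUNT_NAME = "app-team-sandbox"
ACCOUNT_EMAIL = "app-team-sandbox@example.com"
TARGET_OU_ID = "ou-exam-ple12345"  # destination Organizational Unit
ROOT_ID = org.list_roots()["Roots"][0]["Id"]

# 1. Create the member account under the Organization (asynchronous).
status_id = org.create_account(
    Email=ACCOUNT_EMAIL,
    AccountName=ACCOUNT_NAME,
    RoleName="OrganizationAccountAccessRole",  # cross-account admin role
    IamUserAccessToBilling="DENY",
)["CreateAccountStatus"]["Id"]

# 2. Poll until the creation request reaches a terminal state.
while True:
    status = org.describe_create_account_status(
        CreateAccountRequestId=status_id
    )["CreateAccountStatus"]
    if status["State"] != "IN_PROGRESS":
        break
    time.sleep(15)

if status["State"] != "SUCCEEDED":
    raise RuntimeError(f"Account vending failed: {status.get('FailureReason')}")

account_id = status["AccountId"]

# 3. Move the account into the requested OU and tag it for chargeback.
org.move_account(
    AccountId=account_id, SourceParentId=ROOT_ID, DestinationParentId=TARGET_OU_ID
)
org.tag_resource(
    ResourceId=account_id,
    Tags=[
        {"Key": "chargeback-code", "Value": "BU-1234"},
        {"Key": "environment", "Value": "sandbox"},
    ],
)
```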

When an account is created through this process, the following defaults are set up for a secure environment for development, testing, and staging. Similar guardrails are deployed in the production accounts as well.

  • Creates a new account under an existing AWS Organizational Unit (OU) based on the input parameters. Tags the account with chargeback codes and custom tags, and integrates the resources with the existing CMDB
  • Connects the new account to the master shared services and logging account as per the AWS Landing Zone constructs
  • Integrates with the CloudWatch event bus as a sender account
  • Runs stsAssumeRole commands on the new account to create infosec cross-account roles
  • Defines actions, conditions, role limits, and account policies
  • Creates environment variables related to the account in the parameter store within AWS Systems Manager
  • Connects the new account to TrendMicro for AV purposes
  • Attaches the default VPC of the new account to an existing AWS Transit Gateway
  • Generates a Splunk key for the account to store in the Splunk KV store
  • Uses AWS APIs to attach Enterprise support to the new account
  • Creates or amends a new AD group based on the IAM role
  • Integrates as an Amazon Macie member account
  • Enables AWS Security Hub for the account by running an enable-security-hub call
  • Sets up Chef runner for the new account
  • Runs account-setting lock procedures to set the Amazon S3 public access settings and the EBS default encryption setting
  • Enables firewall protection by setting AWS WAF rules for the account
  • Integrates the newly created account with CloudHealth and Dome9
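
A few of the defaults listed above map to single API calls in the new account. As a hedged illustration (not the Terraform-based production automation described above), the following boto3 sketch assumes an infosec cross-account role into the new account and applies a small subset of the baseline: an account-level S3 public access block, EBS default encryption, AWS Security Hub enablement, and a Parameter Store environment variable. The role and parameter names are placeholders.

```python
import boto3

NEW_ACCOUNT_ID = "111122223333"  # placeholder for the freshly vended account

# Assume a cross-account role in the new account (created during vending).
creds = boto3.client("sts").assume_role(
    RoleArn=f"arn:aws:iam::{NEW_ACCOUNT_ID}:role/OrganizationAccountAccessRole",
    RoleSessionName="baseline-guardrails",
)["Credentials"]

session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-east-1",
)

# Block all public S3 access at the account level.
session.client("s3control").put_public_access_block(
    AccountId=NEW_ACCOUNT_ID,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt new EBS volumes by default in this Region.
session.client("ec2").enable_ebs_encryption_by_default()

# Enable AWS Security Hub for the account.
session.client("securityhub").enable_security_hub()

# Store an account-level environment variable in Parameter Store.
session.client("ssm").put_parameter(
    Name="/platform/account/environment",  # hypothetical parameter name
    Value="development",
    Type="String",
    Overwrite=True,
)
```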

Deploying all these guardrails in any new accounts removes the need for manual setup and intervention. This gives application developers the needed freedom to stop worrying about infrastructure and access provisioning while giving them a higher speed to value.

Using these technical and procedural cloud adoption constructs, we have been able to reduce application onboarding time. This has led to quicker delivery of business capability, with the application teams focusing only on what differentiates their business rather than repeatedly building undifferentiated work products. It has also led to the creation of mature building blocks over time for use by the application teams. Using these building blocks, the teams are also modernizing applications by iteratively replacing old application blocks.

Conclusion

In summary, we are able to deliver better business outcomes and differentiated customer experience by:

  • Building common asks as reusable and automated enterprise assets and improving the overall enterprise-wide maturity by indexing on and growing these assets.
  • Depending on an experienced team to deliver baseline operational controls and guardrails.
  • Improving our security posture with higher-level and managed AWS security services instead of rebuilding everything from the ground up.
  • Using the Cloud Business Office to improve funneling of common asks. This helps the next team on AWS to benefit from a readily available set of approved services and application blueprints.

We will continue to build on and mature these reusable building blocks by using AWS services and new feature releases.

 

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

Deploying applications at FINRA using AWS CodeDeploy

Post Syndicated from Nikunj Vaidya original https://aws.amazon.com/blogs/devops/deploying-applications-at-finra-using-aws-codedeploy/

by Geethalaksmi Ramachandran (FINRA – Director, Application Engineering), Avinash Chukka (FINRA – Senior Application Engineer)

At FINRA, a financial regulatory organization that oversees the broker-dealer industry with market intelligence, we have been utilizing AWS CodeDeploy to deploy applications on the cloud as well as on on-premises production servers. This blog post provides insight into our operations and experience with CodeDeploy.

Migration overview

Since 2014, we have gone through a systematic effort to migrate over 100 applications from on-premises resources to the AWS Cloud for everything from case management to data ingestion and processing. The applications comprise multiple components or microservices as individually deployable units or a single monolith application with multiple shared components.

Most of the application components, running on Linux and Windows, were entirely redesigned, containerized, and gradually deployed from on-premises to cloud-based Amazon ECS clusters. Fifteen applications were deferred for migration or planned for retirement due to other dependencies. Those deferred applications are currently running on on-premises bare-metal servers hosted across 38 Windows servers and 15 Linux servers per environment, with a total of 150 Windows servers and 60 Linux servers across all the environments.

The applications were deployed to the on-premises servers earlier using the XL Deploy tool from XebiaLabs. The tool has now been decommissioned and replaced by CodeDeploy to attain more reliability and consistency in deployments across various applications.

Infrastructure and Workflow overview

FINRA’s AWS Cloud infrastructure consists of Amazon EC2 instances, ECS, Amazon EMR clusters, and many resources from other AWS services. We host web applications in ECS clusters (running approximately 200 clusters in each environment) and on EC2 instances. The infrastructure uses AWS CloudFormation and the AWS Java SDK as Infrastructure as Code (IaC).

The CI/CD pipeline comprises:

  • A source stage (per branch) stored in a BitBucket repository
  • A build stage executed on Jenkins build slaves
  • A deploy stage involving deployment from:
    • AWS CloudFormation and SDK to ECS clusters
    • CodeDeploy to EC2 instances
    • CodeDeploy Service to on-premises servers across various development, quality assurance (QA), staging, and production environments.

Jenkins masters running on EC2 instances within an Auto Scaling group orchestrate the CI/CD pipeline. The master spawns the build slaves as ECS tasks to execute a build job. Once the image is built and containerized in the build stage, the build artifact is stored in Artifactory repos for shared common libraries or staged in S3 to be used in deployments. The Jenkins slave invokes the appropriate deployment service (AWS CloudFormation and SDK, or CodeDeploy) depending on the target server environment, as detailed in the preceding paragraphs. On completion of the deployment, automated smoke tests are launched.
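
Under the hood, triggering CodeDeploy from the pipeline can be as small as a single API call once the revision bundle is staged in S3. The following sketch is a minimal illustration using the AWS SDK for Python (boto3), not FINRA's actual scripts: it creates a deployment for a deployment group from an S3 revision and polls the deployment status. The application, deployment group, bucket, and key names are hypothetical.

```python
import time
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Hypothetical names -- substitute the real application and bundle details.
response = codedeploy.create_deployment(
    applicationName="case-management-service",
    deploymentGroupName="qa-on-premises",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "build-artifacts-bucket",
            "key": "case-management-service/build-1234.zip",
            "bundleType": "zip",
        },
    },
    deploymentConfigName="CodeDeployDefault.OneAtATime",  # in-place, one server at a time
    description="Triggered by Jenkins build 1234",
)

deployment_id = response["deploymentId"]

# Poll until CodeDeploy reports a terminal state, then hand off to smoke tests.
while True:
    status = codedeploy.get_deployment(deploymentId=deployment_id)[
        "deploymentInfo"
    ]["status"]
    if status in ("Succeeded", "Failed", "Stopped"):
        break
    time.sleep(30)

print(f"Deployment {deployment_id} finished with status {status}")
```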

The following diagram depicts the CD workflow for the on-premises instances:

The Delivery pipeline deploys across various environments such as development, QA, user acceptance testing (UAT), and production. Approval gates control deployments to UAT and production.

CodeDeploy Operations

Our experience of utilizing CodeDeploy services has been “very smooth” since we moved from the XebiaLabs XL Deploy tool three months ago. The main factors that led FINRA to select CodeDeploy for our organization were:

  • Being able to use the same set of deployment tools between on-premises and cloud-based instances
  • Easy portability and reuse of scripts
  • Shared community knowledge rather than isolated expertise

The default deployment parameters were well-suited for our environments and didn’t require altering values or customization. Depending on the application being deployed, deployments can be carried out on cloud-based instances or on-premises instances. Cloud-based instances use AWS CloudFormation templates to trigger CodeDeploy; on-premises instances use AWS CLI-based scripts to trigger CodeDeploy.

The cloud-based deployments in production follow a “blue-green” strategy for some of the applications, in which rollback is a critical requirement for minimal disruption. Other applications in the cloud follow the “rolling updates” method, whereas the on-premises servers in production are upgraded using the “in-place” deployment method. The CodeDeploy agents running on on-premises servers are configured with roles to query for required artifacts stored on specific S3 buckets when deploying the package.

The applications’ deployment mappings to the instances are configured based on EC2 Auto Scaling groups in the cloud and based on tags for on-premises resources. Each component is logically mapped to a CodeDeploy deployment group. However, at one point, the number of tags that could be added to a CodeDeploy on-premises instance was limited to 10, while the instance needed 13 tags corresponding to 13 deployment groups.

We overcame this limitation by adding a common tenth tag to the on-premises instance and to the remaining deployment groups (10-13), and by storing the mapping of instances to deployment groups externally. The deployment script first looks up the mapping, validates whether it matches the target server name, and then runs deployments only on the matching target servers while skipping deployments on the unmatched servers, as shown in the following diagram.
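
One hedged way to implement the lookup-and-skip behavior is inside a CodeDeploy lifecycle hook running on the instance itself. The sketch below (Python, standard library only) reads a hypothetical JSON mapping file, compares the deployment group name that CodeDeploy exposes to hook scripts against the local hostname, and exits cleanly on servers that only share the common tag; FINRA's actual script may be structured differently.

```python
#!/usr/bin/env python3
"""Hypothetical CodeDeploy lifecycle-hook helper: skip work on unmatched servers."""
import json
import os
import socket
import sys

# Externally stored mapping of deployment group -> servers it actually targets
# (a local JSON file here; it could equally live in S3 or a config store).
MAPPING_FILE = "/etc/codedeploy/deployment-group-mapping.json"


def main():
    # CodeDeploy exposes the deployment group name to lifecycle hook scripts.
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
    hostname = socket.gethostname()

    with open(MAPPING_FILE) as f:
        mapping = json.load(f)  # e.g. {"component-11-group": ["onprem-app-03"], ...}

    if hostname not in mapping.get(group, []):
        # This server shares the common tag but is not a real target for this
        # deployment group, so exit successfully without installing anything.
        print(f"Skipping {group} on {hostname}: not in mapping")
        sys.exit(0)

    # ...continue with the real install/validation steps for matching servers...
    print(f"Proceeding with {group} on {hostname}")


if __name__ == "__main__":
    main()
```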

CodeDeploy offers the following benefits to FINRA:

  • Deployment configurations written as code: CodeDeploy configuration uses CloudFormation templates as Infrastructure-as-Code, which makes it easier to create and maintain.
  • Version controlled deployment code: AWS CloudFormation templates, deployment configuration, and deployment scripts are maintained in the source code repository and version-controlled.
  • Reusability: Most CodeDeploy resource provisioning code is reusable across all the on-premises instances and on different platforms, such as Linux (RHEL) and Windows.
  • Zero maintenance of deployment tool: As a managed service, CodeDeploy does not require maintenance and upgrade.
  • Secrets Management: CodeDeploy integrates with central secrets management systems, and externalizes environment configurations.

Monitoring

FINRA uses the in-house developed DevOps Dashboard to monitor the build and deploy stages, based upon a Grafana UI extracting CI and CD data from Jenkins.

The cloud instances and on-premises servers are configured with agents to stream real-time logs to a central Splunk server, where the logs are analyzed and health-monitored. Optionally, the deployment logs are forwarded to the functional owners via email attachments. These logs become critical for troubleshooting activity in post-mortems of past events. Because access to the base instances is restricted across various functional teams, the above mechanism enables us to gain visibility into the health of the CI/CD infrastructure.

Looking Forward

We plan to migrate the remaining on-premises applications to the cloud after necessary refactoring and retiring of the application dependencies over the next few years.

Our eventual goal is to move towards serverless technologies to eliminate server infrastructure management.

Conclusion

This post reviewed the infrastructure at FINRA on both the AWS Cloud and on-premises, the CI/CD pipeline, and the CodeDeploy workflow integration, and shared insights into how we use CodeDeploy.

The content and opinions in this blog are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

 

Architecture Monthly Magazine: Architecting for Financial Services

Post Syndicated from Annik Stahl original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-architecting-for-financial-services/

This month’s Architecture Monthly magazine delves into the high-stakes world of banking, insurance, and securities. From capital markets and insurance, to global investment banks, payments, and emerging fintech startups, AWS helps customers innovate, modernize, and transform.

We’re featuring two field experts in October’s issue. First, we interviewed Ed Pozarycki, a Solutions Architect manager in the AWS Financial Services vertical, who spoke to us about patterns, trends, and the special challenges architects face when building systems for financial organizations. And this month we’re rolling out a new feature: Ask an Expert, where we’ll ask AWS professionals three questions about the current magazine’s theme. In this issue, Lana Kalashnyk, Principal Blockchain Architect, told us three things to know about blockchain and cryptocurrencies.

In October’s Issue

For October’s magazine, we’ve assembled architectural best practices about financial services from all over AWS, and we’ve made sure that a broad audience can appreciate it.

  • Interview: Ed Pozarycki, Solutions Architecture Manager, Financial Services
  • Blog post: Tips For Building a Cloud Security Operating Model in the Financial Services Industry
  • Case study: Aon Securities, Inc.
  • Ask an Expert: 3 Things to Know About Blockchain & Cryptocurrencies
  • On-demand webinar: The New Age of Banking & Transforming Customer Experiences
  • Whitepaper: Financial Services Grid Computing on AWS

How to Access the Magazine

We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle page or contact us anytime at [email protected].

Financial Services at re:Invent

We have a full re:Invent program planned for the Financial Services industry in December, including leadership, breakout, and builder sessions, plus chalk talks and workshops. Register today.

Cloud-Powered, Next-Generation Banking

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/cloud-powered-next-generation-banking/

Traditional banks make extensive use of labor-intensive, human-centric control structures such as Production Support groups, Security Response teams, and Contingency Planning organizations. These control structures were deemed necessary in order to segment responsibilities and to maintain a security posture that is risk averse. Unfortunately, this traditional model tends to keep the subject matter experts in these organizations at a distance from the development teams, reducing efficiency and getting in the way of innovation.

Banks and other financial technology (fintech) companies have realized that they need to move faster in order to meet the needs of the newest generation of customers. These customers, some in markets that have not been well-served by the traditional banks, expect a rich, mobile-first experience, top-notch customer service, and access to a broad array of services and products. They prefer devices to retail outlets, and want to patronize a bank that is responsive to their needs.

AWS-Powered Banking
Today I would like to tell you about a couple of AWS-powered banks that are addressing these needs. Both of these banks are born-in-the-cloud endeavors, and take advantage of the scale, power, and flexibility of AWS in new and interesting ways. For example, they make extensive use of microservices, deploy fresh code dozens or hundreds of times per day, and use analytics & big data to better understand their customers. They also apply automation to their compliance and control tasks, scanning code for vulnerabilities as it is committed, and also creating systems that systemically grant and enforce use of least-privilege IAM roles.

NuBank – Headquartered in Brazil and serving over 10 million customers, NuBank has been recognized by Fast Company as one of the most innovative companies in the world. They were founded in 2013 and reached unicorn status (a valuation of one billion dollars), just four years later. After their most recent round of funding, their valuation has jumped to ten billion dollars. Here are some resources to help you learn more about how they use AWS:

Starling – Headquartered in London and founded in 2014, Starling is backed by over $300M in funding. Their mobile apps provide instant notification of transactions, support freezing and unfreezing of cards, and provide in-app chat with customer service representatives. Here are some resources to help you learn more about how they use AWS:

Both banks are strong supporters of open banking, with support for APIs that allow third-party developers to build applications and services (read more about the NuBank API and the Starling API).

I found two of the videos (How the Cloud… and Automated Privilege Management…) particularly interesting. The two videos detail how NuBank and Starling have implemented Compliance as Code, with an eye toward simplifying permissions management and increasing the overall security profile of their respective banks.

I hope that you have enjoyed this quick look at how two next-generation banks are making use of AWS. The videos that I linked above contain tons of great technical information that you should also find of interest!

Jeff;


Tips for building a cloud security operating model in the financial services industry

Post Syndicated from Stephen Quigg original https://aws.amazon.com/blogs/security/tips-for-building-a-cloud-security-operating-model-in-the-financial-services-industry/

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their AWS Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that’s good; it’s traditional network security that remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include:

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which provides a log of calls made to AWS APIs.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.
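
To make these services concrete, here is a minimal sketch (using the AWS SDK for Python, boto3, with purely illustrative filter values) that retrieves active, high-severity findings from AWS Security Hub, which aggregates results from services such as GuardDuty and AWS Config.

```python
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Pull active findings labelled HIGH or CRITICAL (filter values are illustrative).
findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [
            {"Value": "HIGH", "Comparison": "EQUALS"},
            {"Value": "CRITICAL", "Comparison": "EQUALS"},
        ],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=20,
)["Findings"]

for finding in findings:
    # Each finding follows the AWS Security Finding Format (ASFF).
    print(finding["Severity"]["Label"], finding["Title"], finding["ProductArn"])
```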

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, there is a real advantage if you have coding skills. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions, including finding a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence of AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

AWS and the European Banking Authority Guidelines on Outsourcing

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-european-banking-authority-guidelines-on-outsourcing/

Financial institutions across the globe use AWS to transform the way they do business. It’s exciting to watch our customers in the financial services industry innovate on AWS in unique ways, across all geos and use cases. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it easier than ever before for customers to comply with different regulations and frameworks around the world.

The European Banking Authority (EBA), an EU financial supervisory authority, recently provided EU financial institutions (which includes credit institutions, certain investment firms, and payment institutions) with new outsourcing guidelines (PDF), which also apply to the use of cloud services. We’re ready and able to support our customers’ compliance with their obligations under the EBA Guidelines and to help meet and exceed their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with the new guidelines, which take effect on September 30, 2019.

What do the EBA Guidelines mean for AWS customers?

The EBA Guidelines establish technology-neutral outsourcing requirements for EU financial institutions, and there is a particular focus on the outsourcing of “critical or important functions.” For AWS and our customers, the key takeaway is that the EBA Guidelines allow for EU financial institutions to use cloud services for material, regulated workloads. When considering or using third-party services, many EU financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to those processes laid out in the EBA Guidelines. To meet and exceed the EBA Guidelines’ requirements on security, resiliency, and assurance, EU financial institutions can use a variety of AWS security and compliance services.

Risk-based approach

The EBA Guidelines incorporate a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with any outsourcing arrangement. The risk-based approach outlined in the EBA Guidelines is consistent with the long-standing AWS shared responsibility model. This approach applies throughout the EBA Guidelines, including the areas of risk assessment, contractual and audit requirements, data location and transfer, and security implementation.

  • Risk assessment: The EBA Guidelines emphasize the need for EU financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach because it illustrates how their security and management responsibilities change depending on the AWS services they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS services help customers assess and improve their risk profile relative to traditional, on-premises environments.
  • Contractual and audit requirements: The EBA Guidelines lay out requirements for the written agreement between an EU financial institution and its service provider, including access and audit rights. For EU financial institutions running regulated workloads on AWS services, we offer the EBA Financial Services Addendum to address the EBA Guidelines’ contractual requirements. We also provide these institutions the ability to comply with the audit requirements in the EBA Guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To align with regulatory requirements and expectations, our EBA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. EU financial services customers interested in learning more about the addendum or about the audit engagements offered by AWS can reach out to their AWS account teams.
  • Data location and transfer: The EBA Guidelines do not put restrictions on where an EU financial institution can store and process its data, but rather state that EU financial institutions should “adopt a risk-based approach to data storage and data processing location(s) (i.e. country or region) and information security considerations.” Our customers can choose which AWS Regions they store their content in, and we will not move or replicate your customer content outside of your chosen Regions unless you instruct us to do so. Customers can replicate and back up their customer content in more than one AWS Region to meet a variety of objectives, such as availability goals and geographic requirements.
  • Security implementation: The EBA Guidelines require EU financial institutions to consider, implement, and monitor various security measures. Using AWS services, customers can meet this requirement in a scalable and cost-effective way while improving their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting. As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity. Customers can also enhance their security by using AWS Key Management Service (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (filtering of malicious web traffic). These are just a few of the 500+ services and features we offer that enable strong availability, security, and compliance for our customers.

As reflected in the EBA Guidelines, it’s important to take a balanced approach when evaluating responsibilities in a cloud implementation. We are responsible for the security of the AWS Global Infrastructure. In the EU, we currently operate AWS Regions in Ireland, Frankfurt, London, Paris, and Stockholm, with our new Milan Region opening soon. For all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year.

Conclusion

We encourage customers to learn about how the EBA Guidelines apply to their organization. Our teams of security, compliance, and legal experts continue to work with our EU financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how regulatory authorities apply the EBA Guidelines locally and will provide further updates as needed. If you have any questions about compliance with the EBA Guidelines and their application to your use of AWS, or if you require the EBA Financial Services Addendum, please reach out to your account representative or request to be contacted.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR, and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager, working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Master of Information Systems Management and a Bachelor of Accounting from Brigham Young University, Utah. Follow Chad on Twitter.

AWS achieves OSPAR outsourcing standard for Singapore financial industry

Post Syndicated from Brandon Lim original https://aws.amazon.com/blogs/security/aws-achieves-ospar-outsourcing-standard-for-singapore-financial-industry/

AWS has achieved the Outsourced Service Provider Audit Report (OSPAR) attestation for 66 services in the Asia Pacific (Singapore) Region. The OSPAR assessment is performed by an independent third-party auditor. AWS’s OSPAR demonstrates that AWS has a system of controls in place that meets the Association of Banks in Singapore’s Guidelines on Control Objectives and Procedures for Outsourced Service Providers (ABS Guidelines).

The ABS Guidelines are intended to assist financial institutions in understanding approaches to due diligence, vendor management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The ABS Guidelines are closely aligned with the Monetary Authority of Singapore’s Outsourcing Guidelines, and they’re one of the standards that the financial services industry in Singapore uses to assess the capability of their outsourced service providers (including cloud service providers).

AWS’s alignment with the ABS Guidelines demonstrates to customers AWS’s commitment to meeting the high expectations for cloud service providers set by the financial services industry in Singapore. Customers can leverage OSPAR to conduct their due diligence, minimizing the effort and costs required for compliance. AWS’s OSPAR report is now available in AWS Artifact.

You can find additional resources about regulatory requirements in the Singapore financial industry at the AWS Compliance Center. If you have questions about AWS’s OSPAR, or if you’d like to inquire about how to use AWS for your material workloads, please contact your AWS account team.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Brandon Lim

Brandon is the Head of Security Assurance for Financial Services, Asia-Pacific. Brandon leads AWS’s regulatory and security engagement efforts for the Financial Services industry across the Asia Pacific region. He is passionate about working with Financial Services Regulators in the region to drive innovation and cloud adoption for the financial industry.

Singapore financial services: new resources for customer side of the shared responsibility model

Post Syndicated from Darran Boyd original https://aws.amazon.com/blogs/security/singapore-financial-services-new-resources-for-customer-side-of-shared-responsibility-model/

Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond.

New whitepaper: Achieving Operational Resilience in the Financial Sector and Beyond

Post Syndicated from Rahul Prabhakar original https://aws.amazon.com/blogs/security/new-whitepaper-achieving-operational-resilience-in-the-financial-sector-and-beyond/

AWS has released a new whitepaper, Amazon Web Services’ Approach to Operational Resilience in the Financial Sector and Beyond, in which we discuss how AWS and customers build for resiliency on the AWS cloud. We’re constantly amazed at the applications our customers build using AWS services — including what our financial services customers have built, from credit risk simulations to mobile banking applications. Depending on their internal and regulatory requirements, financial services companies may need to meet specific resiliency objectives and withstand low-probability events that could otherwise disrupt their businesses. We know that financial regulators are also interested in understanding how the AWS cloud allows customers to meet those objectives. This new whitepaper addresses these topics.

The paper walks through the AWS global infrastructure and how we build to withstand failures. Reflecting how AWS and customers share responsibility for resilience, the paper also outlines how a financial institution can build mission-critical applications to leverage, for example, multiple Availability Zones to improve their resiliency compared to a traditional, on-premises environment.

Security and resiliency remain our highest priority. We encourage you to check out the paper and provide feedback. We’d love to hear from you, so don’t hesitate to get in touch with us by reaching out to your account executive or contacting AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Three key trends in financial services cloud compliance

Post Syndicated from Igor Kleyman original https://aws.amazon.com/blogs/security/three-key-trends-in-financial-services-cloud-compliance/

As financial institutions increasingly move their technology infrastructure to the cloud, financial regulators are tailoring their oversight to the unique features of a cloud environment. Regulators have followed a variety of approaches, sometimes issuing new rules and guidance tailored to the cloud. Other times, they have updated existing guidelines for managing technology providers to be more applicable for emerging technologies. In each case, however, policymakers’ heightened focus on cybersecurity and privacy has led to increased scrutiny on how financial institutions manage security and compliance.

Because we strive to ensure you can use AWS to meet the highest security standards, we also closely monitor regulatory developments and look for trends to help you stay ahead of the curve. Here are three common themes we’ve seen emerge in the regulatory landscape:

Data security and data management

Regulators expect financial institutions to implement controls and safety measures to protect the security and confidentiality of data stored in the cloud. AWS services are content agnostic—we treat all customer data and associated assets as highly confidential. We have implemented sophisticated technical and physical measures against unauthorized access. Encryption is an important step to help protect sensitive information. You can use AWS Key Management Service (KMS), which is integrated into many services, to encrypt data. KMS also makes it easy to create and control your encryption keys.
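
As a brief illustration of that point, the following minimal sketch uses boto3 to create a customer managed key, give it an alias, and encrypt and decrypt a small payload. The key description, alias, and payload are placeholders, and envelope encryption with data keys would be the usual pattern for larger objects.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create a customer managed key (description and alias are placeholders).
key_id = kms.create_key(Description="Example key for sensitive records")[
    "KeyMetadata"
]["KeyId"]
kms.create_alias(AliasName="alias/example-records-key", TargetKeyId=key_id)

# Encrypt a small payload directly (the KMS Encrypt API supports up to 4 KB of
# plaintext; larger objects would use envelope encryption with generate_data_key).
ciphertext = kms.encrypt(
    KeyId=key_id, Plaintext=b"account=12345;balance=100.00"
)["CiphertextBlob"]

# Decrypt: KMS identifies the key from the ciphertext metadata.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext.decode())
```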

Cybersecurity

Financial regulators expect financial institutions to maintain a strong cybersecurity posture. In the cloud, security is a shared responsibility between the cloud provider and the customer: AWS manages security of the cloud, and customers are responsible for managing security in the cloud. To manage security of the cloud, AWS has developed and implemented a security control environment designed to protect the confidentiality, integrity, and availability of your systems and content. AWS infrastructure complies with global and regional regulatory requirements and best practices. You can help ensure security in the cloud by leveraging AWS services. Some new services strive to automate security. Amazon Inspector performs automated security assessments to scan cloud environments for vulnerabilities or deviations from best practices. AWS is also on the cutting edge of using automated reasoning to ensure established security protocols are in place. You can leverage automated proofs with a tool called Zelkova, which is integrated within certain AWS services. Zelkova helps you obtain higher levels of security assurance about your most sensitive systems and operations. Financial institutions can also perform vulnerability scans and penetration testing on their AWS environments—another recurring expectation of financial regulators.

Risk management

Regulators expect financial institutions to have robust risk management processes when using the cloud. Continuous monitoring is key to ensuring that you are managing the risk of your cloud environment, and AWS offers financial institutions a number of tools for governance and traceability. You can have complete visibility of your AWS resources by using services such as AWS CloudTrail, Amazon CloudWatch, and AWS Config to monitor, analyze, and audit events that occur in your cloud environment. You can also use AWS CloudTrail to log and retain account activity related to actions across your AWS infrastructure.
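
As a small illustration of that traceability (the Config rule name here is only an example of a rule you might have deployed), recent activity and compliance state can be queried from the CLI:

# Look up recent console sign-in events recorded by CloudTrail
$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
    --max-results 10

# Check the compliance state reported by an AWS Config rule
$ aws configservice describe-compliance-by-config-rule --config-rule-names required-tags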

We understand how important security and compliance are for financial institutions, and we strive to ensure that you can use AWS to meet the highest regulatory standards. We have also created a number of resources to help you make sense of the changing regulatory landscape around the world.

You can go to our security and compliance resources page for additional information. Have more questions? Reach out to your Account Manager or request to be contacted.

Want more AWS Security news? Follow us on Twitter.

AWS Compliance Center for financial services now available

Post Syndicated from Frank Fallon original https://aws.amazon.com/blogs/security/aws-compliance-center-financial-services/

On Tuesday, September 4, AWS announced the launch of an AWS Compliance Center for our Financial Services (FS) customers. This addition to our compliance offerings gives you a central location to research cloud-related regulatory requirements that impact the financial services industry. Prior to the launch of the AWS Compliance Center, customers preparing to adopt AWS for their FS workloads typically had to browse multiple in-depth sources to understand the expectations of regulatory agencies in each country.

The AWS Compliance Center is designed to make this process easier. It aggregates any given country’s regulatory position regarding the adoption and operation of cloud services. Key components of the FS industry—including regulatory approvals, data privacy, and data protection—are explained, along with the steps you must take throughout your adoption of AWS services to help satisfy regulatory requirements. You can browse the information in the portal and export it as printable documents.

We expect the AWS Compliance Center to evolve as our customers’ compliance needs change and as regulators begin to address the challenges and opportunities that cloud services create in the FS industry. The AWS Compliance Center covers 13 countries, and we’ll continue to enhance it with additional countries and information based on your needs.

Creating a 1.3 Million vCPU Grid on AWS using EC2 Spot Instances and TIBCO GridServer

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/creating-a-1-3-million-vcpu-grid-on-aws-using-ec2-spot-instances-and-tibco-gridserver/

Many of my colleagues are fortunate to be able to spend a good part of their day sitting down with and listening to our customers, doing their best to understand ways that we can better meet their business and technology needs. This information is treated with extreme care and is used to drive the roadmap for new services and new features.

AWS customers in the financial services industry (often abbreviated as FSI) are looking ahead to the Fundamental Review of the Trading Book (FRTB) regulations that will come into effect between 2019 and 2021. Among other things, these regulations mandate a new approach to the “value at risk” calculations that each financial institution must perform in the four-hour time window after trading ends in New York and begins in Tokyo. Today, our customers report that this mission-critical calculation consumes on the order of 200,000 vCPUs, growing to between 400K and 800K vCPUs in order to meet the FRTB regulations. While there’s still some debate about the magnitude and frequency with which they’ll need to run this expanded calculation, the overall direction is clear.

Building a Big Grid
In order to make sure that we are ready to help our FSI customers meet these new regulations, we worked with TIBCO to set up and run a proof of concept grid in the AWS Cloud. The periodic nature of the calculation, along with the amount of processing power and storage needed to run it to completion within four hours, make it a great fit for an environment where a vast amount of cost-effective compute power is available on an on-demand basis.

Our customers are already using the TIBCO GridServer on-premises and want to use it in the cloud. This product is designed to run grids at enterprise scale. It runs apps in a virtualized fashion, and accepts requests for resources, dynamically provisioning them on an as-needed basis. The cloud version supports Amazon Linux as well as the PostgreSQL-compatible edition of Amazon Aurora.

Working together with TIBCO, we set out to create a grid that was substantially larger than the current high-end prediction of 800K vCPUs, adding a 50% safety factor and then rounding up to reach 1.3 million vCPUs (5x the size of the largest on-premises grid). With that target in mind, the account limits were raised as follows:

  • Spot Instance Limit – 120,000
  • EBS Volume Limit – 120,000
  • EBS Capacity Limit – 2 PB

If you plan to create a grid of this size, you should also bring your friendly local AWS Solutions Architect into the loop as early as possible. They will review your plans, provide you with architecture guidance, and help you to schedule your run.
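
If you want to experiment with the underlying mechanics yourself, the sketch below shows a minimal Spot Fleet request saved as spot-fleet-config.json (this is only illustrative and is not the TIBCO GridServer provisioning used in the actual run; the AMI ID, subnet IDs, account number, and fleet role are placeholders):

{
  "IamFleetRole": "arn:aws:iam::111122223333:role/aws-ec2-spot-fleet-tagging-role",
  "AllocationStrategy": "diversified",
  "TargetCapacity": 1000,
  "LaunchSpecifications": [
    { "ImageId": "ami-0abcdef1234567890", "InstanceType": "c5.9xlarge",  "SubnetId": "subnet-1111aaaa" },
    { "ImageId": "ami-0abcdef1234567890", "InstanceType": "m5.12xlarge", "SubnetId": "subnet-2222bbbb" }
  ]
}

$ aws ec2 request-spot-fleet --spot-fleet-request-config file://spot-fleet-config.json

Spreading the request across multiple instance types and subnets, as the 34-instance-type grid did, reduces the chance of interruption in any single Spot capacity pool.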

Running the Grid
We hit the Go button and launched the grid, watching as it bid for and obtained Spot Instances, each of which booted, initialized, and joined the grid within two minutes. The test workload used the Strata open source analytics & market risk library from OpenGamma and was set up with their assistance.

The grid grew to 61,299 Spot Instances (1.3 million vCPUs drawn from 34 instance types spanning 3 generations of EC2 hardware) as planned, with just 1,937 instances reclaimed and automatically replaced during the run, and cost $30,000 per hour to run, at an average hourly cost of $0.078 per vCPU. If the same instances had been used in On-Demand form, the hourly cost to run the grid would have been approximately $93,000.

Despite the scale of the grid, prices for the EC2 instances did not move during the bidding process. This is due to the overall size of the AWS Cloud and the smooth price change model that we launched late last year.

To give you a sense of the compute power, we computed that this grid would have taken the #1 position on the TOP 500 supercomputer list in November 2007 by a considerable margin, and the #2 position in June 2008. Today, it would occupy position #360 on the list.

I hope that you enjoyed this AWS success story, and that it gives you an idea of the scale that you can achieve in the cloud!

Jeff;

Security Breaches Don’t Affect Stock Price

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/security_breach.html

Interesting research: “Long-term market implications of data breaches, not,” by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies’ stock, with a focus on the results relative to the performance of the firms’ peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to +0.05%, and on average remained positive through the period assessed.
  • For the differences in the breached companies’ betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.
  • For the differences in the breached companies’ beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.
  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.
  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn’t going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

A New Guide to Banking Regulations and Guidelines in India

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/a-new-guide-to-banking-regulations-and-guidelines-in-india/

The AWS User Guide to Banking Regulations and Guidelines in India was published in December 2017 and includes information that can help banks regulated by the Reserve Bank of India (RBI) assess how to implement an appropriate information security, risk management, and governance program in the AWS Cloud.

The guide focuses on the following key considerations:

  • Outsourcing guidelines – Guidance for banks entering an outsourcing arrangement, including risk-management practices such as conducting due diligence and maintaining effective oversight. Learn how to conduct an assessment of AWS services and align your governance requirements with the AWS Shared Responsibility Model.
  • Information security – Detailed requirements to help banks identify and manage information security in the cloud.

This guide joins the existing Financial Services guides for other jurisdictions, such as Singapore, Australia, and Hong Kong. AWS will publish additional guides in 2018 to help you understand regulatory requirements in other markets around the world.

– Oliver

Serverless @ re:Invent 2017

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/serverless-reinvent-2017/

At re:Invent 2014, we announced AWS Lambda, which is now the center of the serverless platform at AWS, and helped ignite the trend of companies building serverless applications.

This year, at re:Invent 2017, the topic of serverless was everywhere. We were incredibly excited to see the energy from everyone attending 7 workshops, 15 chalk talks, 20 skills sessions and 27 breakout sessions. Many of these sessions were repeated due to high demand, so we are happy to summarize and provide links to the recordings and slides of these sessions.

Over the course of the week leading up to and then the week of re:Invent, we also announced over 15 new features and capabilities across a number of serverless services, including AWS Lambda, Amazon API Gateway, AWS Lambda@Edge, AWS SAM, and the newly announced AWS Serverless Application Repository!

AWS Lambda

Amazon API Gateway

  • Amazon API Gateway Supports Endpoint Integrations with Private VPCs – You can now provide access to HTTP(S) resources within your VPC without exposing them directly to the public internet. This includes resources available over a VPN or Direct Connect connection!
  • Amazon API Gateway Supports Canary Release Deployments – You can now use canary release deployments to gradually roll out new APIs. This helps you more safely roll out API changes and limit the blast radius of new deployments (a brief CLI sketch follows this list).
  • Amazon API Gateway Supports Access Logging – The access logging feature lets you generate access logs in different formats such as CLF (Common Log Format), JSON, XML, and CSV. The access logs can be fed into your existing analytics or log processing tools so you can perform more in-depth analysis or take action in response to the log data.
  • Amazon API Gateway Customize Integration Timeouts – You can now set a custom timeout for your API calls as low as 50ms and as high as 29 seconds (the default is 30 seconds).
  • Amazon API Gateway Supports Generating SDK in Ruby – This is in addition to support for SDKs in Java, JavaScript, Android and iOS (Swift and Objective-C). The SDKs that Amazon API Gateway generates save you development time and come with a number of prebuilt capabilities, such as working with API keys, exponential backoff, and exception handling.
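
As a quick sketch of the canary release capability mentioned above (the REST API ID is a placeholder), the following creates a deployment that shifts 10 percent of prod-stage traffic to the new version:

$ aws apigateway create-deployment \
    --rest-api-id <your-rest-api-id> \
    --stage-name prod \
    --canary-settings percentTraffic=10.0,useStageCache=false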

AWS Serverless Application Repository

Serverless Application Repository is a new service (currently in preview) that aids in the publication, discovery, and deployment of serverless applications. With it you’ll be able to find shared serverless applications that you can launch in your account, while also sharing ones that you’ve created for others to do the same.

AWS Lambda@Edge

Lambda@Edge now supports content-based dynamic origin selection, network calls from viewer events, and advanced response generation. This combination of capabilities greatly increases the use cases for Lambda@Edge, such as allowing you to send requests to different origins based on request information, showing selective content based on authentication, and dynamically watermarking images for each viewer.

AWS SAM

Twitch Launchpad live announcements

Other service announcements

Here are some of the other highlights that you might have missed. We think these could help you make great applications:

AWS re:Invent 2017 sessions

Coming up with the right mix of talks for an event like this can be quite a challenge. The Product, Marketing, and Developer Advocacy teams for Serverless at AWS spent weeks reading through dozens of talk ideas to boil it down to the final list.

From feedback at other AWS events and webinars, we knew that customers were looking for talks that focused on concrete examples of solving problems with serverless, how to perform common tasks such as deployment, CI/CD, monitoring, and troubleshooting, and customer and partner examples of solving real-world problems. To that end, we tried to settle on a good mix based on attendee experience and provide a track full of rich content.

Below are the recordings and slides of breakout sessions from re:Invent 2017. We’ve organized them for those getting started, those who are already beginning to build serverless applications, and the experts out there already running them at scale. Some of the videos and slides haven’t been posted yet, and so we will update this list as they become available.

Find the entire Serverless Track playlist on YouTube.

Talks for people new to Serverless

Advanced topics

Expert mode

Talks for specific use cases

Talks from AWS customers & partners

Looking to get hands-on with Serverless?

At re:Invent, we delivered instructor-led skills sessions to help attendees new to serverless applications get started quickly. The content from these sessions is already online and you can do the hands-on labs yourself!
Build a Serverless web application

Still looking for more?

We also recently completely overhauled the main Serverless landing page for AWS. This includes a new Resources page containing case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

Power data ingestion into Splunk using Amazon Kinesis Data Firehose

Post Syndicated from Tarik Makota original https://aws.amazon.com/blogs/big-data/power-data-ingestion-into-splunk-using-amazon-kinesis-data-firehose/

In late September, during the annual Splunk .conf, Splunk and Amazon Web Services (AWS) jointly announced that Amazon Kinesis Data Firehose now supports Splunk Enterprise and Splunk Cloud as a delivery destination. This native integration between Splunk Enterprise, Splunk Cloud, and Amazon Kinesis Data Firehose is designed to make AWS data ingestion setup seamless, while offering a secure and fault-tolerant delivery mechanism. We want to enable customers to monitor and analyze machine data from any source and use it to deliver operational intelligence and optimize IT, security, and business performance.

With Kinesis Data Firehose, customers can use a fully managed, reliable, and scalable data streaming solution to Splunk. In this post, we tell you a bit more about the Kinesis Data Firehose and Splunk integration. We also show you how to ingest large amounts of data into Splunk using Kinesis Data Firehose.

Push vs. Pull data ingestion

Presently, customers use a combination of two ingestion patterns, chosen primarily based on data source and volume, as well as on existing company infrastructure and expertise:

  1. Pull-based approach: Using dedicated pollers running the popular Splunk Add-on for AWS to pull data from various AWS services such as Amazon CloudWatch or Amazon S3.
  2. Push-based approach: Streaming data directly from AWS to Splunk HTTP Event Collector (HEC) by using AWS Lambda. Examples of applicable data sources include CloudWatch Logs and Amazon Kinesis Data Streams.

The pull-based approach offers data delivery guarantees such as retries and checkpointing out of the box. However, it requires more operational effort to manage and orchestrate the dedicated pollers, which commonly run on Amazon EC2 instances. With this setup, you pay for the infrastructure even when it’s idle.

On the other hand, the push-based approach offers a low-latency scalable data pipeline made up of serverless resources like AWS Lambda sending directly to Splunk indexers (by using Splunk HEC). This approach translates into lower operational complexity and cost. However, if you need guaranteed data delivery then you have to design your solution to handle issues such as a Splunk connection failure or Lambda execution failure. To do so, you might use, for example, AWS Lambda Dead Letter Queues.
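
For instance, attaching a dead letter queue to a delivery function is a single configuration change (the function name and queue ARN below are hypothetical):

$ aws lambda update-function-configuration \
    --function-name splunk-hec-pusher \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:111122223333:splunk-hec-dlq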

How about getting the best of both worlds?

Let’s go over the new integration’s end-to-end solution and examine how Kinesis Data Firehose and Splunk together expand the push-based approach into a native AWS solution for applicable data sources.

By using a managed service like Kinesis Data Firehose for data ingestion into Splunk, we provide out-of-the-box reliability and scalability. One of the pain points of the old approach was the overhead of managing the data collection nodes (Splunk heavy forwarders). With the new Kinesis Data Firehose to Splunk integration, there are no forwarders to manage or set up. Data producers (1) are configured through the AWS Management Console to drop data into Kinesis Data Firehose.

You can also create your own data producers. For example, you can drop data into a Firehose delivery stream by using Amazon Kinesis Agent, or by using the Firehose API (PutRecord(), PutRecordBatch()), or by writing to a Kinesis Data Stream configured to be the data source of a Firehose delivery stream. For more details, refer to Sending Data to an Amazon Kinesis Data Firehose Delivery Stream.
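
For example, you can drop a single test record into a delivery stream from the CLI (the stream name matches the one created later in this walkthrough; the Data value is base64 for {"message": "test event"}, and depending on your CLI version you may be able to pass raw text and let the CLI encode it):

$ aws firehose put-record \
    --delivery-stream-name FirehoseSplunkDeliveryStream \
    --record '{"Data":"eyJtZXNzYWdlIjogInRlc3QgZXZlbnQifQo="}'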

You might need to transform the data before it goes into Splunk for analysis. For example, you might want to enrich it, filter it, or anonymize sensitive data. You can do so using AWS Lambda. In this scenario, Kinesis Data Firehose buffers the incoming source data, sends it to the specified Lambda function (2), and then rebuffers the transformed data before delivering it to the Splunk cluster. Kinesis Data Firehose provides Lambda blueprints that you can use to create a Lambda function for data transformation.

Systems fail all the time. Let’s see how this integration handles outside failures to guarantee data durability. In cases when Kinesis Data Firehose can’t deliver data to the Splunk Cluster, data is automatically backed up to an S3 bucket. You can configure this feature while creating the Firehose delivery stream (3). You can choose to back up all data or only the data that’s failed during delivery to Splunk.

In addition to using S3 for data backup, this Firehose integration with Splunk supports Splunk Indexer Acknowledgments to guarantee event delivery. This feature is configured on Splunk’s HTTP Event Collector (HEC) (4). It ensures that HEC returns an acknowledgment to Kinesis Data Firehose only after data has been indexed and is available in the Splunk cluster (5).

Now let’s look at a hands-on exercise that shows how to forward VPC flow logs to Splunk.

How-to guide

To process VPC flow logs, we implement the following architecture.

Amazon Virtual Private Cloud (Amazon VPC) delivers flow log files into an Amazon CloudWatch Logs group. Using a CloudWatch Logs subscription filter, we set up real-time delivery of CloudWatch Logs to a Kinesis Data Firehose stream.

Data coming from CloudWatch Logs is compressed with gzip compression. To work with this compression, we need to configure a Lambda-based data transformation in Kinesis Data Firehose to decompress the data and deposit it back into the stream. Firehose then delivers the raw logs to the Splunk HTTP Event Collector (HEC).

If delivery to the Splunk HEC fails, Firehose deposits the logs into an Amazon S3 bucket. You can then ingest the events from S3 using an alternate mechanism such as a Lambda function.

When data reaches Splunk (Enterprise or Cloud), Splunk parsing configurations (packaged in the Splunk Add-on for Kinesis Data Firehose) extract and parse all fields. They make data ready for querying and visualization using Splunk Enterprise and Splunk Cloud.

Walkthrough

Install the Splunk Add-on for Amazon Kinesis Data Firehose

The Splunk Add-on for Amazon Kinesis Data Firehose enables Splunk (be it Splunk Enterprise, Splunk App for AWS, or Splunk Enterprise Security) to use data ingested from Amazon Kinesis Data Firehose. Install the Add-on on all the indexers with an HTTP Event Collector (HEC). The Add-on is available for download from Splunkbase.

HTTP Event Collector (HEC)

Before you can use Kinesis Data Firehose to deliver data to Splunk, set up the Splunk HEC to receive the data. From Splunk web, go to the Settings menu, choose Data Inputs, and choose HTTP Event Collector. Choose Global Settings, ensure All tokens is enabled, and then choose Save. Then choose New Token to create a new HEC endpoint and token. When you create a new token, make sure that Enable indexer acknowledgment is checked.

When prompted to select a source type, select aws:cloudwatch:vpcflow.
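
To confirm that the token works before wiring up Kinesis Data Firehose, you can post a test event directly to HEC (the host, token, and channel GUID below are placeholders; the channel header is needed because indexer acknowledgment is enabled on the token):

$ curl -k "https://<your-splunk-host>:8088/services/collector/event" \
    -H "Authorization: Splunk <your-HEC-token>" \
    -H "X-Splunk-Request-Channel: 0aeeac95-ba8e-4b43-9ad4-5e2d69c06d6b" \
    -d '{"event": "HEC smoke test", "sourcetype": "aws:cloudwatch:vpcflow"}'

# A healthy endpoint responds with "Success" and an ackId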

Create an S3 backsplash bucket

To provide for situations in which Kinesis Data Firehose can’t deliver data to the Splunk Cluster, we use an S3 bucket to back up the data. You can configure this feature to back up all data or only the data that’s failed during delivery to Splunk.

Note: Bucket names are globally unique. Thus, you can’t reuse tmak-backsplash-bucket.

$ aws s3api create-bucket --bucket tmak-backsplash-bucket --region ap-northeast-1 --create-bucket-configuration LocationConstraint=ap-northeast-1

Create an IAM role for the Lambda transform function

Firehose triggers an AWS Lambda function that transforms the data in the delivery stream. Let’s first create a role for the Lambda function called LambdaBasicRole.

Note: You can also set this role up when creating your Lambda function.

$ aws iam create-role --role-name LambdaBasicRole --assume-role-policy-document file://TrustPolicyForLambda.json

Here is TrustPolicyForLambda.json.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

After the role is created, attach the managed Lambda basic execution policy to it.

$ aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole \
  --role-name LambdaBasicRole

Create a Firehose Stream

On the AWS console, open the Amazon Kinesis service, go to the Firehose console, and choose Create Delivery Stream.

In the next section, you can specify whether you want to use an inline Lambda function for transformation. Because incoming CloudWatch Logs are gzip compressed, choose Enabled for Record transformation, and then choose Create new.

From the list of the available blueprint functions, choose Kinesis Data Firehose CloudWatch Logs Processor. This function decompresses the data and places it back into the Firehose stream in compliance with the record transformation output model.

Enter a name for the Lambda function, choose Choose an existing role, and then choose the role you created earlier. Then choose Create Function.

Go back to the Firehose Stream wizard, choose the Lambda function you just created, and then choose Next.

Select Splunk as the destination, and enter your Splunk HTTP Event Collector (HEC) information.

Note: Amazon Kinesis Data Firehose requires the Splunk HTTP Event Collector (HEC) endpoint to be terminated with a valid CA-signed certificate matching the DNS hostname used to connect to your HEC endpoint. You receive delivery errors if you are using a self-signed certificate.

In this example, we only back up logs that fail during delivery.

To monitor your Firehose delivery stream, enable error logging. Doing this means that you can monitor record delivery errors.

Create an IAM role for the Firehose stream by choosing Create new, or Choose. Doing this brings you to a new screen. Choose Create a new IAM role, give the role a name, and then choose Allow.

If you look at the policy document, you can see that the role gives Kinesis Data Firehose permission to publish error logs to CloudWatch, execute your Lambda function, and put records into your S3 backup bucket.

You now get a chance to review and adjust the Firehose stream settings. When you are satisfied, choose Create Stream. You get a confirmation once the stream is created and active.

Create a VPC Flow Log

To send events from Amazon VPC, you need to set up a VPC flow log. If you already have a VPC flow log you want to use, you can skip to the “Publish CloudWatch to Kinesis Data Firehose” section.

On the AWS console, open the Amazon VPC service. Then choose VPC, Your VPC, and choose the VPC you want to send flow logs from. Choose Flow Logs, and then choose Create Flow Log. If you don’t have an IAM role that allows your VPC to publish logs to CloudWatch, choose Set Up Permissions and Create new role. Use the defaults when presented with the screen to create the new IAM role.

Once active, your VPC flow log should look like the following.

Publish CloudWatch to Kinesis Data Firehose

When you generate traffic to or from your VPC, the log group is created in Amazon CloudWatch. The new log group has no subscription filter, so set up a subscription filter. Setting this up establishes a real-time data feed from the log group to your Firehose delivery stream.

At present, you have to use the AWS Command Line Interface (AWS CLI) to create a CloudWatch Logs subscription to a Kinesis Data Firehose stream. However, you can use the AWS console to create subscriptions to Lambda and Amazon Elasticsearch Service.

To allow CloudWatch to publish to your Firehose stream, you need to give it permissions.

$ aws iam create-role --role-name CWLtoKinesisFirehoseRole --assume-role-policy-document file://TrustPolicyForCWLToFireHose.json


Here is the content for TrustPolicyForCWLToFireHose.json.

{
  "Statement": {
    "Effect": "Allow",
    "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }
}

Attach the policy to the newly created role.

$ aws iam put-role-policy \
    --role-name CWLtoKinesisFirehoseRole \
    --policy-name Permissions-Policy-For-CWL \
    --policy-document file://PermissionPolicyForCWLToFireHose.json

Here is the content for PermissionPolicyForCWLToFireHose.json.

{
    "Statement":[
      {
        "Effect":"Allow",
        "Action":["firehose:*"],
        "Resource":["arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/ FirehoseSplunkDeliveryStream"]
      },
      {
        "Effect":"Allow",
        "Action":["iam:PassRole"],
        "Resource":["arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"]
      }
    ]
}

Finally, create a subscription filter.

$ aws logs put-subscription-filter \
   --log-group-name "/vpc/flowlog/FirehoseSplunkDemo" \
   --filter-name "Destination" \
   --filter-pattern "" \
   --destination-arn "arn:aws:firehose:us-east-1:YOUR-AWS-ACCT-NUM:deliverystream/FirehoseSplunkDeliveryStream" \
   --role-arn "arn:aws:iam::YOUR-AWS-ACCT-NUM:role/CWLtoKinesisFirehoseRole"

When you run the preceding AWS CLI command, you don’t get any acknowledgment. To validate that your CloudWatch Logs group is subscribed to your Firehose stream, check the CloudWatch console.

As soon as the subscription filter is created, the real-time log data from the log group goes into your Firehose delivery stream. Your stream then delivers it to your Splunk Enterprise or Splunk Cloud environment for querying and visualization. The following screenshot is from Splunk Enterprise.

In addition, you can monitor and view metrics associated with your delivery stream using the AWS console.

Conclusion

Although our walkthrough uses VPC Flow Logs, the pattern can be used in many other scenarios. These include ingesting data from AWS IoT, other CloudWatch logs and events, Kinesis Streams, or other data sources using the Kinesis Agent or Kinesis Producer Library. We also used the Lambda blueprint Kinesis Data Firehose CloudWatch Logs Processor to transform streaming records from Kinesis Data Firehose. However, you might need to use a different Lambda blueprint or disable record transformation entirely depending on your use case. For an additional use case using Kinesis Data Firehose, check out the This Is My Architecture video, which discusses how to securely centralize cross-account data analytics using Kinesis and Splunk.

Additional Reading

If you found this post useful, be sure to check out Integrating Splunk with Amazon Kinesis Streams and Using Amazon EMR and Hunk for Rapid Response Log Analysis and Review.


About the Authors

Tarik Makota is a solutions architect with the Amazon Web Services Partner Network. He provides technical guidance, design advice, and thought leadership to AWS’ most strategic software partners. His career includes work in a broad range of software development and architecture roles across ERP, financial printing, benefit delivery and administration, and financial services. He holds an M.S. in Software Development and Management from Rochester Institute of Technology.

Roy Arsan is a solutions architect in the Splunk Partner Integrations team. He has a background in product development, cloud architecture, and building consumer and enterprise cloud applications. More recently, he has architected Splunk solutions on major cloud providers, including an AWS Quick Start for Splunk that enables AWS users to easily deploy distributed Splunk Enterprise straight from their AWS console. He’s also the co-author of the AWS Lambda blueprints for Splunk. He holds an M.S. in Computer Science Engineering from the University of Michigan.


Now You Can Use Amazon ElastiCache for Redis with In-Transit and At-Rest Encryption to Help Protect Sensitive Information

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-now-supports-encryption-for-elasticache-for-redis/

Amazon ElastiCache for Redis now supports encryption for secure internode communications to help keep personally identifiable information (PII) safe. Both encryption in transit and at rest are supported. The new encryption in-transit feature enables you to encrypt all communications between clients and Redis servers as well as between Redis servers (primary and read replica nodes). The encryption at-rest feature allows you to encrypt your ElastiCache for Redis backups on disk and in Amazon S3. Additionally, you can use the Redis AUTH command for an added level of authentication.

If you are in the Financial Services, Healthcare, and Telecommunications sectors, this new encryption functionality can help you protect your sensitive data sets and meet compliance requirements. You can start using the new functionality by enabling it at the time of cluster creation via the ElastiCache console or through the API. You don’t have to manage the lifecycle of your certificates because ElastiCache for Redis automatically manages the issuance, renewal, and expiration of your certificates. For more information, see Enabling In-Transit Encryption and Enabling At-Rest Encryption.
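
As a sketch of what enabling both features at creation time can look like from the CLI (the identifiers, node type, and token below are example values), the relevant flags on create-replication-group are --transit-encryption-enabled, --at-rest-encryption-enabled, and --auth-token:

$ aws elasticache create-replication-group \
    --replication-group-id secure-redis \
    --replication-group-description "Redis with in-transit and at-rest encryption" \
    --engine redis \
    --engine-version 3.2.6 \
    --cache-node-type cache.m4.large \
    --num-cache-clusters 2 \
    --transit-encryption-enabled \
    --at-rest-encryption-enabled \
    --auth-token <strong-token-value>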

There is no additional charge to use this feature, and it is available in the US West (Oregon), US West (N. California), US East (Ohio), US East (N. Virginia), Canada (Central), EU (Ireland), and South America (São Paulo) Regions. We will make this feature available in other AWS Regions as well.

For more information about this feature and Amazon ElastiCache for Redis, see the ElastiCache for Redis FAQs.

– Manan