Delivering on the AWS Digital Sovereignty Pledge: Control without compromise

Post Syndicated from Matt Garman original https://aws.amazon.com/blogs/security/delivering-on-the-aws-digital-sovereignty-pledge-control-without-compromise/

At AWS, earning and maintaining customer trust is the foundation of our business. We understand that protecting customer data is key to achieving this. We also know that trust must continue to be earned through transparency and assurances.

In November 2022, we announced the new AWS Digital Sovereignty Pledge, our commitment to offering all AWS customers the most advanced set of sovereignty controls and features available in the cloud. Two pillars of this are verifiable control over data access, and the ability to encrypt everything everywhere. We already offer a range of data protection features, accreditations, and contractual commitments that give customers control over where they locate their data, who can access it, and how it is used. Today, I’d like to update you on how we are continuing to earn your trust with verifiable control over customer data access and external control of your encryption keys.

AWS Nitro System achieves independent third-party validation

We are committed to helping our customers meet evolving sovereignty requirements and to providing greater transparency and assurances into how AWS services are designed and operated. With the AWS Nitro System, the foundation of the Amazon EC2 compute service, we designed and delivered first-of-its-kind innovation by eliminating any mechanism for AWS personnel to access customer data on Nitro. This removal of operator access was unique when we first launched the Nitro System in 2017.

As we continue to deliver on our Digital Sovereignty Pledge of customer control over data access, I’m excited to share an independent report on the security design of the AWS Nitro System. We engaged NCC Group, a global cybersecurity consulting firm, to conduct an architecture review of the security claims of the Nitro System and produce a public report. This report confirms that the AWS Nitro System, by design, has no mechanism for anyone at AWS to access your data on Nitro hosts. The report evaluates the architecture of the Nitro System and our claims about operator access. It concludes that “As a matter of design, NCC Group found no gaps in the Nitro System that would compromise these security claims.” It also states, “NCC Group finds…there is no indication that a cloud service provider employee can obtain such access…to any host.” Our compute infrastructure, the Nitro System, has no operator access mechanism, and that design is now supported by a third-party analysis of those data controls. Read more in the NCC Group report.

New AWS Service Term

At AWS, security is our top priority. The NCC report shows that the Nitro System is an exceptional computing backbone for AWS, with security at its core. The controls that prevent operator access are so fundamental to the Nitro System that we’ve added them to our AWS Service Terms, which apply to everyone who uses AWS.

Our AWS Service Terms now include the following on the Nitro System:

AWS personnel do not have access to Your Content on AWS Nitro System EC2 instances. There are no technical means or APIs available to AWS personnel to read, copy, extract, modify, or otherwise access Your Content on an AWS Nitro System EC2 instance or encrypted-EBS volume attached to an AWS Nitro System EC2 instance. Access to AWS Nitro System EC2 instance APIs – which enable AWS personnel to operate the system without access to Your Content – is always logged, and always requires authentication and authorization.

External control of your encryption keys with AWS KMS External Key Store

As part of our promise to continue to make the AWS Cloud sovereign-by-design, we pledged to continue to invest in an ambitious roadmap of capabilities, which includes our encryption capabilities. At re:Invent 2022, we took further steps to deliver on this roadmap of encrypt everything everywhere with encryption keys managed inside or outside the AWS Cloud by announcing the availability of AWS Key Management Service (AWS KMS) External Key Store (XKS). This innovation supports our customers who have a regulatory need to store and use their encryption keys outside the AWS Cloud. The open source XKS specification offers customers the flexibility to adapt to different HSM deployment use cases. While AWS KMS also prevents AWS personnel from accessing customer keys, this new capability may help some customers demonstrate compliance with specific regulations or industry expectations requiring storage and use of encryption keys outside of an AWS data center for certain workloads.
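For readers who want to see what this looks like in practice, here is a minimal Boto3 sketch of registering an external key store and creating a KMS key whose key material never leaves your own HSM. The proxy endpoint, URI path, credentials, and external key ID are hypothetical placeholders that would come from your own XKS proxy deployment.

```python
import boto3

kms = boto3.client("kms")

# Register an external key store backed by your own key manager,
# reachable through an XKS proxy. All values below are placeholders.
store = kms.create_custom_key_store(
    CustomKeyStoreName="my-external-key-store",
    CustomKeyStoreType="EXTERNAL_KEY_STORE",
    XksProxyConnectivity="PUBLIC_ENDPOINT",
    XksProxyUriEndpoint="https://xks.example.com",
    XksProxyUriPath="/example/kms/xks/v1",
    XksProxyAuthenticationCredential={
        "AccessKeyId": "EXAMPLEACCESSKEYID",
        "RawSecretAccessKey": "EXAMPLERAWSECRET",
    },
)

# Once the store is connected (connect_custom_key_store), create a KMS
# key whose cryptographic material lives only in the external HSM.
key = kms.create_key(
    Origin="EXTERNAL_KEY_STORE",
    CustomKeyStoreId=store["CustomKeyStoreId"],
    XksKeyId="my-external-hsm-key-id",
    Description="KMS key backed by an external key manager via XKS",
)
```

From there, the key behaves like any other KMS key for encrypt and decrypt calls, while each cryptographic operation is forwarded to your external key manager.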

To accelerate our customers’ ability to adopt XKS for regulatory purposes, we collaborated with external HSM, key management, and integration service providers that our customers trust. To date, Thales, Entrust, Fortanix, DuoKey, and HashiCorp have launched XKS implementations, and Salesforce, Atos, and T-Systems have announced that they are building integrated service offerings around XKS. In addition, many SaaS solutions offer integration with AWS KMS for key management of their encryption offerings. Customers using these solutions, such as the offerings from Databricks, MongoDB, Reltio, Slack, Snowflake, and Zoom, can now use keys in external key managers via XKS to secure data. This allows customers to simplify their key management strategies across AWS and certain SaaS solutions by providing a centralized place to manage access policies and audit key usage.

We remain committed to helping our customers meet security, privacy, and digital sovereignty requirements. We will continue to innovate sovereignty features, controls, and assurances within the global AWS Cloud and deliver them without compromise to the full power of AWS.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Matt Garman

Matt is currently the Senior Vice President of Sales, Marketing, and Global Services at AWS, and also sits on Amazon’s executive leadership S-Team. Matt joined Amazon in 2006 and has held several leadership positions in AWS over that time. He previously served as Vice President of the Amazon EC2 and Compute Services businesses for over 10 years, responsible for P&L, product management, and engineering and operations for all compute and storage services in AWS. He started when AWS first launched in 2006 and served as one of the first product managers, helping to launch the initial set of AWS services. Prior to Amazon, he spent time in product management roles at early-stage internet startups. Matt earned a BS and MS in Industrial Engineering from Stanford University and an MBA from the Kellogg School of Management at Northwestern University.

AWS Security Profile: Tatyana Yatskevich, Principal Solutions Architect for AWS Identity

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-tatyana-yatskevich-principal-solutions-architect-for-aws-identity/

In the AWS Security Profile series, I interview some of the humans who work in AWS Security and help keep our customers safe and secure. In this profile, I interviewed Tatyana Yatskevich, Principal Solutions Architect for AWS Identity.


How long have you been at AWS and what do you do in your current role?

I’ve been at AWS for about five and a half years now. I’ve had several different roles, but I’m currently part of the Identity Solutions team, which is a team of solutions architects who are embedded into the Identity and Control Service. Our team focuses on staying current with customer use cases and emerging problems in the identity space so that we can facilitate the development of new capabilities and prescriptive guidance from AWS.

To keep up with the demand in certain industries, we work with some enterprise customers that operate large cloud environments on AWS. Knowing what these customers need to do to achieve their business outcomes while operating under stringent regulatory compliance requirements helps us provide valuable input into our service and feature development process and support customers in their cloud journey in the most efficient manner.

How did you get started in security?

At the beginning of my career, I mostly just happened to work on security-related projects. I performed security and vulnerability assessments, facilitated remediation work, and managed traditional on-premises security solutions such as web proxies, firewalls, and VPNs. Through these projects, I developed an interest in the security field because of its wide reach and impact, and because it presents a lot of opportunities for growth and problem solving as new challenges arise almost daily. My roles at AWS have been a logical continuation of my security-focused career. Here, I’ve mostly been motivated by empowering security teams to become business enablers, rather than being perceived as blockers to innovation and agility.

How do you explain your job to non-technical friends and family?

I usually give an example of a service or feature that most of us interact with on a regular basis, such as a banking application. I explain that it takes a lot of engineering work to build that application from the ground up and deliver on both user experience and security. That engineering work involves the use of many different technologies that support the user sign-in process or the storage of personal information, like your Social Security or credit card numbers. My job is to help companies that provide these services implement the proper security controls so that your personal information is used in accordance with local laws and isn’t disclosed for unauthorized use.

In your opinion, what’s one of the coolest things happening in identity right now?

I think it’s the increased role of identity, authentication, and authorization controls in the overall security model of newly built applications. It spans from helping to ensure secure workforce mobility now that providing access to business applications from anywhere is critical to business competitiveness, to keeping Internet of Things (IoT) infrastructure protected and operated in accordance with zero trust. The realization of the power and the increasing usage of identity-specific controls to manage access to digital assets is the coolest trend in identity right now.

What are you currently working on that you’re excited about?

One of the areas that I’m highly invested in is data perimeters. A data perimeter is a set of capabilities that help customers keep their data within their organizational boundary and mitigate the risks of data exfiltration or unintended access to data. We have customers in a wide variety of industries, such as the financial sector, telecom, media and entertainment, and public sector, that operate under compliance and regulatory requirements. A lot of those requirements emphasize controls that guard sensitive data against unauthorized access and prevent movement of that data to places outside of the company’s control.
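To make this concrete, one building block of a data perimeter is a resource policy that rejects requests from identities outside your AWS organization. The following Boto3 sketch applies such a policy to an Amazon S3 bucket using the aws:PrincipalOrgID condition key; the bucket name and organization ID are placeholders, and a full data perimeter would layer additional controls (for example, carve-outs for trusted AWS service principals).

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny all S3 actions on the bucket unless the calling principal
# belongs to your AWS organization. Placeholders: bucket name, org ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAccessFromOutsideMyOrg",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
            },
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))
```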

To help customers meet these requirements in a scalable way, we continuously invest in the development of new capabilities. I talk to some of our largest enterprise customers on a regular basis to understand their challenges in this area, and I work with service teams to introduce new capabilities to meet new requirements. I also lead efforts to extend customer-facing guidance and solutions so that customers can design and implement data perimeters on their own. And I present at AWS events to reach more customers, with the most recent being our presentation with Goldman Sachs at re:Invent 2022.

Tell me about that presentation.

I co-presented a chalk talk with Shubham Shukla, Vice President of Cloud Enablement at Goldman Sachs, called Establishing a Data Perimeter on AWS. The session gave an overview of data perimeter capabilities and showcased Goldman Sachs’ experience implementing data perimeter controls at scale in their multi-account AWS environment. What’s cool about that session, I think, is that it’s always good to present about AWS best practices and our view of how certain things should be done, but it’s extra powerful when we include a customer. This is especially true when a large enterprise customer such as Goldman Sachs shares their experience and talks about how they do certain things in practice, like mapping specific requirements to the actual implementation and talking through lessons learned and their perspective on the problem and solution. A lot of our customers are interested in learning from other customers how to build and operate enterprise security controls at scale. We did a similar presentation with Vanguard at re:Inforce 2022, and I look forward to future opportunities to showcase the awesome work being done by our customers.

What is your favorite Amazon Leadership Principle and why?

Customer Obsession. For me, the core of it is building deeper, longer-lasting relationships with our customers and taking their learnings back to our business so that we can work backwards from actual customer needs. Building better products, helping customers meet their business goals, and having wide-reaching impact is what makes me so excited to come to work every day.

What’s the thing you’re most proud of in your career?

As part of my former role as a security consultant in the AWS Professional Services organization, I led security-related projects to either help customers migrate their workloads to AWS or perform security assessments of their existing AWS environment. Part of that role involved developing mechanisms to better engage with customers on security-related topics and help them develop their own security strategy for running workloads on AWS. That work sometimes involved challenging conversations with customers. I would explain the value of the technology that AWS provides and help customers figure out how to implement AWS services to meet both their business and security needs. I took learnings from these conversations and developed some internal assets that helped newer AWS security consultants conduct those conversations more effectively, and I mentored them throughout the process.

If you had to pick an industry outside of security, what would you want to do?

I would be in the travel industry. I absolutely love visiting new places and exploring nature. I love learning the history and culture of different regions, and trying out different cuisines. It’s something that helps me learn more about myself through new experiences and ultimately be a happier person.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Tatyana Yatskevich

Tatyana is a Principal Solutions Architect in AWS Identity. She works with customers to help them build and operate in AWS in the most secure and efficient manner.

AWS achieves an AAA Pinakes rating for Spanish financial entities

Post Syndicated from Daniel Fuertes original https://aws.amazon.com/blogs/security/aws-achieves-an-aaa-pinakes-rating-for-spanish-financial-entities/

Amazon Web Services (AWS) is pleased to announce that we have achieved an AAA Pinakes rating. The scope of this qualification covers 166 services across 25 global AWS Regions.

The Spanish banking association Centro de Cooperación Interbancaria (CCI) developed Pinakes, a rating framework intended to manage and monitor the cybersecurity controls of service providers that Spanish financial entities depend on. The requirements arise from the European Banking Authority guidelines (EBA/GL/2019/02).

Pinakes evaluates the cybersecurity levels of service providers through 1,315 requirements across 4 categories (confidentiality, integrity, availability of information, and general) and 14 domains:

  • Information security management program
  • Facility security
  • Third-party management
  • Normative compliance
  • Network controls
  • Access control
  • Incident management
  • Encryption
  • Secure development
  • Monitoring
  • Malware protection
  • Resilience
  • Systems operation
  • Staff safety

Each requirement is associated with a rating level (A+, A, B, C, D), ranging from the highest, A+ (the provider has implemented the most diligent measures and controls for cybersecurity management), to the lowest, D (minimum security requirements are met).

An independent third-party auditor has verified the implementation status for each section. As a result, AWS has been qualified with A ratings for Confidentiality, Integrity, and Availability, earning an overall rating of AAA.

Our Spanish financial customers can refer to the AWS Pinakes rating to confirm that the AWS control environment is appropriately designed and implemented. By receiving an AAA rating, AWS demonstrates our commitment to meeting the heightened security expectations for cloud service providers set by the CCI. The full evaluation report is available through AWS Artifact upon request; Pinakes participants who are AWS customers can contact their AWS account manager to request access to it.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. To learn more about our other compliance and security programs, see AWS Compliance Programs.

 
If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Daniel Fuertes

Daniel is a Security Audit Program Manager at AWS based in Madrid, Spain. Daniel leads multiple security audits, attestations, and certification programs in Spain and other EMEA countries. Daniel has nine years of experience in security assurance, including previous experience as an auditor for the PCI DSS security framework.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. Previously, he worked at companies such as Microsoft and BBVA in different roles and sectors. Borja is a seasoned security assurance practitioner with years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

A sneak peek at the data protection sessions for re:Inforce 2023

Post Syndicated from Katie Collins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-data-protection-sessions-for-reinforce-2023/

A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.


AWS re:Inforce is fast approaching, and this post can help you plan your agenda. AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14. re:Inforce 2023 features content in the following six areas:

  • Data Protection
  • Governance, Risk, and Compliance
  • Identity and Access Management
  • Network and Infrastructure Security
  • Threat Detection and Incident Response
  • Application Security

The data protection track will showcase services and tools that you can use to help achieve your data protection goals in an efficient, cost-effective, and repeatable manner. You will hear from AWS customers and partners about how they protect data in transit, at rest, and in use. Learn how experts approach data management, key management, cryptography, data security, data privacy, and encryption. This post highlights some of the data protection offerings that you can add to your agenda. To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.
 

“re:Inforce is a great opportunity for us to hear directly from our customers, understand their unique needs, and use customer input to define solutions that protect sensitive data. We also use this opportunity to deliver content focused on the latest security research and trends, and I am looking forward to seeing you all there. Security is everyone’s job, and at AWS, it is job zero.”
Ritesh Desai, General Manager, AWS Secrets Manager

 

Breakout sessions, chalk talks, and lightning talks

DAP301: Moody’s database secrets management at scale with AWS Secrets Manager
Many organizations must rotate database passwords across fleets of on-premises and cloud databases to meet various regulatory standards and enforce security best practices. One-time solutions such as scripts and runbooks for password rotation can be cumbersome. Moody’s sought a custom solution that satisfies the goal of managing database passwords using well-established DevSecOps processes. In this session, Moody’s discusses how they successfully used AWS Secrets Manager and AWS Lambda, along with open-source CI/CD system Jenkins, to implement database password lifecycle management across their fleet of databases spanning nonproduction and production environments.
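Moody’s specific pipeline isn’t reproduced here, but the core of automated password lifecycle management is attaching a rotation function to each secret. A minimal Boto3 sketch, with placeholder secret and Lambda names:

```python
import boto3

secrets = boto3.client("secretsmanager")

# Attach a rotation Lambda function to a database secret and rotate
# it automatically every 30 days. The ARN and name are placeholders.
secrets.rotate_secret(
    SecretId="prod/mysql/app-user",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:MyRotationFunction",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```

A CI/CD system such as Jenkins can then apply this call across a fleet of secrets as part of a standard DevSecOps workflow.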

DAP401: Security design of the AWS Nitro System
The AWS Nitro System is the underlying platform for all modern Amazon EC2 instances. In this session, learn about the inner workings of the Nitro System and discover how it is used to help secure your most sensitive workloads. Explore the unique design of the Nitro System’s purpose-built hardware and software components and how they operate together. Dive into specific elements of the Nitro System design, including eliminating the possibility of operator access and providing a hardware root of trust and cryptographic system integrity protections. Learn important aspects of the Amazon EC2 tenant isolation model that provide strong mitigation against potential side-channel issues.

DAP322: Integrating AWS Private CA with SPIRE and Ottr at Coinbase
Coinbase is a secure online platform for buying, selling, transferring, and storing cryptocurrency. This lightning talk provides an overview of how Coinbase uses AWS services, including AWS Private CA, AWS Secrets Manager, and Amazon RDS, to build out a Zero Trust architecture with SPIRE for service-to-service authentication. Learn how short-lived certificates are issued safely at scale for X.509 client authentication (for example, with Amazon MSK) using Ottr.
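Ottr’s internals aside, issuing a short-lived certificate from AWS Private CA reduces to a single API call. A rough Boto3 sketch, with a placeholder CA ARN and CSR:

```python
import boto3

pca = boto3.client("acm-pca")

# Issue a certificate that expires after one day; the CA ARN and the
# CSR bytes are placeholders for values produced by your workload.
cert = pca.issue_certificate(
    CertificateAuthorityArn="arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/example",
    Csr=b"-----BEGIN CERTIFICATE REQUEST-----\nplaceholder\n-----END CERTIFICATE REQUEST-----",
    SigningAlgorithm="SHA256WITHECDSA",
    Validity={"Value": 1, "Type": "DAYS"},
)
```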

DAP331: AWS Private CA: Building better resilience and revocation techniques
In this chalk talk, explore the concept of PKI resiliency and certificate revocation for AWS Private CA, and discover the reasons behind multi-Region resilient private PKI. Dive deep into different revocation methods like certificate revocation list (CRL) and Online Certificate Status Protocol (OCSP) and compare their advantages and limitations. Leave this talk with the ability to better design resiliency and revocations.
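As a rough illustration of the revocation options this talk compares, the following Boto3 sketch enables both a CRL and OCSP when creating a private CA; the subject name and S3 bucket are placeholders.

```python
import boto3

pca = boto3.client("acm-pca")

# Create a subordinate private CA that publishes a CRL to S3 and
# answers OCSP queries. Names below are placeholders.
response = pca.create_certificate_authority(
    CertificateAuthorityConfiguration={
        "KeyAlgorithm": "RSA_2048",
        "SigningAlgorithm": "SHA256WITHRSA",
        "Subject": {"CommonName": "example.internal"},
    },
    CertificateAuthorityType="SUBORDINATE",
    RevocationConfiguration={
        "CrlConfiguration": {
            "Enabled": True,
            "ExpirationInDays": 7,
            "S3BucketName": "example-crl-bucket",
        },
        "OcspConfiguration": {"Enabled": True},
    },
)
```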

DAP231: Securing your application data with AWS storage services
Critical applications that enterprises have relied on for years were designed for the database block storage and unstructured file storage prevalent on premises. Now, organizations are growing with cloud services and want to bring their security best practices along. This chalk talk explores the features for securing application data using Amazon FSx, Amazon Elastic File System (Amazon EFS), and Amazon Elastic Block Store (Amazon EBS). Learn about the fundamentals of securing your data, including encryption, access control, monitoring, and backup and recovery. Dive into use cases for different types of workloads, such as databases, analytics, and content management systems.
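For a sense of scale, encryption at rest on these services is typically a single flag at creation time. A minimal Boto3 sketch, assuming a placeholder customer managed KMS key:

```python
import boto3

# Placeholder ARN for a customer managed KMS key.
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

# Create an encrypted Amazon EFS file system.
efs = boto3.client("efs")
efs.create_file_system(Encrypted=True, KmsKeyId=KMS_KEY_ARN)

# Create an encrypted Amazon EBS volume.
ec2 = boto3.client("ec2")
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    Encrypted=True,
    KmsKeyId=KMS_KEY_ARN,
)
```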

Hands-on sessions (builders’ sessions and workshops)

DAP353: Privacy-enhancing data collaboration with AWS Clean Rooms
Organizations increasingly want to protect sensitive information and reduce or eliminate raw data sharing. To help companies meet these requirements, AWS has built AWS Clean Rooms. This service allows organizations to query their collective data without needing to expose the underlying datasets. In this builders’ session, get hands-on with AWS Clean Rooms preventative and detective privacy-enhancing controls to mitigate the risk of exposing sensitive data.

DAP371: Post-quantum crypto with AWS KMS TLS endpoints, SDKs, and libraries
This hands-on workshop demonstrates post-quantum cryptographic algorithms and compares their performance and size to classical ones. Learn how to use AWS Key Management Service (AWS KMS) with the AWS SDK for Java to establish a quantum-safe tunnel to transfer the most critical digital secrets and protect them from a theoretical quantum computer targeting these communications in the future. Find out how the tunnels use classical and quantum-resistant key exchanges to offer the best of both worlds, and discover the performance implications.

DAP271: Data protection risk assessment for AWS workloads
Join this workshop to learn how to simplify the process of selecting the right tools to mitigate your data protection risks while reducing costs. Follow the data protection lifecycle by conducting a risk assessment, selecting the effective controls to mitigate those risks, deploying and configuring AWS services to implement those controls, and performing continuous monitoring for audits. Leave knowing how to apply the right controls to mitigate your business risks using AWS advanced services for encryption, permissions, and multi-party processing.

If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Katie Collins

Katie is a Product Marketing Manager in AWS Security, where she brings her enthusiastic curiosity to deliver products that drive value for customers. Her experience also includes product management at both startups and large companies. With a love for travel, Katie is always eager to visit new places while enjoying a great cup of coffee.

A sneak peek at the application security sessions for re:Inforce 2023

Post Syndicated from Paul Hawkins original https://aws.amazon.com/blogs/security/a-sneak-peek-at-the-application-security-sessions-for-reinforce-2023/

A full conference pass is $1,099. Register today with the code secure150off to receive a limited-time $150 discount, while supplies last.


AWS re:Inforce is a security learning conference where you can gain skills and confidence in cloud security, compliance, identity, and privacy. As a re:Inforce attendee, you have access to hundreds of technical and non-technical sessions, an Expo featuring AWS experts and security partners with AWS Security Competencies, and keynote and leadership sessions featuring Security leadership. AWS re:Inforce 2023 will take place in-person in Anaheim, CA, on June 13 and 14.

In line with recent updates to the Security Pillar of the AWS Well-Architected Framework, we added a new application security track to the conference. The new track will help you discover how AWS, customers, and AWS Partners move fast while understanding the security of the software they build.

In these sessions, you’ll hear directly from AWS leaders and get hands-on with the tools that help you ship securely. You’ll hear how organization and culture help security accelerate your business, and you’ll dive deep into the technology that helps you more swiftly get features to customers. You might even find some new tools that make it simpler to empower your builders to move quickly and ship securely.

To learn about sessions from across the content tracks, see the AWS re:Inforce catalog preview.

Breakout sessions, chalk talks, and lightning talks

APS221: From code to insight: Amazon Inspector & AWS Lambda in action
In this lightning talk, see a demo of Amazon Inspector support for AWS Lambda. Inspect a Java application running on Lambda for security vulnerabilities using Amazon Inspector for an automated and continual vulnerability assessment. In addition, explore how Amazon Inspector can help you identify package and code vulnerabilities in your serverless applications.

APS302: From IDE to code review, increasing quality and security
Attend this session to discover how to improve the quality and security of your code early in the development cycle. Explore how you can integrate Amazon CodeWhisperer, Amazon CodeGuru Reviewer, and Amazon Inspector into your development workflow, which can help you identify potential issues and automate your code review process.

APS201: How AWS and MongoDB raise the security bar with distributed ownership
In this session, explore how AWS and MongoDB have approached creating their Security Guardians and Security Champions programs. Learn practical advice on scoping, piloting, measuring, scaling, and managing a program with the goal of accelerating development with a high security bar, and discover tips on how to bridge the gap between your security and development teams. Learn how a guardians or champions program can improve security outside of your dev teams for your company as a whole when applied broadly across your organization.

APS202: AppSec tooling & culture tips from AWS & Toyota Motor North America
In this session, AWS and Toyota Motor North America (TMNA) share how they scale AppSec tooling and culture across enterprise organizations. Discover practical advice and lessons learned from the AWS approach to threat modeling across hundreds of service teams. In addition, gain insight on how TMNA has embedded security engineers into business units working on everything from mainframes to serverless. Learn ways to support teams at varying levels of maturity, the importance of differentiating between risks and vulnerabilities, and how to balance the use of cloud-native, third-party, and open-source tools at scale.

APS331: Shift security left with the AWS integrated builder experience
As organizations start building their applications on AWS, they use a wide range of highly capable builder services that address specific parts of the development and management of these applications. The AWS integrated builder experience (IBEX) brings those separate pieces together to create innovative solutions for both customers and AWS Partners building on AWS. In this chalk talk, discover how you can use IBEX to shift security left by designing applications with security best practices built in and by unlocking agile software to help prevent security issues and bottlenecks.

Hands-on sessions (builders’ sessions and workshops)

APS271: Threat modeling for builders
Attend this facilitated workshop to get hands-on experience creating a threat model for a workload. Learn some of the background and reasoning behind threat modeling, and explore tools and techniques for modeling systems, identifying threats, and selecting mitigations. In addition, explore the process for creating a system model and corresponding threat model for a serverless workload. Learn how AWS performs threat modeling internally, and discover the principles to effectively perform threat modeling on your own workloads.

APS371: Integrating open-source security tools with the AWS code services
AWS, open-source, and partner tooling works together to accelerate your software development lifecycle. In this workshop, learn how to use Automated Security Helper (ASH), an open-source application security tool, to quickly integrate various security testing tools into your software build and deployment flows. AWS experts guide you through the process of security testing locally on your machines and within the AWS CodeCommit, AWS CodeBuild, and AWS CodePipeline services. In addition, discover how to identify potential security issues in your applications through static analysis, software composition analysis, and infrastructure-as-code testing.

APS352: Practical shift left for builders
Providing early feedback in the development lifecycle maximizes developer productivity and enables engineering teams to deliver quality products. In this builders’ session, learn how to use AWS Developer Tools to empower developers to make good security decisions early in the development cycle. Tools such as Amazon CodeGuru, Amazon CodeWhisperer, and Amazon DevOps Guru can provide continuous real-time feedback upon each code commit into a source repository and supplement this with ML capabilities integrated into the code review stage.

APS351: Secure software factory on AWS through the DoD DevSecOps lens
Modern information systems within regulated environments are driven by the need to develop software with security at the forefront. Increasingly, organizations are adopting DevSecOps and secure software factory patterns to improve the security of their software delivery lifecycle. In this builders’ session, we explore the options available on AWS to create a secure software factory aligned with the DoD Enterprise DevSecOps initiative. We focus on the security of the software supply chain as we deploy code from source to an Amazon Elastic Kubernetes Service (Amazon EKS) environment.

If these sessions look interesting to you, join us in California by registering for re:Inforce 2023. We look forward to seeing you there!

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Paul Hawkins

Paul helps customers of all sizes understand how to think about cloud security so they can build the technology and culture where security is a business enabler. He takes an optimistic approach to security and believes that getting the foundations right is the key to improving your security posture.

Alexa Smart Properties creates value for hospitality, senior living, and healthcare properties with Amazon QuickSight Embedded

Post Syndicated from Preet Jassi original https://aws.amazon.com/blogs/big-data/alexa-smart-properties-creates-value-for-hospitality-senior-living-and-healthcare-properties-with-amazon-quicksight-embedded/

This is a guest post by Preet Jassi from Alexa Smart Properties.

Alexa Smart Properties (ASP) is powered by a set of technologies that property owners, property managers, and third-party solution providers can use to deploy and manage Alexa-enabled devices at scale. Alexa can simplify tasks like playing music, controlling lights, or communicating with on-site staff. Our team got its start by building products for hospitality and residential properties, but we have since expanded our products to serve senior living and healthcare properties.

With Alexa now available in hotels, hospitals, senior living homes, and other facilities, we hear stories from our customers every day about how much they love Alexa. These stories range from helping veterans with visual impairments gain access to information to enabling a senior living resident who had fallen and sustained an injury to immediately alert staff. It’s a great feeling when you can say, “The product I work on every day makes a difference in people’s lives!”

Our team builds the software that leading hospitality, healthcare, and senior living facilities use to manage Alexa devices in their properties. We partner directly with organizations that manage their own properties as well as third-party solution providers to provide comprehensive strategy and deployment support for Alexa devices and skills, making sure that they are ready for end-user customers. Our primary goal is to create value for properties through improved customer satisfaction, cost savings, and incremental revenue. We wanted a way to measure that impact in a fast, efficient, easily accessible way from a return on investment (ROI) perspective.

After we had established what capabilities we needed to close our analytics gap, we got in touch with the Amazon QuickSight team to help. In this post, we discuss our requirements and why Amazon QuickSight Embedded was the right fit for what we needed.

Telling the ROI story with data

As a business-to-business-to-consumer product, our team serves the needs of two customers: the end-users who enjoy Alexa-enabled devices at the properties, and the property managers or solution providers that manage the Alexa deployment. We needed to prove to the latter group of customers that deploying Alexa would not only help them delight their customers, but save money as well.

We had the data necessary to tell that ROI story, but we needed an analytics solution that would allow us to provide insights that can be communicated to leadership.

These were our requirements:

  • Embeddable dashboards – We wanted to embed analytics into our Alexa Smart Properties management console, used by both enterprise customers and solution providers. With QuickSight, we embed dashboards that surface aggregated Alexa usage analytics.
  • Easy access to insights – We wanted a tool that was accessible to all of our customers, whether they had a technical background or not. QuickSight provides a beautiful, user-friendly user interface (UI) that our customers can use to interpret their data and analytics.
  • Customizable and rich visuals – Our customers needed to be able to dive deep. QuickSight allows you to drill down into the data to easily create and change whatever visuals you need. Our customers love the look of the visuals and how easy it is to share them with their customers.

Analytics drive engagement

With QuickSight, we can now show detailed device usage information, including quantity and frequency, with insights that connect the dots between that engagement and cost savings. For example, property managers can look at total dialog counts to determine that their guests are using Alexa often, which validates their investment.

The following screenshots show an example of the dashboard our solution providers can access, which they can use to send reports to staff at the properties they serve.

Active devices dashboard

Dialogs dashboard

The following screenshots show an example of the Communications tab, which shows how properties use communications features to save costs (both in terms of time and equipment). Customers save time and money on protective equipment by using Alexa’s remote communication features, which enable caretakers to virtually check in on patients instead of visiting a property in person. These metrics help our customers calculate the cost savings from using Alexa.

Communications tab of analytics dashboard

All actions dashboard

In the last year, the Analytics page in our management console has had over 20,000 page views from customers who are accessing the data and insights there to understand the impact Alexa has had on their businesses.

Insights validate investment

With QuickSight embedded dashboards, our direct-property customers and solution providers now have an easy-to-understand visual representation of how Alexa is making a difference for the guests and patients at each property. Embedded dashboards simplify viewing, analyzing, and gathering insights from key usage metrics, helping both enterprise property owners and solution providers connect the dots between Alexa usage and money saved. Because we use Amazon Redshift to house our data, QuickSight’s seamless integration made it a fantastic choice.
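Our exact integration isn’t shown here, but the heart of embedding a QuickSight dashboard for a registered user is a single API call that returns a short-lived URL to render in your application. A Boto3 sketch with placeholder account, user, and dashboard identifiers:

```python
import boto3

qs = boto3.client("quicksight")

# Generate a short-lived embed URL for a registered QuickSight user.
# Account ID, user ARN, and dashboard ID below are placeholders.
response = qs.generate_embed_url_for_registered_user(
    AwsAccountId="111122223333",
    UserArn="arn:aws:quicksight:us-east-1:111122223333:user/default/embed-reader",
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "example-dashboard-id"}
    },
)
embed_url = response["EmbedUrl"]  # Load this URL in an iframe or embedding SDK.
```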

Going forward, we plan to expand and improve upon the analytics foundation we’ve built with QuickSight by providing programmatic access to data—for example, a CSV file that can be sent to a customer’s Amazon Simple Storage Service (Amazon S3) bucket—as well as adding more data to our dashboards, thereby creating new opportunities for deeper insights.

To learn more about how you can embed customized data visuals, interactive dashboards, and natural language querying into any application, visit Amazon QuickSight Embedded.


About the Author

Preet Jassi is a Principal Product Manager Technical with Alexa Smart Properties. Preet fell in love with technology in grade 5, when he built his first website for his elementary school. Prior to completing his MBA at Cornell, Preet was a UI team lead with over six years of experience as a software engineer. His passion is combining his love of technology (specifically analytics and artificial intelligence) with design and business strategy to build products that customers love, and outside of work he enjoys spending time with family and keeping active. He currently manages the developer experience for Alexa Smart Properties, focusing on making it quick and easy to deploy Alexa devices in properties, and he loves hearing from end customers about how Alexa has changed their lives.

Scaling security and compliance

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/scaling-security-and-compliance/

At Amazon Web Services (AWS), we move fast and continually iterate to meet the evolving needs of our customers. We design services that can help our customers meet even the most stringent security and compliance requirements. Additionally, our service teams work closely with our AWS Security Guardians program to coordinate security efforts and to maintain a high quality bar. We also have internal compliance teams that continually monitor security control requirements from all over the world and engage with external auditors to achieve third-party validation of our services against these requirements.

In this post, I’ll cover some key strategies and best practices that we use to scale security and compliance while maintaining a culture of innovation.

Security as the foundation

At AWS, security is our top priority. Although compliance might be challenging, treating security as an integral part of everything we do at AWS makes it possible for us to adhere to a broad range of compliance programs, to document our compliance, and to successfully demonstrate our compliance status to our auditors and customers.

Over time, as the auditors get deeper into what we’re doing, we can help them improve and refine their approach as well. This increases the depth and quality of the reports that we provide directly to our customers.

The challenge of scaling securely

Many customers struggle with balancing security, compliance, and production. These customers have applications that they want to quickly make available to their own customer base. They might need to audit these applications. The traditional process can include writing the application, putting it into production, and then having the audit team take a look to make sure it meets compliance standards. This approach can cause issues, because retroactively adding compliance requirements can result in rework and churn for the development team.

Enforcing compliance requirements in this way doesn’t scale and eventually causes more complexity and friction between teams. So how do you scale quickly and securely?

Speak their language

The first way to earn trust with development teams is to speak their language. It’s critical to use terms and references that developers use, and to know what tools they are using to develop, deploy, and secure code. It’s not efficient or realistic to ask the engineering teams to do the translation of diverse (and often vague) compliance requirements into engineering specs. The compliance teams must do the hard work of translating what is required into what specifically must be done, using language that engineers are familiar with.

Another strategy to scale is to embed compliance requirements into the way developers do their daily work. It’s important that compliance teams enable developers to do their work just as they normally do, without compliance needing to intervene. If you’re successful at that strategy—and the compliant path becomes the simplest and most natural path—then that approach can lead to a very scalable compliance program that fosters understanding between teams and increased collaboration. This approach has helped break down the barriers between the developer and audit/compliance organizations.

Treat auditors and regulators as partners

I believe that you should treat auditors and regulators as true business partners. An independent auditor or regulator understands how a wide range of customers will use the security assurance artifacts that you are producing, and therefore will have valuable insights into how your reports can best be used. I think people can fall into the trap of treating regulators as adversaries. The best approach is to communicate openly with regulators, helping them understand your business and the value you bring to your customers, and getting them ramped up on your technology and processes.

At AWS, we help auditors and regulators get ramped up in various ways. For example, we have the Digital Audit Symposium, which contains presentations on how we control and operate particular services in terms of security and compliance. We also offer the Cloud Audit Academy, a learning path that provides both cloud-agnostic and AWS-specific training to help existing and prospective auditing, risk, and compliance professionals understand how to audit regulated cloud workloads. We’ve learned that being a partner with auditors and regulators is key in scaling compliance.

Conclusion

Having security as a foundation is essential to driving and scaling compliance efforts. Speaking the language of developers helps them continue to work without disruption, and makes the simple path the compliant path. Although some barriers still exist, especially for organizations in highly regulated industries such as financial services and healthcare, treating auditors like partners is a positive strategic shift in perspective. The more proactive you are in helping them accomplish what they need, the faster you will realize the value they bring to your business.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, and he leads the AWS trade and product compliance team.

AWS Security Profile – Cryptography Edition: Panos Kampanakis, Principal Security Engineer

Post Syndicated from Roger Park original https://aws.amazon.com/blogs/security/aws-security-profile-panos-kampanakis/

In the AWS Security Profile — Cryptography Edition series, we interview Amazon Web Services (AWS) thought leaders who help keep our customers safe and secure. This interview features Panos Kampanakis, Principal Security Engineer, AWS Cryptography. Panos shares thoughts on data protection, cloud security, post-quantum cryptography, and more.


What do you do in your current role and how long have you been at AWS?

I have been with AWS for two years. I started as a Technical Program Manager in AWS Cryptography, where I led some AWS Cryptography projects related to cryptographic libraries and FIPS, but I’m currently working as a Principal Security Engineer on a team that focuses on applied cryptography, research, and cryptographic software. I also participate in standardization efforts in the security space, especially in cryptographic applications. It’s a very active space that can consume as much time as you have to offer.

How did you get started in the data protection/ cryptography space? What about it piqued your interest?

I always found cybersecurity fascinating. The idea of proactively focusing on security and enabling engineers to protect their assets against malicious activity was exciting. After working in organizations that deal with network security, application security, vulnerability management, and security information sharing, I found myself going back to what I did in graduate school: applied cryptography. 

Cryptography is a constantly evolving, fundamental area of security that requires breadth of technical knowledge and understanding of mathematics. It provides a challenging environment for those that like to constantly learn. Cryptography is so critical to the security and privacy of data and assets that it is top of mind for the private and public sector worldwide.

How do you explain your job to your non-tech friends?

I usually tell them that my work focuses on protecting digital assets, information, and the internet from malicious actors. With cybersecurity incidents constantly in the news, it’s an easy picture to paint. Some of my non-technical friends still joke that I work as a security guard!

What makes cryptography exciting to you?

Cryptography is fundamental to security. It’s critical for the protection of data and many other secure information use cases. It combines deep mathematical topics, data information, practical performance challenges that threaten deployments at scale, compliance with various requirements, and subtle potential security issues. It’s certainly a challenging space that keeps evolving. Post-quantum or privacy preserving cryptography are examples of areas that have gained a lot of attention recently and have been consistently growing.

Given the consistent evolution of security in general, this is an important and impactful space where you can work on challenging topics. Additionally, working in cryptography, you are surrounded by intelligent people who you can learn from.

AWS has invested in the migration to post-quantum cryptography by contributing to post-quantum key agreement and post-quantum signature schemes to protect the confidentiality, integrity, and authenticity of customer data. What should customers do to prepare for post-quantum cryptography?

There are a few things that customers can do while waiting for the ratification of the new quantum-safe algorithms and their deployment. For example, you can inventory the use of asymmetric cryptography in your applications and software. Admittedly, this is not a simple task, but with proper subject matter expertise and instrumentation where necessary, you can identify where you’re using quantum-vulnerable algorithms in order to prioritize the uses. AWS is doing this exercise to have a prioritized plan for the upcoming migration.

You can also study and experiment with the potential impact of these new algorithms in critical use cases. There have been many studies on transport protocols like TLS, virtual private networks (VPNs), Secure Shell (SSH), and QUIC, but organizations might have unique uses that haven’t been accounted for yet. For example, a firm that specializes in document signing might require efficient signature methods with small size constraints, so deploying Dilithium, NIST’s preferred quantum-safe signature, could come at a cost. Evaluating its impact and performance implications would be important. If you write your own crypto software, you can also strive for algorithm agility, which would allow you to swap in new algorithms when they become available. 

More importantly, you should push your vendors, your hardware suppliers, the software and open-source community, and cloud providers to adjust and enable their solutions to become quantum-safe in the near future.

What’s been the most dramatic change you’ve seen in the data protection and post-quantum cryptography landscape?

The transition from typical cryptographic algorithms to ones that can operate on encrypted data is an important shift in the last decade. This is a field that’s still seeing great development. It’s interesting how the power of data has brought forward a whole new area of being able to operate on encrypted information so that we can benefit from the analytics. For more information on the work that AWS is doing in this space, see Cryptographic Computing.

In terms of post-quantum cryptography, it’s exciting to see how an important potential risk brought a community from academia, industry, and research together to collaborate and bring new schemes to life. It’s also interesting how existing cryptography has reached optimal efficiency levels that the new cryptographic primitives sometimes cannot meet, which pushes the industry to reconsider some of our uses. Sometimes the industry might overestimate the potential impact of quantum computing on technology, but I don’t believe we should disregard the effect of heavier algorithms on performance, our carbon footprint, energy consumption, and cost. We ought to aim for efficient solutions that don’t undermine security.

Where do you see post-quantum cryptography heading in the future?

Post-quantum cryptography has received a lot of attention, and the transition will start ramping up once the new algorithms are ratified. Although it’s sometimes considered a Herculean effort, some use cases can transition smoothly.

AWS and other industry peers and researchers have already evaluated some post-quantum migration strategies. With proper prioritization and focus, we can address some of the most important applications and gradually transition the rest. There might be some applications that will have no clear path to a post-quantum future, but most will. At AWS, we are committed to making the transitions necessary to protect our customer data against future threats.

What are you currently working on that you look forward to sharing with customers?

I’m currently focused on bringing post-quantum algorithms to our customers’ cryptographic use cases. I’m looking into the challenges that this upcoming migration will bring and participating in standards and industry collaborations that will hopefully enable a simpler transition for everyone. 

I also engage on various topics with our cryptographic libraries teams (for example, AWS-LC and s2n-tls). We build these libraries with security and performance in mind, and they are used in software across AWS.

Additionally, I work with some AWS service teams to help enable compliance with various cryptographic requirements and regulations.

Is there something you wish customers would ask you about more often?

I wish customers asked more often about provable security and how to integrate such solutions in their software. This is a fascinating field that can prevent serious issues where cryptography can go wrong. It’s a complicated topic, and I would like customers to become more aware of the importance of provable security, especially in open-source software, before adopting it in their solutions. Using provably secure software that is designed for performance and compliance with crypto requirements is beneficial to everyone.

I also wish customers asked more about why AWS made certain choices when deploying new mechanisms. In areas of active research, it’s often simpler to experimentally build a proof-of-concept of a new mechanism and test and prove its performance in a controlled benchmark scenario. On the other hand, it’s usually not trivial to deploy new solutions at scale (especially given the size and technological breadth of AWS), to help ensure backwards compatibility, commit to supporting these solutions in the long run, and make sure they’re suitable for various uses. I wish I had more opportunities to go over with customers the effort that goes into vetting and deploying new mechanisms at scale.

You have frequently contributed to cybersecurity publications. What is your favorite recent article, and why?

I’m excited about a vision paper that I co-authored with Tancrède Lepoint called Do we need to change some things? Open questions posed by the upcoming post-quantum migration to existing standards and deployments. We are presenting this paper at the Security Standardisation Research Conference 2023. The paper discusses some open questions posed by the upcoming post-quantum transition. It also proposes some standards updates and research topics on cryptographic issues that we haven’t addressed yet.

How about outside of work—any hobbies?

I used to play basketball when I was younger, but I no longer have time. I spend most of my time with my family and my little toddlers, who have infinite amounts of energy. When I find an opportunity, I like reading books and short stories or watching quality films.

 

Roger Park

Roger is a Senior Security Content Specialist at AWS Security focusing on data protection. He has worked in cybersecurity for almost ten years as a writer and content producer. In his spare time, he enjoys trying new cuisines, gardening, and collecting records.

Panos Kampanakis

Panos has extensive experience with cybersecurity, applied cryptography, security automation, and vulnerability management. In his professional career, he has trained and presented on various security topics at technical events for many years. He has co-authored cybersecurity publications and participated in various security standards bodies to provide common interoperable protocols and languages for security information sharing, cryptography, and PKI. Currently, he works with engineers and industry standards partners to provide cryptographically secure tools, protocols, and standards.

AWS Security Profile: Ryan Dsouza, Principal Solutions Architect

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-ryan-dsouza-principal-solutions-architect/


In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services Security and help keep our customers safe and secure. This interview is with Ryan Dsouza, Principal Solutions Architect for industrial internet of things (IIoT) security. 


How long have you been at AWS and what do you do in your current role?

I’ve been with AWS for over five years and have held several positions working with customers, AWS Partner Network partners, and standards organizations on IoT and IIoT solutions. Currently, I’m a Principal Solutions Architect for IIoT security. In this role, I’m the global technical leader and subject matter expert for operational technology (OT) and IIoT security, which means that I lead our OT/IIoT strategy and roadmap, translate customer requirements into technical solutions, and work with industry standards such as ISA/IEC 62443 to support IIoT and cloud technologies. I also work with our strategic OT/IIoT security partners to design and build integrations and solutions on AWS. And I work with some of our strategic customers to help them plan, assess, and manage the risk that comes from OT/IT convergence and to design, build, and operate more secure, scalable, and innovative IIoT solutions by using AWS capabilities to deliver measurable business outcomes.

How did you get started in the world of OT and IIoT security?

I’ve been working with OT for more than 25 years and with IIoT for the last 10 years. I’ve led digital transformation initiatives at world-class organizations including Accenture, Siemens, General Electric, IBM, and AECOM, serving customers across a wide range of industry verticals such as manufacturing, buildings, utilities, smart cities, and more.

Throughout my career, I witnessed devices across critical infrastructure sectors, such as water, manufacturing, electricity, transportation, oil and gas, and buildings, getting digitized and connected to the internet. I quickly realized that this trend of connected assets and digitization would continue to grow and could outstrip the supply of cybersecurity professionals. Each customer that embraces the digital world faces cybersecurity challenges. At AWS, I work with customers to understand these challenges and provide prescriptive and practical guidance on how to secure their OT environments and IIoT solutions to help ensure safe and secure digital transformation.

What makes OT security different from information technology (IT) security?

OT and IT security are two distinct areas of security that are designed to protect different types of systems and assets. OT security is concerned with the protection of industrial control systems and other related operational technology, such as supervisory control and data acquisition (SCADA) systems, which are used to control and monitor physical processes in critical infrastructure industries such as manufacturing, energy, transportation, buildings, and utilities. The main focus of OT security is on the availability, integrity, safety, and reliability of these systems, as well as protection of the physical equipment that is being controlled. OT cybersecurity supports the safe operation of critical infrastructure. IT security, on the other hand, is concerned with the protection of computer systems, networks, and data from cyberthreats such as hacking, malware, and phishing attempts. The main focus of IT security is on the confidentiality, integrity, and availability of information and systems.

As a result of OT/IT convergence, IIoT, and the industrial digital transformation, our customers now must secure an increasing attack surface and overlapping IT and OT environments. They realize that it is business critical to secure OT/IIoT systems to avoid security events that could cause unplanned downtime and pose a safety risk. I refer to this as “securing cyber-physical systems and enabling safe and secure industrial digital transformation.”

How do you explain your job to your non-tech friends?

I explain that OT is used in buildings, manufacturing, utilities, transportation, and more, and when these systems connect to the internet, they’re exposed to risks. The risks are the same as those faced by IoT devices in our own homes and workplaces—but with greater consequences if compromised because these systems deal with critical infrastructure that our society relies on. I often share the Colonial Pipeline example and explain that I help AWS customers understand the risks and the consequences from a compromise, and design cybersecurity solutions to protect these critical infrastructure assets.

What are you currently working on that you’re excited about?

Our customers use lots of security tools from lots of different vendors. Security is a team sport, and I’m really excited to be working with customers, APN partners, and AWS service teams to build security features and product integrations that make it simpler for customers to monitor and secure OT, IIoT, and the cloud. For example, I’m working with our APN security partners to build integrations with AWS Security Hub and Amazon Security Lake, bring zero trust security solutions to OT environments, and improve security at the industrial edge.

Another project that I’m super excited about is bringing OT/IIoT security solutions to our critical infrastructure customers, including small and mid-sized organizations, by simplifying the deployment, management, procurement, and payment process so that customers can get more value from these AWS security solutions faster.

Another area of focus for me is tracking the fast-evolving critical infrastructure cybersecurity regulations, how they impact our customers, and the role that AWS can play to make it simpler for customers to align with these new security and compliance requirements.

Just like how the cloud transformed IT, I think the cloud will continue to revolutionize OT, and I’m super excited and energized to work with customers and APN partners to move OT and IIoT applications to the cloud and build nearly anything they can imagine faster and more cost-effectively on AWS.

What are the biggest challenges in securing critical infrastructure systems?

With critical infrastructure, the biggest challenge is securing legacy and aging industrial control systems (ICS) and OT that may not have been designed with cybersecurity in mind and that use older operating systems and software, which can be difficult to upgrade and patch. These systems were designed to operate in an air-gapped environment, but there is a growing trend to connect them in new ways to IT systems. As IT and OT converge to support expanding business needs, air-gapped devices and perimeter security are no longer sufficient to defend against modern threats such as ransomware, data exfiltration, denial of service, and cryptocurrency mining. We are seeing a trend to keep ICS/OT systems connected, but in smarter and more secure ways, by using network segmentation, edge gateways, and the hybrid cloud, so that if a problem occurs, you can still run the most important systems in an isolated and disconnected mode. For example, if your corporate systems are compromised with ransomware, you can disconnect your critical infrastructure systems from the external world and continue the most critical operations. There is a growing need to design innovative and highly distributed solution patterns to keep critical information and hybrid systems safe and secure. This is an area of focus for me at AWS.

What else can enterprises do to manage OT/IT convergence and protect themselves from these security risks?

I’ve done multiple presentations, blog posts, and whitepapers on this topic, and even if the solutions sound simple, they can be challenging to implement in industrial environments. I recommend reading the blog posts Managing Organization Transformation for Successful OT/IT Convergence and Assessing OT and IIoT cybersecurity risk, and implementing the Ten security golden rules for IIoT solutions. AWS offers lots of prescriptive guidance and solutions to help enterprises more safely and securely manage OT/IT convergence and mitigate risk with proper planning and implementation across the various aspects of business—people, processes, and technology. I encourage customers to start by focusing on the security fundamentals of securing identities, assessing their risk from OT/IT convergence, and improving their visibility into devices on the network and across the converged OT and IT environment. I also recommend using standards such as ISA/IEC 62443, which are comprehensive, consensus-based, and form a strong basis for securing critical infrastructure systems.

What skills do professionals need to be successful in critical infrastructure security?

Critical infrastructure security sounds harder than it really is. When I train people, I break it down into bite-sized pieces that are simple to understand and implement. There is some mystery around cybersecurity, but it’s just a lot of small parts. You must learn what all the parts are, what the acronyms are, and how they fit together to form cyber-physical systems. When I describe it in a real-world application, most people pick it up quickly.

Curiosity and a desire to continue learning are important characteristics to have, because cybersecurity is a fast-evolving technology field. Empathy is also important because to secure a system, you must have empathy for the people behind the work and why their goals and needs are important. For example, in the OT world, you have operations folks who just want the thing to work. If an alarm is going off on their computer screen and they must react by clicking a button, they don’t want their screen to lock them out so they can’t click that button, because this could cause the plant to have big problems. So, you need to design a solution that matches user access controls with roles and responsibilities so that a plant operator can take corrective actions in an emergency situation.

Another example is patching critical OT systems that have vulnerabilities. This may not be possible due to the risk of causing unplanned downtime, and it could pose a safety risk or result in additional time and cost for recertification due to compliance requirements. You must have empathy for the people in this situation and their needs, and then, as a security professional, design around that so they can still have those things but in a more secure way. For example, you might need to create mechanisms to identify, network isolate, or replace legacy devices that aren’t capable of receiving updates. If you are detail-oriented and have strong curiosity and empathy, you can succeed in the field of critical infrastructure cybersecurity.

What’s your favorite Amazon Leadership Principle, and why?

I have two favorite leadership principles: Learn and Be Curious; and one that I initially discounted, Frugality. I believe that the best way to predict the future is to invent it, which is why I’m never done learning and seeking new ways to solve problems.

My view on the Frugality leadership principle is that we need to be frugal with each other’s time. There are so many competing demands on everyone’s time, and it’s important in a place like AWS to be mindful of that. Make sure you’ve done your due diligence on something before you broadly ask the question or escalate. Being frugal in my view is about being self-sufficient, learning to use self-service tools, and working with limited time or resources to deliver results.

I wake up every morning with the conviction that the world is always changing, and that, to succeed, I have to change faster by learning new skills and being frugal with time and resources.

What’s the thing you’re most proud of in your career?

I’m really proud of working with critical infrastructure customers across a diverse range of industries over the last 25 years and supporting their digital transformation initiatives. In the early part of my career, I was a design and commissioning engineer of industrial automation systems. In this role, I had the opportunity to design and commission new industrial plants and get them into operation, which was extremely fulfilling. I feel fortunate to have joined a company like AWS that takes cybersecurity seriously in developing its products and cloud services, and I’m proud to bring real-world experience in the design and security of cyber-physical systems to our critical infrastructure customers.

If you had to pick an industry outside of engineering, what would you want to do?

Growing up in India in a family of engineers and doctors, there were only two options: engineer or doctor. Both professions have the ability to change the world. Because my mother and brother worked at Siemens, I pursued a career in engineering. If I had to pick an industry outside of engineering, it would have been in the medical field.

 

Author

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Ryan Dsouza

Ryan is a Principal Industrial IoT (IIoT) Security Solutions Architect at AWS. Based in New York City, Ryan helps customers design, develop, and operate more secure, scalable, and innovative IIoT solutions using the breadth and depth of AWS capabilities to deliver measurable business outcomes. Ryan has over 25 years of experience in digital platforms, smart manufacturing, energy management, building and industrial automation, and OT/IIoT security across a diverse range of industries. Before AWS, Ryan worked for Accenture, Siemens, General Electric, IBM, and AECOM, serving customers on their digital transformation initiatives. Ryan is passionate about bringing security to all connected devices and being a champion of building a better, safer, and more resilient world for everyone.

AWS Security Profile: Matt Luttrell, Principal Solutions Architect for AWS Identity

Post Syndicated from Maddie Bacon original https://aws.amazon.com/blogs/security/aws-security-profile-matt-luttrell-principal-solutions-architect-for-aws-identity/


In the AWS Security Profile series, I interview some of the humans who work in Amazon Web Services Security and help keep our customers safe and secure. In this profile, I interviewed Matt Luttrell, Principal Solutions Architect for AWS Identity.


How long have you been at AWS and what do you do in your current role?

I’ve been at AWS for around five years and have worked in a variety of roles, from application architect in Professional Services consulting to solutions architect. In my current role, I work on the Identity Solutions team, which is a group of solutions architects who are embedded directly in the Identity and Control Services team. We have both internal-facing and external-facing functions. Internally, we work with product managers, drive concepts like data perimeters, and generally act as the voice of the customer to our product teams. Externally, we have conversations with customers, present at events, and so on.

How did you get started in security?

My background is in software development. I’ve always had a side interest in security and have always worked for very security-conscious companies. Early in my career, I became CISSP certified and that’s what got me kickstarted in security-specific domains and conversations. At AWS, being involved in security isn’t an optional thing. So, even before I joined the Identity Solutions team, I spent a lot of time working on identity and AWS Identity and Access Management (IAM) in particular, as well as AWS IAM Access Analyzer, while working with security-conscious customers in the financial services industry. As I got involved in that, I was able to dive deep in the security elements of AWS, but I’ve always had a background in security.

How do you explain your job to non-technical friends and family?

I typically tell them that I work in the cloud computing division at Amazon and that my job title is Solutions Architect. Naturally, the next question is, “what does a solutions architect do? I’ve never heard of that.” I explain that I work with customers to figure out how to put the building blocks together that we offer them. We offer a bunch of different services and features, and my job is to teach customers how they all work and interact with each other.

What are you currently working on that you’re excited about?

One of the things our team is working on is data perimeters. Our customers will see continued guidance on data perimeters. We’ve done a lot of work in this space—workshops and presentations at some of our big conferences, as well as blog posts and example repositories.

I’m also putting together some videos that go in depth on IAM policy evaluation and offer prescriptive guidance on writing IAM policies.

In your opinion, what’s one of the coolest things happening in identity right now?

I might be biased here, but I think there’s been a shift in the security industry at large from network-based perimeters in the traditional on-premises world to identity-based perimeters in the cloud. This is where the concept of data perimeters comes into play. Because your resources and identities are distributed, you can no longer walk over and touch the server that’s sitting right next to you. This really puts an extra emphasis on your authentication and authorization controls, as well as the need for visibility into those controls. I think there’s a lot of innovation happening in the identity world because of this increased focus on identity perimeters. You’re hearing about concepts in this area like zero trust, data perimeters, and general identity awareness in all levels of the application and infrastructure stacks. You have services like IAM Access Analyzer to help give you that visibility into your AWS environment and what your identities are doing in terms of who can access what. I think we’ll continue to see growth in these areas because workloads are not becoming less distributed over time.

Tell me about something fun that you’ve done recently at AWS.

Roberto Migli and I presented a 400-level workshop at re:Invent 2022 on IAM policy evaluation, AWS Identity and Access Management (IAM) policy evaluation in action. This workshop introduced a new mental model for thinking about policy evaluation and walked attendees through a number of different policy evaluation scenarios. The idea behind the workshop is that we introduce a scenario and have the attendee try to figure out what the result of the evaluation would be. It spends some extra time comparing how the evaluation of resource-based policies differs from that of identity-based policies. I hope attendees walked away with a better understanding of how policy evaluations work at a deeper level and how they can write better, more secure IAM policies. We presented practical advice on how to structure different types of IAM policies and the different tradeoffs when writing a policy one way compared to another. I hope the mental model we introduced helps customers better reason about how policies will evaluate when they write them in their environment.

What is your favorite Amazon Leadership Principle and why?

This is an easy one. For me, it’s definitely Learn and Be Curious. Something I try to do is put myself in uncomfortable situations because I feel that when I’m uncomfortable, I’m learning and growing because it means I don’t know something. I find comfortable situations boring at times, so I’m always trying to dig in and learn how things work. This can sometimes be distracting, too, because there’s so much to learn and understand in the identity world.

What’s the thing you’re most proud of in your career?

There’s no particular project that I can point to and say, “this is what I’m most proud of.” I’m proud to be a part of the team I’m on now. For my team, Customer Obsession is more than just a slogan. We really advocate on behalf of the customer, listen to the voice of the customer, and push back on features that might not be the best thing for the customer. I think it’s awesome that I get to work for a company that really does advocate on behalf of the customer, and that my voice is heard when I’m trying to be that advocate. That aspect of working at AWS and with my team is what I’m most proud of.

I’m also proud of the mentoring and teaching that I get to do within AWS and within my role specifically. It’s really fulfilling to watch somebody grow and realize that career growth is not a zero-sum game—just because someone else succeeds does not mean that I have to fail.

If you had to pick an industry outside of security, what would you want to do?

I’d probably choose to be a ski instructor. I’m a big fan of skiing, but I don’t get to ski very often because of where I live. I love being out on the mountains, skiing, and teaching. I’m looking for any excuse to spend my days in the mountains.

 

Author

Maddie Bacon

Maddie (she/her) is a technical writer for Amazon Security with a passion for creating meaningful content that focuses on the human side of security and encourages a security-first mindset. She previously worked as a reporter and editor, and has a BA in Mathematics. In her spare time, she enjoys reading, traveling, and staunchly defending the Oxford comma.

Author

Matt Luttrell

Matt is a Principal Solutions Architect on the AWS Identity Solutions team. When he’s not spending time chasing his kids around, he enjoys skiing, cycling, and the occasional video game.

SANS Institute uses Amazon QuickSight to drive transformational security awareness maturity within organizations

Post Syndicated from Carl R. Marrelli original https://aws.amazon.com/blogs/big-data/sans-institute-uses-amazon-quicksight-to-drive-transformational-security-awareness-maturity-within-organizations/

This is a guest post by Carl Marrelli from SANS Institute.

The SANS Institute is a world leader in cybersecurity training and certification. For over 30 years, SANS has worked with leading organizations to help ensure security across their organizations, as well as with individual IT professionals who want to build and grow their security careers. We partner with over 500 organizations and support over 200,000 IT professionals with more than 90 technical training courses and over 40 professional (GIAC) certifications.

Our Security Awareness products include more than 70 instructional modules and have been deployed to over 6.5 million end-users to bring cybersecurity training to each employee within an organization.

As the Security Awareness department in particular began developing product strategies to deliver data-driven insights to customers, we decided early on to use existing analytics services to rapidly build customer-facing analytics solutions. Building on a proven cloud provider would allow us to focus on our core expertise of helping organizations train, learn, and mature their programs instead of spending extra time and resources building and maintaining analytics from scratch.

We identified Amazon QuickSight, a fully managed, cloud-native business intelligence (BI) service, as the product that fit all our criteria. With it, we found an intuitive product with rich visualizations that we could build and grow with rapidly, allowing us to innovate without monetary risk or being locked into cumbersome contracts. We considered other options, but they couldn’t support the licensing model that fit our needs.

In this post, we go over how we use QuickSight to serve our security customers.

Helping manage human risk with data-driven insights

SANS Security Awareness helps organizations use best-in-class security awareness and training solutions to transform their ability to measure and manage human risk. Security awareness programs are initiatives aimed at educating individuals about the importance of information security and the best practices for maintaining the confidentiality, integrity, and availability of information. We deliver expertly authored training materials to organizations, including computer-based video training sessions, interactive learning modules, supplemental materials, and reinforcement curriculum to keep security top-of-mind for all employees.

As organizations rapidly adopt and expand their use of digital technologies in their day-to-day work, the number of touchpoints with humans increases. As threat landscapes become increasingly severe, managing human risk is critical to the success of the security program in any organization. Not only do organizations have to conduct security awareness training programs, but they also need insights into data and metrics that identify points of weakness so they can take data-driven corrective courses of action. As a leader in the space, we wanted to innovate by bringing relevant data-driven insights to our Security Awareness partners and customers in the journey to ensuring human-centered security across their organizations.

New data products to enhance and gamify risk assessment

We built one of our first insights products to support our Behavioral Risk Assessment. This service allows senior security and risk leaders to assess human risk across data handling, digital behavior, and compliance in an organization by individual, team, geography, business unit, and more. Leaders use the assessment to mature their security awareness capability with risk-informed interventions, identify process and procedure gaps, surface shadow IT, and reduce overall awareness training costs by focusing attention on the most important areas of risk.

Behavioral Risk Assessment dashboard with various charts

Delivered via a survey customized to the data types and risk profile of an organization, this assessment allows risk management leaders to more easily understand the data handling practices across roles and departments. Dashboards built in QuickSight empower stakeholders to quickly visualize what areas may need added attention by way of training intervention or updated policy.

Another product area where we invested in analytics to help organizations identify human risk is gamified awareness training. The SANS Scavenger Hunt uses QuickSight in a unique way: as a real-time game scoreboard. Players compete in the hunt while solving cybersecurity-related challenges, giving security teams a fun way to engage the workforce and promote good cyber behaviors.

Security Awareness challenge dashboard

The Scavenger Hunt was widely deployed during global Cybersecurity Awareness Month—a time for security awareness practitioners to shine a light on the purpose and mission of security awareness and also have a little fun. Programs run during this time typically take place outside any regulated training cycle and are not delivered as mandatory training. This being the case, we identified dashboards as a way to gamify the experience and increase engagement among participants. These dashboards, built using QuickSight, gave users access to a leaderboard to not only track their own progress, but also see how they compared to their fellow participants.

Building on the success of our experience with QuickSight and the Scavenger Hunt, we wanted to push the gamification and dashboards concept further so Chief Information Security Officers (CISOs) and security teams could identify and mitigate the human side of ransomware risk. We developed Snack Attack!, a gamified learning experience that shows an organization how employees are performing in six key defensive areas where ransomware can be prevented. In 2021, over 80% of cyber breaches involved human error of some kind. Employees must have a fundamental awareness of cybersecurity and the ability to apply cyber knowledge within the scope of their jobs. Snack Attack! and QuickSight proved to be a great way for senior leadership to visualize and act on areas of human risk and sentiment.

A screenshot of a Snack Attack! dashboard

With Snack Attack!, we looked at Cybersecurity Awareness Month from the viewpoint of the awareness practitioner. The program itself focuses on driving engagement through an entertaining storyline with creative visuals. We chose to use the data from the training to help our customers build their awareness programs going forward. The dashboards included in Snack Attack! give the security awareness practitioner insights into the learned behavior of their users. Quick visualizations of learners’ scores in Snack Attack! can act as an audit of the effectiveness of their existing program and provide a roadmap for future training.

Paving the way in using analytics for customer security

The SANS Institute brings together security awareness training programs with a metrics-based approach through out-of-the-box analytics dashboards so our customers can assess and manage human risk successfully. With QuickSight, we were able to rapidly innovate, developing valuable data products at a speed we could not have achieved otherwise. Without up-front investments to get started, and with the low cost to try with usage-based pricing, we were able to quickly ideate, build, and deploy customer-facing analytic products to drive security awareness within our customer organizations. Our analytics solutions differentiate us from existing enterprise products. With QuickSight, we are able to show organizations where they have cyber risk.

With the delivery of analytics solutions to customers, the SANS Institute is not only a top cybersecurity training, learning, and certification platform, but also a technology provider that helps customers use data and insights to make meaningful change in their organization. Moving forward, we have identified an expansion of QuickSight dashboards into our larger suite of assessments as the next logical step. Along with the Behavioral Risk Assessment, we offer Knowledge and Culture assessments to help security awareness practitioners better understand where and how to apply training and gauge the effectiveness of their programs. Because of the success we have had with QuickSight on our existing projects, we feel that similar dashboards can provide even more value to our customers.

To learn more about how QuickSight can help your business with dashboards, reports, and more, visit Amazon QuickSight.


About the Author

Carl R. Marrelli is the Director of Business Development and Digital Programs at SANS Institute. Based in Charlotte, NC, he has extensive experience in cross-functional team leadership, product management, and product marketing. Previously, as Head of Product at SANS, Carl led the product management team for the Online Training and Security Awareness divisions through a significant growth period. Carl’s unique perspective and innovative ideas support SANS as the company continues its mission to empower cybersecurity practitioners around the world.

New Global AWS Data Processing Addendum

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/new-global-aws-data-processing-addendum/

Navigating data protection laws around the world is no simple task. Today, I’m pleased to announce that AWS is expanding the scope of the AWS Data Processing Addendum (Global AWS DPA) so that it applies globally whenever customers use AWS services to process personal data, regardless of which data protection laws apply to that processing. AWS is proud to be the first major cloud provider to adopt this global approach to help you meet your compliance needs for data protection.

The Global AWS DPA is designed to help you satisfy requirements under data protection laws worldwide, without having to create separate country-specific data processing agreements for every location where you use AWS services. By introducing this global, one-stop addendum, we are simplifying contracting procedures and helping to reduce the time that you spend assessing contractual data privacy requirements on a country-by-country basis.

If you have signed a copy of the previous AWS General Data Protection Regulation (GDPR) DPA, then you do not need to take any action and can continue to rely on that addendum to satisfy data processing requirements. AWS is always innovating to help you meet your compliance obligations wherever you operate. We’re confident that this expanded Global AWS DPA will help you on your journey. If you have questions or need more information, see Data Protection & Privacy at AWS and GDPR Center.

 

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering, and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS cloud, and he leads the AWS trade and product compliance team.

It’s the Amazon QuickSight Community’s 1st birthday!

Post Syndicated from Kristin Mandia original https://aws.amazon.com/blogs/big-data/its-the-amazon-quicksight-communitys-1st-birthday/

Happy birthday Amazon QuickSight Community! We are celebrating 1 year since the launch of our new Community. The Amazon QuickSight Community website is a one-stop-shop where business intelligence (BI) authors and developers from across the globe can ask and answer questions, stay up to date, network, and learn together about Amazon QuickSight. In this post, we celebrate the rapid growth of our first year of the QuickSight Community, discuss new Community features, and share voices from our Community. We also invite you to sign up for the QuickSight Community (it’s easy and free) and get started on your learning journey.

Happy 1st birthday Amazon QuickSight Community!

We’re growing strong

Since last year’s announcement of the new QuickSight Community, we have seen hundreds of thousands of visits to the Community each month. Answers to thousands of searchable questions have been posted, and our online Learning Series now runs every week. An events calendar has been added, and hundreds of learning resources have been posted (how-to videos and articles, Getting Started resources, blog posts, and What’s New posts). Furthermore, a Community Manager has been brought on board, and Community Experts and QuickSight Solution Architects are posting robust answers in the Q&A, supported by AWS Professional Services, AWS Data Lab, and Amazon QuickSight Service Delivery Partners—led by West Loop Strategy. In addition, other QuickSight Partners and content creators are bringing new life to the Community.


New QuickSight Community members at an internal Amazon conference

Why are we excited? Community members share their voices

As we celebrate year one, QuickSight Community members share their voices:

“The community has been an awesome space to exchange ideas, problems, and solutions, which has really helped every one of us in adopting QuickSight more effectively!” #thank you #community — Sagnik Mukherjee, Data and Analytics Architect, QuickSight Expert

“I am so excited for the QuickSight Learning Series! And I am happy to know that there is a supportive community…as I begin my QS journey!” — Rachel Krentz, Configuration Analyst, Madaket

“The community has evolved so much in just one year!” — Darren Demicoli, BI Team Lead, The Mill Adventure, QuickSight Expert

“Happy Birthday QuickSight Community! I’m pretty new here but have found this to be a really great resource with answers to every question I have been able to come up with so far. Cheers!” — Brian Jager, Sales Operations Lead, Amazon Business

“I love being connected directly with other users and solutions architects…The QuickSight Community helps me out a ton not having to spend as much of my time on training when I can point new users to a resource where their questions have likely already been answered.” — Ryane Brady, Programmer Analyst, GROWMARK, Inc., QuickSight Expert

Hear more voices from our Community on our birthday post.

The QuickSight Community: Your one-stop-shop for BI learning

Here’s a quick tour of the QuickSight Community website as well as its new features.

The following figure shows the homepage view and everything that’s available.

A One-Stop-Shop for your Business Intelligence journey includes resource toolbar, searchable Q&A, learning resources, what’s new, blog, events and featured content

Learning Center: What’s new

In addition to growing a robust, searchable Q&A, in the last year we’ve added tons of new learning resources to the QuickSight Community.

When you select Learning Center on the homepage, you can choose from these various resources.

Amazon QuickSight Learning Center – includes getting started resources, how-to videos, webinar videos, articles, and other resources.

New: Learning events

In the last year, we’ve added an Events section to the QuickSight Community. We now offer weekly and monthly learning webinars as well as featured in-person and online events, including ongoing virtual events.

New: Experts and user groups

We’ve been excited to see QuickSight user groups pop up over the past year, including one in the Twin Cities and one in Chicago—both hosted by QuickSight Service Delivery Partners (Charter Solutions and West Loop Strategy, respectively).

In addition, we’ve launched our QuickSight Experts program, honoring top question-answerers in the Community. These rock stars have relentlessly helped their peers take their learning to the next level.

QuickSight Community Expert

Ryane Brady, Programmer Analyst, GROWMARK, Inc., is a QuickSight Expert and top question-answerer in the Community. Here she is showing off her Expert swag. See the 2022 to Q1 2023 list of Experts at the bottom of this post.

First External Viz Challenge

In the last year, AWS hosted its very first external data visualization design competition for invited QuickSight Service Delivery Partners. This enabled QuickSight Partners to show off their visual analytics and storytelling skills. We were proud to feature the 2022 QuickSight Partner Viz Challenge winners on the QuickSight Community.

Amazon QuickSight Service Delivery Partner Viz Challenge winners

Want to be a part of the QuickSight Community?

Sign up now to join the QuickSight Community (it’s free and easy) and take your QuickSight learning to the next level.

As part of the Community, you can:

  • Encourage your teams and colleagues to create QuickSight Community user profiles—signing up is easy
  • Click the bell icon in the top right corner of the page to get notifications and keep up to date with QuickSight news
  • Share your community ideas with the Community Manager

Come in on the ground floor and help us take the QuickSight Community to the next level:

  • Become an Expert and share your knowledge
  • Pioneer a user group online or in your area
  • Contribute a how-to article

If you’re interested in any of these opportunities, reach out to Kristin, the Sr. Online Community Manager for QuickSight, at [email protected].

We’re just getting started

Thank you to everyone who has helped the QuickSight Community grow over the last year! We can’t wait to see what this next year has in store!

Special thanks to our QuickSight Experts who make the QuickSight Community possible (listed alphabetically): Ryane Brady, Biswajit Dash, Darren Demicoli, Max Engelhard, Charles Greenacre, Naveed Hashimi, Todd Hoffman, Mike Khuns, Thomas Kotzurek, Sanjeeb Mohapatra, Sagnik Mukherjee, and David Wong. Incredible gratitude to Lillie Atkins and Mia Heard, who launched the Community a year ago. Also, thanks to Jose Kunnackal John, Jill Florant, the QuickSight Solution Architects, Amazon QuickSight Service Delivery Partners—led by West Loop Strategy, the AWS Service Acceleration team, the AWS ProServe team, and the AWS Data Lab team for your incredible investment in the QuickSight Community.


About the Authors

Kristin Mandia is a Senior Online Community Manager for Amazon QuickSight, Amazon Web Services’ cloud-native, fully managed BI service.


Ian McNamara is a Program Manager and writer on the Customer Success Team for Amazon QuickSight, Amazon Web Services’ cloud-native, fully managed BI service.

Showpad accelerates data maturity to unlock innovation using Amazon QuickSight

Post Syndicated from Shruthi Panicker original https://aws.amazon.com/blogs/big-data/showpad-accelerates-data-maturity-to-unlock-innovation-using-amazon-quicksight/

Showpad aligns sales and marketing teams around impactful content and powerful training, helping sellers engage with buyers and generate the insights needed to continuously improve conversion rates. In 2021, Showpad set forth the vision to use the power of data to unlock innovations and drive business decisions across its organization. Showpad’s legacy solution was fragmented and expensive, with different tools providing conflicting insights and lengthening time to insight. The company decided to use AWS to unify its business intelligence (BI) and reporting strategy for both internal organization-wide use cases and in-product embedded analytics targeted at its customers.

Showpad built new customer-facing embedded dashboards within Showpad eOS™ and migrated its legacy dashboards to Amazon QuickSight, a unified BI service providing modern interactive dashboards, natural language querying, paginated reports, machine learning (ML) insights, and embedded analytics at scale.

In this post, we share how Showpad used QuickSight to streamline data and insights access across teams and customers. Showpad migrated over 70 dashboards with over 1,000 visuals, rolled out the solution to all 600 of its employees, increased dashboard development activity by three times, and reduced dashboard turnaround time from months to weeks. Showpad also launched dashboards and reports to over 1,300 customers worldwide, providing access to tens of thousands of users across all its customers.

Streamlining data-driven decisions by defragmenting the data and reporting architecture

Founded in 2011, dual headquartered in Belgium and Chicago and with offices around the world, Showpad provides a single destination for sales representatives to access all sales content and information, along with coaching and training tools to create informed, upskilled, and trusted buying teams. The platform also provides analytics and insights to support successful information sharing and fuel continuous improvement. In 2021, Showpad decided to take the next step in its data evolution and set forth the vision to power innovation, product decisions, and customer engagement using data-driven insights. This required Showpad to accelerate its data maturity as a company by mindfully using data and technology holistically to help its customers.

But the company’s legacy BI solution and data were fragmented across multiple tools, some with proprietary logic. “Each of these tools were getting data from a different place, and that’s where it gets difficult,” says Jeroen Minnaert, head of data at Showpad. “If each tool tells a different story because it has different data, we won’t have alignment within the business on what this data means.” Showpad also struggled with data quality issues around consistency and ownership, as well as insufficient data access across its targeted user base due to a complex BI access process, licensing challenges, and insufficient education.

Showpad wanted to unify all the data into a single unified interface through a data lake, democratize that data through a BI solution such that autonomous teams across the company could effectively use data, and drive and unlock innovation in the company through advanced insights data, artificial intelligence, and ML. The company already used AWS in other aspects of its business and found that QuickSight would not only meet all its BI and reporting needs with seamless integrations into the AWS stack, but also bring with it several unique benefits unlike incumbents and other tools evaluated. “We chose QuickSight because of its embedded analytic capabilities, serverless architecture, and consumption-based pricing,” says Minnaert. “Using QuickSight to launch interactive dashboards and reporting for our customers, along with the ability for our customer success teams to create or alter dashboard prototypes on our internal-facing QuickSight instance and then promote those dashboards to customers through the product, was a very compelling use case.”

QuickSight would help local data stewards, who weren’t technical but knew the use cases intimately, to create their own dashboards and prototype them with their customers before promoting them through the product. “The serverless model was also compelling because we did not have to pay for server instances nor license fees per reader. With QuickSight, we pay for usage. This makes it easy for us to provide access to everyone by default. This is a key pillar in our ability to democratize data,” says Minnaert.

A screenshot of Showpad's Platform/Adoption dashboard

Architecting a portable data layer and migrating to QuickSight to accelerate time to value

After choosing QuickSight as its solution in November 2021, Showpad took on two streams of development: migrating internal organization-wide BI reporting and building in-product reporting using embedded analytics. Showpad worked closely alongside the QuickSight team to have a smooth rollout. The company involved members of the QuickSight team in its internal communications and set up meetings every 2 weeks to resolve difficult issues quickly.

On the internal reporting front, the data team took a “working backwards” approach to make sure it had the right approach before going all in with all existing dashboards. Showpad selected five difficult and complex dashboards and took 3 months to explore various possibilities using QuickSight. For example, the team determined that user login would be automated with single sign-on and Okta, and built a plan for asset organization, asset promotion, and access controls. The company also used the opportunity to reimagine its data pipeline and architecture. A key architectural decision that Showpad made during this time was to create a portable data layer by decoupling the data transformation from visualization, ML, or ad hoc querying tools and centralizing its business logic. The portable data layer facilitated the creation of data products for varied use cases, made available within various tools based on the needs of the consumer—be it a business analyst, data scientist, or business user.

After the solution approach was decided on and the foundation was built, the team wanted to scale. However, with 70 dashboards containing over 1,000 visuals of varying complexity, including proprietary logic unique to certain tools, and data from over 1,000 tables ingesting from over 20 data sources, the team decided to take a measured approach to the migration. The entire data team of 20 people was “all hands on deck” for the project.

First, Showpad created a landscape of all the dashboards, connecting data sources and dependencies before prioritizing the migration order. The company decided to start with dashboards with the fewest dependencies, like product and engineering dashboards that had a single data source, followed by revenue operations dashboards with a couple of data sources, and lastly with customer success and marketing dashboards that combined product and engineering and revenue operations data. After migration was complete, Showpad validated all the numbers and worked with business stakeholders for quality assurance. It launched the first set of dashboards in April 2022, followed by customer success and marketing dashboards in July 2022. As of January 2023, Showpad’s QuickSight instance includes over 2,433 datasets and 199 dashboards.

Showpad also delivers benefits to its customers by using QuickSight to provide a wide variety of insights, including usage dashboards, industry comparisons, user comparisons, group comparisons, and revenue attribution. On the second workstream of in-product customer-facing reporting, Showpad released its first version of QuickSight reporting to customers in June 2022. “We went through user research, development, and beta tests in a span of 6 months, which was a fast turnaround and a big win for us,” says Minnaert. And Showpad aims to further accelerate the turnaround time to make a report and ship it to a customer.

With the foundational architecture now in place, shipping to a customer can happen in a few sprints, with most of the time spent on iterating and fine-tuning insights instead of engineering a scalable reporting solution. Showpad can also improve the insights it offers to customers. “Using QuickSight in our product makes it a powerful tool,” says Minnaert. “We can launch reporting for our customers so that they can look at and interpret data by themselves.” Showpad can then follow up with tailor-made reporting for each customer using the same data so that it tells a consistent story.

After a dashboard is agreed on, the dashboard can go through Showpad’s automated dashboard promotion process that can take an idea from development to production to a smile on a customer’s face in weeks, not months.

A screenshot of their Shared Space engagement dashboard

Unlocking innovation with self-service BI and rapid prototyping

By enabling analysts and nontechnical users to build dashboards and reports, Showpad drastically reduced the overall turnaround time to build and deliver insights, down from months to weeks. Showpad also increased dashboard development activity by three times across the organization.

Showpad users can quickly prototype reports in a well-known environment—building reports using QuickSight, and then testing them with customers by helping customers understand how the reporting would look and function. “After we settle on reports or dashboards, it does not take much engineering effort to bring them to production,” says Minnaert. “We can make a lot of innovation happen by quickly prototyping and bringing validated prototypes to production.” Showpad’s users and customers also benefit from performance gains, with dashboards loading 10 times faster when using SPICE (Super-fast, Parallel, In-memory Calculation Engine), the robust in-memory engine in QuickSight. It takes only seconds to load dashboards.

Because QuickSight is serverless and uses a session-based pricing model, Showpad expects to see cost savings. By paying per use, Showpad can easily provide access to all its users and customers without purchasing expensive per-reader licenses. Showpad also doesn’t need to pay for server instances or maintain infrastructure for BI. In addition, Showpad can deprecate custom reporting, infrastructure, and multiple tools with the new data architecture and QuickSight. “Much of our cost savings will come from being able to deprecate the custom reporting that we’ve made in the past,” says Minnaert. “The custom reporting used a lot of infrastructure that we no longer need to maintain.” Showpad expects to see a three times increase in projected return on investment in the upcoming year.

Showpad completed its internal BI migration to QuickSight by the end of 2022. Showpad also continues to expand the in-product reporting while continuing to optimize performance for the best customer experience. Showpad hopes to further reduce the time it takes to load a dashboard to under 1 second.

In conjunction with Showpad’s new portable data layer, QuickSight helps users of all types across its organization and customers self-serve data and insights rapidly. Everyone in Showpad gets access to data and insights the day they onboard with Showpad. To make self-service even easier, Showpad will soon launch embedded Amazon QuickSight Q so anyone can ask questions in natural language and receive accurate answers with relevant visualizations that help them gain insights from the data. By helping business users and experts rapidly prototype dashboards and reports in line with user and customer needs, Showpad uses the power of data to unlock innovation and drive growth across its organization. “QuickSight has become our go-to tool for any BI requirement at Showpad—both internal and external customer facing, especially when it comes to correlating data across departments and business units,” says Minnaert.

Get started with QuickSight

Migrating to QuickSight enabled Showpad to streamline data and insights access across teams and customers and reduced overall turnaround time to build and deliver insights from months to weeks.

Learn more about unleashing your organization’s ability to accelerate revenue growth with Showpad. To learn more about how QuickSight can help your business with dashboards, reports, and more, visit Amazon QuickSight.


About the Author

Shruthi Panicker is a Senior Product Marketing Manager with Amazon QuickSight at AWS. As an engineer turned product marketer, Shruthi has spent over 15 years in the technology industry in various roles, from software engineering to solution architecting to product marketing. She is passionate about working at the intersection of technology and business to tell great product stories that help drive customer value.

Create threshold alerts on tables and pivot tables in Amazon QuickSight

Post Syndicated from Lillie Atkins original https://aws.amazon.com/blogs/big-data/create-threshold-alerts-on-tables-and-pivot-tables-in-amazon-quicksight/

Amazon QuickSight previously launched threshold alerts on KPIs and gauge charts. Now, QuickSight supports creating threshold alerts on tables and pivot tables, our most popular visual types. Readers and authors can track goals or key performance indicators (KPIs) and be notified by email when they are met, relying on notifications rather than repeatedly checking dashboards for when their attention is needed. In this post, we share how to create threshold alerts on tables or pivot tables to track important metrics.

Background information

Threshold alerts are a QuickSight Enterprise Edition feature and are available for dashboards consumed on the QuickSight website. Threshold alerts aren’t yet available in embedded QuickSight dashboards or on the mobile app.

Alerts are based on the visual as it exists when the alert is created and aren’t affected by future changes to the visual’s design. This means the visual can be changed or deleted and the alert continues to work, as long as the data in the dataset remains valid. In addition, you can create multiple alerts from one visual and rename them as appropriate.

Finally, alerts respect row-level security (RLS) and column-level security (CLS) rules.

Set up an alert on a table or pivot table

Threshold alerts are configured for dashboards. On a dashboard, there are three different ways to create an alert on a table or pivot table.

First, you can create an alert directly from a pivot table or table. Click the cell you want to create an alert on (if another action is enabled, you may need to right-click for this option to appear). Alerts can only be created on numeric values, not on dates or strings. Then choose Create Alert to start creating the alert.

Let’s assume you want to track the profit coming from online purchases for auto-related merchandise being shipped first class. Choose the appropriate cell and then choose Create Alert.

Create Alert

You’re presented with the creation pane for alerts. The only difference from KPI or gauge visual alerts is that here you’ll also see the other dimensions in the row you’re creating the alert on. These help you confirm which value from the table you selected, because numeric values can be duplicated.

In the following screenshot, the value to track is profit, which is currently $437.39. This is the value that will be compared against the threshold you set. You will also see the dimensions used to define this alert, which are taken from the row of the table. In this case, the Category is Auto, the Segment is Online, and the Ship Mode is First Class.

Now that you have confirmed that the value is correct, you can update the name of the alert (automatically filled with the name of the visual it was created from), set the condition (Is above, Is below, or Is equal to), and pick the threshold value, the notification frequency, and whether you want to be emailed when there is no data.

In the following example, the alert is configured so that you receive an email when the profit is above the threshold of $1,000. The notification frequency is left at Daily at most, and no email is requested when there is no data.

If you have a date field, you will also see an option to control the date. This automatically sets the date field to the most recent period of whatever aggregation you’re looking at, such as hour, day, week, or month. However, you can override this to use the specific date applied to the value you selected, if you prefer.

The following is an example where the data is aggregated by week, so Latest Week is selected rather than the historical Week of Jan 4, 2015.

When you’re happy with the alert, choose Save; the Manage alerts pane then loads.

The Create Alert button is also at the bottom of this pane. This is the second way to start creating an alert from a table or pivot table.

You can also get to this pane from the alert button in the upper-right corner of the dashboard.

Create Alert through the icon on dashboard

If you have no alerts, this takes you directly to the creation pane, where you’re asked to select a visual that supports alerts. If you already have alerts (as previously demonstrated), simply choose Create Alert.

Then select a visual and choose Next.

You’re prompted to select a cell if you have picked a table or pivot table visual.

Then you repeat the same steps as when creating an alert from a cell within a table or pivot table.

Finally, you can start creating an alert from the bell icon on the pivot table or table. This is the third way to create an alert.

bell icon

You’ll be prompted to select a cell from the table, and the creation pane appears.

After you choose the cell that you want to track, you start the creation process just like the first two examples.

Update and delete alerts

To update or delete an alert, navigate back to the Manage alerts pane by choosing the bell icon in the upper-right corner of the dashboard.

Create Alert through the icon on dashboard

You can then choose the options menu (three dots) on the alert you want to manage. Three options appear: Edit alert, View history (to view recent times the alert has breached and notified you), and Delete.

Notifications

You’ll receive an email when your alert breaches the rule you set. The following is an example of what that looks like (this alert was adjusted to trigger when profit is over $100 and to notify as frequently as possible).

notification if alert is breached

The current profit breach is highlighted and the historical numbers are shown along with the date and time of the recorded breaches. You can also navigate to the dashboard by choosing View Dashboard.

Evaluate alerts

The evaluation schedule for threshold alerts is based on the dataset. For SPICE datasets, alert rules are checked against the data after a successful data refresh. For datasets that query your data sources directly, alerts are evaluated daily at a random time between 6:00 PM and 8:00 AM, based on the timezone of the AWS Region your dataset was created in. Dataset owners can set up their own schedules for checking alerts and increase the frequency up to hourly (to learn more, refer to Working with threshold alerts in Amazon QuickSight).

Restrict alerts

The admin for the QuickSight account can restrict who has access to set threshold alerts through custom permissions. For more information, see the section Customizing user permissions in Embed multi-tenant analytics in applications with Amazon QuickSight.

Pricing

Threshold alerts are billed per evaluation and follow the familiar pricing used for anomaly detection, starting at $0.50 per 1,000 evaluations. For example, if you set up an alert on a SPICE dataset that refreshes daily, the alert rule is evaluated 30 times in a month, which costs 30 * $0.50 / 1,000 = $0.015 per month. For more information, refer to Amazon QuickSight Pricing.
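As a quick sanity check on that arithmetic, here is a minimal Python sketch; the helper name is ours, and the rate constant simply restates the pricing above.

RATE_PER_EVALUATION = 0.50 / 1000  # $0.50 per 1,000 evaluations, per the pricing above

def monthly_alert_cost(evaluations_per_day: float, days: int = 30) -> float:
    """Estimate the monthly cost in USD of one threshold alert rule."""
    return evaluations_per_day * days * RATE_PER_EVALUATION

# A SPICE dataset that refreshes daily yields 30 evaluations per month.
print(f"${monthly_alert_cost(1):.3f}")  # prints $0.015, matching the example above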

Conclusion

In this post, you learned how to create threshold alerts on tables and pivot tables within QuickSight dashboards so that you can track important metrics. For more information about how to create threshold alerts on KPIs or gauge charts, refer to Create threshold-based alerts in Amazon QuickSight. Additional information is available in the Amazon QuickSight User Guide.


About the Author

Lillie Atkins is a Product Manager for Amazon QuickSight, Amazon Web Service’s cloud-native, fully managed BI service.

The National Intelligence Center of Spain and AWS collaborate to promote public sector cybersecurity

Post Syndicated from Borja Larrumbide original https://aws.amazon.com/blogs/security/the-national-intelligence-center-of-spain-and-aws-collaborate-to-promote-public-sector-cybersecurity/


The National Intelligence Center and National Cryptological Center (CNI-CCN)—attached to the Spanish Ministry of Defense—and Amazon Web Services (AWS) have signed a strategic collaboration agreement to jointly promote cybersecurity and innovation in the public sector through AWS Cloud technology.

Under the umbrella of this alliance, the CNI-CCN will benefit from AWS support in defining the security roadmap for public administrations with different maturity levels, and in complying with its security requirements and the National Security Scheme.

In addition, CNI-CCN and AWS will collaborate in various areas of cloud security. First, they will work together on cybersecurity training and awareness-raising focused on government institutions, through education, training, and certification programs for professionals who are experts in technology and security. Second, both organizations will collaborate in creating security guidelines for the use of cloud technology, incorporating the shared security model in the AWS cloud and contributing their experience and knowledge to help organizations with the challenges they face. Third, CNI-CCN and AWS will take part in joint events that demonstrate best practices for the deployment and secure use of the cloud, both for public and private sector organizations. Finally, AWS will support cybersecurity operations centers that are responsible for surveillance, early warning, and response to security incidents in public administrations.

Today, AWS has achieved certification in the National Security Scheme (ENS) High category and was the first cloud provider to accredit several security services in the CNI-CCN’s STIC Products and Services catalog (CPSTIC), meaning that its infrastructure meets the highest levels of security and compliance for state agencies and public organizations in Spain. All of this gives Spanish organizations, including startups, large companies, and the public sector, access to AWS infrastructure that allows them to use advanced technologies such as data analysis, artificial intelligence, databases, Internet of Things (IoT), machine learning, and mobile or serverless services to drive innovation.

In addition, the new AWS cloud infrastructure in Spain, the AWS Europe (Spain) Region, allows customers with data residency requirements to store their content in Spain, with the assurance that they maintain full control over the location of their data. The launch of the AWS Europe (Spain) Region also gives customers building applications that must comply with the General Data Protection Regulation (GDPR) access to another secure AWS Region in the European Union (EU) that helps them meet the highest levels of security, compliance, and data protection.
 



 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Borja Larrumbide

Borja is a Security Assurance Manager for AWS in Spain and Portugal. He received a bachelor’s degree in Computer Science from Boston University (USA). Since then, he has worked at companies such as Microsoft and BBVA in different roles and sectors. Borja is a seasoned security assurance practitioner with many years of experience engaging key stakeholders at national and international levels. His areas of interest include security, privacy, risk management, and compliance.

Deep Pool boosts software quality control using Amazon QuickSight

Post Syndicated from Shruthi Panicker original https://aws.amazon.com/blogs/big-data/deep-pool-boosts-software-quality-control-using-amazon-quicksight/

Deep Pool Financial Solutions, an investor servicing and compliance solutions supplier, was looking to build key performance indicators to track its software tests, failures, and successful fixes to pinpoint the specific areas for improvement in its client software. Deep Pool was unable to access the large amounts of data that its project management software provided, so it used AWS to access, manage, and analyze that data more efficiently.

During a larger migration to the AWS Cloud, the company discovered Amazon QuickSight, a cloud-native, serverless business intelligence (BI) service that powers interactive dashboards that let companies make better data-driven decisions. With QuickSight, Deep Pool could democratize access to this unused data and pinpoint areas for improvement in its software development processes, thereby improving the overall quality of its software.

According to Brett Promisel, Chief Operating Officer for Deep Pool, the company wanted to manage the data that it was collecting from a BI point of view to help it make more informed decisions. Because word of mouth and high-quality software are critical in Deep Pool’s industry, the company wanted to add additional rigorous quality controls to its product development and testing so that it continues to provide top-notch, stable software that its clients can rely on.

In this post, we share how Deep Pool boosted its software quality control using QuickSight.

Enhancing software testing with data-driven insights

Continuous improvement is a hallmark of leading organizations. Deep Pool wanted to achieve greater software quality and decided to improve how it monitored and managed its software testing processes using data.

Typical development processes involve extensive testing. First, the original developer tests the code, and then the code is unit tested. Next, larger groups test the code. For all this testing to be successful and result in product improvement, it needs to be measured and tracked so that developers can learn from it and implement improvements during the development process.

Using QuickSight for data-driven insights, Deep Pool has implemented more rigorous software testing and control. It can now count the number of bugs found or tests failed and measure how long it takes to create patches and repair issues. It can also better track its work backlog and the progress of functionality requests. Monitoring this information shows the company whether it has successfully implemented improvements, because a decreasing number of bugs over time is a strong indicator of quality control.

Monitoring development to increase efficiency

Better software test management benefits two groups: internal teams and external customers. Deep Pool can now log and communicate important information, such as when a request is made, how it’s being resolved or addressed, and how it’s being tested. In addition to helping internal teams streamline their processes, the company can also use this data to track communications with customers, which are also stored in its project management software. Such knowledge helps the company determine whether customer requests are being promptly addressed and identify common trends that require action on a larger scale.

Seven development teams at Deep Pool independently write the code for the components of the company’s software products, and they must integrate those components to create the final products. With the granularity of the data that is provided by QuickSight, Deep Pool can thoroughly analyze the development and testing of these products. The company now has the ability to trace software bugs down to their original coding, which makes it simple to quickly locate and address any issues that come up. Deep Pool can also measure the results of those mitigating actions and determine whether its repairs were successful.

Attention to detail helps Deep Pool improve software quality, leading to better products and customers who are more likely to give positive referrals.

“Amazon QuickSight is extremely valuable to us when performing quality control. We can now expose our whole development team to how we’re managing databases and servers and measuring performance in a more optimal way,” says Promisel.

Deep Pool has successfully proven that it can use QuickSight to measure its quality control to improve its software and, ultimately, better support its customers. Since the migration to AWS, the number of software issues discovered and logged has dropped by 57%.

The increased quality control that has come from accessing all its data and optimizing its use has led to better efficiency, letting the company grow without a matching increase in costs.

“We should be able to increase our customer base without adding the equivalent costs. Making sure we are as efficient as possible lets me manage that way,” says Promisel.

Expanding into more data sources

Deep Pool is also exploring how to expand its use of QuickSight to extract and use data from even more of its databases. In the future, it hopes to analyze its internal metrics, such as sales data, and its external client-related information, such as assets and holdings, to guide how it builds its client software and provides even more custom products.

Deep Pool is also committed to helping its employees be innovative and successful by investing in their futures and skill sets. It understands that well-trained employees can make the best use of their tools, which results in better products. As such, the company will continue to invest in the training offered by AWS. Using cutting-edge tools and demonstrating its intent to invest in its employees indicates to Deep Pool’s customers that the company plans to stay innovative and ahead of the technological curve.

To learn more about how QuickSight can help your business with dashboards, reports, and more, visit Amazon QuickSight.


About the Author

Shruthi Panicker is a Senior Product Marketing Manager with Amazon QuickSight at AWS. As an engineer turned product marketer, Shruthi has spent over 15 years in the technology industry in various roles from software engineering, to solution architecting to product marketing. She is passionate about working at the intersection of technology and business to tell great product stories that help drive customer value.

How to use Amazon Macie to reduce the cost of discovering sensitive data

Post Syndicated from Nicholas Doropoulos original https://aws.amazon.com/blogs/security/how-to-use-amazon-macie-to-reduce-the-cost-of-discovering-sensitive-data/

Amazon Macie is a fully managed data security service that uses machine learning and pattern matching to discover and help protect your sensitive data, such as personally identifiable information (PII), payment card data, and Amazon Web Services (AWS) credentials. Analyzing large volumes of data for the presence of sensitive information can be expensive, due to the nature of compute-intensive operations involved in the process.

Macie offers several capabilities to help customers reduce the cost of discovering sensitive data, including automated data discovery, which can reduce your spend with new data sampling techniques that are custom-built for Amazon Simple Storage Service (Amazon S3). In this post, we walk through these capabilities and best practices so that you can cost-efficiently discover sensitive data with Macie.

Overview of the Macie pricing plan

Let’s do a quick recap of how customers pay for the Macie service. With Macie, you are charged based on three dimensions: the number of S3 buckets evaluated for bucket inventory and monitoring, the number of S3 objects monitored for automated data discovery, and the quantity of data inspected for sensitive data discovery. You can read more about these dimensions on the Macie pricing page.

The majority of the cost incurred by customers is driven by the quantity of data inspected for sensitive data discovery. For this reason, we will limit the scope of this post to techniques that you can use to optimize the quantity of data that you scan with Macie.

Not all security use cases require the same quantity of data for scanning

Broadly speaking, you can scan your data in two ways: run full scans on your data, or sample a portion of it. Which method to use depends on your use cases and business objectives. Full scans are useful when you have already identified what you want to scan. A few examples of such use cases are scanning buckets that are open to the internet, monitoring a bucket for unintentionally added sensitive data by scanning every new object, and performing analysis on a bucket after a security incident.

The other option is to use sampling techniques for sensitive data discovery. This method is useful when security teams want to reduce data security risks without scanning their entire S3 estate. For example, just knowing that an S3 bucket contains credit card numbers is enough information to prioritize it for remediation activities.

Macie offers both options, and you can discover sensitive data either by creating and running sensitive data discovery jobs that perform full scans on targeted locations, or by configuring Macie to perform automated sensitive data discovery for your account or organization. You can also use both options simultaneously in Macie.
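If the full-scan route fits your use case, a discovery job can also be created programmatically. The following is a minimal boto3 sketch, not a complete recipe; the job name, account ID, and bucket name are placeholder assumptions.

import boto3

macie = boto3.client("macie2")

# One-time, full-scan sensitive data discovery job against a single bucket.
# The account ID and bucket name below are placeholders.
response = macie.create_classification_job(
    jobType="ONE_TIME",
    name="full-scan-internet-facing-bucket",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["amzn-s3-demo-bucket"]}
        ]
    },
)
print("Started job:", response["jobId"])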

Use automated data discovery as a best practice

Automated data discovery in Macie reduces the quantity of data that needs to be scanned to a fraction of your S3 estate.

When you enable Macie for the first time, automated data discovery is enabled by default. If you already use Macie in your organization, you can enable automated data discovery in the management console of the Amazon Macie administrator account. This capability automatically starts discovering sensitive data in your S3 buckets and builds a sensitive data profile for each bucket. The profiles are organized in a visual, interactive data map, and you can use the data map to identify data security risks that need immediate attention.

Automated data discovery in Macie evaluates the sensitivity level of each of your buckets by using intelligent, fully managed data sampling techniques that minimize the quantity of data scanning needed. During evaluation, objects with similar S3 metadata, such as bucket names, object-key prefixes, file-type extensions, and storage classes, are organized into groups that are likely to have similar content. Macie then selects small but representative samples from each identified group of objects and scans them to detect the presence of sensitive data. A feedback loop uses the results of previously scanned samples to prioritize the next set of samples to inspect.

The automated sensitive data discovery feature is designed to detect sensitive data at scale across hundreds of buckets and accounts, which makes it easier to identify the S3 buckets that should be prioritized for more focused scanning. Because the amount of data that needs to be scanned is reduced, this task can be done at a fraction of the cost of running a full data inspection across all your S3 buckets. The Macie console displays the scanning results as a heat map (Figure 1), which shows the consolidated information grouped by account and whether a bucket is sensitive, not sensitive, or not yet analyzed.

Figure 1: A heat map showing the results of automated sensitive data discovery

There is a 30-day free trial period when you enable automated data discovery on your AWS account. During the trial, the Macie console shows the estimated cost of running automated sensitive data discovery after the trial ends. After the evaluation period, we charge based on the total quantity of S3 objects in your account, as well as the bytes that are scanned for sensitive content. Charges are prorated per day. You can disable this capability at any time.
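If you prefer to manage this capability from code rather than the console, the following sketch uses the automated discovery configuration operations of the boto3 macie2 client, assuming your SDK version includes them.

import boto3

macie = boto3.client("macie2")

# Inspect the current automated sensitive data discovery status.
config = macie.get_automated_discovery_configuration()
print("Automated discovery is", config["status"])  # ENABLED or DISABLED

# Disable the capability; it can be re-enabled at any time, as noted above.
macie.update_automated_discovery_configuration(status="DISABLED")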

Tune your monthly spend on automated sensitive data discovery

To further reduce your monthly spend on automated sensitive data discovery, Macie allows you to exclude buckets from automated discovery. For example, you might exclude buckets that store operational logs if you’re sure they don’t contain sensitive information. Your monthly spend is reduced roughly by the percentage of data in those excluded buckets relative to your total S3 estate.

Figure 2 shows the setting in the heatmap area of the Macie console that you can use to exclude a bucket from automated discovery.

Figure 2: Excluding buckets from automated sensitive data discovery from the heatmap

You can also use the automated data discovery settings page to specify multiple buckets to be excluded, as shown in Figure 3.

Figure 3: Excluding buckets from the automated sensitive data discovery settings page
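If you’d rather script the exclusions shown in Figures 2 and 3 than click through the console, a sketch along the following lines uses the classification scope operations of the boto3 macie2 client; the bucket names are placeholders, and availability of these operations depends on your SDK version.

import boto3

macie = boto3.client("macie2")

# Look up the account's classification scope, then add log buckets to the
# exclusion list for automated discovery. Bucket names are placeholders.
scope_id = macie.list_classification_scopes()["classificationScopes"][0]["id"]

macie.update_classification_scope(
    id=scope_id,
    s3={
        "excludes": {
            "bucketNames": ["amzn-s3-demo-logs-1", "amzn-s3-demo-logs-2"],
            "operation": "ADD",
        }
    },
)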

How to run targeted, cost-efficient sensitive data discovery jobs

Making your sensitive data discovery jobs more targeted makes them more cost-efficient, because it reduces the quantity of data scanned. Consider the following strategies:

  1. Make your sensitive data discovery jobs as targeted and specific as possible in their scope by using the Object criteria settings on the Refine the scope page, shown in Figure 4.
    Figure 4: Adjusting the scope of a sensitive data discovery job

    Options to make discovery jobs more targeted include:

    • Include objects by using the “last modified” criterion — If you know how frequently your classifiable S3-hosted objects are modified, and you want to scan the resources that changed at a particular point in time, use the “last modified” criterion to include in your scope the objects that were modified at a certain date or time.
    • Don’t scan CloudTrail logs — Identify the S3 bucket prefixes that contain AWS CloudTrail logs and exclude them from scanning (the sketch after this list shows one way to express this kind of exclusion in code).
    • Consider using random object sampling — With this option, you specify the percentage of eligible S3 objects that you want Macie to analyze when a sensitive data discovery job runs. If this value is less than 100%, Macie selects eligible objects to analyze at random, up to the specified percentage, and analyzes the data in those objects. If your data is highly consistent and you want to determine whether a specific S3 bucket, rather than each object, contains sensitive information, adjust the sampling depth accordingly.
    • Include objects with specific extensions, tags, or storage size — To fine-tune the scope of a sensitive data discovery job, you can define custom criteria that determine which S3 objects Macie includes in or excludes from a job’s analysis. These criteria consist of one or more conditions that derive from properties of S3 objects. You can exclude objects with specific file name extensions, exclude objects by tag, or exclude objects on the basis of their storage size. For example, you can use a criteria-based job to scan only the buckets associated with specific tag key-value pairs, such as Environment: Production.
  2. Specify S3 bucket criteria in your job — Use a criteria-based job to scan only buckets that have public read/write access. For example, if you have 100 buckets with 10 TB of data, but only two of those buckets containing 100 GB are public, you could reduce your overall Macie cost by 99% by using a criteria-based job to classify only the public buckets.
  3. Consider scheduling jobs based on how long objects live in your S3 buckets. Running jobs more frequently than needed can result in unnecessary costs when objects are added and deleted frequently. For example, if you’ve determined that the S3 objects involved contain high-velocity data that is expected to reside in your S3 bucket for only a few days, and you’re concerned that sensitive data might remain, scheduling your jobs to run at a lower frequency will help drive down costs. In addition, you can clear the Include existing objects checkbox to scan only new objects.
    Figure 5: Specifying the frequency of a sensitive data discovery job

  4. As a best practice, review your scheduled jobs periodically to verify that they are still meaningful to your organization. If you aren’t sure whether one of your periodic jobs continues to be fit for purpose, you can pause it so that you can investigate whether it is still needed, without incurring potentially unnecessary costs in the meantime. If you determine that a periodic job is no longer required, you can cancel it completely.
    Figure 6: Pausing a scheduled sensitive data discovery job

  5. If you don’t know where to start to make your jobs more targeted, use the results of Macie automated data discovery to plan your scanning strategy. Start with small buckets and the ones that have policy findings associated with them.
  6. In multi-account environments, you can monitor Macie’s usage across your organization in AWS Organizations through the usage page of the delegated administrator account. This will enable you to identify member accounts that are incurring higher costs than expected, and you can then investigate and take appropriate actions to keep expenditure low.
  7. Take advantage of the Macie pricing calculator so that you get an estimate of your Macie fees in advance.
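To show how several of these strategies can combine in one job (random sampling, excluding CloudTrail log prefixes, and scanning only new objects on a schedule), here is a boto3 sketch; the job name, schedule, account ID, bucket, and log prefix are illustrative assumptions rather than values from this post.

import boto3

macie = boto3.client("macie2")

# Weekly scheduled job that samples 10% of eligible objects, skips
# CloudTrail log prefixes, and scans only objects created after the job
# starts. Name, schedule, account ID, bucket, and prefix are placeholders.
response = macie.create_classification_job(
    jobType="SCHEDULED",
    name="weekly-targeted-scan",
    scheduleFrequency={"weeklySchedule": {"dayOfWeek": "MONDAY"}},
    samplingPercentage=10,
    initialRun=False,  # comparable to clearing "Include existing objects"
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["amzn-s3-demo-bucket"]}
        ],
        "scoping": {
            "excludes": {
                "and": [
                    {
                        "simpleScopeTerm": {
                            "comparator": "STARTS_WITH",
                            "key": "OBJECT_KEY",
                            "values": ["AWSLogs/"],
                        }
                    }
                ]
            }
        },
    },
)
print("Created job:", response["jobId"])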

Conclusion

In this post, we highlighted the best practices to keep in mind and configuration options to use when you discover sensitive data with Amazon Macie. We hope that you will walk away with a better understanding of when to use the automated data discovery capability and when to run targeted sensitive data discovery jobs. You can use the pointers in this post to tune the quantity of data you want to scan with Macie, so that you can continuously optimize your Macie spend.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on Amazon Macie re:Post.

Want more AWS Security news? Follow us on Twitter.

Nicholas Doropoulos

Nicholas is an AWS Cloud Security Engineer, a bestselling Udemy instructor, and a subject matter expert in AWS Shield, GuardDuty, and Certificate Manager. Outside work, he enjoys spending time with his wife and their beautiful baby son.

Koulick Ghosh

Koulick is a Senior Product Manager in AWS Security based in Seattle, WA. He loves speaking with customers on how AWS Security services can help make them more secure. In his free-time, he enjoys playing the guitar, reading, and exploring the Pacific Northwest.

New AWS Security Blog homepage

Post Syndicated from Anna Brinkmann original https://aws.amazon.com/blogs/security/new-aws-security-blog-homepage/

We’ve launched a new AWS Security Blog homepage! While we have no plans to deprecate our existing list-view homepage, the new security-centered homepage provides readers with more blog info and easy access to the rest of AWS Security. Please bookmark the new page and let us know what you think by adding a comment below.

Thumbnail view of the new page

The new AWS Security Blog homepage

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

About the Authors

Anna Brinkmann

Anna is a technical editor and writer, and she manages the AWS Security Blog. She enjoys creating helpful content and running short, streamlined meetings. In her free time, you can find her hanging out with her son, reading, and cooking with her air fryer.

Ivy Lin

Ivy is a seasoned web production expert. With the nickname “IvyBOT” from previous jobs, she has a passion for transforming raw documents and design comps into web pages with exceptional user experiences. In her spare time, she enjoys sharing the delicious foods of her home country of Taiwan with her friends.

2022 H2 IRAP report is now available on AWS Artifact for Australian customers

Post Syndicated from Patrick Chang original https://aws.amazon.com/blogs/security/2022-h2-irap-report-is-now-available-on-aws-artifact-for-australian-customers/

Amazon Web Services (AWS) is excited to announce that a new Information Security Registered Assessors Program (IRAP) report (2022 H2) is now available through AWS Artifact. An independent Australian Signals Directorate (ASD) certified IRAP assessor completed the IRAP assessment of AWS in December 2022.

The new IRAP report includes six additional AWS services, as well as the new AWS Melbourne Region, now assessed at the PROTECTED level under IRAP. This brings the total number of services assessed at the PROTECTED level to 139.

The following are the six newly assessed services:

For the full list of services, see the IRAP tab on the AWS Services in Scope by Compliance Program page.

AWS has developed an IRAP documentation pack to assist Australian government agencies and their partners in planning, architecting, and assessing risk for their workloads when they use AWS Cloud services.

We developed this pack in accordance with the Australian Cyber Security Centre (ACSC) Cloud Security Guidance and Anatomy of a Cloud Assessment and Authorisation framework, which addresses guidance within the Australian Government Information Security Manual (ISM), the Attorney-General’s Protective Security Policy Framework (PSPF), and the Digital Transformation Agency Secure Cloud Strategy.

The IRAP pack on AWS Artifact also includes newly updated versions of the AWS Consumer Guide and the whitepaper Reference Architectures for ISM PROTECTED Workloads in the AWS Cloud.

Reach out to your AWS representatives to let us know which additional services you would like to see in scope for upcoming IRAP assessments. We strive to bring more services into scope at the PROTECTED level under IRAP to support your requirements.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Patrick Chang

Patrick is the APJ Audit Lead based in Hong Kong. He leads security audits, certifications, and compliance programs across the APJ region. He is a technology risk and audit professional with over a decade of experience. He is passionate about delivering assurance programs that build trust with customers and give them confidence in cloud security.