Tag Archives: AWS Wickr

Three key security themes from AWS re:Invent 2022

Post Syndicated from Anne Grahn original https://aws.amazon.com/blogs/security/three-key-security-themes-from-aws-reinvent-2022/

AWS re:Invent returned to Las Vegas, Nevada, November 28 to December 2, 2022. After a virtual event in 2020 and a hybrid 2021 edition, spirits were high as over 51,000 in-person attendees returned to network and learn about the latest AWS innovations.

Now in its 11th year, the conference featured 5 keynotes, 22 leadership sessions, and more than 2,200 breakout sessions and hands-on labs at 6 venues over 5 days.

With well over 100 service and feature announcements—and innumerable best practices shared by AWS executives, customers, and partners—distilling highlights is a challenge. From a security perspective, three key themes emerged.

Turn data into actionable insights

Security teams are always looking for ways to increase visibility into their security posture and uncover patterns to make more informed decisions. However, as AWS Vice President of Data and Machine Learning, Swami Sivasubramanian, pointed out during his keynote, data often exists in silos; it isn’t always easy to analyze or visualize, which can make it hard to identify correlations that spark new ideas.

“Data is the genesis for modern invention.” – Swami Sivasubramanian, AWS VP of Data and Machine Learning

At AWS re:Invent, we launched new features and services that make it simpler for security teams to store and act on data. One such service is Amazon Security Lake, which brings together security data from cloud, on-premises, and custom sources in a purpose-built data lake stored in your account. The service, which is now in preview, automates the sourcing, aggregation, normalization, enrichment, and management of security-related data across an entire organization for more efficient storage and query performance. It empowers you to use the security analytics solutions of your choice, while retaining control and ownership of your security data.

Amazon Security Lake has adopted the Open Cybersecurity Schema Framework (OCSF), which AWS cofounded with a number of organizations in the cybersecurity industry. The OCSF helps standardize and combine security data from a wide range of security products and services, so that it can be shared and ingested by analytics tools. More than 37 AWS security partners have announced integrations with Amazon Security Lake, enhancing its ability to transform security data into a powerful engine that helps drive business decisions and reduce risk. With Amazon Security Lake, analysts and engineers can gain actionable insights from a broad range of security data and improve threat detection, investigation, and incident response processes.

Strengthen security programs

According to Gartner, by 2026, at least 50% of C-Level executives will have performance requirements related to cybersecurity risk built into their employment contracts. Security is top of mind for organizations across the globe, and as AWS CISO CJ Moses emphasized during his leadership session, we are continuously building new capabilities to help our customers meet security, risk, and compliance goals.

In addition to Amazon Security Lake, several new AWS services announced during the conference are designed to make it simpler for builders and security teams to improve their security posture in multiple areas.

Identity and networking

Authorization is a key component of applications. Amazon Verified Permissions is a scalable, fine-grained permissions management and authorization service for custom applications that simplifies policy-based access for developers and centralizes access governance. The new service gives developers a simple-to-use policy and schema management system to define and manage authorization models. The policy-based authorization system that Amazon Verified Permissions offers can shorten development cycles by months, provide a consistent user experience across applications, and facilitate integrated auditing to support stringent compliance and regulatory requirements.
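
As an illustration of what a policy-based authorization check can look like at runtime, here is a minimal Python (boto3) sketch. The service was announced in preview, so parameter names may differ from what you see in your account, and the policy store ID and entity values below are hypothetical.

import boto3

avp = boto3.client("verifiedpermissions")

# Ask Verified Permissions whether a given principal may perform an action on
# a resource, based on the policies stored in a (hypothetical) policy store.
response = avp.is_authorized(
    policyStoreId="ps-EXAMPLE11111",
    principal={"entityType": "User", "entityId": "alice"},
    action={"actionType": "Action", "actionId": "viewPhoto"},
    resource={"entityType": "Photo", "entityId": "vacation.jpg"},
)
print(response["decision"])  # "ALLOW" or "DENY"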

Additional services that make it simpler to define authorization and service communication include Amazon VPC Lattice, an application-layer service that consistently connects, monitors, and secures communications between your services, and AWS Verified Access, which provides secure access to corporate applications without a virtual private network (VPN).

Threat detection and monitoring

Monitoring for malicious activity and anomalous behavior just got simpler. Amazon GuardDuty RDS Protection expands the threat detection capabilities of GuardDuty by using tailored machine learning (ML) models to detect suspicious logins to Amazon Aurora databases. You can enable the feature with a single click in the GuardDuty console, with no agents to manually deploy, no data sources to enable, and no permissions to configure. When RDS Protection detects a potentially suspicious or anomalous login attempt that indicates a threat to your database instance, GuardDuty generates a new finding with details about the potentially compromised database instance. You can view GuardDuty findings in AWS Security Hub, Amazon Detective (if enabled), and Amazon EventBridge, allowing for integration with existing security event management or workflow systems.
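
For example, the EventBridge integration means you can route findings into your own alerting with a couple of API calls. The following Python (boto3) sketch forwards GuardDuty findings to an SNS topic; the rule name and topic ARN are placeholders.

import json
import boto3

events = boto3.client("events")

# Match all GuardDuty findings; you could narrow this further on detail.type
# if you only want the RDS login activity findings.
events.put_rule(
    Name="guardduty-findings-to-sns",  # hypothetical rule name
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[{
        "Id": "sns-security-alerts",
        "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",  # hypothetical topic
    }],
)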

To bolster vulnerability management processes, Amazon Inspector now supports AWS Lambda functions, adding automated vulnerability assessments for serverless compute workloads. With this expanded capability, Amazon Inspector automatically discovers eligible Lambda functions and identifies software vulnerabilities in application package dependencies used in the Lambda function code. Actionable security findings are aggregated in the Amazon Inspector console, and pushed to Security Hub and EventBridge to automate workflows.
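
If you want to pull those Lambda findings programmatically rather than read them in the console, a sketch along these lines should work with Python and boto3. The filter keys and values are my assumption of how the Inspector API expresses a Lambda-only filter, so verify them against the API reference.

import boto3

inspector = boto3.client("inspector2")

# List findings scoped to Lambda functions; the resourceType filter value is
# an assumption and may need adjusting for your environment.
response = inspector.list_findings(
    filterCriteria={
        "resourceType": [{"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}],
    },
    maxResults=50,
)
for finding in response["findings"]:
    print(finding["severity"], finding["title"])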

Data protection and privacy

The first step to protecting data is to find it. Amazon Macie now automatically discovers sensitive data, providing continual, cost-effective, organization-wide visibility into where sensitive data resides across your Amazon Simple Storage Service (Amazon S3) estate. With this new capability, Macie automatically and intelligently samples and analyzes objects across your S3 buckets, inspecting them for sensitive data such as personally identifiable information (PII), financial data, and AWS credentials. Macie then builds and maintains an interactive data map of your sensitive data in S3 across your accounts and Regions, and provides a sensitivity score for each bucket. This helps you identify and remediate data security risks without manual configuration, and reduces monitoring and remediation costs.
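
To fold the resulting data map into your own reporting, you can read bucket metadata from the Macie API. The following Python (boto3) sketch lists monitored buckets; the sensitivityScore field is an assumption about where the automated-discovery score surfaces in the response, so confirm it against the Macie API reference.

import boto3

macie = boto3.client("macie2")

# List the S3 buckets Macie is monitoring and print a per-bucket score if
# present (field name assumed; verify against the API reference).
response = macie.describe_buckets(maxResults=50)
for bucket in response["buckets"]:
    print(bucket["bucketName"], bucket.get("sensitivityScore", "n/a"))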

Encryption is a critical tool for protecting data and building customer trust. AWS Wickr, a newly launched end-to-end encrypted enterprise communication service, offers advanced security and administrative controls that can help you protect sensitive messages and files from unauthorized access, while working to meet data retention requirements.

Management and governance

Maintaining compliance with regulatory, security, and operational best practices as you provision cloud resources is key. AWS Config rules, which evaluate the configuration of your resources, have now been extended to support proactive mode, so that they can be incorporated into infrastructure-as-code continuous integration and continuous delivery (CI/CD) pipelines to help identify noncompliant resources prior to provisioning. This can significantly reduce time spent on remediation.
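
As a rough sketch of how a pipeline stage might call the proactive evaluation API with Python and boto3: the parameter names below reflect my understanding of the StartResourceEvaluation operation and the resource values are hypothetical, so check them against the AWS Config API reference before relying on them.

import json
import boto3

config = boto3.client("config")

# Ask Config to evaluate a resource definition *before* it is provisioned.
response = config.start_resource_evaluation(
    ResourceDetails={
        "ResourceId": "my-planned-bucket",  # hypothetical
        "ResourceType": "AWS::S3::Bucket",
        "ResourceConfiguration": json.dumps({"BucketName": "my-planned-bucket"}),
        "ResourceConfigurationSchemaType": "CFN_RESOURCE_SCHEMA",
    },
    EvaluationMode="PROACTIVE",
    EvaluationTimeout=60,
)
print(response["ResourceEvaluationId"])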

Managing the controls needed to meet your security objectives and comply with frameworks and standards can be challenging. To make it simpler, we launched comprehensive controls management with AWS Control Tower. You can use it to apply managed preventative, detective, and proactive controls to accounts and organizational units (OUs) by service, control objective, or compliance framework. You can also use AWS Control Tower to turn on Security Hub detective controls across accounts in an OU. This new set of features reduces the time that it takes to define and manage the controls required to meet specific objectives, such as supporting the principle of least privilege, restricting network access, and enforcing data encryption.
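
Enabling a managed control on an OU can also be scripted. Here is a hedged Python (boto3) sketch; both ARNs are placeholders, and real control identifiers are Region-specific ARNs listed in the Control Tower controls reference.

import boto3

ct = boto3.client("controltower")

# Enable a managed control on an organizational unit (both ARNs are placeholders).
response = ct.enable_control(
    controlIdentifier="arn:aws:controltower:us-east-1::control/EXAMPLE_CONTROL",
    targetIdentifier="arn:aws:organizations::111122223333:ou/o-exampleorg/ou-example",
)

# Enablement is asynchronous; poll the operation if you need to wait for it.
status = ct.get_control_operation(operationIdentifier=response["operationIdentifier"])
print(status["controlOperation"]["status"])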

Do more with less

As organizations navigate challenging macroeconomic conditions, security leaders are facing increased budgetary pressures. In his opening keynote, AWS CEO Adam Selipsky emphasized the effects of the pandemic, inflation, supply chain disruption, energy prices, and geopolitical events that continue to impact organizations.

Now more than ever, it is important to maintain your security posture despite resource constraints. Citing specific customer examples, Selipsky underscored how the AWS Cloud can help organizations move faster and more securely. By moving to the cloud, agricultural machinery manufacturer Agco reduced costs by 78% while increasing data retrieval speed, and multinational HVAC provider Carrier Global experienced a 40% reduction in the cost of running mission-critical ERP systems.

“If you’re looking to tighten your belt, the cloud is the place to do it.” – Adam Selipsky, AWS CEO

Security teams can do more with less by maximizing the value of existing controls, and bolstering security monitoring and analytics capabilities. Services and features announced during AWS re:Invent—including Amazon Security Lake, sensitive data discovery with Amazon Macie, support for Lambda functions in Amazon Inspector, Amazon GuardDuty RDS Protection, and more—can help you get more out of the cloud and address evolving challenges, no matter the economic climate.

Security is our top priority

AWS re:Invent featured many more highlights on a variety of topics, such as Amazon EventBridge Pipes and the pre-announcement of GuardDuty EKS Runtime protection, as well as Amazon CTO Dr. Werner Vogels’ keynote, and the security partnerships showcased on the Expo floor. It was a whirlwind week, but one thing is clear: AWS is working harder than ever to make our services better and to collaborate on solutions that ease the path to proactive security, so that you can focus on what matters most—your business.

For more security-related announcements and on-demand sessions, see A recap for security, identity, and compliance sessions at AWS re:Invent 2022 and the AWS re:Invent Security, Identity, and Compliance playlist on YouTube.

If you have feedback about this post, submit comments in the Comments section below.

Anne Grahn

Anne is a Senior Worldwide Security GTM Specialist at AWS based in Chicago. She has more than a decade of experience in the security industry, and has a strong focus on privacy risk management. She maintains a Certified Information Systems Security Professional (CISSP) certification.

Paul Hawkins

Paul helps customers of all sizes understand how to think about cloud security so they can build the technology and culture where security is a business enabler. He takes an optimistic approach to security and believes that getting the foundations right is the key to improving your security posture.

Streaming the AWS Wickr desktop client with Amazon AppStream 2.0

Post Syndicated from Charles H. original https://aws.amazon.com/blogs/architecture/streaming-the-aws-wickr-desktop-client-with-amazon-appstream-2-0/

Amazon Web Services (AWS) customers who use AWS Wickr and want to access the AWS Wickr Windows desktop client through a web browser can use Amazon AppStream 2.0 to stream the application to their users.

Using this architecture, you can provide lightweight access to the AWS Wickr desktop client for users that cannot install it onto their local device. By using AppStream 2.0, you can focus on managing your AWS Wickr network while AppStream 2.0 manages the AWS resources required to host and run the application, scaling automatically and providing on-demand access to your users.

If you want to ensure that AWS Wickr user data persists between streaming sessions, you can make use of AppStream 2.0 user persistence to securely save user data (including AWS Wickr client data) to Amazon Simple Storage Service (Amazon S3).

In this post, we discuss how to build an AppStream 2.0 image for AWS Wickr on Windows, enable persistence for users, and deploy a stack.

Solution overview

This walkthrough deploys an AppStream 2.0 Windows image builder into a single Availability Zone, and then deploys an AppStream 2.0 fleet with internet access and user data persistence across three Availability Zones for high availability.

Review the Regions and Availability Zones documentation and the AWS Regional Services List to choose the best Region for your deployment, and review the networking and bandwidth requirements for User Connections to Amazon AppStream 2.0.

Figure 1. AWS Wickr with Amazon AppStream 2.0 Architecture

Cost

The costs associated with using AWS services when deploying AppStream 2.0 and AWS Wickr in your AWS account can be estimated on the pricing pages for the services used.

Walkthrough

This walkthrough takes you from installing the AWS Wickr for Windows desktop client onto an AppStream 2.0 Image Builder, to configuring the client for persistence, and finally to deploying a fleet and stack for your users to consume:

  1. Install AWS Wickr for Windows client onto your AppStream 2.0 Windows Image Builder.
  2. Configure the client using the Image Assistant to set up user data and settings persistence.
  3. Create an on-demand instance streaming fleet.
  4. Create a user stack and enable user data and settings persistence.
  5. Create a user pool.
  6. Test your streaming application and prove data persistence.

Prerequisites

You should have the following prerequisites:

  • Familiarity with AppStream 2.0 and an existing AWS Wickr account with associated credentials. Currently, it is not possible for an AWS Wickr user to register their account over AppStream 2.0.
  • An AWS account with access to AppStream 2.0. Instructions for setting up an AWS account can be found at Setting Up for Amazon AppStream 2.0.
  • A running AppStream 2.0 image builder, based on the WinServer2019-10-05-2022 base image with the AWS Wickr for Windows client installer downloaded to it.
  • An up-to-date Google Chrome browser (if you want to use your device’s webcam).

Install AWS Wickr

AppStream 2.0 uses Amazon Elastic Compute Cloud (Amazon EC2) instances to stream applications. You launch instances, called image builders, from base images. In this step, you will install the AWS Wickr client onto your image builder before moving on to configuration.

  1. Connect to your image builder as an Administrator.
  2. Run the AWS Wickr installer and carry out the following steps:
    1. On the Welcome to the AWS Wickr Setup Wizard prompt, choose Next.
    2. For Installation Type, choose Everybody (all users).
    3. Choose Next to accept the default installation folder.
    4. Choose Install.
  3. Once the AWS Wickr client has finished installing, uncheck Launch Wickr on the final screen, and then choose Finish.

Run the Image Assistant

To create your own custom image, connect to an image builder instance, install and configure your applications for streaming with guidance from the Image Assistant, and then create your image by taking a snapshot of the image builder instance:

  1. Run the Image Assistant shortcut found on the desktop.
  2. Under 1. ADD APPS, choose Add App.
  3. Choose Program Files, Amazon Web Services, Wickr, and AWS Wickr.
  4. Scroll to the bottom, choose the Wickr application, and then choose Open.
  5. In the pop-up that appears, customize the Name and Display Name (if needed), and then in the Launch Parameters box, enter -datalocation “C:\Users\%username%” (Figure 2).

    Figure 2. Updating Launch Parameters

    Note: If you want to mask the name of the application within the URL used for streaming, replace the Display Name field text with something else.

  6. Choose Save.
  7. Wickr will now appear as an app on the Image Assistant screen. Choose Next.
  8. Under the 2. CONFIGURE APPS stage, follow the instructions from 1-5, choose Save settings, and choose Next (Figure 3).

    Figure 3. Configuring the Application with Image Assistant

  9. Under the 3. TEST stage, carry out steps 1-3.
  10. Under the 4. OPTIMIZE stage, select Launch.
  11. As instructed, once the app has launched successfully, select Continue and wait for the app to be optimized.
  12. Under the 5. CONFIGURE IMAGE stage, give the image that will be created a Name, Display name, and Description.
  13. Choose the Always use latest agent version checkbox. This ensures that new image builders or fleet instances that are launched from your image always use the latest AppStream 2.0 agent version, and then select Next (Figure 4).

    Figure 4. Finalizing the Application Image

  14. Under the 6. REVIEW stage, select Disconnect and Create Image. Your session will be terminated while your image is being created.

Create an AppStream 2.0 fleet

With AppStream 2.0, you create fleet instances and stacks as part of the process of streaming applications. A fleet consists of streaming instances that run the image that you specify.

  1. Return to the AppStream 2.0 management console.
  2. Choose Fleets from the menu on the left.
  3. Choose your Fleet type (this demonstration uses On-Demand) and choose Next.
  4. Give the Fleet a Name, Display name, and a Description.
    Note: If you wish to mask the name of the fleet within the URL used for streaming, replace the Name and Display name section with something else.
  5. Choose your Fleet instance type (this walkthrough uses a general purpose stream.standard.large instance type).
  6. Adjust the Fleet capacity as required, leave the other sections as they’re displayed by default, and choose Next.
  7. On the Choose an Image screen, choose the image you created earlier.
  8. On the Configure network screen, choose Enable default internet access and choose your VPC, along with three subnets that you will deploy into. Finally, choose the Security group that you will use to restrict access (the default is to allow access from all IPs). Choose Next.
  9. Review your settings and then choose Create fleet.
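
If you prefer to script this step instead of clicking through the console, the following Python (boto3) sketch creates and starts an equivalent on-demand fleet; the fleet name, image name, subnet IDs, and security group ID are placeholders for your own values.

import boto3

appstream = boto3.client("appstream")

# Create an on-demand fleet from the custom image built earlier.
appstream.create_fleet(
    Name="wickr-fleet",                      # placeholder names and IDs
    DisplayName="wickr-fleet",
    ImageName="wickr-image",                 # the image created earlier
    InstanceType="stream.standard.large",
    FleetType="ON_DEMAND",
    ComputeCapacity={"DesiredInstances": 2},
    VpcConfig={
        "SubnetIds": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],
        "SecurityGroupIds": ["sg-example"],
    },
    EnableDefaultInternetAccess=True,
)

appstream.start_fleet(Name="wickr-fleet")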

Create an AppStream 2.0 stack

A stack consists of an associated fleet, user access policies, and storage configurations. In this step, you will create a stack, enable application settings persistence, and associate the stack with the fleet you provisioned previously.

  1. From the menu in the AppStream 2.0 management console, choose Stacks.
  2. Choose Create Stack.
  3. Give the stack a Name, and optionally a Display name and Description.
  4. Leave all other options as they’re displayed by default and choose Next.
  5. In the Enable storage window, ensure that Enable home folders is selected, leave all the settings as they are, and choose Next.
  6. In the Edit user settings section, modify the copy and paste functionality to your requirements, and ensure that Enable application settings persistence is selected. Choose Next.
  7. Review your configuration and choose Create stack.
  8. You will now be presented with an overview of the stack you have created. Choose the Action dropdown list and choose Associate fleet.
  9. From the dropdown list, choose the fleet that you previously provisioned and choose Associate.
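
The equivalent stack creation and fleet association can also be done with Python and boto3, as in this sketch; the stack and fleet names are placeholders, and the settings group is an arbitrary label for the persisted application settings.

import boto3

appstream = boto3.client("appstream")

# Create a stack with home folders and application settings persistence,
# then associate it with the fleet created earlier.
appstream.create_stack(
    Name="wickr-stack",                                    # placeholder name
    StorageConnectors=[{"ConnectorType": "HOMEFOLDERS"}],  # home folder persistence
    ApplicationSettings={"Enabled": True, "SettingsGroup": "wickr"},
)

appstream.associate_fleet(FleetName="wickr-fleet", StackName="wickr-stack")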

Create an AppStream 2.0 user pool

Users access application stacks through a persistent URL, logging in with their email address and a password that they choose. In this step, you will create a user and assign it to a stack so you can access your AppStream 2.0 streaming session.

  1. Choose User pool.
  2. Choose Create user.
  3. Enter an email address, first name, and last name, and then choose Create User.
  4. In the User pool window, choose the User and then choose Action.
  5. Choose Assign stack, choose the newly created stack, and choose Send email notification to user.
  6. Choose Assign stack.
  7. You will receive two emails. Follow the instructions on the one titled Start accessing your apps using AppStream 2.0 to access your app.
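
To script user creation and stack assignment instead, a Python (boto3) sketch along these lines should work; the email address and stack name are placeholders.

import boto3

appstream = boto3.client("appstream")

# Create a user-pool user and grant them access to the stack.
appstream.create_user(
    UserName="jane.doe@example.com",
    FirstName="Jane",
    LastName="Doe",
    AuthenticationType="USERPOOL",
)

appstream.batch_associate_user_stack(
    UserStackAssociations=[{
        "StackName": "wickr-stack",
        "UserName": "jane.doe@example.com",
        "AuthenticationType": "USERPOOL",
        "SendEmailNotification": True,
    }]
)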

Launch your AppStream 2.0 streaming session

In this step, you will use the user created earlier to log in to an AppStream 2.0 streaming session. You will then prove AWS Wickr user data persistence by exiting and logging back into your session.

Note: You will only be able to launch your session once your fleet has been provisioned. This can take around 15-20 minutes.

  1. Choose the login page link from the email you received in the previous section and log in.
  2. You will be presented with the AWS Wickr client icon. Choose it to start your session.
  3. Log in to the AWS Wickr client with your credentials.
  4. Because you have application settings persistence enabled, you can close the tab and the session will pick up where you left off when you log back in (Figure 5).

Figure 5. Accessing the Streaming Application

Cleanup

To avoid incurring future charges, delete the stack, user, fleet, custom image, and image builder that you have created.

Conclusion

In this post, we demonstrated how customers can take advantage of AppStream 2.0 as a managed service to enable the provisioning of AWS Wickr clients for users, with persistence between sessions, through a web browser.

AWS Wickr – A Secure, End-to-End Encrypted Communication Service For Enterprises With Auditing And Regulatory Requirements

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/aws-wickr-a-secure-end-to-end-encrypted-communication-service-for-enterprises-with-auditing-and-regulatory-requirements/

I am excited to announce the availability of AWS Wickr, an enterprise communications service with end-to-end encryption, that allows businesses and public sector organizations to communicate more securely, enabling customers to meet auditing and regulatory requirements like e-discovery, legal hold, and FOIA requests. Unlike many enterprise communication tools, Wickr uses end-to-end encryption mechanisms to ensure your messages, files, voice, or video calls are solely accessible to their intended recipients.

The flexible administrative controls make it easy for your Wickr administrator to manage communication channels and retain information when needed to meet regulatory requirements. The retained information is stored on the servers you choose and stays entirely under your control.

End-to-End Encryption
Wickr provides secure communication between two or more correspondents. This means the system provides authenticity and confidentiality: no unauthorized party can inject a message into the system, and no unintended party can access or read the communications unless a correspondent shares them.

Each message gets a unique AES encryption key and a unique ECDH public key to negotiate the key exchange with other recipients. The message content (text, files, audio, or video) is encrypted on the sending device (your iPhone, for example) using the message-specific AES key. That AES key is exchanged with recipients via an Elliptic Curve Diffie-Hellman key exchange over the P-521 curve (ECDH P-521). This ensures that only the intended recipients have the message-specific AES key needed to decrypt the message.

Message-specific keys are passed through a key derivation function that binds the key exchange to a recipient device. When the recipient adds devices to their account later on (for example, I add a macOS client to my Wickr account, in addition to my iPhone), the new device will not see the message history by default. There is a way to migrate history from your old device to your new device if you have the two devices at hand and single sign-on (SSO) configured.
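
To make the general pattern concrete, here is an illustrative Python sketch of a per-message ECDH-plus-AES construction using the open-source cryptography library. This is not Wickr's protocol code; it only shows how a message-specific key can be agreed with ECDH over P-521, derived through a KDF, and used with AES to encrypt a message.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustration only: a per-message ECDH (P-521) agreement feeding a KDF, whose
# output encrypts the message body with AES-GCM. Wickr's real protocol adds
# authentication, device binding, and much more.

# Each party has an EC key pair on curve P-521.
sender_key = ec.generate_private_key(ec.SECP521R1())
recipient_key = ec.generate_private_key(ec.SECP521R1())

# The sender derives a shared secret with the recipient's public key...
shared_secret = sender_key.exchange(ec.ECDH(), recipient_key.public_key())

# ...and runs it through a KDF to get a message-specific AES key.
message_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"per-message key"
).derive(shared_secret)

# Encrypt the message body with AES-GCM under that key.
nonce = os.urandom(12)
ciphertext = AESGCM(message_key).encrypt(nonce, b"hello from my kitchen", None)

# The recipient performs the same ECDH with the sender's public key and
# derives the identical message key to decrypt.
recipient_secret = recipient_key.exchange(ec.ECDH(), sender_key.public_key())
recipient_aes = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"per-message key"
).derive(recipient_secret)
print(AESGCM(recipient_aes).decrypt(nonce, ciphertext, None))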

I drew the below diagram to show how the key exchange works at a high level.

Figure: Wickr key exchange at a high level

The Wickr secure messaging protocol is open and documented, allowing the community to inspect it. The source code we use in Wickr clients to implement the secure messaging protocol is available to audit and review.

Wickr Client Application
The Wickr client application is very familiar to end users and easy to get started with. It is available for Windows, macOS, Linux, Android, and iOS devices. Once downloaded from a preferred app store and registered, users can create chat rooms or send messages to individual recipients. They may use emoticons to react to messages, exchange files, and make audio and video calls.

Here I am on macOS connected with me on iOS in my kitchen.

Figure: Wickr text messaging and video calling

Wickr for the Administrator
Wickr administration is now integrated and available in the AWS Management Console. You can control access to Wickr administration using familiar AWS Identity and Access Management (IAM) access control and policies. It is integrated with AWS Cloud Development Kit (AWS CDK) and Amazon CloudWatch for monitoring.

A Wickr administrator manages networks. A network is a group of users and its related configuration, similar to Slack workspaces. Users might be added manually or imported. Most organizations will federate users through an existing identity system. Wickr will federate users with any OpenID Connect-compliant system.

A Wickr network is also the place where Wickr administrators configure security groups to manage messaging, calling, security, and federation settings. It also allows Wickr administrators to configure logging, data retention, and bots.

To get started, I select Wickr in the AWS Management Console. Then, I select Create a network. I enter a Network name, and I select Continue.

Figure: Creating a Wickr network in the AWS Management Console

The Wickr page of the Management Console lets you configure the Wickr network, the user federation with other Wickr networks, and more.

In this demo, I don’t use single sign-on. I manually add two users by selecting Create new user. Once added, the user receives an invitation email with links to the client app. The client app asks the user to define a password at first use.

Customer-Controlled Data Retention and Bots
Wickr allows administrators to selectively retain information that must be maintained for regulatory needs into a secure, controlled data store that they manage. No one other than the recipient—including AWS—has access to keys to decrypt conversations or documents, giving organizations full control over their data. It helps organizations in the public sector to use Wickr for their secure collaboration needs.

Data retention is implemented as a process added to conversations, like a participant. The data retention process participates in the key exchange, just like any recipient, allowing it to decrypt the messages. The data retention process can run anywhere: on-premises, on an Amazon Elastic Compute Cloud (Amazon EC2) virtual machine, or at any location of your choice. Once data retention is configured in the console, Wickr administrators may start the data retention process and register it with their Wickr network.

Figure: Wickr data retention (compliance) architecture

The data retention process is available as a Docker container for ease of deployment. The process stores clear text messages on the storage of your choice: a local or remote file system or Amazon Simple Storage Service (Amazon S3).

To try this process, I follow the documentation. I open the Wickr administration page and select Data Retention under Network Settings.

Figure: Data Retention settings in the Wickr administration page

I copy the docker command, the Username, and the Password (not shown in the previous screenshot). Then, I connect to a Linux EC2 instance I created beforehand. I create a local directory for data retention, and I start the container.

docker run -v /home/ec2-user/retention_34908291_bot:/tmp/retention_34908291_bot \
       --restart on-failure:5 \
       --name="retention_34908291_bot" \
       -it \
       -e WICKRIO_BOT_NAME='retention_34908291_bot' \
       wickr/bot-retention-cloud:5.109.08.03

The application prompts for the username and password collected in the console. When the process starts, I return to the console and activate the Data Retention switch at the bottom of the screen.

Note that for this demo, I choose to store data on the local file system. In reality, you might want to use S3 to securely store all your organization communications, encrypt the data at rest, and use the mechanisms you already have in place to control access to this data. The data retention process natively supports integration with AWS Secrets Manager and S3.

As a user, I exchange a few messages in a Wickr room. Then, as an administrator, I look at the data captured. I can observe that the data retention process captured the message and its metadata in JSON format.

Figure: Message content and metadata captured by the data retention process

When configuring the data retention capability, compliance and security officers can audit and review communications in a secure and controlled data store.

The retention bot is not the only bot available for Wickr. The Wickr Broadcast Bot allows you to broadcast messages to all of the members of your network or to specific security groups. Developers can also create workflows using Wickr Bots to automate chat-based workflows and integrate them with other systems. More generally, a bot is a process integrated into a conversation or chat room that can receive and act on messages. Developers write bots in Node.js. Bot processes securely integrate with a Wickr network, as defined by the network administrator. They are typically packaged as Docker containers for ease of deployment at the location of your choice. If you are a developer, have a look at the Wickr bot developer documentation to learn all the details.

Pricing and availability
Wickr is available in the US East (N. Virginia) AWS Region.

Wickr is free for the first 3 months for individuals and teams of up to 30 users looking for a more secure workspace. For organizations with more than 30 users, there is a standard plan starting at $5 per user per month and a premium plan at $15 per user per month. The premium plan adds features and retention capabilities such as granular administrative controls, a client-side data expiration timer of up to 1 year, data retention, and e-discovery. As usual, there are no upfront fees or long-term commitments. You pay per user and per month (annual billing is available; contact us). Have a look at the pricing page for details.

Create your first Wickr network today!

— seb

AWS Week in Review – August 1, 2022

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-august-1-2022/

AWS re:Inforce returned to Boston last week, kicking off with a keynote from Amazon Chief Security Officer Steve Schmidt and AWS Chief Information Security Officer C.J. Moses:

Be sure to take some time to watch this video and the other leadership sessions, and to use what you learn to take some proactive steps to improve your security posture.

Last Week’s Launches
Here are some launches that caught my eye last week:

AWS Wickr uses 256-bit end-to-end encryption to deliver secure messaging, voice, and video calling, including file sharing and screen sharing, across desktop and mobile devices. Each call, message, and file is encrypted with a new random key and can be decrypted only by the intended recipient. AWS Wickr supports logging to a secure, customer-controlled data store for compliance and auditing, and offers full administrative control over data: permissions, ephemeral messaging options, and security groups. You can now sign up for the preview.

AWS Marketplace Vendor Insights helps AWS Marketplace sellers to make security and compliance data available through AWS Marketplace in the form of a unified, web-based dashboard. Designed to support governance, risk, and compliance teams, the dashboard also provides evidence that is backed by AWS Config and AWS Audit Manager assessments, external audit reports, and self-assessments from software vendors. To learn more, read the What’s New post.

GuardDuty Malware Protection scans Amazon Elastic Block Store (EBS) volumes for malware. As Danilo describes in his blog post, a malware scan is initiated when Amazon GuardDuty detects that a workload running on an EC2 instance or in a container appears to be doing something suspicious. The new malware protection feature creates snapshots of the attached EBS volumes, restores them within a service account, and performs an in-depth scan for malware. The scanner supports many types of file systems and file formats and generates actionable security findings when malware is detected.

Amazon Neptune Global Database lets you build graph applications that run across multiple AWS Regions using a single graph database. You can deploy a primary Neptune cluster in one Region and replicate its data to up to five secondary read-only database clusters, with up to 16 read replicas each. Clusters can recover in minutes in the event of an (unlikely) regional outage, with a Recovery Point Objective (RPO) of 1 second and a Recovery Time Objective (RTO) of 1 minute. To learn a lot more and see this new feature in action, read Introducing Amazon Neptune Global Database.

Amazon Detective now Supports Kubernetes Workloads, with the ability to scale to thousands of container deployments and millions of configuration changes per second. It ingests EKS audit logs to capture API activity from users, applications, and the EKS control plane, and correlates user activity with information gleaned from Amazon VPC flow logs. As Channy notes in his blog post, you can enable Amazon Detective and take advantage of a free 30 day trial of the EKS capabilities.

AWS SSO is Now AWS IAM Identity Center in order to better represent the full set of workforce and account management capabilities that are part of IAM. You can create user identities directly in IAM Identity Center, or you can connect your existing Active Directory or standards-based identity provider. To learn more, read this post from the AWS Security Blog.

AWS Config Conformance Packs now provide you with percentage-based scores that will help you track resource compliance within the scope of the resources addressed by the pack. Scores are computed based on the product of the number of resources and the number of rules, and are reported to Amazon CloudWatch so that you can track compliance trends over time. To learn more about how scores are computed, read the What’s New post.

Amazon Macie now lets you perform one-click temporary retrieval of sensitive data that Macie has discovered in an S3 bucket. You can retrieve up to ten examples at a time, and use these findings to accelerate your security investigations. All of the data that is retrieved and displayed in the Macie console is encrypted using customer-managed AWS Key Management Service (AWS KMS) keys. To learn more, read the What’s New post.

AWS Control Tower was updated multiple times last week. CloudTrail Organization Logging creates an org-wide trail in your management account to automatically log the actions of all member accounts in your organization. Control Tower now reduces redundant AWS Config items by limiting recording of global resources to home regions. To take advantage of this change you need to update to the latest landing zone version and then re-register each Organizational Unit, as detailed in the What’s New post. Lastly, Control Tower’s region deny guardrail now includes AWS API endpoints for AWS Chatbot, Amazon S3 Storage Lens, and Amazon S3 Multi Region Access Points. This allows you to limit access to AWS services and operations for accounts enrolled in your AWS Control Tower environment.

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some other news items and customer stories that you may find interesting:

AWS Open Source News and Updates – My colleague Ricardo Sueiras writes a weekly open source newsletter and highlights new open source projects, tools, and demos from the AWS community. Read installment #122 here.

Growy Case Study – This Netherlands-based company is building fully-automated robot-based vertical farms that grow plants to order. Read the case study to learn how they use AWS IoT and other services to monitor and control light, temperature, CO2, and humidity to maximize yield and quality.

Journey of a Snap on Snapchat – This video shows you how a snap flows end-to-end from your camera to AWS, to your friends. With over 300 million daily active users, Snap takes advantage of Amazon Elastic Kubernetes Service (EKS), Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), Amazon CloudFront, and many other AWS services, storing over 400 terabytes of data in DynamoDB and managing over 900 EKS clusters.

Cutting Cardboard Waste – Bin packing is almost certainly a part of every computer science curriculum! In the linked article from the Amazon Science site, you can learn how an Amazon Principal Research Scientist developed PackOpt to figure out the optimal set of boxes to use for shipments from Amazon’s global network of fulfillment centers. This is an NP-hard problem and the article describes how they built a parallelized solution that explores a multitude of alternative solutions, all running on AWS.

Upcoming Events
Check your calendar and sign up for these online and in-person AWS events:

AWS Global Summits – AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. Registrations are open for several AWS Summits taking place in August.

IMAGINE 2022 – The IMAGINE 2022 conference will take place on August 3 at the Seattle Convention Center, Washington, USA. It’s a no-cost event that brings together education, state, and local leaders to learn about the latest innovations and best practices in the cloud. You can register here.

That’s all for this week. Check back next Monday for another Week in Review!

Jeff;

This post is part of our Week in Review series. Check back each week for a quick roundup of interesting news and announcements from AWS!