Tag Archives: Uncategorized

Using AI to Scale Spear Phishing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/using-ai-to-scale-spear-phishing.html

The problem with spear phishing is that it takes time and creativity to create individualized enticing phishing emails. Researchers are using GPT-3 to attempt to solve that problem:

The researchers used OpenAI’s GPT-3 platform in conjunction with other AI-as-a-service products focused on personality analysis to generate phishing emails tailored to their colleagues’ backgrounds and traits. Machine learning focused on personality analysis aims to predict a person’s proclivities and mentality based on behavioral inputs. By running the outputs through multiple services, the researchers were able to develop a pipeline that groomed and refined the emails before sending them out. They say that the results sounded “weirdly human” and that the platforms automatically supplied surprising specifics, like mentioning a Singaporean law when instructed to generate content for people living in Singapore.

While they were impressed by the quality of the synthetic messages and how many clicks they garnered from colleagues versus the human-composed ones, the researchers note that the experiment was just a first step. The sample size was relatively small and the target pool was fairly homogenous in terms of employment and geographic region. Plus, both the human-generated messages and those generated by the AI-as-a-service pipeline were created by office insiders rather than outside attackers trying to strike the right tone from afar.

It’s just a matter of time before this is really effective. Combine it with voice and video synthesis, and you have some pretty scary scenarios. The real risk isn’t that AI-generated phishing emails are as good as human-generated ones, it’s that they can be generated at much greater scale.

Defcon presentation and slides. Another news article.

Introducing public builds for AWS CodeBuild

Post Syndicated from Richard H Boyd original https://aws.amazon.com/blogs/devops/introducing-public-builds-for-aws-codebuild/

Using AWS CodeBuild, you can now share both the logs and the artifacts produced by CodeBuild projects. This blog post explains how to configure an existing CodeBuild project to enable public builds.

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. With this new feature, you can now make the results of a CodeBuild project build publicly viewable. Public builds simplify the collaboration workflow for open source projects by allowing contributors to see the results of Continuous Integration (CI) tasks.

How public builds work

During a project build, CodeBuild will place build logs in either Amazon Simple Storage Service (Amazon S3) or Amazon CloudWatch, depending on how the customer has configured the project’s LogsConfig property. Optionally, a project build can produce artifacts that persist after the build has completed. During a project build that has public builds enabled, CodeBuild will set an environment variable named CODEBUILD_PUBLIC_BUILD_URL that supplies the URL for that build’s publicly viewable logs and artifacts. When a user navigates to that URL, CodeBuild will use an AWS Identity and Access Management (AWS IAM) role (defined by the project maintainer) to fetch the build logs and any available artifacts, and display them.
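
For example, a build can surface its own public URL in its logs. A minimal buildspec sketch (the echo step is illustrative, and the variable is only set when public builds are enabled):

version: 0.2
phases:
  build:
    commands:
      - echo "Build results will be publicly viewable at $CODEBUILD_PUBLIC_BUILD_URL"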

To enable public builds for a project:

  1. Navigate to the resource page in the CodeBuild console for the project for which you want to enable public builds.
  2. From the Edit menu, choose Project configuration.
  3. Select Enable public build access.
  4. Choose New service role.
  5. For Service role, enter the role name you want this new role to have. For this post, we will use the role name example-public-builds-role. This creates a new IAM role with the permissions defined in the next section of this blog post.
  6. Choose Update configuration to save the changes and return to the project’s resource page within the CodeBuild console.

Project builds will now have the build logs and artifacts made available at the URL listed in the Public project URL section of the Configuration panel within the project’s resource page.

Now the CI build statuses within pull requests for the GitHub repository will include a public link to the build results. When a pull request is created in the repository, CodeBuild will start a project build and provide commit status updates during the build with a link to the public build information. This link is available as a hyperlink from the Details section of the commit status message.

IAM role permissions

This new feature introduces a new IAM role for CodeBuild. The new role is assumed by the CodeBuild service and needs read access to the build logs and any potential artifacts you would like to make publicly available. In the previous example, we had configured the CodeBuild project to store logs in Amazon CloudWatch and placed our build artifacts in Amazon S3 (namespaced to the build ID). The following AWS CloudFormation template will create an IAM Role with the appropriate least-privilege policies for accessing the public build results.

Role template

Parameters:
  LogGroupName:
    Type: String
    Description: prefix for the CloudWatch log group name
  ArtifactBucketArn:
    Type: String
    Description: Arn for the Amazon S3 bucket used to store build artifacts.

Resources:
  PublicReadRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
        - Action: ['sts:AssumeRole']
          Effect: Allow
          Principal:
            Service: [codebuild.amazonaws.com]
        Version: '2012-10-17'
      Path: /

  PublicReadPolicy:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: PublicBuildPolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - "logs:GetLogEvents"
            Resource:
              - !Sub "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${LogGroupName}:*"
          - Effect: Allow
            Action:
              - "s3:GetObject"
              - "s3:GetObjectVersion"
            Resource:
              - !Sub "${ArtifactBucketArn}/*"
      Roles:
        - !Ref PublicReadRole

Creating a public build in AWS CloudFormation

Using AWS CloudFormation, you can provision CodeBuild projects using infrastructure as code (IaC). To update an existing CodeBuild project to enable public builds add the following two fields to your project definition:

  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      ServiceRole: !GetAtt CodeBuildRole.Arn
      LogsConfig: 
        CloudWatchLogs:
          GroupName: !Ref LogGroupName
          Status: ENABLED
          StreamName: ServerlessRust
      Artifacts:
        Type: S3
        Location: !Ref ArtifactBucket
        Name: ServerlessRust
        NamespaceType: BUILD_ID
        Packaging: ZIP
      Environment:
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_LARGE
        Image: aws/codebuild/standard:4.0
        PrivilegedMode: true
      Triggers:
        BuildType: BUILD
        Webhook: true
        FilterGroups:
          - - Type: EVENT
              Pattern: PULL_REQUEST_CREATED,PULL_REQUEST_UPDATED
      Source:
        Type: GITHUB
        Location: "https://github.com/richardhboyd/ServerlessRust.git"
        BuildSpec: |
          version: 0.2
          phases:
            build:
              commands:
                - sam build
          artifacts:
            files:
              - .aws-sam/build/**/*
            discard-paths: no
      Visibility: PUBLIC_READ
      ResourceAccessRole: !Ref PublicReadRole # Note that this references the role defined in the previous section.

Disabling public builds

If a project has public builds enabled and you would like to disable it, you can clear the check box named Enable public build access in the project configuration, or set Visibility to PRIVATE in the CloudFormation definition for the project. To prevent any project in your AWS account from using public builds, you can set an AWS Organizations service control policy (SCP) that denies the IAM action codebuild:UpdateProjectVisibility.
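
A sketch of what such an SCP could look like, using the standard policy JSON (the Sid is an arbitrary label):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicBuilds",
      "Effect": "Deny",
      "Action": "codebuild:UpdateProjectVisibility",
      "Resource": "*"
    }
  ]
}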

Conclusion

With CodeBuild public builds, you can now share build information for your open source projects with all contributors without having to grant them direct access to your AWS account. This post explained how to enable public builds with AWS CodeBuild using both the console and CloudFormation, how to create a least-privilege IAM role for sharing the public build results, and how to disable public builds for a project.

Cobalt Strike Vulnerability Affects Botnet Servers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/cobolt-strike-vulnerability-affects-botnet-servers.html

Cobalt Strike is a security tool, used by penetration testers to simulate network attackers. But it’s also used by attackers — from criminals to governments — to automate their own attacks. Researchers have found a vulnerability in the product.

The main components of the security tool are the Cobalt Strike client — also known as a Beacon — and the Cobalt Strike team server, which sends commands to infected computers and receives the data they exfiltrate. An attacker starts by spinning up a machine running Team Server that has been configured to use specific “malleability” customizations, such as how often the client is to report to the server or specific data to periodically send.

Then the attacker installs the client on a targeted machine after exploiting a vulnerability, tricking the user or gaining access by other means. From then on, the client will use those customizations to maintain persistent contact with the machine running the Team Server.

The link connecting the client to the server is called the web server thread, which handles communication between the two machines. Chief among the communications are “tasks” servers send to instruct clients to run a command, get a process list, or do other things. The client then responds with a “reply.”

Researchers at security firm SentinelOne recently found a critical bug in the Team Server that makes it easy to knock the server offline. The bug works by sending a server fake replies that “squeeze every bit of available memory from the C2’s web server thread….”

It’s a pretty serious vulnerability, and there’s already a patch available. But — and this is the interesting part — that patch is available to licensed users, which attackers often aren’t. It’ll be a while before that patch filters down to the pirated copies of the software, and that time window gives defenders an opportunity. They can simulate a Cobalt Strike client, and leverage this vulnerability to reply to servers with messages that cause the server to crash.

Apple Adds a Backdoor to iMessage and iCloud Storage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/apple-adds-a-backdoor-to-imesssage-and-icloud-storage.html

Apple’s announcement that it’s going to start scanning photos for child abuse material is a big deal. (Here are five news stories.) I have been following the details, and discussing it in several different email lists. I don’t have time right now to delve into the details, but wanted to post something.

EFF writes:

There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts — that is, accounts designated as owned by a minor — for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents.

This is pretty shocking coming from Apple, which is generally really good about privacy. It opens the door for all sorts of other surveillance, since now that the system is built it can be used for all sorts of other messages. And it breaks end-to-end encryption, despite Apple’s denials:

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple.

Notice Apple changing the definition of “end-to-end encryption.” No longer is the message a private communication between sender and receiver. A third party is alerted if the message meets certain criteria.

This is a security disaster. Read tweets by Matthew Green and Edward Snowden. Also this. I’ll post more when I see it.

Beware the Four Horsemen of the Information Apocalypse. They’ll scare you into accepting all sorts of insecure systems.

EDITED TO ADD: This is a really good write-up of the problems.

EDITED TO ADD: Alex Stamos comments.

An open letter to Apple criticizing the project.

A leaked Apple memo responding to the criticisms. (What are the odds that Apple did not intend this to leak?)

EDITED TO ADD: John Gruber’s excellent analysis.

EDITED TO ADD (8/11): Paul Rosenzweig wrote an excellent policy discussion.

EDITED TO ADD (8/13): Really good essay by EFF’s Kurt Opsahl. Ross Anderson did an interview with Glenn Beck. And this news article talks about dissent within Apple about this feature.

The Economist has a good take. Apple responds to criticisms. (It’s worth watching the Wall Street Journal video interview as well.)

EDITED TO ADD (8/14): Apple released a threat model.

EDITED TO ADD (8/20): Follow-on blog posts here and here.

Bring on the documentation

Post Syndicated from Alasdair Allan original https://www.raspberrypi.org/blog/bring-on-the-documentation/

I joined Raspberry Pi eighteen months ago and spent my first year here keeping secrets and writing about Raspberry Silicon, and the chip that would eventually be known as RP2040. This is all (largely) completed work: Raspberry Pi Pico made its way out into the world back in January, and our own Raspberry Silicon followed last month.

The question is then, what have I done for you lately?

The Documentation

Until today our documentation for the “big” boards — as opposed to Raspberry Pi Pico — lived in a GitHub repository and was written in GitHub-flavoured Markdown. From there our documentation site was built from the Markdown source, which was pulled periodically from the repository, run through a script written many years ago that turned it into HTML, and then deployed onto our website.

This all worked really rather well in the early days of Raspberry Pi.

The old-style documentation

The documentation repository itself had been left to grow organically. When I arrived here, it needed to be restructured: a great deal of non-Raspberry Pi-specific documentation needed to be removed, while other areas were underserved and needed to be expanded. The documentation was created when there was a lot less third-party content around to support Raspberry Pi, so a fair bit of it really isn’t that relevant anymore, and is better dealt with elsewhere on the web. And the structure was a spider’s web that, in places, made very little sense.

Frankly, it was all in a bit of a mess.

Enter the same team of folks that built the excellent PDF-based documentation for Raspberry Pi Pico and RP2040. The PDF documentation was built off an Asciidoc-based toolchain, and we knew from the outset that we’d want to migrate the Markdown-based documentation to Asciidoc. It’d offer us more powerful tools going forwards, and a lot more flexibility.

After working through the backlog of community pull requests, we took a snapshot of the current Markdown-based repository and built out a toolchain, much of which we intended to (and did) throw away after converting the Markdown to Asciidoc as our “source of truth.” This didn’t happen without a bit of a wrench; nobody throws working code away lightly. But it did mean we’d reached the point of no return.

The next generation of documentation

The result of our new documentation project launches today.

The new-look documentation

The new documentation site is built and deployed directly from the documentation repository using GitHub Actions when someone pushes to the master branch. However, we’ll mostly be working on the develop branch in the repository, which is the default branch you’ll now get when you take a fresh checkout, and also the branch you should target for your pull requests.

We’ve always taken pull requests against the Markdown-based source behind our documentation site. Over the years, as the documentation set has grown, there have been hundreds of community contributors, who have made over 1,200 individual pull requests, ranging from fixing small typos to contributing whole new sections.

With the introduction of the new site, we’re going to continue to take pull requests against the new Asciidoc-based documentation. However, we’re going to be a bit more targeted around what we’ll accept into the documentation, and will be looking to keep the repository focussed on Raspberry Pi-specific things, rather than having generic Linux tutorial content.

The documentation itself will remain under a Creative Commons Attribution-Sharealike (CC BY-SA 4.0) license.

Product Information Portal

Supporting our customers in the best way we can when they build products around Raspberry Pi computers is important to us. A big part of this is being able to get customers access to the right documents easily. So alongside the new-look documentation, we have revamped how our customers (that’s you) get access to the documents you need for commercial applications of Raspberry Pi.

The Product Information Portal, or PIP as we’ve come to refer to it here at Pi Towers, is where documents such as regulatory paperwork, product change notices, and white papers will be stored and accessed from now on.

The new Product Information Portal (PIP)

PIP has three tiers of document type: those which are publicly available; restricted documents that require a customer to sign up for a free account; and confidential documents which require a customer’s company to enter into a confidentiality agreement with Raspberry Pi.

PIP will also be a way for customers to get updates on products, allowing customers with a user account to subscribe to products, and receive email updates should there be a product change, regulatory update, or white paper release.

The portal can be found at pip.raspberrypi.org and will be constantly updated as new documents become available.

Where next?

I’m hoping that everyone who has contributed to the documentation over the years will see the new site as a big step towards making our documentation more accessible – and, as ever, we accept pull requests. However, if you’re already a contributor, the easiest thing to do is to take a fresh checkout of the repository, because things have changed a lot today.

Big changes to the look-and-feel of the documentation site

This isn’t the end. Instead, it’s the beginning of a journey to try and pull together our documentation into something that feels a bit more cohesive. While the documentation set now looks, and feels, a lot better and is (I think) a lot easier to navigate if you don’t know it well, there is still a lot of pruning and re-writing ahead of me. But we’ve reached the stage where I’m happy to, and want to, work on that in public so the community can see how things are changing and can help out.

The post Bring on the documentation appeared first on Raspberry Pi.

Defeating Microsoft’s Trusted Platform Module

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/defeating-microsofts-trusted-platform-module.html

This is a really interesting story explaining how to defeat Microsoft’s TPM in 30 minutes — without having to solder anything to the motherboard.

Researchers at the security consultancy Dolos Group, hired to test the security of one client’s network, received a new Lenovo computer preconfigured to use the standard security stack for the organization. They received no test credentials, configuration details, or other information about the machine.

They were not only able to get into the BitLocker-encrypted computer, but then use the computer to get into the corporate network.

It’s the “evil maid attack.” It requires physical access to your computer, but you leave it in your hotel room all the time when you go out to dinner.

Original blog post.

CICD on Serverless Applications using AWS CodeArtifact

Post Syndicated from Anand Krishna original https://aws.amazon.com/blogs/devops/cicd-on-serverless-applications-using-aws-codeartifact/

Developing and deploying applications rapidly to users requires a working pipeline that accepts the user code (usually via a Git repository). AWS CodeArtifact was announced in 2020. It’s a secure and scalable artifact management product that easily integrates with other AWS products and services. CodeArtifact allows you to publish, store, and view packages, list package dependencies, and share your application’s packages.

In this post, I will show how we can build a simple DevOps pipeline for a sample Java application (JAR file) to be built with Maven.

Solution Overview

We utilize the following AWS services, tools, and frameworks to set up our continuous integration, continuous deployment (CI/CD) pipeline:

  • AWS CodePipeline
  • AWS CodeCommit
  • AWS CodeBuild
  • AWS CodeArtifact
  • AWS Lambda
  • Amazon CloudWatch Events
  • Amazon Simple Notification Service (Amazon SNS)
  • AWS Cloud Development Kit (AWS CDK)
  • Apache Maven

The following diagram illustrates the pipeline architecture and flow:

aws-codeartifact-pipeline

Our pipeline is built on CodePipeline with CodeCommit as the source (CodePipeline Source Stage). This triggers the pipeline via a CloudWatch Events rule. Then the code is fetched from the CodeCommit repository branch (main) and sent to the next pipeline phase. This CodeBuild phase is specifically for compiling, packaging, and publishing the code to CodeArtifact by utilizing a package manager—in this case Maven.

After Maven publishes the code to CodeArtifact, the pipeline pauses for a manual approval, which is granted directly in the pipeline. It can also optionally trigger an email alert via Amazon Simple Notification Service (Amazon SNS). After approval, the pipeline moves to another CodeBuild phase, which downloads the latest packaged JAR file from the CodeArtifact repository and deploys it to the AWS Lambda function.

Clone the Repository

Clone the GitHub repository as follows:

git clone https://github.com/aws-samples/aws-cdk-codeartifact-pipeline-sample.git

Code Deep Dive

After the Git repository is cloned, the directory structure looks like the following screenshot:

aws-codeartifact-pipeline-code

Let’s study the files and code to understand how the pipeline is built.

The directory java-events is a sample Java Maven project. You can find numerous sample applications on GitHub; for this post, we use the sample application java-events.

To add your own application code, place the pom.xml and settings.xml files in the root directory for the AWS CDK project.

Let’s study the code in the file lib/cdk-pipeline-codeartifact-new-stack.ts of the stack CdkPipelineCodeartifactStack. This is the heart of the AWS CDK code that builds the whole pipeline. The stack does the following:

  • Creates a CodeCommit repository called ca-pipeline-repository.
  • References a CloudFormation template (lib/ca-template.yaml) in the AWS CDK code via the module @aws-cdk/cloudformation-include.
  • Creates a CodeArtifact domain called cdkpipelines-codeartifact.
  • Creates a CodeArtifact repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarBuild_CodeArtifact. This CodeBuild phase does all of the code compiling, packaging, and publishing to CodeArtifact into a repository called cdkpipelines-codeartifact-repository.
  • Creates a CodeBuild project called JarDeploy_Lambda_Function. This phase fetches the latest artifact from CodeArtifact created in the previous step (cdkpipelines-codeartifact-repository) and deploys to the Lambda function.
  • Finally, creates a pipeline with four phases:
    • Source as CodeCommit (ca-pipeline-repository).
    • CodeBuild project JarBuild_CodeArtifact.
    • A Manual approval Stage.
    • CodeBuild project JarDeploy_Lambda_Function.

CodeArtifact shows the domain-specific and repository-specific connection settings to add to the application’s pom.xml and settings.xml files, as below:

aws-codeartifact-repository-connections
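
As a rough illustration of what those settings amount to, here is a sketch; the <ACCOUNT_ID> and <REGION> placeholders are assumptions, so copy the exact values the CodeArtifact console shows for your domain and repository:

<!-- pom.xml: where Maven publishes the package -->
<distributionManagement>
  <repository>
    <id>cdkpipelines-codeartifact-repository</id>
    <url>https://cdkpipelines-codeartifact-<ACCOUNT_ID>.d.codeartifact.<REGION>.amazonaws.com/maven/cdkpipelines-codeartifact-repository/</url>
  </repository>
</distributionManagement>

<!-- settings.xml: authenticate with a CodeArtifact authorization token -->
<servers>
  <server>
    <id>cdkpipelines-codeartifact-repository</id>
    <username>aws</username>
    <password>${env.CODEARTIFACT_AUTH_TOKEN}</password>
  </server>
</servers>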

Deploy the Pipeline

The AWS CDK code requires the following packages in order to build the CI/CD pipeline:

  • @aws-cdk/core
  • @aws-cdk/aws-codepipeline
  • @aws-cdk/aws-codepipeline-actions
  • @aws-cdk/aws-codecommit
  • @aws-cdk/aws-codebuild
  • @aws-cdk/aws-iam
  • @aws-cdk/cloudformation-include

Install the required AWS CDK packages as below:

npm i @aws-cdk/core @aws-cdk/aws-codepipeline @aws-cdk/aws-codepipeline-actions @aws-cdk/aws-codecommit @aws-cdk/aws-codebuild @aws-cdk/pipelines @aws-cdk/aws-iam @aws-cdk/cloudformation-include

Compile the AWS CDK code:

npm run build

Deploy the AWS CDK code:

cdk synth
cdk deploy

After the AWS CDK code is deployed, view the final output on the stack’s detail page in the AWS CloudFormation console:

aws-codeartifact-pipeline-cloudformation-stack

How the pipeline works with artifact versions (using SNAPSHOTS)

In this demo, I publish a SNAPSHOT version to the repository. As per the documentation here and here, a SNAPSHOT refers to the most recent code along a branch. It’s a development version preceding the final release version. A snapshot version of a Maven package is identified by the suffix SNAPSHOT appended to the package version.

The application settings are defined in the pom.xml file. For this post, we define the following:

  • The version to be used, called 1.0-SNAPSHOT.
  • The specific packaging, called jar.
  • The specific project display name, called JavaEvents.
  • The specific group ID, called JavaEvents.

The screenshot below shows the pom.xml settings we used in the application:

aws-codeartifact-pipeline-pom-xml
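
In plain pom.xml terms, those settings boil down to something like the following (a sketch inferred from the values listed above, not the exact file):

<groupId>JavaEvents</groupId>
<artifactId>JavaEvents</artifactId>
<version>1.0-SNAPSHOT</version>
<packaging>jar</packaging>
<name>JavaEvents</name>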

You can’t republish a package asset that already exists with different content, as per the documentation here.

When a Maven snapshot is published, its previous version is preserved in a new version called a build. Each time a Maven snapshot is published, a new build version is created.

When a Maven snapshot is published, its status is set to Published, and the status of the build containing the previous version is set to Unlisted. If you request a snapshot, the version with status Published is returned. This is always the most recent Maven snapshot version.
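
You can observe this behavior from the AWS CLI as well; a hedged example using the same domain and repository names as above:

aws codeartifact list-package-versions --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --format maven --namespace JavaEvents --package JavaEvents

Each returned version carries a status field, so the Published snapshot and its Unlisted predecessors are easy to tell apart.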

For example, the image below shows the state after the first run of the pipeline: the latest version has the status Published, and previous builds are marked Unlisted.

aws-codeartifact-repository-package-versions

On each subsequent pipeline run, another Unlisted version appears, because all previous versions of a snapshot are maintained as its build versions.

aws-codeartifact-repository-package-versions

Fetching the Latest Code

We retrieve the snapshot from the repository in order to deploy the code to the AWS Lambda function. I have used the AWS CLI to list and fetch the latest asset of package version 1.0-SNAPSHOT.

Listing the latest snapshot

export ListLatestArtifact=`aws codeartifact list-package-version-assets --domain cdkpipelines-codeartifact --domain-owner $Account_Id --repository cdkpipelines-codeartifact-repository --namespace JavaEvents --format maven --package JavaEvents --package-version "1.0-SNAPSHOT" | jq ".assets[].name" | grep jar | sed 's/"//g'`

Note: the variable $Account_Id represents your AWS account ID.

Fetching the latest code using Package Version

aws codeartifact get-package-version-asset --domain cdkpipelines-codeartifact --repository cdkpipelines-codeartifact-repository --format maven --package JavaEvents --package-version 1.0-SNAPSHOT --namespace JavaEvents --asset $ListLatestArtifact demooutput

Notice that I’m referencing the latest asset by using the variable $ListLatestArtifact. This always fetches the latest code; demooutput is the outfile where the AWS CLI saves the content (the code).

Testing the Pipeline

Now clone the CodeCommit repository that we created with the following code:

git clone https://git-codecommit.<region>.amazonaws.com/v1/repos/codeartifact-pipeline-repository

Enter the following code to push the code to the CodeCommit repository:

cp -rp cdk-pipeline-codeartifact-new/* ca-pipeline-repository
cd ca-pipeline-repository
git checkout -b main
git add .
git commit -m "testing the pipeline"
git push origin main

Once the code is pushed to the Git repository, the pipeline is triggered automatically by Amazon CloudWatch Events.

The following screenshots show the second phase (AWS CodeBuild Phase – JarBuild_CodeArtifact) of the pipeline, wherein the asset is successfully compiled and published to the CodeArtifact repository by Maven:

aws-codeartifact-pipeline-codebuild-jarbuild

aws-codeartifact-pipeline-codebuild-screenshot

aws-codeartifact-pipeline-codebuild-screenshot2

The following screenshots show the last phase (AWS CodeBuild Phase – Deploy-to-Lambda) of the pipeline, wherein the latest asset is successfully pulled and deployed to the AWS Lambda function.

Asset JavaEvents-1.0-20210618.131629-5.jar is the latest snapshot code for the package version 1.0-SNAPSHOT. This is the same asset version that is deployed to the AWS Lambda function, as seen in the screenshots below:

aws-codeartifact-pipeline-codebuild-jardeploy

aws-codeartifact-pipeline-codebuild-screenshot-jarbuild

The following screenshot of the pipeline shows a successful run. The code was fetched and deployed to the existing Lambda function (codeartifact-test-function).

aws-codeartifact-pipeline-codepipeline

Cleanup

To clean up, you can either delete the entire stack through the AWS CloudFormation console or use the following AWS CDK command:

cdk destroy

For more information on AWS CDK commands, please check the documentation here or the sample here.

Summary

In this post, I demonstrated how to build a CI/CD pipeline for your serverless application with AWS CodePipeline by utilizing AWS CDK with AWS CodeArtifact. Please check the documentation here for an in-depth explanation regarding other package managers and the getting started guide.

Using “Master Faces” to Bypass Face-Recognition Authenticating Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/using-master-faces-to-bypass-face-recognition-authenticating-systems.html

Fascinating research: “Generating Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution.”

Abstract: A master face is a face image that passes face-based identity-authentication for a large portion of the population. These faces can be used to impersonate, with a high probability of success, any user, without having access to any user-information. We optimize these faces, by using an evolutionary algorithm in the latent embedding space of the StyleGAN face generator. Multiple evolutionary strategies are compared, and we propose a novel approach that employs a neural network in order to direct the search in the direction of promising samples, without adding fitness evaluations. The results we present demonstrate that it is possible to obtain a high coverage of the population (over 40%) with less than 10 master faces, for three leading deep face recognition systems.

Two good articles.

Code your own pinball game | Wireframe #53

Post Syndicated from Ryan Lambie original https://www.raspberrypi.org/blog/code-your-own-pinball-game-wireframe-53/

Get flippers flapping and balls bouncing off bumpers. Mark Vanstone has the code in the new issue of Wireframe magazine, available now.

There are so many pinball video games that it’s become a genre in its own right. For the few of you who haven’t encountered pinball for some reason, it originated as an analogue arcade machine where a metal ball would be fired onto a sloping play area and bounce between obstacles. The player operates a pair of flippers by pressing buttons on each side of the machine, which will in turn ping the ball back up the play area to hit obstacles and earn points. The game ends when the ball falls through the exit at the bottom of the play area.

NES Pinball
One of the earliest pinball video games – it’s the imaginatively-named Pinball on the NES.

Recreating pinball machines for video games

Video game developers soon started trying to recreate pinball, first with fairly rudimentary graphics and physics, but with increasingly greater realism over time – if you look at Nintendo’s Pinball from 1984, then, say, Devil’s Crush on the Sega Mega Drive in 1990, and then 1992’s Pinball Dreams on PC, you can see how radically the genre evolved in just a few years. In this month’s Source Code, we’re going to put together a very simple rendition of pinball in Pygame Zero. We’re not going to use any complicated maths or physics systems, just a little algebra and trigonometry.

Let’s start with our background. We need an image which has barriers around the outside for the ball to bounce off, and a gap at the bottom for the ball to fall through. We also want some obstacles in the play area and an entrance at the side for the ball to enter when it’s first fired. In this case, we’re going to use our background as a collision map, too, so we need to design it so that all the areas that the ball can move in are black.

Pinball in Python
Here it is: your own pinball game in less than 100 lines of code.

Next, we need some flippers. These are defined as Actors with a pivot anchor position set near the larger end, and are positioned near the bottom of the play area. We detect left and right key presses and rotate the angle of the flippers by 20 degrees within a range of -30 to +30 degrees. If no key is pressed, then the flipper drops back down. With these elements in place, we have our play area and an ability for the player to defend the exit.
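
In Pygame Zero terms, that flipper logic might look something like this sketch (the names and exact angle step are illustrative, not the actual code from the magazine):

# Each flipper is an Actor pivoting about its anchor; angle is in degrees.
def update_flipper(flipper, key_pressed):
    if key_pressed:
        # Flip up towards +30 degrees in 20-degree steps
        flipper.angle = min(flipper.angle + 20, 30)
    else:
        # Drop back down towards -30 degrees when the key is released
        flipper.angle = max(flipper.angle - 20, -30)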

All we need now is a ball to go bouncing around the obstacles we’ve made. Defining the ball as an Actor, we can add a direction and a speed parameter to it. With these values set, the ball can be moved using a bit of trigonometry. Our new x-coordinate will move by the sin of the ball direction multiplied by the speed, and the new y-coordinate will move by the cos of the ball direction multiplied by speed. We need to detect collisions with objects and obstacles, so we sample four pixels around the ball to see if it’s hit anything solid. If it has, we need to make the ball bounce.
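
A minimal sketch of that update step, assuming a pygame Surface as the collision map (the names here are illustrative):

import math

def move_ball(x, y, direction, speed):
    # x moves by sin(direction) * speed, y by cos(direction) * speed
    return x + math.sin(direction) * speed, y + math.cos(direction) * speed

def hits_wall(collision_map, x, y, r=8):
    # Sample four pixels around the ball; anything that isn't black is solid
    for dx, dy in ((-r, 0), (r, 0), (0, -r), (0, r)):
        c = collision_map.get_at((int(x + dx), int(y + dy)))
        if (c.r, c.g, c.b) != (0, 0, 0):
            return True
    return False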

Get the code

Here’s Mark’s pinball code. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

If you wanted more realistic physics, you’d calculate the reflection angle from the surface which has been hit, but in this case, we’re going to use a shortcut which will produce a rough approximation. We work out what direction the ball is travelling in and then rotate either left or right by a quarter of a turn until the ball no longer collides with a wall. We could finesse this calculation further to create a more accurate effect, but we’ll keep it simple for this sample. Finally, we need to add some gravity. As the play area is tilted downwards, we need to increase the ball speed as it travels down and decrease it as it travels up.
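
Continuing the same illustrative sketch (and reusing the move_ball and hits_wall helpers from above), the bounce shortcut and gravity could be approximated like this:

import math

def bounce(collision_map, x, y, direction, speed):
    # Rotate a quarter turn at a time, at most a full turn, until the
    # ball's next step no longer collides with a wall
    for _ in range(4):
        nx, ny = move_ball(x, y, direction, speed)
        if not hits_wall(collision_map, nx, ny):
            break
        direction += math.pi / 2
    return direction

def apply_gravity(direction, speed, g=0.05, min_speed=1.0):
    # The table slopes down the screen: cos(direction) > 0 means the ball
    # is heading down, so speed up; otherwise slow it down
    if math.cos(direction) > 0:
        return speed + g
    return max(min_speed, speed - g)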

All of this should give you the bare bones of a pinball game. There’s lots more you could add to increase the realism, but we’ll leave you to discover the joys of normal vectors and dot products…

Get your copy of Wireframe issue 53

You can read more features like this one in Wireframe issue 53, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 53 for free in PDF format.

The post Code your own pinball game | Wireframe #53 appeared first on Raspberry Pi.

Zoom Lied about End-to-End Encryption

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/zoom-lied-about-end-to-end-encryption.html

The facts aren’t news, but Zoom will pay $85M — to the class-action attorneys, and to users — for lying to users about end-to-end encryption, and for giving user data to Facebook and Google without consent.

The proposed settlement would generally give Zoom users $15 or $25 each and was filed Saturday at US District Court for the Northern District of California. It came nine months after Zoom agreed to security improvements and a “prohibition on privacy and security misrepresentations” in a settlement with the Federal Trade Commission, but the FTC settlement didn’t include compensation for users.

Paragon: Yet Another Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/paragon-yet-another-cyberweapons-arms-manufacturer.html

Forbes has the story:

Paragon’s product will also likely get spyware critics and surveillance experts alike rubbernecking: It claims to give police the power to remotely break into encrypted instant messaging communications, whether that’s WhatsApp, Signal, Facebook Messenger or Gmail, the industry sources said. One other spyware industry executive said it also promises to get longer-lasting access to a device, even when it’s rebooted.

[…]

Two industry sources said they believed Paragon was trying to set itself apart further by promising to get access to the instant messaging applications on a device, rather than taking complete control of everything on a phone. One of the sources said they understood that Paragon’s spyware exploits the protocols of end-to-end encrypted apps, meaning it would hack into messages via vulnerabilities in the core ways in which the software operates.

Read that last sentence again: Paragon uses unpatched zero-day exploits in the software to hack messaging apps.

The European Space Agency Launches Hackable Satellite

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/08/the-european-space-agency-launches-hackable-satellite.html

Of course this is hackable:

A sophisticated telecommunications satellite that can be completely repurposed while in space has launched.

[…]

Because the satellite can be reprogrammed in orbit, it can respond to changing demands during its lifetime.

[…]

The satellite can detect and characterise any rogue emissions, enabling it to respond dynamically to accidental interference or intentional jamming.

We can assume strong encryption, and good key management. Still, seems like a juicy target for other governments.

I Am Parting With My Crypto Library

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/i-am-parting-with-my-crypto-library.html

The time has come for me to find a new home for my (paper) cryptography library. It’s about 150 linear feet of books, conference proceedings, journals, and monographs — mostly from the 1980s, 1990s, and 2000s.

My preference is that it goes to an educational institution, but will consider a corporate or personal home if that’s the only option available. If you think you can break it up and sell it, I’ll consider that as a last resort. New owner pays all packaging and shipping costs, and possibly a purchase price depending on who you are and what you want to do with the library.

If you are interested, please email me. I can send photos.

EDITED TO ADD (8/1): I am talking with two universities and the Internet Archive. It will find a good home. Thank you all for your suggestions.

Storing Encrypted Photos in Google’s Cloud

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/storing-encrypted-photos-in-googles-cloud.html

New paper: “Encrypted Cloud Photo Storage Using Google Photos“:

Abstract: Cloud photo services are widely used for persistent, convenient, and often free photo storage, which is especially useful for mobile devices. As users store more and more photos in the cloud, significant privacy concerns arise because even a single compromise of a user’s credentials give attackers unfettered access to all of the user’s photos. We have created Easy Secure Photos (ESP) to enable users to protect their photos on cloud photo services such as Google Photos. ESP introduces a new client-side encryption architecture that includes a novel format-preserving image encryption algorithm, an encrypted thumbnail display mechanism, and a usable key management system. ESP encrypts image data such that the result is still a standard format image like JPEG that is compatible with cloud photo services. ESP efficiently generates and displays encrypted thumbnails for fast and easy browsing of photo galleries from trusted user devices. ESP’s key management makes it simple to authorize multiple user devices to view encrypted image content via a process similar to device pairing, but using the cloud photo service as a QR code communication channel. We have implemented ESP in a popular Android photos app for use with Google Photos and demonstrate that it is easy to use and provides encryption functionality transparently to users, maintains good interactive performance and image quality while providing strong privacy guarantees, and retains the sharing and storage benefits of Google Photos without any changes to the cloud service

AirDropped Gun Photo Causes Terrorist Scare

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/airdropped-gun-photo-causes-terrorist-scare.html

A teenager on an airplane sent a photo of a replica gun via AirDrop to everyone who had their settings configured to receive unsolicited photos from strangers. This caused a three-hour delay as the plane — still at the gate — was evacuated and searched.

The teen was not allowed to reboard. I can’t find any information about whether he was charged with any of those vague “terrorist threat” crimes.

It’s been a long time since we’ve had one of these sorts of overreactions.

De-anonymization Story

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/de-anonymization-story.html

This is important:

Monsignor Jeffrey Burrill was general secretary of the US Conference of Catholic Bishops (USCCB), effectively the highest-ranking priest in the US who is not a bishop, before records of Grindr usage obtained from data brokers was correlated with his apartment, place of work, vacation home, family members’ addresses, and more.

[…]

The data that resulted in Burrill’s ouster was reportedly obtained through legal means. Mobile carriers sold — and still sell — location data to brokers who aggregate it and sell it to a range of buyers, including advertisers, law enforcement, roadside services, and even bounty hunters. Carriers were caught in 2018 selling real-time location data to brokers, drawing the ire of Congress. But after carriers issued public mea culpas and promises to reform the practice, investigations have revealed that phone location data is still popping up in places it shouldn’t. This year, T-Mobile even broadened its offerings, selling customers’ web and app usage data to third parties unless people opt out.

The publication that revealed Burrill’s private app usage, The Pillar, a newsletter covering the Catholic Church, did not say exactly where or how it obtained Burrill’s data. But it did say how it de-anonymized aggregated data to correlate Grindr app usage with a device that appears to be Burrill’s phone.

The Pillar says it obtained 24 months’ worth of “commercially available records of app signal data” covering portions of 2018, 2019, and 2020, which included records of Grindr usage and locations where the app was used. The publication zeroed in on addresses where Burrill was known to frequent and singled out a device identifier that appeared at those locations. Key locations included Burrill’s office at the USCCB, his USCCB-owned residence, and USCCB meetings and events in other cities where he was in attendance. The analysis also looked at other locations farther afield, including his family lake house, his family members’ residences, and an apartment in his Wisconsin hometown where he reportedly has lived.

Location data is not anonymous. It cannot be made anonymous. I hope stories like these will teach people that.

Hiding Malware in ML Models

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/hiding-malware-in-ml-models.html

Interesting research: “EvilModel: Hiding Malware Inside of Neural Network Models”.

Abstract: Delivering malware covertly and detection-evadingly is critical to advanced malware campaigns. In this paper, we present a method that delivers malware covertly and detection-evadingly through neural network models. Neural network models are poorly explainable and have a good generalization ability. By embedding malware into the neurons, malware can be delivered covertly with minor or even no impact on the performance of neural networks. Meanwhile, since the structure of the neural network models remains unchanged, they can pass the security scan of antivirus engines. Experiments show that 36.9MB of malware can be embedded into a 178MB-AlexNet model within 1% accuracy loss, and no suspicious are raised by antivirus engines in VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, utilizing neural networks becomes a forwarding trend of malware. We hope this work could provide a referenceable scenario for the defense on neural network-assisted attacks.

News article.