
Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The CloudFormation stack resource requires the templates of the child stacks to be stored in an S3 bucket. The location of each template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stacks (EC2 instances in an Auto Scaling group, elastic load balancer).

Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance types for the web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance types for the database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable of the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. The build also updates the configuration files to point to the new location of the child template files. That location is provided as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.
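
Creating the test stack is handled by a CloudFormation action in the pipeline, but it is roughly equivalent to the following CLI calls (the stack name and parameter values here are illustrative placeholders; the real call supplies every parameter defined in the master template, typically the values from config-test.json):

# Roughly what the pipeline's test-stack action amounts to; names and values are placeholders.
aws cloudformation create-stack \
  --stack-name test-master-stack \
  --template-body file://master-stack.yml \
  --parameters ParameterKey=TemplatePath,ParameterValue=my-template-bucket/templates \
               ParameterKey=Environment,ParameterValue=test \
  --capabilities CAPABILITY_NAMED_IAM

# Wait until the master stack and its nested stacks are created before testing.
aws cloudformation wait stack-create-complete --stack-name test-master-stack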

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?
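
The actual checks live in validate-env.py (the Python script in the repository); conceptually, they describe the freshly created test stack and compare its resources against the expectations in env-details.yml. A rough CLI illustration of that kind of inspection (the stack name is a placeholder):

# Inspect what the nested test stacks actually created, for comparison against env-details.yml.
aws cloudformation describe-stack-resources \
  --stack-name test-master-stack \
  --query 'StackResources[].[LogicalResourceId,ResourceType,ResourceStatus]' \
  --output table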

Upon successful completion of the test action, CloudFormation deletes the test stack and the pipeline proceeds to the UAT stage.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
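
The change set handling in the Staging and Production stages is performed by CodePipeline’s CloudFormation action, but the underlying calls look roughly like the following (stack and change set names are placeholders; in the pipeline, the parameter values come from the config-uat.json and config-prod.json files):

# Create a change set against the existing production master stack (names are placeholders).
aws cloudformation create-change-set \
  --stack-name prod-master-stack \
  --change-set-name prod-release \
  --template-body file://master-stack.yml \
  --parameters ParameterKey=Environment,ParameterValue=prod \
  --capabilities CAPABILITY_NAMED_IAM

aws cloudformation wait change-set-create-complete \
  --stack-name prod-master-stack --change-set-name prod-release

# Review the proposed changes; after manual approval, execute them.
aws cloudformation describe-change-set \
  --stack-name prod-master-stack --change-set-name prod-release
aws cloudformation execute-change-set \
  --stack-name prod-master-stack --change-set-name prod-release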
 

Setting up a continuous delivery pipeline

 
I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.

My repo contains the CloudFormation templates, configuration files, and build specification files described earlier.

The continuous delivery pipeline is ready just a few seconds after I click Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline completes the continuous delivery cycle successfully.


 

Implementing a change in nested stack

 
To make changes to a child stack in a nested stack (for example, to update a parameter value or add or change resources), update the master stack. Make the changes in the appropriate template or configuration files, then check them in to the AWS CodeCommit repository. The check-in triggers the deployment process through the pipeline.
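
For example, a change to the web tier might be checked in like this (the file names match the repository described earlier; the branch name is an assumption):

# Commit the template and configuration changes to CodeCommit to trigger the pipeline.
git add server-stack.yml config-prod.json
git commit -m "Increase the maximum size of the web Auto Scaling group"
git push origin master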

 

Conclusion

 
In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in the codepipeline-nested-cfn GitHub repository.


About the author

 
Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Alexa, he is solving problems in Project Euler. He also enjoys watching educational documentaries.

mkosi — A Tool for Generating OS Images

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/mkosi-a-tool-for-generating-os-images.html

Introducing mkosi

After blogging about casync I realized I never blogged about the
mkosi tool that combines nicely with it. mkosi has been around for a
while already, and it’s time to make it a bit better known. mkosi
stands for Make Operating System Image, and is a tool for precisely
that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite
well known and popular. But mkosi has a number of features that I
think make it interesting for a variety of use-cases that other tools
don’t cover that well.

What is mkosi?

What are those use-cases, and what precisely sets mkosi apart?
mkosi is definitely a tool with a focus on developers’ needs for
building OS images, for testing and debugging, but also for generating
production images with cryptographic protection. A typical use-case
would be to add a mkosi.default file to an existing project (for
example, one written in C or Python), thus making it easy to
generate an OS image for it. mkosi will put together the image with
development headers and tools, compile your code in it, run your test
suite, then throw away the image again, and build a new one, this time
without development headers and tools, and install your build
artifacts in it. This final image is then “production-ready”, and only
contains your built program and the minimal set of packages you
configured otherwise. Such an image could then be deployed with
casync (or any other tool of course) to be delivered to your set of
servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on
today’s technology, not yesteryear’s. Specifically this means that
we’ll generate GPT partition tables, not MBR/DOS ones. When you tell
mkosi to generate a bootable image for you, it will make it bootable
on EFI, not on legacy BIOS. The GPT images generated follow
specifications such as the Discoverable Partitions Specification,
so that /etc/fstab can remain unpopulated and tools such as
systemd-nspawn can automatically dissect the image and boot from it.

So, let’s have a look at the specific image types it can generate:

  1. Raw GPT disk image, with ext4 as root
  2. Raw GPT disk image, with btrfs as root
  3. Raw GPT disk image, with a read-only squashfs as root
  4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
  5. A btrfs subvolume on disk, similar to the plain directory
  6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional
options are available:

  1. A swap partition may be added in
  2. The system may be made bootable on EFI systems
  3. Separate partitions for /home and /srv may be added in
  4. The root, /home and /srv partitions may be optionally encrypted with LUKS
  5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
  6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build
images based on the following Linux distributions:

  1. Fedora
  2. Debian
  3. Ubuntu
  4. ArchLinux
  5. openSUSE

Note though that not all distributions are supported at the same
feature level currently. Also, as mkosi is based on dnf
--installroot, debootstrap, pacstrap and zypper, and those
tools are not packaged universally on all distributions, you might
not be able to build images for all those distributions on arbitrary
host distributions. For example, Fedora doesn’t package zypper,
hence you cannot build an openSUSE image easily on Fedora, but you can
still build Fedora (obviously…), Debian, Ubuntu and ArchLinux images
on it just fine.

The GPT images are put together in a way that they aren’t just
compatible with UEFI systems, but also with VM and container managers
(that is, at least the smart ones, i.e. VM managers that know UEFI,
and container managers that grok GPT disk images) to a large
degree. In fact, the idea is that you can use mkosi to build a
single GPT image that may be used to:

  1. Boot on bare-metal boxes
  2. Boot in a VM
  3. Boot in a systemd-nspawn container
  4. Directly run a systemd service off it, using systemd’s RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used
if available to ensure the image is not tampered with (yes, you read
that right, systemd-nspawn and systemd’s RootImage= setting
automatically do dm-verity these days if the image has it.)

Mode of Operation

The simplest usage of mkosi is by simply invoking it without
parameters (as root):

# mkosi

Without any configuration this will create a GPT disk image for you,
will call it image.raw and drop it in the current directory. The
distribution used will be the same one as your host runs.

Of course in most cases you want more control over how the image is
put together, i.e. select package sets, select the distribution, size
partitions and so on. Most of that you can actually specify on the
command line, but it is recommended to instead create a couple of
mkosi.$SOMETHING files and directories in some directory. Then,
simply change to that directory and run mkosi without any further
arguments. The tool will then look in the current working directory
for these files and directories and make use of them (similar to how
make looks for a Makefile…). Every single file/directory is
optional, but if they exist they are honored. Here’s a list of the
files/directories mkosi currently looks for:

  1. mkosi.default — This is the main configuration file, here you
    can configure what kind of image you want, which distribution, which
    packages and so on.

  2. mkosi.extra/ — If this directory exists, then mkosi will copy
    everything inside it into the images built. You can place arbitrary
    directory hierarchies in here, and they’ll be copied over whatever is
    already in the image, after it was put together by the distribution’s
    package manager. This is the best way to drop additional static files
    into the image, or override distribution-supplied ones.

  3. mkosi.build — This executable file is supposed to be a build
    script. When it exists, mkosi will build two images, one after the
    other in the mode already mentioned above: the first version is the
    build image, and may include various build-time dependencies such as
    a compiler or development headers. The build script is also copied
    into it, and then run inside it. The script should then build
    whatever shall be built and place the result in $DESTDIR (don’t
    worry, popular build tools such as Automake or Meson all honor
    $DESTDIR anyway, so there’s not much to do here explicitly). It may
    also run a test suite, or anything else you like. After the script
    finished, the build image is removed again, and a second image (the
    final image) is built. This time, no development packages are
    included, and the build script is not copied into the image again —
    however, the build artifacts from the first run (i.e. those placed in
    $DESTDIR) are copied into the image.

  4. mkosi.postinst — If this executable script exists, it is invoked
    inside the image (inside a systemd-nspawn invocation) and can
    adjust the image as it likes at a very late point in the image
    preparation. If mkosi.build exists, i.e. the dual-phase
    development build process described above is used, then this script will be invoked
    twice: once inside the build image and once inside the final
    image. The first parameter passed to the script clarifies which phase
    it is run in.

  5. mkosi.nspawn — If this file exists, it should contain a
    container configuration file for systemd-nspawn (see
    systemd.nspawn(5)
    for details), which shall be shipped along with the final image and
    shall be included in the check-sum calculations (see below).

  6. mkosi.cache/ — If this directory exists, it is used as package
    cache directory for the builds. This directory is effectively bind
    mounted into the image at build time, in order to speed up building
    images. The package installers of the various distributions will
    place their package files here, so that subsequent runs can reuse
    them.

  7. mkosi.passphrase — If this file exists, it should contain a
    pass-phrase to use for the LUKS encryption (if that’s enabled for the
    image built). This file should not be readable to other users.

  8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an
    X.509 key pair to use for signing the kernel and initrd for UEFI
    SecureBoot, if that’s enabled.

How to use it

So, let’s come back to our most trivial example, without any of the
mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create an image file image.raw in the current
directory. How do we use it? Of course, we could dd it onto some USB
stick and boot it on a bare-metal device. However, it’s much simpler
to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up, and just work for you.

Now, let’s make things more interesting. Let’s still not use any of
the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it’s no
longer GPT + ext4, but GPT + btrfs. Moreover, the system is made
bootable on UEFI systems, and finally, the output is now called
foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except
that this uses full VM virtualization rather than container
virtualization. (Note that the way to run a UEFI qemu/kvm instance
appears to change all the time and is different on the various
distributions. It’s quite annoying, and I can’t really tell you what
the right qemu command line is to make this work on your system.)

Of course, it’s not all raw GPT disk images with mkosi. Let’s try
a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as plain directory you can’t boot
it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask
mkosi to generate a GPT image with a squashfs root, compress the
result with xz, and generate a SHA256SUMS file with the hashes of the
generated artifacts. The image will contain the SSH client as well as
everybody’s favorite editor.

Now, let’s make use of the various mkosi.$SOMETHING files. Let’s
say we are working on some Automake-based project and want to make it
easy to generate a disk image off the development tree with the
version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let’s add a build script:

# cat > mkosi.build <<'EOF'
#!/bin/sh
cd $SRCDIR
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
make install
EOF
# chmod +x mkosi.build

And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi

Let’s try it out:

# systemd-nspawn -bi image.raw

Of course, if you do this you’ll notice that building an image like
this can be quite slow. And slow build times are actively hurtful to
your productivity as a developer. Hence let’s make things a bit
faster. First, let’s make use of a package cache shared between runs:

# mkdir mkosi.cache

Building images now should already be substantially faster (and
generate less network traffic) as the packages will now be downloaded
only once and reused. However, you’ll notice that unpacking all those
packages and the rest of the work is still quite slow. But mkosi can
help you with that. Simply use mkosi‘s incremental build feature. In
this mode mkosi will make a copy of the build and final images
immediately before dropping in your build sources or artifacts, so
that building an image becomes a lot quicker: instead of always
starting totally from scratch a build will now reuse everything it can
reuse from a previous run, and immediately begin with building your
sources rather than the build image to build your sources in. To
enable the incremental build feature use -i:

# mkosi -i

Note that if you use this option, the package list is not updated
anymore from your distribution’s servers, as the cached copy is made
after all packages are installed, and hence until you actually delete
the cached copy the distribution’s network servers aren’t contacted
again and no RPMs or DEBs are downloaded. This means the distribution
you use becomes “frozen in time” this way. (Which might be a bad
thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you’ll notice that it
won’t overwrite the generated image when it already exists. You can
either delete the file yourself first (rm image.raw) or let mkosi
do it for you right before building a new image, with mkosi -f. You
can also tell mkosi to not only remove any such pre-existing images,
but also remove any cached copies of the incremental feature, by using
-f twice.

I wrote mkosi originally in order to test systemd, and quickly
generate a disk image of various distributions with the most current
systemd version from git, without all that affecting my host system. I
regularly use mkosi for that today, in incremental mode. The two
commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw

And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw

The latter I use only if I want to regenerate everything based on the
very newest set of RPMs provided by Fedora, instead of a cached
snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git
tree: mkosi.default and mkosi.build. This way, any developer who wants
to quickly test something with current systemd git, or wants to
prepare a patch based on it and test it, can check out the systemd
repository and simply run mkosi in it, and a few minutes later he has
a bootable image he can test in systemd-nspawn or KVM. casync has
similar files: mkosi.default and mkosi.build.

Random Interesting Features

  1. As mentioned already, mkosi will generate dm-verity enabled
    disk images if you ask for it. For that use the --verity switch on
    the command line or Verity= setting in mkosi.default. Of course,
    dm-verity implies that the root volume is read-only. In this mode
    the top-level dm-verity hash will be placed along-side the output
    disk image in a file named the same way, but with the .roothash
    suffix. If the image is to be created bootable, the root hash is also
    included on the kernel command line in the roothash= parameter,
    which current systemd versions can use to both find and activate the
    root partition in a dm-verity protected way. BTW: it’s a good idea
    to combine this dm-verity mode with the raw_squashfs image mode,
    to generate a genuinely protected, compressed image suitable for
    running in your IoT device.

  2. As indicated above, mkosi can automatically create a check-sum
    file SHA256SUMS for you (--checksum) covering all the files it
    outputs (which could be the image file itself, a matching .nspawn
    file using the mkosi.nspawn file mentioned above, as well as the
    .roothash file for the dm-verity root hash.) It can then
    optionally sign this with gpg (--sign). Note that systemd‘s
    machinectl pull-tar and machinectl pull-raw commands can download
    these files and the SHA256SUMS file automatically and verify things
    on download. In other words: what mkosi outputs is perfectly
    ready for downloads using these two systemd commands.

  3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To
    make use of that, place your X.509 key pair in the two files
    mkosi.secure-boot.crt and mkosi.secure-boot.key mentioned above, and set
    SecureBoot= or --secure-boot. If so, mkosi will sign the
    kernel/initrd/kernel command line combination during the build. Of
    course, if you use this mode, you should also use
    Verity=/--verity=, otherwise the setup makes only partial
    sense. Note that mkosi will not help you with actually enrolling
    the keys you use in your UEFI BIOS.

  4. mkosi has minimal support for GIT checkouts: when it recognizes
    it is run in a git checkout and you use the mkosi.build script
    stuff, the source tree will be copied into the build image, but with
    all files excluded by .gitignore removed.

  5. There’s support for encryption in place. Use --encrypt= or
    Encrypt=. Note that the UEFI ESP is never encrypted though, and the
    root partition only if explicitly requested. The /home and /srv
    partitions are unconditionally encrypted if that’s enabled.

  6. Images may be built with all documentation removed.

  7. The password for the root user and additional kernel command line
    arguments may be configured for the image to generate.
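
Putting several of these features together, an invocation along the following lines (using only switches already mentioned in this post) should produce a compressed, squashfs-based, dm-verity-protected, checksummed, and signed image; exact option spellings may differ between mkosi versions, so treat this as a sketch:

# mkosi -d fedora -t raw_squashfs --bootable --verity --checksum --sign --xz -o secure-image.raw

Alongside secure-image.raw you would then also get the matching .roothash file and a SHA256SUMS file covering the generated outputs.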

Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies,
listed in the README. Most notably you need a somewhat recent systemd
version to make use of its full feature set: systemd 233. Older mkosi
versions are already packaged for various distributions, but much of
what I describe above is only available in the most recent release,
mkosi 3.

The UEFI SecureBoot support requires sbsign, which currently isn’t
available in Fedora, but there’s a COPR.

Future

It is my intention to continue turning mkosi into a tool suitable
for:

  1. Testing and debugging projects
  2. Building images for secure devices
  3. Building portable service images
  4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and
systemd/sd-boot native support for A/B IoT-style partition
setups. The idea is that the combination of systemd, casync and
mkosi provides generic building blocks for building secure,
auto-updating devices in a generic way, even though all pieces
may be used individually, too.

FAQ

  1. Why are you reinventing the wheel again? This is exactly like
    $SOMEOTHERPROJECT!
    — Well, to my knowledge there’s no tool that
    integrates this nicely with your project’s development tree, and can
    do dm-verity and UEFI SecureBoot and all that stuff for you. So
    nope, I don’t think this is exactly like $SOMEOTHERPROJECT, thank you
    very much.

  2. What about creating MBR/DOS partition images? — That’s really
    out of focus to me. This is an exercise in figuring out how generic
    OSes and devices in the future should be built and an attempt to
    commoditize OS image building. And no, the future doesn’t speak MBR,
    sorry. That said, I’d be quite interested in adding support for
    booting on Raspberry Pi, possibly using a hybrid approach, i.e. using
    a GPT disk label, but arranging things in a way that the Raspberry Pi
    boot protocol (which is built around DOS partition tables), can still
    work.

  3. Is this portable? — Well, depends what you mean by
    portable. No, this tool runs on Linux only, and as it uses
    systemd-nspawn during the build process it doesn’t run on
    non-systemd systems either. But then again, you should be able to
    create images for any architecture you like with it, but of course,
    if you want the image to be bootable on bare-metal, only systems doing
    UEFI are supported (but systemd-nspawn should still work fine on
    them).

  4. Where can I get this stuff? — Try
    GitHub. And some distributions
    carry packaged versions, but I think none of them carry the current v3
    yet.

  5. Is this a systemd project? — Yes, it’s hosted under the
    systemd GitHub umbrella. And yes,
    during run-time systemd-nspawn in a current version is required. But
    no, the code bases are otherwise separate, if only because systemd
    is a C project, and mkosi Python.

  6. Requiring systemd 233 is a pretty steep requirement, no? —
    Yes, but the feature we need kind of matters (systemd-nspawn’s
    --overlay= switch), and again, this isn’t supposed to be a tool for
    legacy systems.

  7. Can I run the resulting images in LXC or Docker? — Humm, I am
    not an LXC nor Docker guy. If you select directory or subvolume
    as image type, LXC should be able to boot the generated images just
    fine, but I didn’t try. Last time I looked, Docker doesn’t permit
    running proper init systems as PID 1 inside the container, as they
    define their own run-time without intention to emulate a proper
    system. Hence, no I don’t think it will work, at least not with an
    unpatched Docker version. That said, again, don’t ask me questions
    about Docker, it’s not precisely my area of expertise, and quite
    frankly I am not a fan. To my knowledge neither LXC nor Docker are
    able to run containers directly off GPT disk images, hence the
    various raw_xyz image types are definitely not compatible with
    either. That means if you want to generate a single raw disk image
    that can be booted unmodified both in a container and on bare-metal,
    then systemd-nspawn is the container manager to go for
    (specifically, its -i/--image= switch).

Should you care? Is this a tool for you?

Well, that’s up to you really.

If you hack on some complex project and need a quick way to compile
and run your project on a specific current Linux distribution, then
mkosi is an excellent way to do that. Simply drop the mkosi.default
and mkosi.build files in your git tree and everything will be
easy. (And of course, as indicated above: if the project you are
hacking on happens to be called systemd or casync be aware that
those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great
choice too, as it will make it reasonably easy to generate secure
images that are protected against offline modification, by using
dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a
VM or systemd-nspawn container, or a portable service then mkosi
is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd
init systems, old VM managers, Docker, … then no, mkosi is not for
you, but there are plenty of well-established alternatives around that
cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to
accept your patches and other contributions.

Oh, and one unrelated last thing: don’t forget to submit your talk
proposal and/or buy a ticket for All Systems Go! 2017 in Berlin — the
conference where things like systemd, casync and mkosi are
discussed, along with a variety of other Linux userspace projects used
for building systems.

Synchronizing Amazon S3 Buckets Using AWS Step Functions

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/synchronizing-amazon-s3-buckets-using-aws-step-functions/

Constantin Gonzalez is a Principal Solutions Architect at AWS

In my free time, I run a small blog that uses Amazon S3 to host static content and Amazon CloudFront to distribute it world-wide. I use a home-grown, static website generator to create and upload my blog content onto S3.

My blog uses two S3 buckets: one for staging and testing, and one for production. As a website owner, I want to update the production bucket with all changes from the staging bucket in a reliable and efficient way, without having to create and populate a new bucket from scratch. Therefore, to synchronize files between these two buckets, I use AWS Lambda and AWS Step Functions.

In this post, I show how you can use Step Functions to build a scalable synchronization engine for S3 buckets and learn some common patterns for designing Step Functions state machines while you do so.

Step Functions overview

Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly.

While this particular example focuses on synchronizing objects between two S3 buckets, it can be generalized to any other use case that involves coordinated processing of any number of objects in S3 buckets, or other, similar data processing patterns.

Bucket replication options

Before I dive into the details on how this particular example works, take a look at some alternatives for copying or replicating data between two Amazon S3 buckets:

  • The AWS CLI provides customers with a powerful aws s3 sync command that can synchronize the contents of one bucket with another.
  • S3DistCP is a powerful tool for users of Amazon EMR that can efficiently load, save, or copy large amounts of data between S3 buckets and HDFS.
  • The S3 cross-region replication functionality enables automatic, asynchronous copying of objects across buckets in different AWS regions.

In this use case, you are looking for a slightly different bucket synchronization solution that:

  • Works within the same region
  • Is more scalable than a CLI approach running on a single machine
  • Doesn’t require managing any servers
  • Uses a more finely grained cost model than the hourly based Amazon EMR approach

You need a scalable, serverless, and customizable bucket synchronization utility.

Solution architecture

Your solution needs to do three things:

  1. Copy all objects from a source bucket into a destination bucket, but leave out objects that are already present, for efficiency.
  2. Delete all "orphaned" objects from the destination bucket that aren’t present on the source bucket, because you don’t want obsolete objects lying around.
  3. Keep track of all objects for #1 and #2, regardless of how many objects there are.

In the beginning, you read in the source and destination buckets as parameters and perform basic parameter validation. Then, you operate two separate, independent loops, one for copying missing objects and one for deleting obsolete objects. Each loop is a sequence of Step Functions states that read in chunks of S3 object lists and use the continuation token to decide in a choice state whether to continue the loop or not.

This solution is based on the following architecture that uses Step Functions, Lambda, and two S3 buckets:

As you can see, this setup involves no servers, just two main building blocks:

  • Step Functions manages the overall flow of synchronizing the objects from the source bucket with the destination bucket.
  • A set of Lambda functions carry out the individual steps necessary to perform the work, such as validating input, getting lists of objects from source and destination buckets, copying or deleting objects in batches, and so on.

To understand the synchronization flow in more detail, look at the Step Functions state machine diagram for this example.

Walkthrough

Here’s a detailed discussion of how this works.

To follow along, use the code in the sync-buckets-state-machine GitHub repo. The code comes with a ready-to-run deployment script in Python that takes care of all the IAM roles, policies, Lambda functions, and of course the Step Functions state machine deployment using AWS CloudFormation, as well as instructions on how to use it.

Fine print: Use at your own risk

Before I start, here are some disclaimers:

  • Educational purposes only.

    The following example and code are intended for educational purposes only. Make sure that you customize, test, and review it on your own before using any of this in production.

  • S3 object deletion.

    In particular, using the code included below may delete objects on S3 in order to perform synchronization. Make sure that you have backups of your data. In particular, consider using the Amazon S3 Versioning feature to protect yourself against unintended data modification or deletion.

Step Functions execution starts with an initial set of parameters that contain the source and destination bucket names in JSON:

{
    "source":       "my-source-bucket-name",
    "destination":  "my-destination-bucket-name"
}
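
If you prefer to kick off an execution from the command line rather than the console, the equivalent call looks like this (the state machine ARN is a placeholder for whatever the deployment script created in your account):

# Start a synchronization run with the two bucket names as input.
aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:sync-buckets \
  --input '{"source": "my-source-bucket-name", "destination": "my-destination-bucket-name"}'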

Armed with this data, Step Functions execution proceeds as follows.

Step 1: Detect the bucket region

First, you need to know the regions where your buckets reside. In this case, take advantage of the Step Functions Parallel state. This allows you to use a Lambda function get_bucket_location.py inside two different, parallel branches of task states:

  • FindRegionForSourceBucket
  • FindRegionForDestinationBucket

Each task state receives one bucket name as an input parameter, then detects the region corresponding to "their" bucket. The output of these functions is collected in a result array containing one element per parallel function.
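
Under the hood, this boils down to the S3 GetBucketLocation API; the equivalent CLI call for one of the buckets would be:

# Returns the region (LocationConstraint) of the given bucket.
aws s3api get-bucket-location --bucket my-source-bucket-name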

Step 2: Combine the parallel states

The output of a parallel state is a list with all the individual branches’ outputs. To combine them into a single structure, use a Lambda function called combine_dicts.py in its own CombineRegionOutputs task state. The function combines the two outputs from step 1 into a single JSON dict that provides you with the necessary region information for each bucket.

Step 3: Validate the input

In this walkthrough, you only support buckets that reside in the same region, so you need to decide if the input is valid or if the user has given you two buckets in different regions. To find out, use a Lambda function called validate_input.py in the ValidateInput task state that tests if the two regions from the previous step are equal. The output is a Boolean.

Step 4: Branch the workflow

Use another type of Step Functions state, a Choice state, which branches into a Failure state if the comparison in step 3 yields false, or proceeds with the remaining steps if the comparison was successful.

Step 5: Execute in parallel

The actual work is happening in another Parallel state. Both branches of this state are very similar to each other and they re-use some of the Lambda function code.

Each parallel branch implements a looping pattern across the following steps:

  1. Use a Pass state to inject either the string value "source" (InjectSourceBucket) or "destination" (InjectDestinationBucket) into the listBucket attribute of the state document.

    The next step uses either the source or the destination bucket, depending on the branch, while executing the same, generic Lambda function. You don’t need two Lambda functions that differ only slightly. This step illustrates how to use Pass states as a way of injecting constant parameters into your state machine and as a way of controlling step behavior while re-using common step execution code.

  2. The next step UpdateSourceKeyList/UpdateDestinationKeyList lists objects in the given bucket.

    Remember that the previous step injected either "source" or "destination" into the state document’s listBucket attribute. This step uses the same list_bucket.py Lambda function to list objects in an S3 bucket. The listBucket attribute of its input decides which bucket to list. In the left branch of the main parallel state, use the list of source objects to work through copying missing objects. The right branch uses the list of destination objects, to check if they have a corresponding object in the source bucket and eliminate any orphaned objects. Orphans don’t have a source object of the same S3 key.

  3. This step performs the actual work. In the left branch, the CopySourceKeys step uses the copy_keys.py Lambda function to go through the list of source objects provided by the previous step, then copies any missing object into the destination bucket. Its sister step in the other branch, DeleteOrphanedKeys, uses its destination bucket key list to test whether each object from the destination bucket has a corresponding source object, then deletes any orphaned objects.

  4. The S3 ListObjects API action is designed to be scalable across many objects in a bucket. Therefore, it returns object lists in chunks of configurable size, along with a continuation token. If the API result has a continuation token, it means that there are more objects in this list. You can work from token to token to continue getting object list chunks, until you get no more continuation tokens.

By breaking down large amounts of work into chunks, you can make sure each chunk is completed within the timeframe allocated for the Lambda function, and within the maximum input/output data size for a Step Functions state.
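
To make the looping pattern concrete, here is a rough shell sketch of the continuation-token loop; the real work happens in the Lambda functions described above, and jq plus the bucket name are assumptions made only for this illustration:

# Sketch of the S3 ListObjects continuation-token loop; each iteration corresponds
# to one chunk handled by a Lambda task state in the state machine.
BUCKET=my-source-bucket-name
TOKEN=""
while true; do
  if [ -z "$TOKEN" ]; then
    RESP=$(aws s3api list-objects-v2 --bucket "$BUCKET" --max-keys 1000 --no-paginate)
  else
    RESP=$(aws s3api list-objects-v2 --bucket "$BUCKET" --max-keys 1000 \
           --continuation-token "$TOKEN" --no-paginate)
  fi
  # Process this chunk of keys here (copy missing objects or delete orphans).
  echo "$RESP" | jq -r '.Contents[]?.Key'
  TOKEN=$(echo "$RESP" | jq -r '.NextContinuationToken // empty')
  [ -z "$TOKEN" ] && break
done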

This approach comes with a slight tradeoff: the more objects you process at one time in a given chunk, the faster you are done, because there’s less overhead for managing individual chunks. On the other hand, if you process too many objects within the same chunk, you risk going over the time and space limits of the processing Lambda function or the Step Functions state, and the work cannot be completed.

In this particular case, use a Lambda function that maximizes the number of objects listed from the S3 bucket that can be stored in the input/output state data. This is currently up to 32,768 bytes, assuming (based on some experimentation) that the execution of the COPY/DELETE requests in the processing states can always complete in time.

A more sophisticated approach would use the Step Functions retry/catch state attributes to account for any time limits encountered and adjust the list size accordingly.

Step 6: Test for completion

Because the presence of a continuation token in the S3 ListObjects output signals that you are not done processing all objects yet, use a Choice state to test for its presence. If a continuation token exists, it branches into the UpdateSourceKeyList step, which uses the token to get to the next chunk of objects. If there is no token, you’re done. The state machine then branches into the FinishCopyBranch/FinishDeleteBranch state.

By using Choice states like this, you can create loops exactly like the old times, when you didn’t have for statements and used branches in assembly code instead!

Step 7: Success!

Finally, you’re done, and can step into your final Success state.

Lessons learned

When implementing this use case with Step Functions and Lambda, I learned the following things:

  • Sometimes, it is necessary to manipulate the JSON state of a Step Functions state machine with just a few lines of code that hardly seem to warrant their own Lambda function. This is ok, and the cost is actually pretty low given Lambda’s 100 millisecond billing granularity. The upside is that functions like these can be helpful to make the data more palatable for the following steps or for facilitating Choice states. An example here would be the combine_dicts.py function.
  • Pass states can be useful beyond debugging and tracing, they can be used to inject arbitrary values into your state JSON and guide generic Lambda functions into doing specific things.
  • Choice states are your friend because you can build while-loops with them. This allows you to reliably grind through large amounts of data with the patience of an engine that currently supports execution times of up to 1 year.

    Currently, there is an execution history limit of 25,000 events. Each Lambda task state execution takes up 5 events, while each choice state takes 2 events for a total of 7 events per loop. This means you can loop about 3500 times with this state machine. For even more scalability, you can split up work across multiple Step Functions executions through object key sharding or similar approaches.

  • It’s not necessary to spend a lot of time coding exception handling within your Lambda functions. You can delegate all exception handling to Step Functions and instead simplify your functions as much as possible.

  • Step Functions are great replacements for shell scripts. This could have been a shell script, but then I would have had to worry about where to execute it reliably, how to scale it if it went beyond a few thousand objects, etc. Think of Step Functions and Lambda as tools for scripting at a cloud level, beyond the boundaries of servers or containers. "Serverless" here also means "boundary-less".

Summary

This approach gives you scalability by breaking down any number of S3 objects into chunks, then using Step Functions to control logic to work through these objects in a scalable, serverless, and fully managed way.

To take a look at the code or tweak it for your own needs, use the code in the sync-buckets-state-machine GitHub repo.

To see more examples, please visit the Step Functions Getting Started page.

Enjoy!

NSA Insider Security Post-Snowden

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_insider_sec.html

According to a recently declassified report obtained under FOIA, the NSA’s attempts to protect itself against insider attacks aren’t going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department’s inspector general completed in 2016.

[…]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of “privileged” users, who have greater power to access the N.S.A.’s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative — called “Secure the Net” by the N.S.A. — had some successes, it “did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data.”

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 “Secure the Net” initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA’s computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD’s inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), “NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users.” The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, “NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach.”

There seem to be two possible explanations for the fact that the NSA couldn’t track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA’s crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

Three Men Sentenced Following £2.5m Internet Piracy Case

Post Syndicated from Andy original https://torrentfreak.com/three-men-sentenced-following-2-5m-internet-piracy-case-170622/

While legal action against low-level individual file-sharers is extremely rare in the UK, the country continues to pose a risk for those engaged in larger-scale infringement.

That is largely due to the activities of the Police Intellectual Property Crime Unit and private anti-piracy outfits such as the Federation Against Copyright Theft (FACT). Investigations are often a joint effort which can take many years to complete, but the outcomes can often involve criminal sentences.

That was the profile of another Internet piracy case that concluded in London this week. It involved three men from the UK, Eric Brooks, 43, from Bolton, Mark Valentine, 44, from Manchester, and Craig Lloyd, 33, from Wolverhampton.

The case began when FACT became aware of potentially infringing activity back in February 2011. The anti-piracy group then investigated for more than a year before handing the case to police in March 2012.

On July 4, 2012, officers from the City of London Police arrested Eric Brooks at his home in Bolton following a joint raid with FACT. Computer equipment was seized containing evidence that Brooks had been running a Netherlands-based server hosting more than £100,000 worth of pirated films, music, games, software and ebooks.

According to police, a spreadsheet on Brooks’ computer revealed he had hundreds of paying customers, all recruited from online forums. Using PayPal or utilizing bank transfers, each paid money to access the server. Police mentioned no group or site names in information released this week.

“Enquiries with PayPal later revealed that [Brooks] had made in excess of £500,000 in the last eight years from his criminal business and had in turn defrauded the film and TV industry alone of more than £2.5 million,” police said.

“As his criminal enterprise affected not only the film and TV but the wider entertainment industry including music, games, books and software it is thought that he cost the wider industry an amount much higher than £2.5 million.”

On the same day police arrested Brooks, Mark Valentine’s home in Manchester had a similar unwelcome visit. A day later, Craig Lloyd’s home in Wolverhampton became the third target for police.

Computer equipment was seized from both addresses which revealed that the pair had been paying for access to Brooks’ servers in order to service their own customers.

“They too had used PayPal as a means of taking payment and had earned thousands of pounds from their criminal actions; Valentine gaining £34,000 and Lloyd making over £70,000,” police revealed.

But after raiding the trio in 2012, it took more than four years to charge the men. In a feature common to many FACT cases, all three were charged with Conspiracy to Defraud rather than copyright infringement offenses. All three men pleaded guilty before trial.

On Monday, the men were sentenced at Inner London Crown Court. Brooks was sentenced to 24 months in prison, suspended for 12 months and ordered to complete 140 hours of unpaid work.

Valentine and Lloyd were each given 18 months in prison, suspended for 12 months. Each was ordered to complete 80 hours unpaid work.

Detective Constable Chris Glover, who led the investigation for the City of London Police, welcomed the sentencing.

“The success of this investigation is a result of co-ordinated joint working between the City of London Police and FACT. Brooks, Valentine and Lloyd all thought that they were operating under the radar and doing something which they thought was beyond the controls of law enforcement,” Glover said.

“Brooks, Valentine and Lloyd will now have time in prison to reflect on their actions and the result should act as deterrent for anyone else who is enticed by abusing the internet to the detriment of the entertainment industry.”

While even suspended sentences are a serious matter, none of the men will see the inside of a cell if they meet the conditions of their sentence for the next 12 months. For a case lasting four years involving such large sums of money, that is probably a disappointing result for FACT and the police.

Nevertheless, the men won’t be allowed to enjoy the financial proceeds of their piracy, if indeed any money is left. City of London Police say the trio will be subject to a future confiscation hearing to seize any proceeds of crime.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of the rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would allow you to impose a modest threshold on your login pages (to avoid brute-force password attacks) and allow a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5 minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5 minute interval, but the blacklisting goes into effect as soon as the limit is breached):
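If you would rather script these steps than click through the console, the condition and the rate-based rule can also be created with the classic WAF API. The following is a minimal boto3 sketch, not the console flow described above; the names, the /login target string, and the 2,000-request limit are illustrative only.

# Minimal sketch: a URI match condition plus a rate-based rule, using the
# classic WAF API via boto3. Names, target string, and the 2,000-request
# limit are illustrative.
import boto3

waf = boto3.client("waf")  # use "waf-regional" for regional resources

# Condition: requests whose URI starts with /login
token = waf.get_change_token()["ChangeToken"]
match_set = waf.create_byte_match_set(Name="LoginURI", ChangeToken=token)["ByteMatchSet"]

token = waf.get_change_token()["ChangeToken"]
waf.update_byte_match_set(
    ByteMatchSetId=match_set["ByteMatchSetId"],
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "ByteMatchTuple": {
            "FieldToMatch": {"Type": "URI"},
            "TargetString": b"/login",
            "TextTransformation": "LOWERCASE",
            "PositionalConstraint": "STARTS_WITH",
        },
    }],
)

# Rate-based rule keyed on source IP, scoped to the /login condition
token = waf.get_change_token()["ChangeToken"]
rule = waf.create_rate_based_rule(
    Name="ProtectLogin", MetricName="ProtectLogin",
    RateKey="IP", RateLimit=2000, ChangeToken=token,
)["Rule"]

token = waf.get_change_token()["ChangeToken"]
waf.update_rate_based_rule(
    RuleId=rule["RuleId"],
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "Predicate": {
            "Negated": False,
            "Type": "ByteMatch",
            "DataId": match_set["ByteMatchSetId"],
        },
    }],
    RateLimit=2000,
)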

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
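Scripted rather than clicked, the Web ACL piece of this walkthrough might look roughly like the sketch below, again using boto3’s classic WAF client with illustrative names. Attaching the ACL to a CloudFront distribution is then a matter of setting the distribution’s WebACLId, which is not shown here.

# Minimal sketch: create a Web ACL and activate the rate-based rule in it.
# The rule ID placeholder would come from the earlier create_rate_based_rule
# call; all names are illustrative.
import boto3

waf = boto3.client("waf")
rule_id = "RULE-ID-FROM-PREVIOUS-STEP"  # RuleId returned by create_rate_based_rule

token = waf.get_change_token()["ChangeToken"]
acl = waf.create_web_acl(
    Name="ProtectLoginACL", MetricName="ProtectLoginACL",
    DefaultAction={"Type": "ALLOW"}, ChangeToken=token,
)["WebACL"]

token = waf.get_change_token()["ChangeToken"]
waf.update_web_acl(
    WebACLId=acl["WebACLId"],
    ChangeToken=token,
    Updates=[{
        "Action": "INSERT",
        "ActivatedRule": {
            "Priority": 1,
            "RuleId": rule_id,
            "Action": {"Type": "BLOCK"},
            "Type": "RATE_BASED",
        },
    }],
)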

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

Sync vs. Backup vs. Storage

Post Syndicated from Yev original https://www.backblaze.com/blog/sync-vs-backup-vs-storage/

Cloud Sync vs. Cloud Backup vs. Cloud Storage

Google recently announced Backup and Sync, a new feature for Google Drive which allows users to select folders on their computer that they want to back up to their Google Drive account (note: these files count against your Google Drive storage limit). Whenever new backup services are announced, we get a lot of questions, so I thought we should take a minute to review the differences between cloud-based services.

What is the Cloud? Sync Vs Backup Vs Storage

There is still a lot of confusion in the space about what exactly the “cloud” is and how different services interact with it. When folks use a syncing and sharing service like Dropbox, Box, Google Drive, OneDrive or any of the others, they often assume those are acting as a cloud backup solution as well. Adding to the confusion, cloud storage services are often the backend for backup and sync services as well as standalone services. To help sort this out, we’ll define some of the terms below as they apply to a traditional computer set-up with a bunch of apps and data.

Cloud Sync (ex. Dropbox, iCloud Drive, OneDrive, Box, Google Drive) – these services sync folders on your computer to folders on other machines or to the cloud – allowing users to work from a folder or directory across devices. Typically these services have tiered pricing, meaning you pay for the amount of data you store with the service. If there is data loss, sometimes these services even have a rollback feature; of course, only files that are in the synced folders are available to be recovered.

Cloud Backup (ex. Backblaze Cloud Backup, Mozy, Carbonite) – these services work in the background automatically. The user does not need to take any action like setting up specific folders. Backup services typically back up any new or changed data on your computer to another location. Before the cloud took off, that location was primarily a CD or an external hard drive – but as cloud storage became more readily available it became the most popular storage medium. Typically these services have fixed pricing, and if there is a system crash or data loss, all backed up data is available for restore. In addition, these services have rollback features in case there is data loss / accidental file deletion.

Cloud Storage (ex. Backblaze B2, Amazon S3, Microsoft Azure) – these services are where many online backup and syncing and sharing services store data. Cloud storage providers typically serve as the endpoint for data storage. These services typically provide APIs, CLIs, and access points for individuals and developers to tie in their cloud storage offerings directly. These services are priced “per GB” meaning you pay for the amount of storage that you use. Since these services are designed for high-availability and durability, data can live solely on these services – though we still recommend having multiple copies of your data, just in case.
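To make the developer angle concrete, here is a tiny, hedged example of tying into a cloud storage API directly: it uploads one file to an Amazon S3 bucket with boto3, where the bucket and file names are placeholders. B2 and Azure expose analogous calls through their own SDKs.

# Minimal sketch of talking to a cloud storage API directly: upload one
# file to Amazon S3 with boto3. Bucket and file names are placeholders.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="family-photos-2017.zip",    # local file to store
    Bucket="my-offsite-copy",             # an existing bucket you own
    Key="backups/family-photos-2017.zip"  # object key inside the bucket
)
print("One more offsite copy stored.")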

What Should You Use?

Backblaze strongly believes in a 3-2-1 Backup Strategy. A 3-2-1 strategy means having at least 3 total copies of your data, 2 of which are local but on different mediums (e.g. an external hard drive in addition to your computer’s local drive), and at least 1 copy offsite. The best setup is data on your computer, a copy on a hard drive that lives somewhere not inside your computer, and another copy with a cloud backup provider. Backblaze Cloud Backup is a great complement to other services, like Time Machine, Dropbox, and even the free-tiers of cloud storage services.

What is The Difference Between Cloud Sync and Backup?

Let’s take a look at some sync setups that we see fairly frequently.

Example 1) Users have one folder on their computer that is designated for Dropbox, Google Drive, OneDrive, or one of the other syncing/sharing services. Users save or place data into those directories when they want them to appear on other devices. Often these users are using the free-tier of those syncing and sharing services and only have a few GB of data uploaded in them.

Example 2) Users are paying for extended storage for Dropbox, Google Drive, OneDrive, etc… and use those folders as the “Documents” folder – essentially working out of those directories. Files in that folder are available across devices, however, files outside of that folder (e.g. living on the computer’s desktop or anywhere else) are not synced or stored by the service.

What both examples are missing, however, is the backup of photos, movies, videos, and the rest of the data on their computer. That’s where cloud backup providers excel, by automatically backing up user data with little or no set-up, and no need for the dragging-and-dropping of files. Backblaze actually scans your hard drive to find all the data, regardless of where it might be hiding. The result is that all the user’s data is kept in the Backblaze cloud, and the portion of the data that is synced is also kept in that provider’s cloud – giving the user another layer of redundancy. Best of all, Backblaze will actually back up your Dropbox, iCloud Drive, Google Drive, and OneDrive folders.

Data Recovery

The most important feature to think about is how easy it is to get your data back from all of these services. With sync and share services, retrieving a lot of data, especially if you are in a high-data tier, can be cumbersome and take a while. Generally, the sync and share services only allow customers to download files over the Internet. If you are trying to download more than a couple of gigabytes of data, the process can take time and can be fraught with errors.

With cloud storage services, you can usually only retrieve data over the Internet as well, and you pay for both the storage and the egress of the data, so retrieving a large amount of data can be both expensive and time consuming.

Cloud backup services will enable you to download files over the internet too and can also suffer from long download times. At Backblaze we never want our customers to feel like we’re holding their data hostage, which is why we have a lot of restore options, including our Restore Return Refund policy, which allows people to restore their data via a USB Hard Drive, and then return that drive to us for a refund. Cloud sync providers do not provide this capability.

One popular data recovery use case we’ve seen when a person has a lot of data to restore is to download just the files that are needed immediately, and then order a USB Hard Drive restore for the remaining files that are not as time sensitive. The user gets all their files back in a few days, and their network is spared the download charges.

The bottom line is that all of these services have merit for different use-cases. Have questions about which is best for you? Sound off in the comments below!

The post Sync vs. Backup vs. Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

“Kodi Boxes Are a Fire Risk”: Awful Timing or Opportunism?

Post Syndicated from Andy original https://torrentfreak.com/kodi-boxes-are-a-fire-risk-awful-timing-or-opportunism-170618/

Anyone who saw the pictures this week couldn’t have failed to be moved by the plight of Londoners caught up in the Grenfell Tower inferno. The apocalyptic images are likely to stay with people for years to come and the scars for those involved may never heal.

As the building continued to smolder and the death toll increased, UK tabloids provided wall-to-wall coverage of the disaster. On Thursday, however, The Sun took a short break to put out yet another sensationalized story about Kodi. Given the week’s events, it was bound to raise eyebrows.

“HOT GOODS: Kodi boxes are a fire hazard because thousands of IPTV devices nabbed by customs ‘failed UK electrical standards’,” the headline reads.

Another sensational ‘Kodi’ headline

“It’s estimated that thousands of Brits have bought so-called Kodi boxes which can be connected to telly sets to stream pay-per-view sport and films for free,” the piece continued.

“But they could be a fire hazard, according to the Federation Against Copyright Theft (FACT), which has been nabbing huge deliveries of the devices as they arrive in the UK.”

As the image below shows, “Kodi box” fire hazard claims appeared next to images from other news articles about the huge London fire. While all separate stories, the pairing is not a great look.

A ‘Kodi Box’, as depicted in The Sun

FACT chief executive Kieron Sharp told The Sun that his group had uncovered two parcels of 2,000 ‘Kodi’ boxes and found that they “failed electrical safety standards”, making them potentially dangerous. While that may well be the case, the big question is all about timing.

It’s FACT’s job to reduce copyright infringement on behalf of clients such as The Premier League so it’s no surprise that they’re making a sustained effort to deter the public from buying these devices. That being said, it can’t have escaped FACT or The Sun that fire and death are extremely sensitive topics this week.

That leaves us with a few options including unfortunate opportunism or perhaps terrible timing, but let’s give the benefit of the doubt for a moment.

There’s a good argument that FACT and The Sun brought a valid issue to the public’s attention at a time when fire safety is on everyone’s lips. So, to give credit where it’s due, providing people with a heads-up about potentially dangerous devices is something that most people would welcome.

However, it’s difficult to offer congratulations on the PSA when the story as it appears in The Sun does nothing – absolutely nothing – to help people stay safe.

If some boxes are a risk (and that’s certainly likely given the level of Far East imports coming into the UK) which ones are dangerous? Where were they manufactured? Who sold them? What are the serial numbers? Which devices do people need to get out of their houses?

Sadly, none of these questions were answered or even addressed in the article, making it little more than scaremongering. Only making matters worse, the piece notes that it isn’t even clear how many of the seized devices are indeed a fire risk and that more tests need to be done. Is this how we should tackle such an important issue during an extremely sensitive week?

Timing and lack of useful information aside, one then has to question the terminology employed in the article.

As a piece of computer software, Kodi cannot catch fire. So, what we’re actually talking about here is small computers coming into the country without passing safety checks. The presence of Kodi on the devices – if indeed Kodi was even installed pre-import – is absolutely irrelevant.

Anti-piracy groups warning people of the dangers associated with their piracy habits is nothing new. For years, Internet users have been told that their computers will become malware infested if they share files or stream infringing content. While in some cases that may be true, there’s rarely any effort by those delivering the warnings to inform people on how to stay safe.

A classic example can be found in the numerous reports put out by the Digital Citizens Alliance in the United States. The DCA has produced several and no doubt expensive reports which claim to highlight the risks Internet users are exposed to on ‘pirate’ sites.

The DCA claims to do this in the interests of consumers but the group offers no practical advice on staying safe nor does it provide consumers with risk reduction strategies. Like many high-level ‘drug prevention’ documents shuffled around government, it could be argued that on a ‘street’ level their reports are next to useless.

Demonizing piracy is a well-worn and well-understood strategy but if warnings are to be interpreted as representing genuine concern for the welfare of people, they have to be a lot more substantial than mere scaremongering.

Anyone concerned about potentially dangerous devices can check out these useful guides from Electrical Safety First (pdf) and the Electrical Safety Council (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Mira, tiny robot of joyful delight

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mira-robot-alonso-martinez/

The staff of Pi Towers are currently melting into puddles while making ‘Aaaawwwwwww’ noises as Mira, the adorable little Pi-controlled robot made by Pixar 3D artist Alonso Martinez, steals their hearts.

Mira the robot playing peek-a-boo

If you want to get updates on Mira’s progress, sign up for the mailing list! http://eepurl.com/bteigD Mira is a desk companion that makes your life better one smile at a time. This project explores human robot interactivity and emotional intelligence. Currently Mira uses face tracking to interact with the users and loves playing the game “peek-a-boo”.

Introducing Mira

Honestly, I can’t type words – I am but a puddle! If I could type at all, I would only produce a stream of affectionate fragments. Imagine walking into a room full of kittens. What you would sound like is what I’d type.

No! I can do this. I’m a professional. I write for a living! I can…

SHE BLINKS OHMYAAAARGH!!!

Mira Alonso Martinez Raspberry Pi

Weebl & Bob meets South Park’s Ike Broflovski in an adorable 3D-printed bundle of ‘Aaawwwww’

Introducing Mira (I promise I can do this)

Right. I’ve had a nap and a drink. I’ve composed myself. I am up for this challenge. As long as I don’t look directly at her, I’ll be fine!

Here I go.

As one of the many über-talented 3D artists at Pixar, Alonso Martinez knows a thing or two about bringing adorable-looking characters to life on screen. However, his work left him wondering:

In movies you see really amazing things happening but you actually can’t interact with them – what would it be like if you could interact with characters?

So with the help of his friends Aaron Nathan and Vijay Sundaram, Alonso set out to bring the concept of animation to the physical world by building a “character” that reacts to her environment. His experiments with robotics started with Gertie, a ball-like robot reminiscent of his time spent animating bouncing balls when he was learning his trade. From there, he moved on to Mira.

Mira Alonso Martinez

Many, many of the views of this Tested YouTube video have come from me. So many.

Mira swivels to follow a person’s face, plays games such as peekaboo, shows surprise when you finger-shoot her, and giggles when you give her a kiss.

Mira’s inner workings

To get Mira to turn her head in three dimensions, Alonso took inspiration from the Microsoft Sidewinder Pro joystick he had as a kid. He purchased one on eBay, took it apart to understand how it worked, and replicated its mechanism for Mira’s Raspberry Pi-powered innards.

Mira Alonso Martinez

Alonso used the smallest components he could find so that they would fit inside Mira’s tiny body.

Mira’s axis of 3D-printed parts moves via tiny Power HD DSM44 servos, while a camera and OpenCV handle face-tracking, and a single NeoPixel provides a range of colours to indicate her emotions. As for the blinking eyes? Two OLED screens boasting acrylic domes fit within the few millimeters between all the other moving parts.
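Alonso hasn’t published Mira’s code in the post, but the face-tracking side can be approximated with the Haar cascade that ships with OpenCV. The sketch below is only an illustration of the idea: it finds a face in frames from an attached camera and maps its position to pan/tilt angles, using made-up servo ranges rather than Mira’s real ones.

# Rough sketch of OpenCV face tracking driving pan/tilt angles.
# The servo mapping values are invented for illustration; Mira's actual
# control code is not published in the post.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # first attached camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Normalise the face centre to the range -1..1
        cx = (x + w / 2) / frame.shape[1] * 2 - 1
        cy = (y + h / 2) / frame.shape[0] * 2 - 1
        pan = 90 + cx * 45   # hypothetical pan servo range
        tilt = 90 - cy * 30  # hypothetical tilt servo range
        print(f"pan={pan:.0f} deg, tilt={tilt:.0f} deg")
cap.release()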

More on Mira, including her history and how she works, can be found in this wonderful video released by Tested this week.

Pixar Artist’s 3D-Printed Animated Robots!

We’re gushing with grins and delight at the sight of these adorable animated robots created by artist Alonso Martinez. Sean chats with Alonso to learn how he designed and engineered his family of robots, using processes like 3D printing, mold-making, and silicone casting. They’re amazing!

You can also sign up for Alonso’s newsletter here to stay up-to-date about this little robot. Hopefully one of these newsletters will explain how to buy or build your own Mira, as I for one am desperate to see her adorable little face on my desk every day for the rest of my life.

The post Mira, tiny robot of joyful delight appeared first on Raspberry Pi.

Estefannie’s GPS-Controlled GoPro Photo Taker

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/estefannie-gopro-selfie/

Are you tired of having to take selfies physically? Do you only use your GoPro for the occasional beach vacation? Are you maybe even wondering what to do with the load of velcro you bought on a whim? Then we have good news for you: Estefannie‘s back to help you out with her Personal Automated GPS-Controlled Portable Photo Taker…PAGCPPT for short…or pagsssspt, if you like.

RASPBERRY PI + GPS CONTROLLED PHOTO TAKER

Hey World! Do you like vacation pictures but don’t like taking them? Make your own Personal Automated GPS Controlled Portable Photo Taker! The code, components, and instructions are in my Hackster.io account: https://www.hackster.io/estefanniegg/automated-gps-controlled-photo-taker-3fc84c For this build, I decided to put together a backpack to take pictures of me when I am close to places that I like.

The Personal Automated GPS-Controlled Portable Photo Taker

Try saying that five times in a row.

Go on. I’ll wait.

Using a Raspberry Pi 3, a GPS module, a power pack, and a GoPro plus GoPro Stick, Estefannie created the PAGCPPT as a means of automatically taking selfies at pre-specified tourist attractions across London.

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

There’s pie in my backpack too…but it’s a bit messy

With velcro and hot glue, she secured the tech in place on (and inside) a backpack. Then it was simply a case of programming her setup to take pictures while she walked around the city.

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

Making the GoPro…go

Estefannie made use of a GoPro API library to connect her GoPro to the Raspberry Pi via WiFi. With the help of this library, she wrote a Python script that made the GoPro take a photograph whenever her GPS module placed her within a ten-metre radius of a pre-selected landmark such as Tower Bridge, Abbey Road, or Platform 9 3/4.
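Her full script and parts list are on hackster.io (linked below), so the sketch here only shows the shape of the geofencing logic: compute the great-circle distance from the current GPS fix to each landmark and fire the camera inside a ten-metre radius. The coordinates are approximate, and trigger_gopro() is a placeholder for whatever call the GoPro API library provides.

# Sketch of the "within 10 m of a landmark" check. Coordinates are
# approximate; trigger_gopro() stands in for the real GoPro API call.
from math import radians, sin, cos, asin, sqrt

LANDMARKS = {
    "Tower Bridge": (51.5055, -0.0754),
    "Abbey Road":   (51.5320, -0.1778),
}
RADIUS_M = 10

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def trigger_gopro():
    print("Say cheese!")  # placeholder for the GoPro API library call

def check_position(lat, lon):
    for name, (llat, llon) in LANDMARKS.items():
        if haversine_m(lat, lon, llat, llon) <= RADIUS_M:
            trigger_gopro()
            print(f"Photo taken near {name}")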

Estefannie Explains it All Raspberry Pi GPS GoPro Camera

“Accio selfie.”

The full script, as well as details regarding the components she used for the project, can be found on her hackster.io page here.

Estefannie Explains it All

You’ll have noticed that we’ve covered Estefannie once or twice before on the Raspberry Pi blog. We love project videos that convey a sense of ‘Oh hey, I can totally build one of those!’, and hers always tick that box. They are imaginative, interesting, quirky, and to be totally honest with you, I’ve been waiting for this particular video since she hinted at it on her visit to Pi Towers in May. I got the inside scoop, yo!

What’s better than taking pictures? Not taking pictures. But STILL having pictures. I made a personal automated GPS controlled Portable Photo Taker ⚡ NEW VIDEO ALERT⚡ Link in bio.

1,351 Likes, 70 Comments – Estefannie Explains It All (@estefanniegg) on Instagram: “What’s better than taking pictures? Not taking pictures. But STILL having pictures. I made a…”

Make sure to follow her on YouTube and Instagram for more maker content and random shenanigans. And if you have your own maker social media channel, YouTube account, blog, etc, this is your chance to share it for the world to see in the comments below!

The post Estefannie’s GPS-Controlled GoPro Photo Taker appeared first on Raspberry Pi.

Encased in amber: meet the epoxy-embedded Pi

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/epoxy-pi-resin-io/

The maker of one of our favourite projects from this year’s Maker Faire Bay Area took the idea of an ’embedded device’ and ran with it: Ronald McCollam has created a wireless, completely epoxy-encased Pi build – screen included!

Resin.io in resin epoxy-encased Raspberry Pi

*cue epic music theme* “Welcome…to resin in resin.”

Just encase…

Of course, this build is not meant to be a museum piece: Ronald embedded a Raspberry Pi 3 with built-in wireless LAN and Bluetooth to create a hands-on demonstration of the resin.io platform, for which he is a Solution Architect. Resin.io is useful for remotely controlling groups of Linux-based IoT devices. In this case, Ronald used it to connect to the encased Pi. And yes, he named his make Resin-in-resin – we salute you, sir!

resin.io in resin epoxy-encased Raspberry Pi

“Life uh…finds a way.”

Before he started the practical part of his project, he did his research to find a suitable resin. He found that epoxy types specifically designed for encasing electronics are very expensive. In the end, Ronald tried out a cheap type, usually employed to coat furniture, by encasing an LED. It worked perfectly, and he went ahead to use this resin for embedding the Pi.

Bubbleshooting epoxy

This was the first time Ronald had worked with resin, so he learned some essential things about casting. He advises other makers to mix the epoxy very, very slowly to minimize the formation of bubbles; to try their hands on some small-scale casting attempts first; and to make sure they’re using a large enough mold for casting. Another thing to keep in mind is that some components of the make will heat up and expand while the device is running.

His first version of an encased Pi was still connected to the outside world by its USB cable:

Ronald McCollam on Twitter

Updates don’t get more “hands off” than a Raspberry Pi encased in epoxy — @resin_io inside resin! Come ask me about it at @DockerCon!

Not satisfied with this, he went on to incorporate an inductive charging coil as a power source, so that the Pi could be totally insulated in epoxy. The Raspberry Pi Foundation’s Matt Richardson got a look at the finished project at Maker Faire Bay Area:

MattRichardson🏳️‍🌈 on Twitter

If you’re at @makerfaire, you must check out what @resin_io is showing. A @Raspberry_Pi completely enclosed in resin. Completely wireless. https://t.co/djVjoLz3hI

MAGNETS!

The charging coil delivers enough power to keep the Pi running for several hours, but it doesn’t allow secure booting. After some head-scratching, Ronald came up with a cool solution to this problem: he added a battery and a magnetic reed switch. He explains:

[The] boot process is to use the magnetic switch to turn off the Pi, put it on the charger for a few minutes to allow the battery to charge up, then remove the magnet so the Pi boots.

Pi in resin controlled by resin.io

“God help us, we’re in the hands of engineers.”

He talks about his build on the resin.io blog, and has provided a detailed project log on Hackaday. For those of you who want to recreate this project at home, Ronald has even put together an Adafruit wishlist of the necessary components.

Does this resin-ate with you?

What’s especially great about Ronald’s posts is that they’re full of helpful tips about getting started with using epoxy resin in your digital making projects. So whether you’re keen to build your own wireless Pi, or just generally interested in embedding electronic components in resin, you’ll find his write-ups useful.

If you have experience in working with epoxy and electronic devices and want to share what you’ve learned, please do so in the comments!

The post Encased in amber: meet the epoxy-embedded Pi appeared first on Raspberry Pi.

GameTale

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2060

Are you the parent of a child who is a few years old?

Do you want to teach your little kid to love books, while all she or he wants is games?

There is now a way to have both!

Sure, there are a lot of gamebooks, but they are targeted at teenagers. Let me tell you about one that was written for children between three and nine years old.

It is the tale of Gremmy – the little gremlin who goes on a big adventure. He might climb The Big Mountain, or travel down The Deep River. He will venture into The Enchanted Forest, unless you go with him into The Dark Cave. He will meet magical creatures and face ingenious choices…

It is a tale you can read to your kids. Lead them through a kingdom of magic and wonder, introduce them to its inhabitants, and let them make their own choices and see the funny and witty results. Nurture their curiosity and imagination, while also teaching them wise and important things.

The author – Nikola Raykov – is the youngest writer ever to win the most prestigious award for children’s literature in Bulgaria. The number of copies sold in Bulgarian is higher than is typical for a book by Stephen King or Paulo Coelho! It has since also been published in Russian, Italian, and Latvian. And now you can have the English translation.

Most gamebooks have few illustrations, typically black-and-white ones. GameTale is full of excellent true-color illustrations, as a book for children should be. And it provides not only entertainment, but also value.

Don’t believe it? Take a look yourself – the entire book is freely available on the author’s website, even before it is printed – to read and play, to download and enjoy, like all of its translations and the Bulgarian original. Yes, all those sales happened while the book was available to everybody. The readers’ ability to see what they are buying has been its best advertisement.

Here is what the writer says:

“I believe it would be cruel if children weren’t able to enjoy my books because their parents could not afford them, and children’s authors should not be cruel. They should be gentle, caring and loving. The values we write about should not be just words on paper. We should be the living and breathing examples of those values, because what we write HAS to be true. Every good author will tell you that you cannot lie to your readers (or little listeners). They will catch you in a second. When you read a book, you can actually feel if the author is being honest about his or her inner self.”

“I DO believe that people are inherently good. If you have poured your heart into something, if you have tried your best, people will feel that and give you their unconditional support. There is no need to hide your work: people are not thieves! If you share, they will care, they will follow you, they will nag you about when your next book comes out, and yes, they will gladly support you because they will know that their children’s favorite author actually believes in the values he’s writing about. The same things they believe in – friendship, love and freedom!”

Nikola has started a campaign on Kickstarter. Its goal is to fund the printing of 1,000 copies of the book in English. And for your donations, you get things your kid will love!

Years ago, when I read this book, I felt like a kid. And now I envy you a little for the joy that you will get from it. 🙂 Do give it a try. There is nothing to lose, and a lot to win!

Raspberry Pi Looper-Synth-Drum…thing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-looper/

To replace his iPad for live performance, Colorado-based musician Toby Hendricks built a looper, complete with an impressive internal sound library, all running on a Raspberry Pi.

Raspberry Pi Looper/synth/drum thing

Check out the guts here: https://youtu.be/mCOHFyI3Eoo My first venture into raspberry pi stuff. Running a custom pure data patch I’ve been working on for a couple years on a Raspberry Pi 3. This project took a couple months and I’m still tweaking stuff here and there but it’s pretty much complete, it even survived it’s first live show!

Toby’s build is a pretty mean piece of kit, as this video attests. Not only does it have a multitude of uses, but the final build is beautiful. Do make sure to watch to the end of the video for a wonderful demonstration of the kit.

Inside the Raspberry Pi looper

Alongside the Raspberry Pi and Behringer U-Control sound card, Toby used Pure Data, a multimedia visual programming language, and a Teensy 3.6 processor to complete the build. Together, these allow for playback of a plethora of sounds, which can either be internally stored, or externally introduced via audio connectors along the back.

This guy is finally taking shape. DIY looper/fx box/sample player/synth. #teensy #arduino #raspberrypi #puredata

98 Likes, 6 Comments – otem rellik (@otem_rellik) on Instagram: “This guy is finally taking shape. DIY looper/fx box/sample player/synth. #teensy #arduino…”

Delay, reverb, distortion, and more are controlled by sliders along one side, while pre-installed effects are selected and played via some rather beautiful SparkFun buttons on the other. Loop buttons, volume controls, and a repurposed Nintendo DS screen complete the interface.

Raspberry Pi Looper Guts

Thought I’d do a quick overview of the guts of my pi project. Seems like many folks have been interested in seeing what the internals look like.

Code for the looper can be found on Toby’s GitHub here. Make sure to continue to follow him via YouTube and Instagram for updates on the build, including these fancy new buttons.

Casting my own urethane knobs and drum pads from 3D printed molds! #3dprinted #urethanecasting #diy

61 Likes, 4 Comments – otem rellik (@otem_rellik) on Instagram: “Casting my own urethane knobs and drum pads from 3D printed molds! #3dprinted #urethanecasting #diy”

I got the music in me

If you want to get musical with a Raspberry Pi, but the thought of recreating Toby’s build is a little daunting, never fear! Our free GPIO Music Box resource will help get you started. And projects such as Mike Horne’s fabulous Raspberry Pi music box should help inspire you to take your build further.
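To give you a flavour of how simple the starting point can be, here is a minimal sketch in the spirit of the GPIO Music Box: press a button wired to GPIO 17 and a sample plays. The pin number and the sample file are placeholders, not anything from Toby’s build.

# Minimal "button plays a sound" sketch, in the spirit of the GPIO Music Box.
# GPIO pin and sample file are placeholders.
from gpiozero import Button
from signal import pause
import pygame

pygame.mixer.init()
drum = pygame.mixer.Sound("drum.wav")  # any short sample will do

def play_drum():
    drum.play()

button = Button(17)             # button wired between GPIO 17 and GND
button.when_pressed = play_drum

pause()                         # keep the script alive, waiting for presses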

Raspberry Pi Looper post image of Mike Horne's music box

Mike’s music box boasts wonderful flashy buttons and turny knobs for ultimate musical satisfaction!

If you use a Raspberry Pi in any sort of musical adventure, be sure to share your project in the comments below!

 

 

The post Raspberry Pi Looper-Synth-Drum…thing appeared first on Raspberry Pi.

Some non-lessons from WannaCry

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/some-non-lessons-from-wannacry.html

This piece by Bruce Schneier needs debunking. I thought I’d list the things wrong with it.

The NSA 0day debate

Schneier’s description of the problem is deceptive:

When the US government discovers a vulnerability in a piece of software, however, it decides between two competing equities. It can keep it secret and use it offensively, to gather foreign intelligence, help execute search warrants, or deliver malware. Or it can alert the software vendor and see that the vulnerability is patched, protecting the country — and, for that matter, the world — from similar attacks by foreign governments and cybercriminals. It’s an either-or choice.

The government doesn’t “discover” vulnerabilities accidentally. Instead, when the NSA has a need for something specific, it acquires the 0day, either through internal research or (more often) buying from independent researchers.

The value of something is what you are willing to pay for it. If the NSA comes across a vulnerability accidentally, then the value to them is nearly zero. Obviously such vulns should be disclosed and fixed. Conversely, if the NSA is willing to pay $1 million to acquire a specific vuln for imminent use against a target, the offensive value is much greater than the fix value.

What Schneier is doing is deliberately confusing the two, combining the policy for accidentally found vulns with deliberately acquired vulns.

The above paragraph should read instead:

When the government discovers a vulnerability accidentally, it then decides to alert the software vendor to get it patched. When the government decides it needs a vuln for a specific offensive use, it acquires one that meets its needs, uses it, and keeps it secret. After spending so much money acquiring an offensive vuln, it would obviously be stupid to change this decision and not use it offensively.

Hoarding vulns

Schneier also says the NSA is “hoarding” vulns. The word has a couple inaccurate connotations.
One connotation is that the NSA is putting them on a heap inside a vault, not using them. The opposite is true: the NSA only acquires vulns for which it has an active need. It uses pretty much all the vulns it acquires. That can be seen in the ShadowBroker dump: all the vulns listed are extremely useful to attackers, especially ETERNALBLUE. Efficiency is important to the NSA. Your efficiency is your basis for promotion. There are other people who make their careers finding waste in the NSA. If you are hoarding vulns and not using them, you’ll quickly get ejected from the NSA.
Another connotation is that the NSA is somehow keeping the vulns away from vendors. That’s like saying I’m hoarding naked selfies of myself. Yes, technically I’m keeping them away from you, but it’s not like they ever belonged to you in the first place. The same is true of the NSA. Had it never acquired the ETERNALBLUE 0day, it never would’ve been researched, never found.

The VEP

Schneier describes the “Vulnerability Equities Process” or “VEP”, a process that is supposed to manage the vulnerabilities the government gets.

There’s no evidence the VEP process has ever been used, at least not with 0days acquired by the NSA. The VEP allows exceptions for important vulns, and all the NSA vulns are important, so all are excepted from the process. Since the NSA is in charge of the VEP, of course, this is at the sole discretion of the NSA. Thus, the entire point of the VEP process goes away.

Moreover, it can’t work in many cases. The vulns acquired by the NSA often come with clauses that mean they can’t be shared.

New classes of vulns

One reason sellers forbid 0days from being shared is because they use new classes of vulnerabilities, such that sharing one 0day will effectively ruin a whole set of vulnerabilities. Schneier poo-poos this because he doesn’t see new classes of vulns in the ShadowBroker set.
This is wrong for two reasons. The first is that the ShadowBroker 0days are incomplete. There are no iOS exploits, for example, and we know that iOS is a big target of the NSA.
Secondly, I’m not sure we’ve sufficiently analyzed the ShadowBroker exploits yet to realize there may be a new class of vuln. It’s easy to miss the fact that a single bug we see in the dump may actually be a whole new class of vulnerability. In the past, it’s often been the case that a new class was named only after finding many examples.
In any case, Schneier misses the point by denying that new classes of vulns exist. He should instead use the point to prove the value of disclosure: instead of playing whack-a-mole fixing bugs one at a time, vendors would be able to fix whole classes of bugs at once.

Rediscovery

Schneier cites two studies that looked at how often vulnerabilities get rediscovered. In other words, he’s trying to measure the likelihood that some other government will find the bug and use it against us.
These studies are weak, scarcely better than anecdotal evidence. Schneier’s own study seems almost unrelated to the problem, and the Rand study cannot be replicated, as it relies upon private data. Also, there is little differentiation between important bugs (like SMB/MSRPC exploits and full-chain iOS exploits) and lesser bugs.
Whether from the Rand study or from anecdotes, we have good reason to believe that the longer an 0day exists, the less likely it’ll be rediscovered. Schneier argues that vulns should only be used for 6 months before being disclosed to a vendor. Anecdotes suggest otherwise, that if it hasn’t been rediscovered in the first year, it likely won’t ever be.
The Rand study was overwhelmingly clear on the issue that 0days are dramatically more likely to become obsolete than be rediscovered. The latest update to iOS will break an 0day, rather than somebody else rediscovering it. Win10 adoption will break older SMB exploits faster than rediscovery.
In any case, this post is about ETERNALBLUE specifically. What we learned from this specific bug is that it was used for at least 5 years without anybody else rediscovering it (before it was leaked). Chances are good it never would’ve been rediscovered, just made obsolete by Win10.

Notification is notification

All disclosure has the potential of leading to worms like WannaCry. The Conficker worm of 2008, for example, was written after Microsoft patched the underlying vulnerability.
Thus, had the NSA disclosed the bug in the normal way, chances are good it still would’ve been used for worming ransomware.
Yes, WannaCry had a head-start because ShadowBrokers published a working exploit, but this doesn’t appear to have made a difference. The Blaster worm (the first worm to compromise millions of computers) took roughly the same amount of time to create, and almost no details were made public about the vulnerability, other than the fact it was patched. (I know from personal experience — we used diff to find what changed in the patch in order to reverse engineer the 0day).
In other words, the damage the NSA is responsible for isn’t really the damage that came after it was patched — that was likely to happen anyway, as it does with normal vuln disclosure. Instead, the only damage the NSA can truly be held responsible for is the damage ahead of time, such as the months (years?) the ShadowBrokers possessed the exploits before they were patched.

Disclosed doesn’t mean fixed

One thing we’ve learned from 30 years of disclosure is that vendors ignore bugs.
We’ve gotten to the state where a few big companies like Microsoft and Apple will actually fix bugs, but the vast majority of vendors won’t. Even Microsoft and Apple have been known to sit on tricky bugs for over a year before fixing them.
And the only reason Microsoft and Apple have gotten to this state is because we, the community, bullied them into it. When we disclose bugs to them, we give them a deadline after which we make the bug public, whether or not it’s been fixed.
The same goes for the NSA. If they quietly disclose bugs to vendors, in general, they won’t be fixed unless the NSA also makes the bug public within a certain time frame. Either Schneier has to argue that the NSA should do such public full-disclosures, or argue that disclosures won’t always lead to fixes.

Replacement SMB/MSRPC

The ETERNALBLUE vuln is so valuable to the NSA that it’s almost certainly seeking a replacement.
Again, I’m trying to debunk the impression Schneier tries to form that somehow the NSA stumbled upon ETERNALBLUE by accident to begin with. The opposite is true: remote exploits for the SMB (port 445) or MSRPC (port 135) services are some of the most valuable vulns, and the NSA will work hard to acquire them.

That it was leaked

The only issue here is that the 0day leaked. If the NSA can’t keep its weaponized toys secret, then maybe it shouldn’t have them.
Instead of processing this new piece of information, which is important, Schneier takes this opportunity to just re-hash the old inaccurate and deceptive VEP debate.

Conclusion

Except for a tiny number of people working for the NSA, none of us really know what’s going on with 0days inside government. Schneier’s comments seem more off-base than most. Like all activists, he deliberately uses language to deceive rather than explain (like “discover” instead of “acquire”). Like all activists, he seems obsessed with the VEP, even though as far as anybody can tell, it’s not used for NSA-acquired vulns. He deliberately ignores things he should be an expert in, such as how all patches/disclosures sometimes lead to worms/exploits, and how not all disclosure leads to fixes.

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething

12 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…”

A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch, build a payload with onboard Raspberry Pi and Camera Module, and provide teachers with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi

332 Likes, 5 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. . We have our landing point and are now rushing to…”

I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude

13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…”

You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?

597 Likes, 3 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard…”

And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!” but have also inspired me to do more with myself, my time, and my growing maker skills.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first and leave me wondering how their makers got them working in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.

We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube

  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi

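The caption above describes how the Gif Cam behaves: a button press captures eight stills and stitches them into an animated GIF. For anyone wanting to try something along the same lines, here is a minimal Python sketch of that idea. It is not the code from my build; it assumes a button wired to GPIO 17, the picamera library, and Pillow for assembling the GIF, and the file paths under /home/pi are purely illustrative.

from gpiozero import Button
from picamera import PiCamera
from PIL import Image
from time import sleep

button = Button(17)          # hypothetical wiring: button between GPIO17 and GND
camera = PiCamera()
camera.resolution = (640, 480)

def capture_gif():
    # Capture eight still frames, then stitch them into one animated GIF.
    paths = []
    for i in range(8):
        path = "/home/pi/frame{:02d}.jpg".format(i)
        camera.capture(path)
        paths.append(path)
        sleep(0.2)
    frames = [Image.open(p) for p in paths]
    frames[0].save("/home/pi/animation.gif", save_all=True,
                   append_images=frames[1:], duration=150, loop=0)

# Wait for button presses and make a new GIF on each press.
while True:
    button.wait_for_press()
    capture_gif()

Run it on the Pi, press the button, and animation.gif appears alongside the captured frames; the duration parameter controls how quickly the GIF plays back.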

The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

WannaCry and Vulnerabilities

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/wannacry_and_vu.html

There is plenty of blame to go around for the WannaCry ransomware that spread throughout the Internet earlier this month, disrupting work at hospitals, factories, businesses, and universities. First, there are the writers of the malicious software, which blocks victims’ access to their computers until they pay a fee. Then there are the users who didn’t install the Windows security patch that would have prevented an attack. A small portion of the blame falls on Microsoft, which wrote the insecure code in the first place. One could certainly condemn the Shadow Brokers, a group of hackers with links to Russia who stole and published the National Security Agency attack tools that included the exploit code used in the ransomware. But before all of this, there was the NSA, which found the vulnerability years ago and decided to exploit it rather than disclose it.

All software contains bugs or errors in the code. Some of these bugs have security implications, granting an attacker unauthorized access to or control of a computer. These vulnerabilities are rampant in the software we all use. A piece of software as large and complex as Microsoft Windows will contain hundreds of them, maybe more. These vulnerabilities have obvious criminal uses that can be neutralized if patched. Modern software is patched all the time — either on a fixed schedule, such as once a month with Microsoft, or whenever required, as with the Chrome browser.

When the US government discovers a vulnerability in a piece of software, however, it decides between two competing equities. It can keep it secret and use it offensively, to gather foreign intelligence, help execute search warrants, or deliver malware. Or it can alert the software vendor and see that the vulnerability is patched, protecting the country — and, for that matter, the world — from similar attacks by foreign governments and cybercriminals. It’s an either-or choice. As former US Assistant Attorney General Jack Goldsmith has said, “Every offensive weapon is a (potential) chink in our defense — and vice versa.”

This is all well-trod ground, and in 2010 the US government put in place an interagency Vulnerabilities Equities Process (VEP) to help balance the trade-off. The details are largely secret, but a 2014 blog post by then-President Barack Obama’s cybersecurity coordinator, Michael Daniel, laid out the criteria that the government uses to decide when to keep a software flaw undisclosed. The post’s contents were unsurprising, listing questions such as “How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the US economy, and/or in national security systems?” and “Does the vulnerability, if left unpatched, impose significant risk?” They were balanced by questions like “How badly do we need the intelligence we think we can get from exploiting the vulnerability?” Elsewhere, Daniel has noted that the US government discloses to vendors the “overwhelming majority” of the vulnerabilities that it discovers — 91 percent, according to NSA Director Michael S. Rogers.

The particular vulnerability in WannaCry is code-named EternalBlue, and it was discovered by the US government — most likely the NSA — sometime before 2014. The Washington Post reported both how useful the bug was for attack and how much the NSA worried about it being used by others. It was a reasonable concern: many of our national security and critical infrastructure systems contain the vulnerable software, which imposed significant risk if left unpatched. And yet it was left unpatched.

There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue, or the Cisco vulnerabilities that the Shadow Brokers leaked last August, to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

Perhaps the NSA thought that no one else would discover EternalBlue. That’s another one of Daniel’s criteria: “How likely is it that someone else will discover the vulnerability?” This is often referred to as NOBUS, short for “nobody but us.” Can the NSA discover vulnerabilities that no one else will? Or are vulnerabilities discovered by one intelligence agency likely to be discovered by another, or by cybercriminals?

In the past few months, the tech community has acquired some data about this question. In one study, two colleagues from Harvard and I examined over 4,300 disclosed vulnerabilities in common software and concluded that 15 to 20 percent of them are rediscovered within a year. Separately, researchers at the Rand Corporation looked at a different and much smaller data set and concluded that fewer than six percent of vulnerabilities are rediscovered within a year. The questions the two papers ask are slightly different and the results are not directly comparable (we’ll both be discussing these results in more detail at the Black Hat Conference in July), but clearly, more research is needed.

People inside the NSA are quick to discount these studies, saying that the data don’t reflect their reality. They claim that there are entire classes of vulnerabilities the NSA uses that are not known in the research world, making rediscovery less likely. This may be true, but the evidence we have from the Shadow Brokers is that the vulnerabilities that the NSA keeps secret aren’t consistently different from those that researchers discover. And given the alarming ease with which both the NSA and CIA are having their attack tools stolen, rediscovery isn’t limited to independent security research.

But even if it is difficult to make definitive statements about vulnerability rediscovery, it is clear that vulnerabilities are plentiful. Any vulnerabilities that are discovered and used for offense should only remain secret for as short a time as possible. I have proposed six months, with the right to appeal for another six months in exceptional circumstances. The United States should satisfy its offensive requirements through a steady stream of newly discovered vulnerabilities that, when fixed, also improve the country’s defense.

The VEP needs to be reformed and strengthened as well. A report from last year by Ari Schwartz and Rob Knake, who both previously worked on cybersecurity policy at the White House National Security Council, makes some good suggestions on how to further formalize the process, increase its transparency and oversight, and ensure periodic review of the vulnerabilities that are kept secret and used for offense. This is the least we can do. A bill recently introduced in both the Senate and the House calls for this and more.

In the case of EternalBlue, the VEP did have some positive effects. When the NSA realized that the Shadow Brokers had stolen the tool, it alerted Microsoft, which released a patch in March. This prevented a true disaster when the Shadow Brokers exposed the vulnerability on the Internet. It was only unpatched systems that were susceptible to WannaCry a month later, including versions of Windows so old that Microsoft normally didn’t support them. Although the NSA must take its share of the responsibility, no matter how good the VEP is, or how many vulnerabilities the NSA reports and the vendors fix, security won’t improve unless users download and install patches, and organizations take responsibility for keeping their software and systems up to date. That is one of the important lessons to be learned from WannaCry.

This essay originally appeared in Foreign Affairs.