Tag Archives: Uncategorized

Friday Squid Blogging: Climate Change Causing “Squid Bloom” along Pacific Coast

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/friday-squid-blogging-climate-change-causing-squid-bloom-along-pacific-coast.html

The oceans are warmer, which means more squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

On the Irish Health Services Executive Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/on-the-irish-health-services-executive-hack.html

A detailed report of the 2021 ransomware attack against Ireland’s Health Services Executive lists some really bad security practices:

The report notes that:

  • The HSE did not have a Chief Information Security Officer (CISO) or a “single responsible owner for cybersecurity at either senior executive or management level to provide leadership and direction.”
  • It had no documented cyber incident response runbooks or IT recovery plans (apart from documented AD recovery plans) for recovering from a wide-scale ransomware event.
  • Under-resourced Information Security Managers were not performing their business as usual role (including a NIST-based cybersecurity review of systems) but were working on evaluating security controls for the COVID-19 vaccination system. Antivirus software triggered numerous alerts after detecting Cobalt Strike activity but these were not escalated. (The antivirus server was later encrypted in the attack).
  • There was no security monitoring capability that was able to effectively detect, investigate and respond to security alerts across HSE’s IT environment or the wider National Healthcare Network (NHN).
  • There was a lack of effective patching (updates, bug fixes etc.) across the IT estate and reliance was placed on a single antivirus product that was not monitored or effectively maintained with updates across the estate. (The initial workstation attacked had not had antivirus signatures updated for over a year.)
  • Over 30,000 machines were running Windows 7 (out of support since January 2020).
  • The initial breach came after a HSE staff member interacted with a malicious Microsoft Office Excel file attached to a phishing email; numerous subsequent alerts were not effectively investigated.

PwC’s crisp list of recommendations in the wake of the incident, as well as detail on the business impact of the HSE ransomware attack, may prove highly useful guidance on best practice for IT professionals looking to set up a security programme and get it funded.

Get an easy start to coding with our new free online course

Post Syndicated from Michael Conterio original https://www.raspberrypi.org/blog/learn-to-code-new-free-online-course-scratch-programming/

Are you curious about coding and computer programming but don’t know how to begin? Do you want to help your children at home, or learners in your school, with their digital skills, but you’re not very confident yet? Then our new, free, on-demand online course Introduction to Programming with Scratch is a fun, creative, and colourful starting point for you.

An illustration of Scratch coding.

Being able to code can help you do lots of things — from expressing yourself to helping others practice their skills, and from highlighting real-world issues to controlling a robot. Whether you want to get a taste of what coding is about, or you want to learn so that you can support young people, our Introduction to Programming with Scratch course is the perfect place to start if you’ve never tried any coding before.

Scratch course presenters Vasu and Mark.
Your course presenters, Vasu and Mark.

On this on-demand course, Mark and Vasu from our team will help you take your very first steps on your programming journey. 

You can code — we’ll show you how

On the course, you’ll use the programming language Scratch, a beginner-friendly, visual programming language particularly suitable for creating animations and games. All you need is our course and a computer or tablet with a web browser and internet connection that can access the online Scratch editor.

You can code in Scratch without having to memorise and type in commands. Instead, by snapping blocks together, you’ll take control of ‘sprites’, which are characters and objects on the screen that you can move around with the code you create.

A video of what Scratch coding looks like.
This is how you build Scratch programs.

As well as learning what you can do with Scratch, you’ll be learning basic programming concepts that are the same for all programming languages. You’ll see how the order of commands is important (sequencing), you’ll make the computer repeat actions (repetition), and you’ll write programs that do different things in different circumstances, for example responding to your user’s actions (selection). Later on, you’ll also make your own reusable code blocks (abstraction).

You can create your own programs and share them

Throughout the course you’ll learn to make your own programs step by step. In the final week, Mark and Vasu will show you how you can create musical projects and interact with your program using a webcam.

A Scratch coding project.
By the end of the course, you will create a program to control a Scratch character using your live webcam video.

Vasu and Mark will encourage you to share your programs and join the Scratch online community. You will discover how you can explore other people’s Scratch programs for inspiration and support, and how to build on the code they’ve created.

A Scratch coding project.
Thousands of people share their projects in the Scratch online community — you could be one of them.

Sign up for the course now!

The course starts for the first time on Monday 14 February, but it is available on demand, so you can join it at any time. You’ll get four weeks’ access to the course no matter when you sign up.

For the first four weeks that the course is available, and every three months after that, people from our team will join in to support you and help answer your questions in the comments sections.

If you’re a teacher in England, get free extended access by signing up through Teach Computing here.

And if you want to do more Scratch coding…

You can find more free resources here! These are the newest Scratch pathways on our project site, which you can also share with the young people in your life:

The post Get an easy start to coding with our new free online course appeared first on Raspberry Pi.

Breaking 256-bit Elliptic Curve Encryption with a Quantum Computer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/breaking-245-bit-elliptic-curve-encryption-with-a-quantum-computer.html

Researchers have calculated the quantum computer size necessary to break 256-bit elliptic curve public-key cryptography:

Finally, we calculate the number of physical qubits required to break the 256-bit elliptic curve encryption of keys in the Bitcoin network within the small available time frame in which it would actually pose a threat to do so. It would require 317 × 10^6 physical qubits to break the encryption within one hour using the surface code, a code cycle time of 1 μs, a reaction time of 10 μs, and a physical gate error of 10^-3. To instead break the encryption within one day, it would require 13 × 10^6 physical qubits.

In other words: no time soon. Not even remotely soon. IBM’s largest superconducting quantum computer to date has 127 physical qubits.

Amy Zegart on Spycraft in the Internet Age

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/amy-zegart-on-spycraft-in-the-internet-age.html

Amy Zegart has a new book: Spies, Lies, and Algorithms: The History and Future of American Intelligence. Wired has an excerpt:

In short, data volume and accessibility are revolutionizing sensemaking. The intelligence playing field is leveling — and not in a good way. Intelligence collectors are everywhere, and government spy agencies are drowning in data. This is a radical new world and intelligence agencies are struggling to adapt to it. While secrets once conferred a huge advantage, today open source information increasingly does. Intelligence used to be a race for insight where great powers were the only ones with the capabilities to access secrets. Now everyone is racing for insight and the internet gives them tools to do it. Secrets still matter, but whoever can harness all this data better and faster will win.

The third challenge posed by emerging technologies strikes at the heart of espionage: secrecy. Until now, American spy agencies didn’t have to interact much with outsiders, and they didn’t want to. The intelligence mission meant gathering secrets so we knew more about adversaries than they knew about us, and keeping how we gathered secrets a secret too.

[…]

In the digital age, however, secrecy is bringing greater risk because emerging technologies are blurring nearly all the old boundaries of geopolitics. Increasingly, national security requires intelligence agencies to engage the outside world, not stand apart from it.

I have not yet read the book.

Friday Squid Blogging: Are Squid from Another Planet?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/friday-squid-blogging-are-squid-from-another-planet.html

An actually serious scientific journal has published a paper speculating that octopus and squid could be of extraterrestrial origin.

News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

How to mount Linux volume and keep mount point consistency

Post Syndicated from limillan original https://aws.amazon.com/blogs/compute/how-to-mount-linux-volume-and-keep-mount-point-consistency/

This post is written by: Leonardo Azize Martins, Cloud Infrastructure Architect, Professional Services

Customers often use Amazon Elastic Compute Cloud (Amazon EC2) Linux-based instances with many Amazon Elastic Block Store (Amazon EBS) volumes attached. In this case, the device name can vary depending on factors such as virtualization type, instance type, or operating system. Because the device name can change, customers shouldn’t rely on it when mounting volumes. They want to avoid a volume being mounted on a different mount point just because the device name changed, or a mount failing because the device naming pattern changed.

Customers who want to utilize the latest instance family usually change the instance type when a new one is available. The device name can differ between instance families, such as T2 and T3: T2 uses /dev/sd[a-z], while T3 uses /dev/nvme[0-26]n1. If you mount a device called /dev/sdc on a T2 instance and then change the instance family to T3, the same device won’t be called /dev/sdc anymore, and it will fail to mount.

Amazon EBS volumes are exposed as NVMe block devices on instances built on the AWS Nitro System. The block device driver can assign NVMe device names in a different order than what you specified for the volumes in the block device mapping. In this situation, a device that should be mounted on /data could end-up being mounted on /logs.

On Linux, you can use the fstab file to mount devices using kernel name descriptors (the traditional way), file system labels, or the file system UUID. Kernel name descriptors aren’t persistent and can change each boot. Therefore, they shouldn’t be used in configuration files.
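For illustration, here is how the same data volume could be referenced in /etc/fstab using each of the three methods; the UUID and label values are hypothetical, and only the UUID form is robust across reboots and instance type changes:

# kernel name descriptor - not persistent, avoid in /etc/fstab
/dev/xvdb                                  /mnt/disk1  xfs  defaults,noatime  0  0
# filesystem label - persistent, but labels must be set and kept unique
LABEL=data1                                /mnt/disk1  xfs  defaults,noatime  0  0
# filesystem UUID - persistent, generated automatically when the filesystem is created
UUID=2c160dd6-586c-4258-84bb-c79933b9ae02  /mnt/disk1  xfs  defaults,noatime  0  0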

UUID is a mechanism for giving each filesystem a unique identifier. These identifiers’ attributes are generated by filesystem utilities (mkfs.*) when the device is formatted, and they’re designed so that collisions are unlikely. All GNU/Linux filesystems (including swap and LUKS headers of raw encrypted devices) support UUID.

As UUID is a filesystem attribute, it can also be used with Logical Volume Manager (LVM) and Linux software RAID (mdadm).

Depending on the fstab file configuration, you may find that you can’t access your instance, which requires you to follow a rescue process to fix issues. This is the case if you configure the fstab file with the device name and change the instance type.

This post shows how to mount Linux volumes and keep mount points preserved when the instance type is changed or the instance is rebooted.

Overview of solution

When you create an instance, you specify a block device mapping. This doesn’t mean that the Linux device keeps the same name or is discovered in the same order as specified in that mapping. The situation becomes more evident with applications that require many volumes.

Using UUID to mount volumes lets you mitigate future issues when you stop and start your instance or change the instance type.

Figure 1: EC2 instance block device mapping

Walkthrough

You will create one instance with three volumes: one root volume and two data volumes. This post uses Amazon Linux 2. On the initial instance type, the volumes follow one device naming format; later, you will change the instance type, and the new instance type’s volumes will follow a different format.

Follow these steps:

  • Create an instance with three volumes
  • Create the filesystem on the volumes and mount them
  • Change the instance type

Prerequisites

For this walkthrough, you should have the following prerequisites:

Create instance

Create one instance with three volumes: a root volume and two data volumes. Use Launch Instance Wizard with the following details.

Launch Amazon Linux 2 instance

  1. On Step 1, choose Amazon Linux 2 AMI (HVM), SSD Volume Type.
  2. On Step 2, choose t2.micro.
  3. On Step 3, choose Next.
  4. On Step 4, add two new volumes, device /dev/sdb 10 GiB and device /dev/sdc 12 GiB.

Figure 2: Launch instances, add storage

Create filesystem and mount

Connect to your instance using EC2 Instance Connect or any other method you’re comfortable with. Mount the devices using UUIDs instead of device names. Run the following instructions as the root user.

Format and mount the device

  1. Run the following command to confirm that you have three disks:
    $ lsblk
  2. Format the disks as xfs by running the following commands:
    mkfs.xfs /dev/xvdb
    mkfs.xfs /dev/xvdc
  3. Create the mount points by running the following commands:
    mkdir /mnt/disk1
    mkdir /mnt/disk2
  4. Add the mount instructions to /etc/fstab by running the following commands:
    echo "$(blkid /dev/xvdb | awk '{print $2}') /mnt/disk1 xfs defaults,noatime" | tee -a /etc/fstab
    echo "$(blkid /dev/xvdc | awk '{print $2}') /mnt/disk2 xfs defaults,noatime" | tee -a /etc/fstab
  5. Mount the volumes and create dummy files by running the following commands:
    mount -a
    touch /mnt/disk1/file1.txt
    touch /mnt/disk2/file2.txt

You will have an fstab file like the following:

cat /etc/fstab
UUID=7b355c6b-f82b-4810-94b9-4f3af651f629     /           xfs    defaults,noatime  1   1
UUID="2c160dd6-586c-4258-84bb-c79933b9ae02" /mnt/disk1 xfs defaults,noatime
UUID="3e7a0c28-7cf1-40dc-82a1-2f5cfb53f9a4" /mnt/disk2 xfs defaults,noatime

Change instance type

Change the instance type from t2.micro to t3.micro.

  1. Stop instance.
  2. Change the instance type to t3.micro.
  3. Start instance.
  4. Connect to your instance using EC2 Instance Connect.
  5. Check the device names by running the following command:
    lsblk
  6. List the files by running the following command:
    ls -l /mnt/*

Note that the device names are changed from xvd* to nvme*. All of the devices are mounted without any issue and with the correct mount points.
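For reference, on the t3.micro the lsblk output will look roughly like the following (sizes, minor numbers, and NVMe device ordering are illustrative; the ordering is precisely what you cannot rely on, which is why the fstab entries use UUIDs):

NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1       259:0    0   8G  0 disk
└─nvme0n1p1   259:1    0   8G  0 part /
nvme1n1       259:2    0  10G  0 disk /mnt/disk1
nvme2n1       259:3    0  12G  0 disk /mnt/disk2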

Cleaning up

To avoid incurring future charges, delete the instance and all of the volumes that you created in this post.

The other side

UUID is an attribute of the filesystem, generated when you format the device. It therefore follows the device even if you create an AMI or snapshot, so you don’t need to worry about the restore process: an instance restore will proceed smoothly.

You must be careful if you restore a snapshot of a volume and attach it to the same instance, as you will end up with two volumes that use the same UUID. If you try to mount the restored volume on the same instance, the mount will fail, and you will find this message in the /var/log/messages file:

kernel: XFS (xvdf1): Filesystem has duplicate UUID f6646b81-f2a6-46ca-9f3d-c746cf015379 - can't mount

It is even more important to be careful if you attach a volume created from a snapshot of the root volume and restart your instance. Since both volumes have the same UUID, you may find that a volume other than the one attached to /dev/xvda or /dev/sda has become the root volume of your instance. See the following example for details. Note that both volumes have the same UUID, but the one mounted on / is /dev/xvdf1, not /dev/xvda1, which is the real root volume for this instance.

$ blkid
/dev/xvda1: LABEL="/" UUID="f6646b81-f2a6-46ca-9f3d-c746cf015379" TYPE="xfs" PARTLABEL="Linux" PARTUUID="79fae994-3708-4293-bb29-4d069d1c786b"
/dev/xvdf1: LABEL="/" UUID="f6646b81-f2a6-46ca-9f3d-c746cf015379" TYPE="xfs" PARTLABEL="Linux" PARTUUID="79fae994-3708-4293-bb29-4d069d1c786b"
$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk
└─xvda1 202:1    0   8G  0 part
xvdf    202:80   0   8G  0 disk
└─xvdf1 202:81   0   8G  0 part /

Conclusion

In this post, we covered how to use UUID to mount Linux devices using the fstab file. This keeps the mount point on the correct device and lets you change the instance type without any changes to the fstab file. You can use UUID with LVM and Linux software RAID (mdadm). Because UUID is an attribute of the filesystem, it remains the same even after a backup and restore process, snapshot, or clone. To learn more, check out our block device mappings and device names on Linux instances documentation.

Expanding Your EC2 Possibilities By Utilizing the CPU Launch Options

Post Syndicated from limillan original https://aws.amazon.com/blogs/compute/expanding-your-ec2-possibilities-by-utilizing-the-cpu-launch-options/

This post is written by: Matthew Brunton, Senior Solutions Architect – WWPS

To ensure our customers have the appropriate machines available for their workloads, AWS offers a wide range of hardware options, including hundreds of instance types that help customers achieve the best price performance. In some specialized circumstances, our customers need an even wider range of options, or more flexibility. This could be driven by a desire to optimize licensing costs, or by a need for more hardware configuration options. Some high performance workloads can improve by turning off simultaneous multithreading. In our AWS Well-Architected Framework – High Performance Computing Lens, we have the following recommendation: “Unless an application has been tested with hyperthreading enabled, it is recommended that hyperthreading be disabled”. With these factors in mind, AWS offers the ability to configure some options regarding the CPU configuration in launched instances.

Our larger instance types, which have more cores and offer multithreading, translate into a larger number of potential combinations. The valid combinations of cores and threads per core can be found here. To make use of the CPU options for both cores and threads per core, you will need to consider instance types that have multiple CPUs and/or cores.

You can specify numerous CPU options for some of our larger instances via the console, command line interface, or the API. Moreover, you can remove CPUs from the launch configuration, or deactivate threading within cores that have multiple threads per core. The Amazon Elastic Compute Cloud (Amazon EC2) FAQs for the Optimize CPUs feature can be found here. Be aware that this feature is only available during instance launch and cannot be modified after launch. The launch options persist after you reboot, stop, or start an instance.

You can easily determine how many CPUs and threads a machine has. There are numerous ways to see this via the AWS Management Console and the AWS Command Line Interface (CLI).

Within the AWS Management Console, under the ‘Instance Details’ section, opening up the ‘Host and placement group’ item reveals the number of vCPUs that your machine has.

Figure 1: Console details showing the number of vCPUs

This information is also available using the AWS CLI as follows:

aws ec2 describe-instances --region us-east-1 --filters "Name=instance-type,Values='c6i.*large'"

...
    "Instances": [
        {
            "Monitoring": {
                "State": "disabled"
            }, 
            "PrivateDnsName": "ip-172-31-44-121.ec2.internal",
 "PrivateIpAddress": "172.31.44.121",                              
 "State": {
                "Code": 16, 
                "Name": "running"
            }, 
            "EbsOptimized": false, 
            "LaunchTime": "2021-11-22T01:46:58+00:00",
            "ProductCodes": [], 
            "VpcId": " vpc-7f7f1502", 
            "CpuOptions": { "CoreCount": 32, "ThreadsPerCore": 1 }, 
            "StateTransitionReason": "", 
            ...
        }
    ]
...

Figure 2: ec2 describe-instances cli output

The AWS CLI command ‘describe-instance-types’ shows the list of valid cores, the possible threads per core, and the default values for vCPUs and cores. These appear in the ‘DefaultVCpus’ and ‘DefaultCores’ items in the returned JSON, as follows.

aws ec2 describe-instance-types --region us-east-1 --filters "Name=instance-type,Values='c6i.*'"
{
    "InstanceTypes": [
        {
            "InstanceType": "c6i.4xlarge",
            "CurrentGeneration": true,
            "FreeTierEligible": false,
            "SupportedUsageClasses": [
                "on-demand",
                "spot"
            ],
            "SupportedRootDeviceTypes": [
                "ebs"
            ],
            "SupportedVirtualizationTypes": [
                "hvm"
            ],
            "BareMetal": false,
            "Hypervisor": "nitro",
            "ProcessorInfo": {
                "SupportedArchitectures": [
                    "x86_64"
                ],
                "SustainedClockSpeedInGhz": 3.5
            },
            "VCpuInfo": { "DefaultVCpus": 16, "DefaultCores": 8, "DefaultThreadsPerCore": 2, "ValidCores": [ 2, 4, 6, 8 ], "ValidThreadsPerCore": [ 1, 2 ] },
            "MemoryInfo": {
                "SizeInMiB": 32768
            },

 

Figure 3: ec2 describe-instance-types cli output

To launch one or more instances with the AWS CLI, use the run-instances command.

The following is the shorthand syntax:

aws ec2 run-instances --image-id xxx --instance-type xxx --cpu-options "CoreCount=xx,ThreadsPerCore=xx" --key-name xxx --region xxx

For example, the following command launches a 32-core machine with only 1 thread per core instead of the standard 2 threads per core:

aws ec2 run-instances --image-id ami-0c2b8ca1dad447f8a --instance-type c6i.16xlarge \
--cpu-options "CoreCount=32,ThreadsPerCore=1" --key-name MyKeyPair --region xxx

If you are setting the CPU options in AWS CloudFormation templates, then the following documentation applies:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance-cpuoptions.html

The following is an example of the YAML syntax for specifying the CPU configuration.

Resources:
  CustomEC2Instance:
    Type: "AWS::EC2::Instance"
    Properties:
      InstanceType: xxx
      ImageId: xxx
      CpuOptions:
        CoreCount: xx
        ThreadsPerCore: x

As can be seen in the following table, there are a number of valid CPU core options, as well as the option to set one or two threads per core, for certain instance types. This significantly increases the number of combinations available to meet your specific workload needs. In the case of the c6i instance listed in the following table, there are an additional 32 CPU core and threading combinations available to customers.

Instance type: c6i.16xlarge
Default vCPUs: 64
Default CPU cores: 32
Default threads per core: 2
Valid CPU cores: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32
Valid threads per core: 1, 2

Figure 4: Valid Core and Thread Launch Options

Note that you still pay the same amount for the EC2 instance, even if you deactivate some of the cores or threads.

Conclusion

This approach lets customers customize EC2 hardware options even further, providing a wider range of CPU/memory combinations over and above the already extensive set of AWS instance options. It lets customers finely tune hardware for their exact application requirements, whether that is running high performance workloads or saving money where license restrictions mean a specific CPU configuration is beneficial.

In this post, we have run through various scenarios detailing how to launch instances with alternate CPU configurations, how to check the current configuration of your running instances via the API and console, and how to configure the options in CloudFormation templates.

We love to see our customers maximizing the flexibility of the AWS platform to deliver outstanding results. Have a look at some of your High Performance workloads and give the threading options a try, or take a deep dive into any of your more expensive licenses to see if you could benefit from an alternate CPU configuration. In order to get started, check out our detailed developer documentation for the optimize CPU options.

The EARN IT Act Is Back

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/the-earn-it-act-is-back.html

Senators have reintroduced the EARN IT Act, requiring social media companies (among others) to administer a massive surveillance operation on their users:

A group of lawmakers led by Sen. Richard Blumenthal (D-CT) and Sen. Lindsey Graham (R-SC) have re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that was dropped in the face of overwhelming opposition. Let’s be clear: the new EARN IT Act would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe. It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online — backups, websites, cloud photos, and more — is scanned.

Slashdot thread.

Interview with the Head of the NSA’s Research Directorate

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/interview-with-the-head-of-the-nsas-research-directorate.html

MIT Technology Review published an interview with Gil Herrera, the new head of the NSA’s Research Directorate. There’s a lot of talk about quantum computing, monitoring 5G networks, and the problems of big data:

The math department, often in conjunction with the computer science department, helps tackle one of NSA’s most interesting problems: big data. Despite public reckoning over mass surveillance, NSA famously faces the challenge of collecting such extreme quantities of data that, on top of legal and ethical problems, it can be nearly impossible to sift through all of it to find everything of value. NSA views the kind of “vast access and collection” that it talks about internally as both an achievement and its own set of problems. The field of data science aims to solve them.

“Everyone thinks their data is the messiest in the world, and mine maybe is because it’s taken from people who don’t want us to have it, frankly,” said Herrera’s immediate predecessor at the NSA, the computer scientist Deborah Frincke, during a 2017 talk at Stanford. “The adversary does not speak clearly in English with nice statements into a mic and, if we can’t understand it, send us a clearer statement.”

Making sense of vast stores of unclear, often stolen data in hundreds of languages and even more technical formats remains one of the directorate’s enduring tasks.

Finding Vulnerabilities in Open Source Projects

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/finding-vulnerabilities-in-open-source-projects.html

The Open Source Security Foundation announced $10 million in funding from a pool of tech and financial companies, including $5 million from Microsoft and Google, to find vulnerabilities in open source projects:

The “Alpha” side will emphasize vulnerability testing by hand in the most popular open-source projects, developing close working relationships with a handful of the top 200 projects for testing each year. “Omega” will look more at the broader landscape of open source, running automated testing on the top 10,000.

This is an excellent idea. This code ends up in all sorts of critical applications.

Log4j would be a prototypical vulnerability that the Alpha team might look for: an unknown problem in a high-impact project that automated tools would not be able to pick up before a human discovered it. The goal is not to use the personnel engaged with Alpha to replicate dependency analysis, for example.

Me on App Store Monopolies and Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/me-on-app-store-monopolies-and-security.html

There are two bills working their way through Congress that would force companies like Apple to allow competitive app stores. Apple hates this, since it would break its monopoly, and it’s making a variety of security arguments to bolster its argument. I have written a rebuttal:

I would like to address some of the unfounded security concerns raised about these bills. It’s simply not true that this legislation puts user privacy and security at risk. In fact, it’s fairer to say that this legislation puts those companies’ extractive business-models at risk. Their claims about risks to privacy and security are both false and disingenuous, and motivated by their own self-interest and not the public interest. App store monopolies cannot protect users from every risk, and they frequently prevent the distribution of important tools that actually enhance security. Furthermore, the alleged risks of third-party app stores and “side-loading” apps pale in comparison to their benefits. These bills will encourage competition, prevent monopolist extortion, and guarantee users a new right to digital self-determination.

Matt Stoller has also written about this.

EDITED TO ADD (2/13): Here are the two bills.

Mocking service integrations with AWS Step Functions Local

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/mocking-service-integrations-with-aws-step-functions-local/

This post is written by Sam Dengler, Principal Specialist Solutions Architect, and Dhiraj Mahapatro, Senior Specialist Solutions Architect.

AWS Step Functions now supports over 200 AWS service integrations via AWS SDK integrations. Developers want to build and test control flow logic for workflows using branching logic, error handling, and retries. This allows for precise workflow execution with deterministic results. Additionally, developers use Step Functions’ input and output processing features to transform data as it enters and exits tasks.

Developers can test their state machines locally using Step Functions Local before deploying them to an AWS account. However, state machines that use service integrations like AWS Lambda, Amazon SQS, or Amazon SNS require Step Functions Local to perform calls to AWS service endpoints. Often, developers want to test the control and data flows of their state machine executions in isolation, without any dependency on service integration availability.

Today, AWS is releasing Mocked Service Integrations for Step Functions Local. This allows developers to define sample outputs from AWS service integrations. You can combine them into test case scenarios to validate workflow control and data flow definitions. You can find the code used in this post in the Step Functions examples GitHub repository.

Sales lead generation sample workflow

In this example, new sales leads are created in a customer relationship management system. This triggers the sample workflow execution using input data, which provides information about the contact.

Using the sales lead data, the workflow first validates the contact’s identity and address. If valid, it uses Step Functions’ AWS SDK integration for Amazon Comprehend to call the DetectSentiment API. It uses the sales lead’s comments as input for sentiment analysis.

If the comments have a positive sentiment, it adds the sales lead’s information to a DynamoDB table for follow-up, and an event is published to Amazon EventBridge to notify subscribers.

If the sales lead data is invalid or a negative sentiment is detected, it publishes events to EventBridge for notification. No record is added to the Amazon DynamoDB table. The following Step Functions Workflow Studio diagram shows the control logic:

The full workflow definition is available in the code repository. Note the workflow task names in the diagram, such as DetectSentiment, which are important when defining the mocked responses.

Sentiment analysis test case

In this example, you test a scenario in which:

  1. The identity and address are successfully validated using a Lambda function.
  2. A positive sentiment is detected using the Comprehend.DetectSentiment API after three retries.
  3. A contact item is written to a DynamoDB table successfully.
  4. An event is published to an EventBridge event bus successfully.

The execution path for this test scenario is shown in the following diagram (the red and green numbers have been added). 0 represents the first execution; 1, 2, and 3 represent the max retry attempts (MaxAttempts), in case of an InternalServerException.

Mocked response configuration

To use service integration mocking, create a mock configuration file with sections specifying mock AWS service responses. These are grouped into test cases that can be activated when executing state machines locally. The following example provides code snippets; the full mock configuration is available in the code repository.

To mock a successful Lambda function invocation, define a mock response that conforms to the Lambda.Invoke API response elements. Associate it to the first request attempt:

"CheckIdentityLambdaMockedSuccess": {
  "0": {
    "Return": {
      "StatusCode": 200,
      "Payload": {
        "statusCode": 200,
        "body": "{\"approved\":true,\"message\":\"identity validation passed\"
}"
      }
    }
  }
}

To mock the DetectSentiment retry behavior, define failure and successful mock responses that conform to the Comprehend.DetectSentiment API call. Associate the failure mocks to three request attempts, and associate the successful mock to the fourth attempt:

"DetectSentimentRetryOnErrorWithSuccess": {
  "0-2": {
    "Throw": {
      "Error": "InternalServerException",
      "Cause": "Server Exception while calling DetectSentiment API in Comprehend Service"
    }
  },
  "3": {
    "Return": {
      "Sentiment": "POSITIVE",
      "SentimentScore": {
        "Mixed": 0.00012647535,
        "Negative": 0.00008031699,
        "Neutral": 0.0051454515,
        "Positive": 0.9946478
      }
    }
  }
}
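This mock is intended to exercise a retry policy on the DetectSentiment task along the following lines; the exact interval and backoff values here are illustrative, and the actual values are in the workflow definition in the code repository:

"Retry": [
  {
    "ErrorEquals": ["InternalServerException"],
    "IntervalSeconds": 1,
    "MaxAttempts": 3,
    "BackoffRate": 2
  }
]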

Note that Step Functions Local does not validate the structure of the mocked responses. Ensure that your mocked responses conform to actual responses before testing. To review the structure of service responses, either perform the actual service calls using Step Functions or view the documentation for those services.

Next, associate the mocked responses to a test case identifier:

"RetryOnServiceExceptionTest": {
  "Check Identity": "CheckIdentityLambdaMockedSuccess",
  "Check Address": "CheckAddressLambdaMockedSuccess",
  "DetectSentiment": "DetectSentimentRetryOnErrorWithSuccess",
  "Add to FollowUp": "AddToFollowUpSuccess",
  "CustomerAddedToFollowup": "CustomerAddedToFollowupSuccess"
}

With the test case and mock responses configured, you can use them for testing with Step Functions Local.

Test case execution using Step Functions Local

The Step Functions Developer Guide describes the steps used to set up Step Functions Local on your workstation and create a state machine.
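If you run Step Functions Local as a Docker container, you make the mock configuration file available by mounting it into the container and pointing the SFN_MOCK_CONFIG environment variable at it, roughly as follows (the file name and container path are illustrative; check the Developer Guide for the current details):

docker run -p 8083:8083 \
  --mount type=bind,readonly,source="$(pwd)/MockConfigFile.json",destination=/home/StepFunctionsLocal/MockConfigFile.json \
  -e SFN_MOCK_CONFIG=/home/StepFunctionsLocal/MockConfigFile.json \
  amazon/aws-stepfunctions-local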

After these steps are complete, you can run a workflow locally using the start-execution AWS CLI command. Activate the mocked responses by appending a pound sign and the test case identifier to the state machine ARN:

aws stepfunctions start-execution \
  --endpoint http://localhost:8083 \
  --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:LeadGenerationStateMachine#RetryOnServiceExceptionTest \
  --input file://events/sfn_valid_input.json

Test case validation

To validate that the workflow executed correctly in the test case, examine the state machine execution events using the StepFunctions.GetExecutionHistory API. This ensures that the correct states were used. There are a variety of validation tools available. This post shows how to achieve this with the AWS CLI’s filtering feature, which uses JMESPath syntax.

In this test case, you validate the TaskFailed and TaskSucceeded events match the retry definition for the DetectSentiment task, which specifies three retries. Use the following AWS CLI command to get the execution history and filter on the execution events:

aws stepfunctions get-execution-history \
  --endpoint http://localhost:8083 \
  --execution-arn <ExecutionArn> \
  --query 'events[?(type==`TaskFailed` && contains(taskFailedEventDetails.cause, `Server Exception while calling DetectSentiment API in Comprehend Service`)) || (type==`TaskSucceeded` && taskSucceededEventDetails.resource==`comprehend:detectSentiment`)]'

The results include matching events:

{
  "timestamp": "2022-01-13T17:24:32.276000-05:00",
  "type": "TaskFailed",
  "id": 19,
  "previousEventId": 18,
  "taskFailedEventDetails": {
    "error": "InternalServerException",
    "cause": "Server Exception while calling DetectSentiment API in Comprehend Service"
  }
}

These results should be compared to the test acceptance criteria to verify the execution behavior. Test cases, acceptance criteria, and validation expressions vary by customer and use case. These techniques are flexible to accommodate various happy path and error scenarios. To explore additional sample test cases and examples, visit the example code repository.

Conclusion

This post introduces a new robust way to test AWS Step Functions state machines in isolation. With mocking, developers get more control over the types of scenarios that a state machine can handle, leading to assertions on multiple behaviors. Testing a state machine with mocks can also be part of the software release. Asserting on behaviors like error handling, branching, parallel execution, and dynamic parallelism (the map state) helps test the entire state machine’s behavior. For any new behavior in the state machine, such as a new type of exception from a state, you can add a mock and a test.

See the Step Functions Developer Guide for more information on service mocking with Step Functions Local. The sample application covers basic scenarios of testing a state machine. You can use a similar approach for complex scenarios including other Step Functions flows, like map and wait.

For more serverless learning resources, visit Serverless Land.

Using the circuit breaker pattern with AWS Step Functions and Amazon DynamoDB

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/using-the-circuit-breaker-pattern-with-aws-step-functions-and-amazon-dynamodb/

This post is written by Anitha Deenadayalan, Developer Specialist SA, DevAx

Modern applications use microservices as an architectural and organizational approach to software development, where the application comprises small independent services that communicate over well-defined APIs.

When multiple microservices collaborate to handle requests, one or more services may become unavailable or exhibit a high latency. Microservices communicate through remote procedure calls, and it is always possible that transient errors could occur in the network connectivity, causing failures.

During synchronous execution, cascading timeouts or failures can degrade the performance of the entire application and cause a poor user experience. When complex applications use microservices, an outage in one microservice can lead to application failure. This post shows how to use the circuit breaker design pattern to help with a graceful service degradation.

Introducing circuit breakers

Michael Nygard popularized the circuit breaker pattern in his book, Release It. This design pattern can prevent a caller service from retrying another callee service call that has previously caused repeated timeouts or failures. It can also detect when the callee service is functional again.

The fallacies of distributed computing are a set of assertions made by Peter Deutsch and others at Sun Microsystems, observing that programmers new to distributed applications invariably make false assumptions. Assumptions about network reliability, zero latency, and unlimited bandwidth result in software applications written with minimal error handling for network errors.

During a network outage, applications may indefinitely wait for a reply and continually consume application resources. Failure to retry the operations when the network becomes available can also lead to application degradation. If API calls to a database or an external service time-out due to network issues, repeated calls with no circuit breaker can affect cost and performance.

The circuit breaker pattern

In the circuit breaker pattern, a circuit breaker object routes the calls from the caller to the callee. For example, in an ecommerce application, the order service can call the payment service to collect payments. When there are no failures, all calls from the order service are routed to the payment service through the circuit breaker:

Circuit breaker with no failures

If the payment service times out, the circuit breaker can detect the timeout and track the failure. If the timeouts exceed a specified threshold, the application opens the circuit:

Circuit breaker with payment service failure

Once the circuit is open, the circuit breaker object does not route the calls to the payment service. It returns an immediate failure when the order service calls the payment service:

Circuit breaker stops routing to payment service

The circuit breaker object periodically tries to see if the calls to the payment service are successful:

Circuit breaker retries payment service

When the call to payment service succeeds, the circuit is closed, and all further calls are routed to the payment service again:

Circuit breaker with working payment service again

Architecture overview

This example uses AWS Step Functions, AWS Lambda, and Amazon DynamoDB to implement the circuit breaker pattern:

Circuit breaker architecture

The Step Functions workflow provides circuit breaker capabilities. When a service wants to call another service, it starts the workflow with the name of the callee service.

The workflow gets the circuit status from the CircuitStatus DynamoDB table, which stores the currently degraded services. If the CircuitStatus table contains a record for the called service, then the circuit is open. The Step Functions workflow returns an immediate failure and exits with a FAIL state.

If the CircuitStatus table does not contain an item for the called service, then the service is operational. The ExecuteLambda step in the state machine definition invokes the Lambda function sent through a parameter value. If the call succeeds, the Step Functions workflow exits with a SUCCESS state.

The items in the DynamoDB table have the following attributes:

DynamoDB items list

If the service call fails or a timeout occurs, the application retries with exponential backoff for a defined number of times. If the service call fails after the retries, the workflow inserts a record in the CircuitStatus table for the service with the CircuitStatus as OPEN, and the workflow exits with a FAIL state. Subsequent calls to the same service return an immediate failure as long as the circuit is open.

I enter the item with an associated time-to-live (TTL) value so that the item expires at the defined TTL time, ensuring eventual connection retries. DynamoDB’s time to live (TTL) lets you define a per-item timestamp to determine when an item is no longer needed. Shortly after the date and time of the specified timestamp, DynamoDB deletes the item from your table without consuming write throughput.

For example, if you set the TTL value to 60 seconds to check a service status after a minute, DynamoDB deletes the item from the table after 60 seconds. The workflow invokes the service to check for availability when a new call comes in after the item has expired.
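For illustration, the record the workflow writes is roughly equivalent to the following CLI call; the key attribute name is assumed for this sketch, ExpireTimeStamp holds an epoch-seconds timestamp in the near future, and the actual schema is defined by the deployed stack:

aws dynamodb put-item \
  --table-name CircuitStatus \
  --item '{
    "ServiceName":     {"S": "TestCircuitBreakerFunction"},
    "ExpireTimeStamp": {"N": "1650000000"}
  }'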

Circuit breaker Step Function

Prerequisites

For this walkthrough, you need:

Setting up the environment

Use the .NET Core 3.1 code in the GitHub repository and the AWS SAM template to create the AWS resources for this walkthrough. These include IAM roles, a DynamoDB table, the Step Functions workflow, and Lambda functions.

  1. You need an AWS access key ID and secret access key to configure the AWS Command Line Interface (AWS CLI). To learn more about configuring the AWS CLI, follow these instructions.
  2. Clone the repo:
    git clone https://github.com/aws-samples/circuit-breaker-netcore-blog
  3. After cloning, this is the folder structure:

    Project file structure

Deploy using Serverless Application Model (AWS SAM)

The AWS Serverless Application Model (AWS SAM) CLI provides developers with a local tool for managing serverless applications on AWS.

  1. The sam build command processes your AWS SAM template file, application code, and applicable language-specific files and dependencies. The command copies build artifacts in the format and location expected for subsequent steps in your workflow. Run these commands to process the template file:
    cd circuit-breaker
    sam build
  2. After you build the application, deploy it using the sam deploy command. AWS SAM deploys the application to AWS and displays the output in the terminal.
    sam deploy --guided

    Output from sam deploy

  3. You can also view the output on the AWS CloudFormation console page.

    Output in CloudFormation console

  4. The Step Functions workflow provides the circuit-breaker function. Refer to the circuitbreaker.asl.json file in the statemachine folder for the state machine definition in the Amazon States Language (ASL).

To deploy with the CDK, refer to the GitHub page.

Running the service through the circuit breaker

To provide circuit breaker capabilities to the Lambda microservice, you must send the name or function ARN of the Lambda function to the Step Functions workflow:

{
  "TargetLambda": "<Name or ARN of the Lambda function>"
}
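A minimal way to start the workflow with this input from the AWS CLI looks like the following; the state machine name in the ARN is a placeholder for the one created by the stack:

aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:<region>:<account-id>:stateMachine:<CircuitBreakerStateMachine> \
  --input '{"TargetLambda": "<Name or ARN of the Lambda function>"}'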

Successful run

To simulate a successful run, use the HelloWorld Lambda function provided by passing the name or ARN of the Lambda function the stack has created. Your input appears as follows:

{
  "TargetLambda": "circuit-breaker-stack-HelloWorldFunction-pP1HNkJGugQz"
}

During the successful run, the Get Circuit Status step checks the circuit status against the DynamoDB table. If the circuit is CLOSED, which is indicated by zero records for that service in the DynamoDB table, the Execute Lambda step runs the Lambda function and the workflow exits successfully.

Step Function with closed circuit

Service timeout

To simulate a timeout, use the TestCircuitBreaker Lambda function by passing the name or ARN of the Lambda function the stack has created. Your input appears as:

{
  "TargetLambda": "circuit-breaker-stack-TestCircuitBreakerFunction-mKeyyJq4BjQ7"
}

Again, the circuit status is checked against the DynamoDB table by the Get Circuit Status step in the workflow. The circuit is CLOSED during the first pass, and the Execute Lambda step runs the Lambda function, which times out.

The workflow retries based on the retry count and the exponential backoff values, and finally returns a timeout error. It then runs the Update Circuit Status step, where a record is inserted in the DynamoDB table for that service, with a predefined time-to-live value specified by the TTL attribute ExpireTimeStamp.

Step Function with open circuit

Repeat timeout

As long as there is an item for the service in the DynamoDB table, the circuit breaker workflow returns an immediate failure to the calling service. When you re-execute the call to the Step Functions workflow for the TestCircuitBreaker Lambda function within 20 seconds, the circuit is still open. The workflow immediately fails, ensuring the stability of the overall application performance.

Step Function workflow immediately fails until retry

The item in the DynamoDB table expires after 20 seconds, and the workflow retries the service again. This time, the workflow retries with exponential backoffs, and if it succeeds, the workflow exits successfully.

Cleaning up

To avoid incurring additional charges, clean up all the created resources. Run the following command from a terminal window. This command deletes the created resources that are part of this example.

sam delete --stack-name circuit-breaker-stack --region <region name>

Conclusion

This post showed how to implement the circuit breaker pattern using Step Functions, Lambda, DynamoDB, and .NET Core 3.1. This pattern can help prevent system degradation caused by service failures or timeouts. Step Functions and the TTL feature of DynamoDB can make it easier to implement circuit breaker capabilities.

To learn more about developing microservices on AWS, refer to the whitepaper on microservices. To learn more about serverless and AWS SAM, visit the Sessions with SAM series and find more resources at Serverless Land.

Twelve-Year-Old Linux Vulnerability Discovered and Patched

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/twelve-year-old-linux-vulnerability-discovered-and-patched.html

It’s a privilege escalation vulnerability:

Linux users on Tuesday got a major dose of bad news — a 12-year-old vulnerability in a system tool called Polkit gives attackers unfettered root privileges on machines running most major distributions of the open source operating system.

Previously called PolicyKit, Polkit manages system-wide privileges in Unix-like OSes. It provides a mechanism for nonprivileged processes to safely interact with privileged processes. It also allows users to execute commands with high privileges by using a component called pkexec, followed by the command.

It was discovered in October, and disclosed last week — after most Linux distributions issued patches. Of course, there’s lots of Linux out there that never gets patched, so expect this to be exploited in the wild for a long time.

Of course, this vulnerability doesn’t give attackers access to the system. They have to get that some other way. But if they get access, this vulnerability gives them root privileges.

Friday Squid Blogging: Cephalopods Thirty Million Years Older Than Previously Thought

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/friday-squid-blogging-cephalopods-thirty-million-years-older-than-previously-thought.html

New fossils from Newfoundland push the origins of cephalopods to 522 million years ago.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Tracking Secret German Organizations with Apple AirTags

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/tracking-secret-german-organizations-with-apple-airtags.html

A German activist is trying to track down a secret government intelligence agency. One of her research techniques is to mail Apple AirTags to see where they actually end up:

Wittmann says that everyone she spoke to denied being part of this intelligence agency. But what she describes as a “good indicator,” would be if she could prove that the postal address for this “federal authority” actually leads to the intelligence service’s apparent offices.

“To understand where mail ends up,” she writes (in translation), “[you can do] a lot of manual research. Or you can simply send a small device that regularly transmits its current position (a so-called AirTag) and see where it lands.”

She sent a parcel with an AirTag and watched through Apple’s Find My system as it was delivered via the Berlin sorting center to a sorting office in Cologne-Ehrenfeld. And then appears at the Office for the Protection of the Constitution in Cologne.

So an AirTag addressed to a telecommunications authority based in one part of Germany ends up in the offices of an intelligence agency based in another part of the country.

Wittmann’s research is also now detailed in the German Wikipedia entry for the federal telecommunications service. It recounts how following her original discovery in December 2021, subsequent government press conferences have denied that there is such a federal telecommunications service at all.

Here’s the original Medium post, in German.

In a similar story, someone used an AirTag to track her furniture as a moving company lied about its whereabouts.

New DeadBolt Ransomware Targets NAS Devices

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/new-deadbolt-ransomware-targets-nat-devices.html

There’s a new ransomware that targets NAS devices made by QNAP:

The attacks started today, January 25th, with QNAP devices suddenly finding their files encrypted and file names appended with a .deadbolt file extension.

Instead of creating ransom notes in each folder on the device, the QNAP device’s login page is hijacked to display a screen stating, “WARNING: Your files have been locked by DeadBolt”….

[…]

BleepingComputer is aware of at least fifteen victims of the new DeadBolt ransomware attack, with no specific region being targeted.

As with all ransomware attacks against QNAP devices, the DeadBolt attacks only affect devices accessible to the Internet.

As the threat actors claim the attack is conducted through a zero-day vulnerability, it is strongly advised that all QNAP users disconnect their devices from the Internet and place them behind a firewall.

Merck Wins Insurance Lawsuit re NotPetya Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/merck-wins-insurance-lawsuit-re-notpetya-attack.html

The insurance company Ace American has to pay for the losses:

On 6th December 2021, the New Jersey Superior Court granted partial summary judgment (attached) in favour of Merck and International Indemnity, declaring that the War or Hostile Acts exclusion was inapplicable to the dispute.

Merck suffered US$1.4 billion in business interruption losses from the Notpetya cyber attack of 2017 which were claimed against “all risks” property re/insurance policies providing coverage for losses resulting from destruction or corruption of computer data and software.

The parties disputed whether the Notpetya malware which affected Merck’s computers in 2017 was an instrument of the Russian government, so that the War or Hostile Acts exclusion would apply to the loss.

The Court noted that Merck was a sophisticated and knowledgeable party, but there was no indication that the exclusion had been negotiated since it was in standard language. The Court, therefore, applied, under New Jersey law, the doctrine of construction of insurance contracts that gives prevalence to the reasonable expectations of the insured, even in exceptional circumstances when the literal meaning of the policy is plain.

Merck argued that the attack was not “an official state action,” which I’m surprised wasn’t successfully disputed.

Slashdot thread.