Mega six-screen cyberdeck

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/mega-six-screen-cyberdeck/

Holy cyberdecks! Redditor Holistech (aka Sören Gebbert) really leaned into the “more is more” idiom when building this big orange cyberdeck using three Raspberry Pis. Why use just one screen to manipulate enemy cyberware and take down your cyberpunk foes when you can have six?

six screen cyber deck rear view
Rear view (keep reading for the big reveal)

From four to six

We first came across Sören’s work on hackster.io and we were impressed with what we found, which was this four‑screen creation running Linux Mint on a dual Raspberry Pi setup:

four screen cyberdeck
The first, four-screen, iteration of this project is still impressive

So imagine our surprise when we clicked through to check out Holistech on reddit, only to be confronted with this six‑screen monstrosity of brilliance:

six screen cyberdeck
Level up

He’s only gone and levelled up his original creation already. And before we even had the chance to properly swoon over the original.

Under the hood

Originally, Sören wanted to use Raspberry Pi Zeros because they’re tiny and easily hidden away inside projects. He needed more power, though, so he went with Raspberry Pi 4s instead.

cyberdecks on a desk
The whole family

Sören 3D-printed the distinctive orange frame. On the back of the rig are openings for a fan for active cooling and a mini control display that shows the CPU temperature and the fan speed.

Six 5.5″ HD resolution screens are the eyes of the project. And everything is powered by hefty 26,000 mAh battery power banks.

Carry on

And it gets even better: this whole multi-screen thing is portable. Yes, portable. You can fold it up, pack it away in its suitably steampunk metal box, and carry it with you.

There are plenty more photos. Head to Instagram to take a closer look at how Sören’s genius design folds in on itself to enable portability.

The post Mega six-screen cyberdeck appeared first on Raspberry Pi.

Perform Chaos Testing on your Amazon Aurora Cluster

Post Syndicated from Anthony Pasquariello original https://aws.amazon.com/blogs/architecture/perform-chaos-testing-on-your-amazon-aurora-cluster/

“Everything fails, all the time.” – Werner Vogels, AWS CTO

In 2010, Netflix introduced a tool called “Chaos Monkey” that injected faults into its production environment. Chaos Monkey led to the birth of chaos engineering, where teams test their live applications by purposefully injecting faults. Observations are then used to take corrective action and increase the resiliency of applications.

In this blog, you will learn about the fault injection capabilities available in Amazon Aurora for simulating various database faults.

Chaos Experiments

Chaos experiments consist of:

  • Understand the application baseline: Document the application’s steady-state behavior
  • Design an experiment: Ask “What can go wrong?” to identify failure scenarios
  • Run the experiment: Introduce faults in the application environment
  • Observe and correct: Redesign apps or infrastructure for fault tolerance

Chaos experiments require fault simulation across the distributed components of the application. Amazon Aurora provides a set of fault simulation capabilities that teams can use to run chaos experiments against their applications.
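The experiment loop above can be sketched as a minimal harness. This is an illustrative sketch, not part of any Aurora tooling: the measurement and fault-injection callables are placeholders you would wire to your own monitoring and fault injectors.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ChaosExperiment:
    """Minimal chaos-experiment loop: baseline -> inject -> observe -> compare."""
    measure: Callable[[], Dict[str, float]]   # collects steady-state metrics
    inject_fault: Callable[[], None]          # introduces the fault
    tolerance: float                          # allowed relative deviation, e.g. 0.2 = 20%

    def run(self) -> Dict[str, bool]:
        baseline = self.measure()          # 1. understand the baseline
        self.inject_fault()                # 2./3. run the designed experiment
        observed = self.measure()          # 4. observe the impact
        # A metric "holds" if it stayed within tolerance of the baseline.
        return {
            name: abs(observed[name] - base) <= self.tolerance * abs(base)
            for name, base in baseline.items()
        }

# Stubbed demo: p99 latency doubles under fault, error rate stays flat.
samples = iter([{"p99_ms": 40.0, "errors": 1.0}, {"p99_ms": 80.0, "errors": 1.0}])
exp = ChaosExperiment(measure=lambda: next(samples),
                      inject_fault=lambda: None,
                      tolerance=0.2)
result = exp.run()
```

The comparison step is what drives the “observe and correct” phase: any metric that fails to hold points at a weakness to redesign for.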

Amazon Aurora fault injection

Amazon Aurora is a fully managed database service that is compatible with MySQL and PostgreSQL. Aurora is highly fault tolerant due to its six-way replicated storage architecture. In order to test the resiliency of an application built with Aurora, developers can leverage the native fault injection features to design chaos experiments. The outcome of the experiments gives a better understanding of the blast radius, depth of monitoring required, and the need to evaluate event response playbooks.

In this section, we will describe the various fault injection scenarios that you can use for designing your own experiments. We’ll show you how to conduct the experiment and use the results. This will make your application more resilient and prepared for an actual event.

Note that availability of the fault injection feature is dependent on the version of MySQL and PostgreSQL.

Figure 1. Fault injection overview

1. Testing an instance crash

An Aurora cluster can have one primary and up to 15 read replicas. If the primary instance fails, one of the replicas becomes the primary. Applications must be designed to recover from these instance failures as soon as possible to have minimal impact on the end-user experience.

The instance crash fault injection simulates failure of the instance/dispatcher/node in the Aurora database cluster. Fault injection may be carried out on the primary or replicas by running the API against the target instance.

Example: Aurora PostgreSQL for instance crash simulation

The following query simulates a database instance crash:

SELECT aurora_inject_crash('instance');

Since this is a simulation, it does not lead to a failover to the replica. As an alternative to using this API, you can carry out an actual failover by using the AWS Management Console or AWS CLI.

The team should observe the change in the application’s behavior to understand the impact of the instance failure. Take corrective actions to reduce the impact of such failures on the application.

If the application’s recovery time is too long, the team should reduce the Domain Name System (DNS) time-to-live (TTL) for the DB connections. As a general best practice, the Aurora database cluster should have at least one replica.
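Alongside a low DNS TTL, client-side reconnect logic with backoff is a common mitigation for instance failures. The sketch below uses a placeholder connect function rather than a real database driver; in practice you would pass in your driver’s connection factory.

```python
import time

def connect_with_retry(connect, max_attempts=5, base_delay=0.1):
    """Retry a flaky connection attempt with exponential backoff.

    `connect` is any zero-argument callable that raises on failure
    (for a real application, e.g. a psycopg2 or pymysql connection factory).
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Demo: fail twice (as during a failover window), then succeed.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("primary not reachable")
    return "connected"

result = connect_with_retry(flaky_connect)
```

During a chaos experiment, the number of retries before success is itself a useful observation of how long the failover took from the client’s point of view.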

2. Testing the replica failure

Aurora manages asynchronous replication between the nodes of a cluster. The typical replication lag is under 100 milliseconds. Network slowness or issues on the nodes may increase the replication lag between the writer and replica nodes.

The replica failure fault injection allows you to simulate replication failure across one or more replicas. Note that this type of fault injection applies only to a DB cluster that has at least one read replica.

Replica failure manifests itself as stale data read by applications connecting to the replicas. The specific functional impact depends on the application’s sensitivity to data freshness. Note that this fault injection mechanism does not apply to the native replication mechanisms supported in PostgreSQL and MySQL databases.

Example: Aurora PostgreSQL for replica failure

The following statement simulates a 100% replication failure on the replica named ‘my-replica’ for 20 seconds:

SELECT aurora_inject_replica_failure(100, 20, 'my-replica');

The team must observe the behavior of the application from the data sensitivity perspective. If the observed lag is unacceptable, the team must evaluate corrective actions such as vertical scaling of database instances and query optimization. As a best practice, the team should monitor the replication lag and take proactive actions to address it.
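As an illustration of that monitoring practice, a simple threshold check over lag samples might look like the sketch below. The sample values are made up; a real setup would feed in a metric stream (for example, from CloudWatch).

```python
def lag_breaches(samples_ms, threshold_ms=100, consecutive=3):
    """Return True if replication lag exceeded the threshold for
    `consecutive` samples in a row -- a crude trigger for corrective action."""
    streak = 0
    for lag in samples_ms:
        streak = streak + 1 if lag > threshold_ms else 0
        if streak >= consecutive:
            return True
    return False

# Typical Aurora lag is under 100 ms; a sustained spike should trip the check.
healthy = [12, 35, 90, 40]
degraded = [80, 150, 210, 180, 95]
```

Requiring several consecutive breaches avoids alerting on a single transient spike, which matters when the corrective action (scaling, query tuning) is expensive.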

3. Testing the disk failure

Aurora’s storage volume consists of six copies of data across three Availability Zones (refer to the preceding diagram). Aurora has an inherent ability to repair itself when storage components fail. This high reliability is achieved by way of a quorum model: reads require 3 of the 6 copies and writes require 4 of the 6 copies to be available. However, there may still be a transient impact on the application, depending on how widespread the issue is.
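The quorum arithmetic can be sanity-checked in a few lines. This only illustrates the 3-of-6 read / 4-of-6 write rule described above, not Aurora internals:

```python
TOTAL_COPIES = 6   # two copies in each of three Availability Zones
READ_QUORUM = 3
WRITE_QUORUM = 4

def can_read(failed_copies):
    """Reads survive as long as at least 3 of the 6 copies remain."""
    return TOTAL_COPIES - failed_copies >= READ_QUORUM

def can_write(failed_copies):
    """Writes survive as long as at least 4 of the 6 copies remain."""
    return TOTAL_COPIES - failed_copies >= WRITE_QUORUM

# Losing an entire AZ (2 copies) leaves both reads and writes available;
# losing an AZ plus one more copy (3 copies) still permits reads, but not writes.
```

This is why the disk failure injections below are worth running at different severity percentages: the functional impact changes sharply as the quorum thresholds are crossed.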

The disk failure injection capability allows you to simulate failures of storage nodes and partial failure of disks. The severity of failure can be set as a percentage value. The simulation continues only for the specified amount of time. There is no impact on the actual data on the storage nodes and the disk.

Example: Aurora PostgreSQL for disk failure simulation

You can get the number of disks (and their indexes) on your cluster using the following query:

SELECT disks FROM aurora_show_volume_status();

The following query simulates a 75% failure of the disk with index 15. The simulation ends after 20 seconds:

SELECT aurora_inject_disk_failure(75, 15, true, 20);

Applications may experience temporary failures due to this fault injection and should be able to gracefully recover from it. If the recovery time is higher than a threshold, or the application has a complete failure, the team can redesign their application.

4. Disk congestion fault

Disk congestion usually happens because of heavy I/O traffic against the storage devices. The impact may range from degraded application performance, to complete application failures.

Aurora provides the capability to simulate disk congestion without synthetic SQL load against the database. With this fault injection mechanism, you can gain a better understanding of the performance characteristics of the application under heavy I/O spikes.

Example: Aurora PostgreSQL for disk congestion simulation

You can get the number of disks (and their indexes) on your cluster using the following query:

SELECT disks FROM aurora_show_volume_status();

The following query simulates 100% congestion on the disk with index 15 for 20 seconds. The simulated delay will be between 30 and 40 milliseconds:

SELECT aurora_inject_disk_congestion(100, 15, true, 20, 30, 40);

If the observed behavior is unacceptable, then the team must carefully consider the load characteristics of their application. Depending on the observations, corrective action may include query optimization, indexing, vertical scaling of the database instances, and adding more replicas.

Conclusion

A chaos experiment involves injecting a fault in a production environment and then observing the application behavior. The outcome of the experiment helps the team identify application weaknesses and evaluate event response processes. Amazon Aurora natively provides fault-injection capabilities that can be used by teams to conduct chaos experiments for database failure scenarios. Aurora can be used for simulating instance failure, replication failure, disk failures, and disk congestion. Try out these capabilities in Aurora to make your applications more robust and resilient from database failures.

Amazon EBS io2 Block Express Volumes with Amazon EC2 R5b Instances Are Now Generally Available

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/amazon-ebs-io2-block-express-volumes-with-amazon-ec2-r5b-instances-are-now-generally-available/

At AWS re:Invent 2020, we previewed Amazon EBS io2 Block Express volumes, the next-generation server storage architecture that delivers the first SAN built for the cloud. Block Express is designed to meet the requirements of the largest, most I/O-intensive, mission-critical deployments of Microsoft SQL Server, Oracle, SAP HANA, and SAS Analytics on AWS.

Today, I am happy to announce the general availability of Amazon EBS io2 Block Express volumes, with Amazon EC2 R5b instances powered by the AWS Nitro System to provide the best network-attached storage performance available on EC2. The io2 Block Express volumes now also support io2 features such as Multi-Attach and Elastic Volumes.

In the past, customers had to stripe multiple volumes together in order to go beyond single-volume performance. Today, io2 volumes can meet the needs of mission-critical, performance-intensive applications without striping and the management overhead that comes with it. With io2 Block Express, customers get the highest-performance block storage in the cloud, with four times the throughput, IOPS, and capacity of io2 volumes and sub-millisecond latency, at no additional cost.

Here is a summary of the use cases and characteristics of the key Solid State Drive (SSD)-backed EBS volumes:

  • gp2 (General Purpose SSD): 99.8%–99.9% durability; 1 GiB–16 TiB; up to 16,000 IOPS; up to 250 MiB/s throughput *; general applications, a good starting point when you do not yet fully understand the performance profile
  • gp3 (General Purpose SSD): 99.8%–99.9% durability; 1 GiB–16 TiB; up to 16,000 IOPS; up to 1,000 MiB/s throughput; general applications, a good starting point when you do not yet fully understand the performance profile
  • io2 (Provisioned IOPS SSD): 99.999% durability; 4 GiB–16 TiB; up to 64,000 IOPS **; up to 1,000 MiB/s throughput **; I/O-intensive applications and databases
  • io2 Block Express (Provisioned IOPS SSD): 99.999% durability; 4 GiB–64 TiB; up to 256,000 IOPS; up to 4,000 MiB/s throughput; business-critical applications and databases that demand the highest performance

* The throughput limit is between 128 MiB/s and 250 MiB/s, depending on the volume size.
** Maximum IOPS and throughput are guaranteed only on instances built on the Nitro System provisioned with more than 32,000 IOPS.
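To make the comparison concrete, here is a small sketch that checks a size/IOPS requirement against the ceilings above. The limits are hard-coded from this post’s table; treat them as illustrative and verify against the current EBS documentation before relying on them.

```python
# (volume type, max size in GiB, max IOPS), taken from the table above.
# Minimum volume sizes (1 GiB for gp2/gp3, 4 GiB for io2) are omitted for brevity.
VOLUME_LIMITS = [
    ("gp2", 16 * 1024, 16_000),
    ("gp3", 16 * 1024, 16_000),
    ("io2", 16 * 1024, 64_000),
    ("io2 Block Express", 64 * 1024, 256_000),
]

def eligible_volume_types(size_gib, iops):
    """Return the volume types whose size and IOPS ceilings satisfy the request."""
    return [name for name, max_size, max_iops in VOLUME_LIMITS
            if size_gib <= max_size and iops <= max_iops]

# A 20 TiB volume needing 100,000 IOPS exceeds everything except Block Express.
big = eligible_volume_types(20 * 1024, 100_000)
small = eligible_volume_types(100, 10_000)
```

The point of the exercise: once either the 16 TiB or the 64,000 IOPS ceiling is crossed, io2 Block Express is the only single-volume option, which is exactly the striping scenario it eliminates.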

The new Block Express architecture delivers the highest levels of performance with sub-millisecond latency by communicating with an AWS Nitro System-based instance using the Scalable Reliable Datagrams (SRD) protocol, which is implemented in the Nitro Card dedicated for EBS I/O function on the host hardware of the instance. Block Express also offers modular software and hardware building blocks that can be assembled in many ways, giving you the flexibility to design and deliver improved performance and new features at a faster rate.

Getting Started with io2 Block Express Volumes
You can now create io2 Block Express volumes using the Amazon EC2 console, the AWS Command Line Interface (AWS CLI), or an SDK via the Amazon EC2 API when you create R5b instances.

After you choose the EC2 R5b instance type, on the Add Storage page, under Volume Type, choose Provisioned IOPS SSD (io2). Your new volumes will be created in the Block Express format.

Things to Know
Here are a couple of things to keep in mind:

  • You can’t modify the size or provisioned IOPS of an io2 Block Express volume.
  • You can’t launch an R5b instance with an encrypted io2 Block Express volume that has a size greater than 16 TiB or IOPS greater than 64,000 from an unencrypted AMI or a shared encrypted AMI. In this case, you must first create an encrypted AMI in your account and then use that AMI to launch the instance.
  • io2 Block Express volumes do not currently support fast snapshot restore. We recommend that you initialize these volumes to ensure that they deliver full performance. For more information, see Initialize Amazon EBS volumes in Amazon EC2 User Guide.

Available Now
The io2 Block Express volumes are available in all AWS Regions where R5b instances are available: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Frankfurt), with support for more AWS Regions coming soon. We plan to allow EC2 instances of all types to connect to io2 Block Express volumes, and will have updates on this later in the year.

In terms of pricing and billing, io2 volumes and io2 Block Express volumes are billed at the same rate. Usage reports do not distinguish between io2 Block Express volumes and io2 volumes. We recommend that you use tags to help you identify costs associated with io2 Block Express volumes. For more information, see the Amazon EBS pricing page.

To learn more, visit the EBS Provisioned IOPS Volume page and io2 Block Express Volumes in the Amazon EC2 User Guide.

Channy

[The Lost Bots] Episode 1: External Threat Intelligence

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/07/19/lost-bots-vlog/

Welcome to The Lost Bots, a new vlog series where Rapid7 resident expert and former CISO Jeffrey Gardner (virtually) sits down with fellow industry experts to spill the tea on current events and trends in the security space. They’ll also share security best practices and trade war stories with the Rapid7 SOC team. The best part? Each episode is short, sweet, and to the (end)point – so you gain insights from the industry’s brightest in just 15 minutes.

For this inaugural episode, Jeffrey sits down with Rapid7 Insight Platform SVP Pete Rubio and IntSights Cofounder and CPO Alon Arvats to discuss how teams can successfully leverage external threat intelligence to identify and mitigate lurking attacks. They tackle the “what”, “why”, and “how” of external threat intelligence. They also share how security teams can effectively put external threat intel into action and what behaviors and telemetry are the most useful to find advanced threats.

Stay tuned for future episodes of The Lost Bots! For our second installment, Jeffrey will be back to discuss a topic we’ve all been hearing a lot about in recent months: Extended Detection and Response, or XDR.

Rapid7 + XDR: Security that Moves as Fast as Your Business

Post Syndicated from Rich Perkett original https://blog.rapid7.com/2021/07/19/extended-detection-response/

Since launching InsightIDR almost six years ago, our mission has remained constant: make it possible for any security team to achieve fast, sophisticated threat detection and response programs that scale with their business. Making threat detection and response as agile and simple as possible enables security professionals to focus their time and energy on the most critical incidents and the things that matter most.

We didn’t set out to build another security incident and event management (SIEM) or endpoint detection and response (EDR) product. Industry approaches at the time were — and largely remain — broken. We set out to build a more effective, efficient way to tackle threat detection and response across modern, distributed, hybrid cloud environments. Through the early days of introducing the user and entity behavior analytics (UEBA) category, to the addition of the Rapid7 agent to unlock EDR and attacker behavior analytics (ABA), and continued value delivery with deception technology, file integrity monitoring (FIM), automation, network traffic analysis (NTA), cloud detections, and security orchestration and automated response (SOAR), we were always informed by what we learned from customers, what we saw in our own service engagements, and community-infused threat intelligence projects, like Metasploit, Velociraptor, Project Sonar, and Project Heisenberg.

We are excited that analysts and others in the market are now validating the approach that we’ve taken from the start. For some time, we knew we had an “X factor” that differentiated InsightIDR — and made it challenging to put it into a specific pre-existing market category. It’s so fitting that the market is starting to equate our approach with extended detection and response, or XDR.

We’re happy to continue to lead from the front, and, regardless of the acronym, we remain unwavering in our promise to continue enabling security operations professionals to detect threats earlier and respond smarter and faster to secure their environments — regardless of scale. As part of our commitment to continue to forge the frontier of threat detection and response, we are thrilled to leverage technology and talent from IntSights. It supercharges the Threat Engine that powers our attack mapping and out-of-the-box detections — strengthening the signal-to-noise and extinguishing threats faster.

XDR that delivers the freedom to focus

XDR unifies and transforms relevant security data from across your modern environment to detect real attacks and provide security teams with high-context and actionable insights. By aggregating threat detection and response across multiple controls, XDR can improve threat detection and response efficacy and efficiency.

After countless conversations with customers, thousands of professional services engagements, and living in customers’ shoes with our managed detection and response (MDR) SOC experience, we consistently heard one thing: what eludes security teams is not attackers, it’s time. Teams simply don’t have the time or resources to do it all, and forced trade-offs create opportunities for attackers to get in. That’s why we purpose-built InsightIDR to give teams time back to focus on successful, proactive and complete threat detection and response programs.

Empower every analyst to be an expert. Today’s security analyst has to be a Renaissance player to be successful versus attackers. But longer onboarding cycles, antiquated rule sets created by previous employees, and steep learning curves make it challenging to ensure every analyst is productive. InsightIDR is cloud-native and SaaS-delivered to eliminate the distractions of months-to-years-long deployments and configurations. With a focus on flexibility, intuitive UI, and a highly contextualized view of the environment “out of the box,” InsightIDR helps teams level-up resources and see value on day one.

Transform security with your business. As every organization pursues digital transformation and cloud computing becomes the default, security teams struggle to bring legacy tools along and manage a vast array of disparate point solutions to try to get the full picture. InsightIDR has always had a forward-looking view of the attack surface, providing a harmonious, correlated view of users, endpoints, network, cloud, and applications — immediately. No more tab-hopping.

Trust your detections, immediately. One of the more egregious and frustrating errors that accompanies alternative threat detection and response offerings is the volume of false positives. Given that teams already have so little time to spare, even spending a moment chasing a false alarm is irritating; when it happens during dinner or on a weekend, it’s infuriating. InsightIDR takes a multi-layered detection approach, leveraging our knowledge of customer environments along with our internal and community-infused threat intelligence to fuel our Threat Engine. This engine encompasses all of our proprietary machine learning and algorithms that enable us to zero in on both known and unknown threats, with further human curation by our detections engineering experts. This highly curated library is then expertly tested in the field by our industry-leading MDR SOC. The result is a library of high-fidelity, relevant detections teams can feel confident acting on.

Accelerate response, stay ahead of attackers. When your team is up against an attack, every second matters; we don’t want to waste even a single mouse-click. With our detailed, correlated investigations, teams have the full timeline of an attack and all relevant information they need in one place. With expert- and community-driven playbooks, and containment and automation built in, analysts are empowered to eliminate threats faster — before attackers can succeed.

Strengthening our signal-to-noise with IntSights

As we look ahead to what’s next, a theme has emerged: signal-to-noise. The sprawl of data and noise is infinite. What matters is finding what matters.

With the acquisition of IntSights, we doubled down on our goal to deliver the highest-fidelity set of detections to thwart attackers. As a leading provider of contextualized external threat intelligence and proactive remediation, IntSights further strengthens our XDR offering, delivering improved signal-to-noise and higher-fidelity alerts to drive earlier threat detection and accelerated response. Combining IntSights’ external threat view with Rapid7’s knowledge of customers’ digital footprints and community-infused threat intelligence unlocks the most comprehensive, tailored view of a customer’s attack surface available.

We have a lot to be optimistic about when it comes to IntSights. One of the most exciting things is our shared view that we can democratize sophisticated intelligence, detection, and response. We are thrilled to collaborate with them on this next chapter, and look forward to sharing more with customers soon.

Rapid7 Acquires IntSights to Tackle the Expanding Threat Landscape

Post Syndicated from Corey Thomas original https://blog.rapid7.com/2021/07/19/rapid7-acquires-intsights/

I am pleased to share the exciting news that, today, Rapid7 acquired IntSights, a leading provider of cloud-native, external threat intelligence and proactive threat remediation. The IntSights team is fantastic, and their threat intelligence capabilities are equally impressive. I’ll share more about why IntSights is a great fit for Rapid7 and our customers, but let me first share some context for this acquisition.

We’ve seen firsthand that with digital transformation the attack surface has increased exponentially and customers are recognizing that improved visibility to their internal risk profile is just one part of the security equation. With today’s threat landscape, it’s imperative for security teams to have early, contextualized threat detection across their internal and external environment. Yet most security teams are already under-resourced and overburdened, struggling to identify and address what needs immediate action. So, under these circumstances, how can we help security teams stay one step ahead of the attackers? Enter IntSights.

IntSights offers a leading, cloud-native, external threat intelligence and remediation solution that helps customers solve this emerging challenge. Sophisticated threat intelligence capabilities are typically only realistic for the most mature, well-resourced organizations. But IntSights is disrupting that and democratizing threat intelligence so that every organization can protect itself, regardless of size or capabilities.

There’s no shortage of threat intelligence information available today, but much of it lacks context, creating too much alert noise and additional work for already-overburdened security teams. IntSights’ flagship Threat Command offering turns complex signals into contextualized attack-surface intelligence, making it easier for organizations of any size to remediate their most critical external threats.

For example, IntSights monitors the clear, deep, and dark webs to identify threats specifically targeting an organization’s digital footprint, including things like data and credential leakage, malicious activity tied to their brand, and fraud. But IntSights goes beyond monitoring and takes action by proactively remediating with automated takedowns of threats.

Coupling IntSights’ tailored, external threat-intelligence capabilities with Rapid7’s community-infused threat intelligence and deep understanding of customer environments will enable customers with a unified view into threats, attack-surface monitoring, greater signal-to-noise ratio, relevant insights, and proactive threat mitigation.

What’s next

IntSights has built a tremendous business and we look forward to making Threat Command available as a standalone offering to an even broader set of customers through this acquisition. At the same time, we will begin integrating IntSights’ threat-intelligence capabilities into the Rapid7 Insight Platform to unlock earlier threat identification and faster remediation across our entire portfolio. Learn more about how we intend to accelerate security operations and emergent threat response with our platform.

In addition, we will leverage IntSights’ capabilities to enhance our cloud-native, extended detection and response (XDR) capabilities by enabling high-quality, high-fidelity alerts to ensure efficient security operations, earlier threat detection, and accelerated response times. Learn more about how the acquisition of IntSights enhances our best-in-class XDR offering.

Welcome, IntSights!

From its beginning, IntSights set out on a mission to democratize threat intelligence, something that is very culturally synergistic with Rapid7, as we continue our journey to close the security achievement gap and bring high-quality and efficient security operations to organizations of all sizes and capabilities. I want to welcome IntSights’ customers, partners, and team members to Rapid7. Today we begin a new and exciting chapter together as we continue to innovate in the threat-intelligence space, always keeping the needs of our customers at the forefront. I look forward to what will undoubtedly be great things to come.

Implement a centralized patching solution across multiple AWS Regions

Post Syndicated from Akash Kumar original https://aws.amazon.com/blogs/security/implement-a-centralized-patching-solution-across-multiple-aws-regions/

In this post, I show you how to implement a centralized patching solution across Amazon Web Services (AWS) Regions by using AWS Systems Manager in your AWS account. This helps you to initiate, track, and manage your patching events across AWS Regions from one centralized place.

Enterprises with large, multi-Region hybrid environments must decide whether to centralize patching by using Systems Manager to map all their instances under one Region, or to decentralize patching to each Region where instances are deployed. Both approaches have trade-offs in cost and operational overhead. To centralize patching under one Region, you must enable the Systems Manager advanced-instances tier if your running instance count exceeds the registration maximum for on-premises servers or VMs per AWS account per Region (at the time of this blog post, the maximum is 1,000). The advanced tier is priced at a higher pay-as-you-go rate, but provides additional features on top of the standard-instances tier, such as the ability to connect to your hybrid machines by using Systems Manager Session Manager, and Microsoft application patching. With a decentralized approach, if you aren’t interested in advanced-tier features and have more instances than the per-Region registration maximum allowed at the standard tier, you can distribute your instances across Regions and run them under the standard tier, which is priced at a lower rate than the advanced tier.

Solution overview

Figure 1 shows the architecture of the centralized patching solution across multiple Regions.

Figure 1: Solution architecture

The automated solution I provide in this post is focused on scheduling and patching managed instances across AWS Regions. Systems Manager Maintenance Windows initiates a series of steps for automated patching for the instances, regardless of which Regions the instances are in.

Here are the key building blocks for this solution:

AWS Systems Manager Maintenance Windows is a feature you can use to define a schedule for when to perform potentially disruptive actions on your instances, such as patching an operating system, updating drivers, or installing software. Maintenance Windows also makes it possible for you to schedule actions on other AWS resource types, such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Simple Queue Service (Amazon SQS) queues, AWS Key Management Service (AWS KMS) keys, and others that are out of scope for this blog post.

AWS Lambda automatically runs your code without requiring you to provision or manage infrastructure. It can automatically scale your application by running code in response to each event. Also, you only pay for the compute time you consume, so you’re never paying for over-provisioned infrastructure.

AWS Systems Manager Automation simplifies common maintenance and deployment tasks for Amazon Elastic Compute Cloud (Amazon EC2) instances and other AWS resources, without the need for human action.

An AWS Systems Manager document (SSM document) defines the actions that Systems Manager performs on your managed instances.

Solution details

Figure 2 shows the centralized patching solution for a multi-Region hybrid workflow in detail.

Figure 2: Detailed workflow diagram: Centralized patching solution for multi-Region and hybrid instances

You implement the solution as follows:

  1. In a central management Region, configure a maintenance window with a custom Lambda function as a target, with a JSON payload input that defines your target Regions, custom SSM document information, and target resource groups.
  2. Configure the Lambda function to first filter out the target Regions that have no instances mapped to resource groups, and then initiate the Systems Manager Automation API in the remaining Regions that do.
  3. Configure Systems Manager Automation to initiate Run Command in all target Regions according to the custom SSM document.
  4. Configure the custom Automation document to call the AWS-RunPatchBaseline document against all instances in the resource group defined in the input JSON payload.
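The filtering and fan-out in steps 2 and 3 above can be sketched roughly as follows. The function and document names are illustrative assumptions, not the exact code the CloudFormation template deploys; the boto3 call is isolated in a function so the sketch runs without AWS credentials.

```python
def regions_with_instances(instance_counts):
    """instance_counts maps a Region name to the number of instances in its resource group.

    Regions with zero matching instances are dropped before any API call is made."""
    return [region for region, count in instance_counts.items() if count > 0]

def start_patching(regions, document_name, operation, targets):
    """Start one Automation execution per remaining Region (needs AWS credentials)."""
    import boto3
    executions = {}
    for region in regions:
        ssm = boto3.client("ssm", region_name=region)
        resp = ssm.start_automation_execution(
            DocumentName=document_name,          # e.g. the custom Automation document
            Parameters={"Operation": [operation]},  # "Scan" or "Install"
            TargetParameterName="InstanceIds",
            Targets=targets,                     # e.g. a ResourceGroup target
        )
        executions[region] = resp["AutomationExecutionId"]
    return executions

counts = {"us-east-1": 12, "us-east-2": 0, "eu-west-1": 3}
print(regions_with_instances(counts))  # only these Regions proceed to patching
```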

Solution deployment

To deploy the solution, you perform these steps:

  1. Verify prerequisites in your AWS account
  2. Deploy an AWS CloudFormation template
  3. Create a test patching event

Step 1: Verify prerequisites in your AWS account

The sample solution provided by this blog requires that you set up Systems Manager in your account and resource groups in the target Regions. Before you get started, make sure you’ve completed all of the following steps:

Step 2: Deploy the CloudFormation template

In this next step, you deploy a CloudFormation template to implement the centralized patching solution across Regions in your account. Make sure you deploy the template within the AWS account and Region from which you want to centralize patching coordination.

To deploy the CloudFormation stack

  1. Choose the following Launch Stack button to launch a CloudFormation stack in your account.

    Select the Launch Stack button to launch the template

Note: The stack will launch in the N. Virginia (us-east-1) Region. It takes approximately 15 minutes for the CloudFormation stack to complete. To deploy this solution into other AWS Regions, download the solution’s CloudFormation template and deploy it to the selected Region.
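For deploying the downloaded template into another Region programmatically rather than through the console, a minimal boto3 sketch might look like the following. The stack name and template URL are placeholders, and the parameter values mirror the defaults listed below.

```python
def to_cfn_parameters(values):
    """Convert a plain dict into CloudFormation's ParameterKey/ParameterValue shape."""
    return [{"ParameterKey": k, "ParameterValue": v} for k, v in values.items()]

def deploy(region, template_url, values, stack_name="central-patching"):
    """Launch the stack (needs AWS credentials, so it is not invoked here)."""
    import boto3
    cfn = boto3.client("cloudformation", region_name=region)
    cfn.create_stack(
        StackName=stack_name,
        TemplateURL=template_url,
        Parameters=to_cfn_parameters(values),
        # The template creates named IAM roles, matching the console acknowledgement
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

params = to_cfn_parameters({"Duration": "5", "Schedule": "cron(0 4 ? * SUN *)"})
print(params)
```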

  2. In the AWS CloudFormation console, on the Select Template form, choose Next.
  3. On the Specify Details page, provide the following input parameters. You can modify the default values to customize the solution for your environment.

    Input parameter   Description
    Duration          The duration of the maintenance window automation job, in hours. The default is 5.
    OwnerInformation  The owner information for the maintenance window. The default is Patch Management Team.
    Schedule          The schedule for the maintenance window, in the form of either a cron or rate expression. The default is cron(0 4 ? * SUN *).
    TimeZone          The time zone for the maintenance window automation job. The default is US/Eastern.
    Figure 4: An example of the values entered for the template parameters

  4. After you’ve entered values for all of the input parameters, choose Next.
  5. On the Options page, keep the defaults, and then choose Next.
  6. On the Review page, under Capabilities, select the check box next to I acknowledge that AWS CloudFormation might create IAM resources with custom names, and then choose Create stack.

    Figure 5: CloudFormation capabilities acknowledgement


After the Status field for the CloudFormation stack changes to CREATE_COMPLETE, as shown in Figure 6, the solution is implemented and is ready for testing.

Figure 6: Completed deployment of the AWS CloudFormation stack

Step 3: Create a test patching event

After the CloudFormation stack has completed deployment, an AWS maintenance window is created. To test the centralized patching solution, you can use this maintenance window to initiate patching across Regions.

(Optional) To create a test patching event, edit the Lambda task as follows: under the Tasks tab, add the following JSON data as the payload, and update the following parameters with your own data, as needed for the target environment: resource group, AutomationAssumeRole ARN, MaxConcurrency, MaxErrors, Operation (Scan/Install), and Regions.

{
  "WindowId": "{{WINDOW_ID}}",
  "TaskExecutionId": "{{TASK_EXECUTION_ID}}",
  "Document": {
    "Name": "CustomAutomationDocument",
    "Version": "1",
    "Parameters": {
      "AutomationAssumeRole": [
        "arn:aws:iam::111222333444:role/AWS-SystemsManager-AutomationAdministrationRole"
      ],
      "Operation": [
        "Scan"
      ]
    }
  },
  "TargetParameterName": "InstanceIds",
  "Targets": [
    {
      "Key": "ResourceGroup",
      "Values": [
        "DevGroup"
      ]
    }
  ],
  "MaxConcurrency": "10",
  "MaxErrors": "1",
  "Regions": ["us-east-2","us-east-1"]
}
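To illustrate how a payload like the one above is attached to the window's Lambda task programmatically (instead of through the console's Tasks tab), the sketch below uses `register_task_with_maintenance_window`. The window ID, function ARN, and role ARN are placeholders; the `{{WINDOW_ID}}` and `{{TASK_EXECUTION_ID}}` pseudo-parameters are resolved by Systems Manager at run time, so they remain verbatim in the payload.

```python
import json

# Trimmed version of the payload shown above; values are illustrative
payload = {
    "WindowId": "{{WINDOW_ID}}",
    "TaskExecutionId": "{{TASK_EXECUTION_ID}}",
    "Document": {"Name": "CustomAutomationDocument", "Version": "1"},
    "TargetParameterName": "InstanceIds",
    "MaxConcurrency": "10",
    "MaxErrors": "1",
    "Regions": ["us-east-2", "us-east-1"],
}

def register_lambda_task(window_id, function_arn, role_arn):
    """Attach the Lambda task to the window (needs AWS credentials, so not run here)."""
    import boto3
    ssm = boto3.client("ssm")
    return ssm.register_task_with_maintenance_window(
        WindowId=window_id,
        TaskArn=function_arn,
        TaskType="LAMBDA",
        ServiceRoleArn=role_arn,
        Priority=1,
        MaxConcurrency="1",  # one invocation of the coordinator function
        MaxErrors="1",
        TaskInvocationParameters={
            "Lambda": {"Payload": json.dumps(payload).encode()}
        },
    )["WindowTaskId"]
```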

Wait for the next execution of the maintenance window. On the History tab, you should see a status of Success, indicating that patching is complete, as shown in Figure 7.

Figure 7: The History tab for the maintenance window, showing successful patching

To see more details related to the completed automations, look on the Automation Executions tab, shown in Figure 8.

Figure 8: The Executions tab showing details

Congratulations! You’ve successfully deployed and tested a centralized patching solution for an AWS multi-Region hybrid environment. In order to fully implement this solution, you’ll need to add the resource groups in all your target Regions and update the payload JSON in Systems Manager Maintenance Windows.

Summary

You’ve learned how to use Systems Manager to centralize patching across multiple AWS Regions and to include on-premises instances in your patching solution. All of the code for this solution is available as part of a CloudFormation template. Feel free to play around with the code; we hope it helps you learn more about automated security remediation. You can adjust the code to better fit your unique environment, or extend it with additional steps. For example, you could extend it across accounts, or create a custom Systems Manager document to run across Regions.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about using this solution, contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Akash Kumar

Akash is a Cloud Migration Specialist with AWS Professional Services. He is passionate about re-architecting, designing, and developing modern IT solutions for the cloud.

[$] Descriptorless files for io_uring

Post Syndicated from original https://lwn.net/Articles/863071/rss

The lowly file descriptor is one of the fundamental objects in Linux
systems. A file descriptor, which is a simple integer value, can refer to an
open file — or to a network connection, a running process, a loaded BPF
program, or a namespace.
Over the years, the use of file descriptors to refer to transient objects
has grown to the point that it can be difficult to justify an API that
uses anything else. Interestingly, though, the io_uring subsystem looks as if it is moving
toward its own number space separate from file descriptors.

Candiru: Another Cyberweapons Arms Manufacturer

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/07/candiru-another-cyberweapons-arms-manufacturer.html

Citizen Lab has identified yet another Israeli company that sells spyware to governments around the world: Candiru.

From the report:

Summary:

  • Candiru is a secretive Israel-based company that sells spyware exclusively to governments. Reportedly, their spyware can infect and monitor iPhones, Androids, Macs, PCs, and cloud accounts.
  • Using Internet scanning we identified more than 750 websites linked to Candiru’s spyware infrastructure. We found many domains masquerading as advocacy organizations such as Amnesty International and the Black Lives Matter movement, as well as media companies and other civil-society-themed entities.
  • We identified a politically active victim in Western Europe and recovered a copy of Candiru’s Windows spyware.
  • Working with Microsoft Threat Intelligence Center (MSTIC) we analyzed the spyware, resulting in the discovery of CVE-2021-31979 and CVE-2021-33771 by Microsoft, two privilege escalation vulnerabilities exploited by Candiru. Microsoft patched both vulnerabilities on July 13th, 2021.
  • As part of their investigation, Microsoft observed at least 100 victims in Palestine, Israel, Iran, Lebanon, Yemen, Spain, United Kingdom, Turkey, Armenia, and Singapore. Victims include human rights defenders, dissidents, journalists, activists, and politicians.
  • We provide a brief technical overview of the Candiru spyware’s persistence mechanism and some details about the spyware’s functionality.
  • Candiru has made efforts to obscure its ownership structure, staffing, and investment partners. Nevertheless, we have been able to shed some light on those areas in this report.

We’re not going to be able to secure the Internet until we deal with the companies that engage in the international cyber-arms trade.

Field Notes: How Sportradar Accelerated Data Recovery Using AWS Services

Post Syndicated from Mithil Prasad original https://aws.amazon.com/blogs/architecture/field-notes-how-sportradar-accelerated-data-recovery-using-aws-services/

This post was co-written by Mithil Prasad, AWS Senior Customer Solutions Manager, Patrick Gryczkat, AWS Solutions Architect, Ben Burdsall, CTO at Sportradar and Justin Shreve, Director of Engineering at Sportradar. 

Ransomware is a type of malware that encrypts data, effectively locking those affected out of their own data and demanding a payment to decrypt it. The frequency of ransomware attacks has increased over the past year, with local governments, hospitals, and private companies all experiencing cases of ransomware.

For Sportradar, providing their customers with access to high-quality sports data and insights is central to their business. Ensuring that their systems are designed securely, in a way that minimizes the possibility of a ransomware attack, is a top priority. While ransomware attacks can occur both on premises and in the cloud, AWS services offer increased visibility along with native encryption and backup capabilities, which help minimize the likelihood and impact of a ransomware attack.

Recovery, backup, and the ability to go back to a known good state is best practice. To further expand their defense and diminish the value of ransom, the Sportradar architecture team set out to leverage their AWS Step Functions expertise to minimize recovery time. The team’s strategy centered on achieving a short deployment process. This process commoditized their production environment, allowing them to spin up interchangeable environments in new isolated AWS accounts, pulling in data from external and isolated sources, and diminishing the value of a production environment as a ransom target. This also minimized the impact of a potential data destruction event.

By partnering with AWS, Sportradar was able to build a secure and resilient infrastructure to provide timely recovery of their service in the event of data destruction by an unauthorized third party. Sportradar automated the deployment of their application to a new AWS account and established a new isolation boundary from an account with compromised resources. In this blog post, we show how the Sportradar architecture team used a combination of AWS CodePipeline and AWS Step Functions to automate and reduce their deployment time to less than two hours.

Solution Overview

Sportradar’s solution uses AWS Step Functions to orchestrate the deployment of resources, the recovery of data, and the deployment of application code, and to navigate all necessary dependencies for order of deployment. While deployment can be orchestrated through CodePipeline, Sportradar used their familiarity with Step Functions to create a quick and repeatable deployment process for their environment.

Sportradar’s solution to a ransomware Disaster Recovery scenario has also provided them with a reliable and accelerated process for deploying development and testing environments. Developers are now able to scale testing and development environments up and down as needed.  This has allowed their Development and QA teams to follow the pace of feature development, versus weekly or bi-weekly feature release and testing schedules tied to a single testing environment.

Figure 1 – Reference Architecture Diagram showing Automated Deployment Flow

Prerequisites

The prerequisites for implementing this deployment strategy are:

  • An implemented database backup policy
  • Ideally data should be backed up to a data bunker AWS account outside the scope of the environment you are looking to protect. This is so that in the event of a ransomware attack, your backed up data is isolated from your affected environment and account
  • Application code within a GitHub repository
  • Separation of duties
  • Access and responsibility for the backups and GitHub repository should be separated to different stakeholders in order to reduce the likelihood of both being impacted by a security breach

Step 1: New Account Setup 

Once data destruction is identified, the first step in Sportradar’s process is to use a pre-created runbook to create a new AWS account.  A new account is created in case the malicious actors who have encrypted the application’s data have access to not just the application, but also to the AWS account the application resides in.

The runbook sets up a VPC for a selected Region, as well as spinning up the following resources:

  • Security Groups with network connectivity to their git repository (in this case GitLab)
  • IAM Roles for their resources
  • KMS Keys
  • Amazon S3 buckets with CloudFormation deployment templates
  • CodeBuild, CodeDeploy, and CodePipeline

Step 2: Deploying Secrets

It is a security best practice to ensure that no secrets are hard-coded into your application code. So, after account setup is complete, the new AWS account’s access keys and the selected AWS Region are passed into CodePipeline variables. The application secrets are then deployed to the AWS Parameter Store.
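A minimal sketch of this step is shown below: each secret goes into Parameter Store as an encrypted SecureString. The hierarchical parameter names are invented for illustration; Sportradar's actual naming scheme is not public, and the boto3 call is kept in an uninvoked function since it needs credentials.

```python
def parameter_path(service, name, env="recovery"):
    """Build a hierarchical parameter name, e.g. /recovery/api/db_password."""
    return f"/{env}/{service}/{name}"

def put_secret(name, value, region):
    """Store one secret, encrypted at rest (needs AWS credentials, so not run here)."""
    import boto3
    ssm = boto3.client("ssm", region_name=region)
    ssm.put_parameter(
        Name=name,
        Value=value,
        Type="SecureString",  # encrypted with the account's KMS key
        Overwrite=True,
    )

print(parameter_path("api", "db_password"))
```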

Step 3: Deploying Orchestrator Step Function and In-Memory Databases

To optimize deployment time, Sportradar decided to leave the deployment of their in-memory databases running on Amazon EC2 outside of their orchestrator Step Function.  They deployed the database using a CloudFormation template from their CodePipeline. This was in parallel with the deployment of the Step Function, which orchestrates the rest of their deployment.

Step 4: Step Function Orchestrates the Deployment of Microservices and Alarms

The orchestrator Step Function deploys Sportradar’s microservices solutions, bringing up 10+ Amazon RDS instances and restoring each dataset from DB snapshots. Following that, 80+ producer Amazon SQS queues and S3 buckets for data staging are deployed. After the successful deployment of the SQS queues, the Lambda functions for data ingestion and 15+ data processing Step Functions are deployed to begin pulling in data from various sources into the solution.

Then the API Gateways and Lambda functions which provide the API layer for each of the microservices are deployed in front of the restored RDS instances. Finally, 300+ Amazon CloudWatch Alarms are created to monitor the environment and trigger necessary alerts. In total Sportradar’s deployment process brings online: 15+ Step Functions for data processing, 30+ micro-services, 10+ Amazon RDS instances with over 150GB of data, 80+ SQS Queues, 180+ Lambda functions, CDN for UI, Amazon Elasticache, and 300+ CloudWatch alarms to monitor the applications. In all, that is over 600 resources deployed with data restored consistently in less than 2 hours total.
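The staged ordering described above, where each wave of resources waits on its dependency, maps naturally onto a chain of Amazon States Language task states. The sketch below generates such a chain; the state names and Lambda ARNs are invented placeholders, since Sportradar's actual state machine is not published.

```python
import json

def staged_deployment(stages):
    """Chain one Task state per stage so each stage waits on the previous one."""
    states, names = {}, [s["name"] for s in stages]
    for i, stage in enumerate(stages):
        state = {"Type": "Task", "Resource": stage["resource"]}
        if i + 1 < len(names):
            state["Next"] = names[i + 1]  # proceed only after this stage succeeds
        else:
            state["End"] = True
        states[stage["name"]] = state
    return {"StartAt": names[0], "States": states}

# Mirrors the order in the text: databases, queues/buckets, ingestion, APIs, alarms
definition = staged_deployment([
    {"name": "RestoreDatabases", "resource": "arn:aws:lambda:::function:restore-db"},
    {"name": "DeployQueuesAndBuckets", "resource": "arn:aws:lambda:::function:deploy-queues"},
    {"name": "DeployIngestion", "resource": "arn:aws:lambda:::function:deploy-ingestion"},
    {"name": "DeployApiLayer", "resource": "arn:aws:lambda:::function:deploy-api"},
    {"name": "CreateAlarms", "resource": "arn:aws:lambda:::function:create-alarms"},
])
print(json.dumps(definition, indent=2))
```

In a real deployment the resulting definition would be passed to `stepfunctions.create_state_machine`, and stages without mutual dependencies (for example, queues and buckets) could instead sit inside a Parallel state.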

Figure 2 – Reference Architecture Diagram of the Recovered Application

Conclusion

In this blog, we showed how Sportradar’s team used Step Functions to accelerate their deployments, and a walk-through of an example disaster recovery scenario. Step Functions can be used to orchestrate the deployment and configuration of a new environment, allowing complex environments to be deployed in stages, and for those stages to appropriately wait on their dependencies.

For examples of Step Functions being used in different orchestration scenarios, check out how Step Functions acts as an orchestrator for ETLs in Orchestrate multiple ETL jobs using AWS Step Functions and AWS Lambda and Orchestrate Apache Spark applications using AWS Step Functions and Apache Livy. For migrations of Amazon EC2 based workloads, read more about CloudEndure, Migrating workloads across AWS Regions with CloudEndure Migration.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.


Ben Burdsall

Ben is currently the chief technology officer of Sportradar, a data provider to the sporting industry, where he leads a product and engineering team of more than 800. Before that, Ben was part of the global leadership team of Worldpay.

Justin Shreve

Justin is Director of Engineering at Sportradar, leading an international team to build an innovative enterprise sports analytics platform.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/863453/rss

Security updates have been issued by Arch Linux (chromium, firefox, mbedtls, nextcloud, python-pillow, ruby, ruby2.6, ruby2.7, systemd, thunderbird, varnish, and vivaldi), Debian (thunderbird), Fedora (chromium, firefox, and linux-firmware), Gentoo (apache, commons-fileupload, dovecot, and mediawiki), openSUSE (firefox, fossil, go1.16, and icinga2), Oracle (firefox, kernel, and kernel-container), Red Hat (nettle), and SUSE (firefox and go1.16).

Accelerating SecOps and Emergent Threat Response with the Insight Platform

Post Syndicated from Lee Weiner original https://blog.rapid7.com/2021/07/19/insight-platform-and-extended-detection-response/

Accelerating SecOps and Emergent Threat Response with the Insight Platform

When we talk to customers about the Insight Platform and how to best support their evolving needs, they’re often not asking for another product, but rather a capability that enhances a current experience. Our customers have the core ingredients of a robust security program, but as their attack surfaces endlessly sprawl, they’re looking for ways to double down on the efficiency and streamlining of security operations they’re already experiencing from the platform today. Efficiency and streamlined operations are 2 areas where our team will continue to focus efforts in order to deliver value across Rapid7’s growing best-in-class portfolio, while enabling cross-capability experiences that improve security-team effectiveness.

Responding to emerging threats and vulnerabilities: Alerts are not enough

One of Rapid7’s greatest strengths is the fact that we have market-leading products in detection and response, cloud security, and vulnerability management. As we increasingly see customers leveraging our products, there are many similar expectations from those user bases. One that stands out is the expectation that Rapid7 quickly respond to emerging threats and new vulnerabilities in a way that provides actionable context. We refer to this program as Emergent Threat Response. We spend a lot of time on this today, though we need to do more to help our customers combat emerging threats. We’re often addressing and detailing what we know and what we’re doing about high-profile threats (e.g. SolarWinds SUNBURST, Microsoft Exchange Zero-Day), and while our customers have responded very positively to this type of outreach, they have also asked for more of it!

We have a unique opportunity with customers to enable a 2-way conversation. Our customers need to improve signal-to-noise, and our Emergent Threat Response approach does help to accomplish that. We can do a lot more though, and with more intelligence on the internal and external threat landscape we can offer more context and treat more threats with Emergent Threat Response. We’re constantly obsessing over improving signal-to-noise, so we’re careful to pick our spots. However, while an emerging threat may only impact a very small percentage of machines across our customer base, impacted customers may categorize those machines as high-value assets. Customers may also have a lot of interest in a specific threat group and are eager to learn more about them and the detections we have available for their known techniques. In both of these use cases — whether we’re pushing our intelligence or allowing customers to pull it — we can maintain our high standards for signal-to-noise as long as we’re always prioritizing relevancy.

The Insight Platform + IntSights: Enriching alerts and driving contextualized intelligence

When customers are battling emergent threats, core alerts and vulnerability information is important; but our customers are increasingly looking to understand more about adversary groups, tactics and techniques, and why they were targeted. Today we have a very comprehensive view of our customers’ internal networks. This is incredibly helpful to power every product we provide, but investing in more scalable ways to connect this internal profile to an external view of the world increases our ability to deliver timely, relevant, and actionable intelligence. With IntSights joining the Rapid7 family, this aspiration has become a reality. Beyond the Emergent Threat Response use case we drilled into here, the platform will leverage IntSights’ contextualized external threat intelligence to power and strengthen our threat library, risk scoring, and vulnerability prioritization. We believe we can add/enhance capabilities across the portfolio to not only help our customers solve the security concerns of today, but also take a proactive approach to defend against the security concerns of tomorrow.

Learn more about what’s in store for the Insight Platform as Rapid7 welcomes IntSights.

Was there cash from Recep Tayyip Erdoğan? First not only in Turkey. €25 in hand for DPS voters in Moldova

Post Syndicated from Николай Марченко original https://bivol.bg/%D0%BF%D1%8A%D1%80%D0%B2%D0%B8-%D0%BD%D0%B5-%D1%81%D0%B0%D0%BC%D0%BE-%D0%B2-%D1%82%D1%83%D1%80%D1%86%D0%B8%D1%8F-%D0%BF%D0%BE-e25-%D0%BD%D0%B0-%D1%80%D1%8A%D0%BA%D0%B0-%D0%B7%D0%B0-%D0%B3%D0%BB%D0%B0.html

Monday, 19 July 2021


At least two complaints have been received by the Central Election Commission from the Republic of Moldova concerning vote-buying in favor of the Movement for Rights and Freedoms party (DPS) in three polling stations in the Gagauz…

The collective thoughts of the interwebz
