Security updates for Wednesday

Post Syndicated from original https://lwn.net/Articles/903676/

Security updates have been issued by CentOS (389-ds-base, firefox, java-1.8.0-openjdk, java-11-openjdk, kernel, postgresql, python, python-twisted-web, python-virtualenv, squid, thunderbird, and xz), Fedora (ceph, firefox, java-1.8.0-openjdk, java-11-openjdk, java-17-openjdk, java-latest-openjdk, and kubernetes), Oracle (firefox, go-toolset and golang, libvirt libvirt-python, openssl, pcre2, qemu, and thunderbird), SUSE (connman, drbd, kernel, python-jupyterlab, samba, and seamonkey), and Ubuntu (linux-oem-5.14, linux-oem-5.17 and ntfs-3g).

Report on the work done as Minister of Electronic Governance

Post Syndicated from Bozho original https://blog.bozho.net/blog/3931

It was an honor for me to be the first Minister of Electronic Governance, in the 99th government of Bulgaria.

It is important that this policy continues to be represented at the highest level, and I believe there will be continuity in the caretaker cabinet. The effort to digitalize the state continues, and I think we laid good foundations and set a good pace.

A report on the work done (and started) over the last seven months is published here: https://otchet.egov.bg/.

It is no coincidence that the "cybersecurity" section is so long. Both the war and the state's lag in this area made it necessary to act urgently, with limited capacity, to secure the institutions and the citizens' data that they process.

We could have done more. In organizations on the scale of the state, things move slowly so that there is predictability and sustainability, and that is how it should be. On the other hand, we have years of backlog to make up, so a balance between speed and sustainability is needed. I believe that such a balance existed and will continue to exist.

Electronic identification should go live at the beginning of next year, and that will unlock easy access to electronic services for many citizens. Even without it, the use of e-services doubled during the mandate. Not because we launched many new electronic services, but because the very existence of a ministry, and the visibility that this status brings, has led to more demand. And with that demand will come demand for convenience, which is where we need to step in.

Elections are ahead, in which I will again be a candidate, with the commitment to continue from where we left off and to go much further.

The article "Report on the work done as Minister of Electronic Governance" was first published on БЛОГодаря.

New – Run Visual Studio Software on Amazon EC2 with User-Based License Model

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-run-visual-studio-software-on-amazon-ec2-with-user-based-license-model/

We are announcing the general availability of license-included Visual Studio software on Amazon Elastic Compute Cloud (Amazon EC2) instances. You can now purchase fully compliant, AWS-provided licenses of Visual Studio with a per-user subscription fee. Amazon EC2 provides preconfigured Amazon Machine Images (AMIs) of Visual Studio Enterprise 2022 and Visual Studio Professional 2022. You can launch on-demand Windows instances that include Visual Studio and Windows Server licenses without long-term licensing commitments.

Amazon EC2 provides a broad choice of instances, and customers not only have the flexibility of paying for what their end users use but can also provide the right capacity and hardware to their end users. You can simply launch EC2 instances using license-included AMIs, and multiple authorized users can connect to these EC2 instances by using Remote Desktop software. Your administrator can authorize users centrally using AWS License Manager and AWS Managed Microsoft Active Directory (AD).

Configure Visual Studio License with AWS License Manager
As a prerequisite, your administrator needs to create an instance of AWS Managed Microsoft AD and allow AWS License Manager to onboard to it by granting the required permissions. To set up authorized users, see the AWS Managed Microsoft AD documentation.

AWS License Manager makes it easier to manage your software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. To display a list of available Visual Studio software licenses, select User-based subscriptions in the AWS License Manager console.

The console lists the products that support user-based subscriptions. Each product has a descriptive name, a count of the users subscribed to use the product, and an indication of whether the subscription has been activated for use with a directory. You are also required to purchase Remote Desktop Services Subscriber Access Licenses (SALs) in the same way as Visual Studio, by authorizing users for those licenses.

When you select Visual Studio Professional, you can see product details and subscribed users. By selecting Subscribe users, you can add authorized users to the license of Visual Studio Professional software.

You can perform these administrative tasks using the AWS Command Line Interface (AWS CLI) via the AWS License Manager APIs. For example, you can subscribe a user to the product in your Active Directory:

$ aws license-manager-user-subscriptions start-product-subscription \
         --username vscode2 \
         --product VISUAL_STUDIO_PROFESSIONAL \
         --identity-provider "ActiveDirectoryIdentityProvider={DirectoryId=d-9067b110b5}" \
         --endpoint-url https://license-manager-user-subscriptions.us-east-1.amazonaws.com
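
To confirm the subscription, you can list the product subscriptions for the same directory. This is a minimal sketch that reuses the product, directory ID, and endpoint from the example above; check the License Manager user-based subscriptions API reference for the exact parameters and output shape.

$ aws license-manager-user-subscriptions list-product-subscriptions \
         --product VISUAL_STUDIO_PROFESSIONAL \
         --identity-provider "ActiveDirectoryIdentityProvider={DirectoryId=d-9067b110b5}" \
         --endpoint-url https://license-manager-user-subscriptions.us-east-1.amazonaws.com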

To launch a Windows instance with preconfigured Visual Studio software, go to the EC2 console and select Launch instances. In the Application and OS Images (Amazon Machine Image), search for “Visual Studio on EC2,” and you can find AMIs under the Quickstart AMIs and AWS Marketplace AMIs tabs.

After launching your Windows instance, your administrator associates a user with the product on the Instances screen of the License Manager console. The listed instances were launched from an AMI that provides the specified product, and users can then be associated with them.
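
The same association can also be scripted. A hedged sketch, assuming a hypothetical instance ID and reusing the directory and user from the earlier example; see the AssociateUser action in the License Manager user-based subscriptions API reference for the exact parameters.

$ aws license-manager-user-subscriptions associate-user \
         --username vscode2 \
         --instance-id i-0123456789abcdef0 \
         --identity-provider "ActiveDirectoryIdentityProvider={DirectoryId=d-9067b110b5}"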

These steps will be performed by the administrators who are responsible for managing users, instances, and costs across the organization. To learn more about administrative tasks, see User-based subscriptions in AWS License Manager.

Run Visual Studio Software on EC2 Instances
Once administrators authorize end users and launch the instances, you can connect to a Visual Studio instance remotely with Remote Desktop software, using the AD account information shared by your administrator. That's all!

The instances deployed for user-based subscriptions must remain as managed nodes with AWS Systems Manager. For more information, see Troubleshooting managed node availability and Troubleshooting SSM Agent in the AWS Systems Manager User Guide.

Now Available
License-included Visual Studio on Amazon EC2 is now available in all AWS commercial and public Regions. You are billed per user for licenses of Visual Studio through a monthly subscription and per vCPU for license-included Windows Server instances on EC2. You can use On-Demand Instances, Reserved Instances, and Savings Plans pricing models as you do today for EC2 instances.

To learn more, visit our License Manager User Based Subscriptions documentation, and please send any feedback to AWS re:Post for EC2 or through your usual AWS Support contacts.

Channy

[$] Crosswords for GNOME

Post Syndicated from original https://lwn.net/Articles/903475/

Jonathan Blandford, who is a longtime GNOME contributor—and a cruciverbalist for longer still—thought it was time for GNOME to have a crossword puzzle application. So he set out to create one, which turned into something of a yak-shaving exercise, but also, ultimately, into Crosswords. Blandford came to GUADEC 2022 to give a talk describing his journey bringing this brain exerciser (and productivity bane) to the GNOME desktop.

Party appointments by the "Petkov" cabinet after rigged and opaque competitions: The government of "Change" clientelistically gave away the ports on its way out

Post Syndicated from Екип на Биволъ original https://bivol.bg/%D0%BF%D1%80%D0%B0%D0%B2%D0%B8%D1%82%D0%B5%D0%BB%D1%81%D1%82%D0%B2%D0%BE%D1%82%D0%BE-%D0%BD%D0%B0-%D0%BF%D1%80%D0%BE%D0%BC%D1%8F%D0%BD%D0%B0%D1%82%D0%B0-%D0%BA%D0%BB%D0%B8%D0%B5%D0%BD%D1%82%D0%B5.html

Tuesday, August 2, 2022

The outgoing Minister of Transport and Communications, Nikolay Sabev, cemented the power of the "We Continue the Change" (PP) party in the state-owned ports at the last moment, by appointing party "parachutists" and people under investigation by the State Agency for National Security (DANS)…

Updated requirements for US toll-free phone numbers

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/updated-requirements-for-us-toll-free-phone-numbers/

Many Amazon Pinpoint customers use toll-free phone numbers to send messages to their customers in the United States. A toll-free number is a 10-digit number that begins with one of the following three-digit codes: 800, 888, 877, 866, 855, 844, or 833. You can use toll-free numbers to send both SMS and voice messages to recipients in the US.

What’s changing

Historically, US toll-free numbers have been available to purchase with no registration required. To prevent spam and other types of abuse, the United States mobile carriers now require new toll-free numbers to be registered. The carriers also require all existing toll-free numbers to be registered by September 30, 2022, and will block SMS messages sent from unregistered toll-free numbers after that date.

If you currently use toll-free numbers to send SMS messages, you must complete this registration process for both new and existing toll-free numbers. We’re committed to helping you comply with these changing carrier requirements.

Information you provide as part of this registration process will be provided to the US carriers through our downstream partners. It can take up to 15 business days for your registration to be processed. To help prevent disruptions of service with your toll-free number, you should submit your registration no later than September 12th, 2022.

Requesting new toll-free numbers

Starting today, when you request a United States toll-free number in the Amazon Pinpoint console, you’ll see a new page that you can use to register your use case. Your toll-free number registration must be completed and verified before you can use it to send SMS messages. For more information about completing this registration process, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

Registering existing toll-free numbers

You can also use the Amazon Pinpoint console to register toll-free numbers that you already have in your account. For more information about completing the registration process for existing toll-free numbers, see US toll-free number registration requirements and process in the Amazon Pinpoint User Guide.

In closing

Change is a constant factor in the SMS and voice messaging industry. Carriers often introduce new processes in order to protect their customers. The new registration requirements for toll-free numbers are a good example of these kinds of changes. We’ll work with you to help make sure that these changes have minimal impact on your business. If you have any concerns about these changing requirements, open a ticket in the AWS Support Center.

Go 1.19 released

Post Syndicated from original https://lwn.net/Articles/903585/

Version 1.19 of the Go programming language has been released. "Most of its changes are in the implementation of the toolchain, runtime, and libraries. As always, the release maintains the Go 1 promise of compatibility. We expect almost all Go programs to continue to compile and run as before". This release includes some memory-model tweaks, a LoongArch port, improvements in the documentation-comment mechanism, and more.

Develop an Amazon Redshift ETL serverless framework using RSQL, AWS Batch, and AWS Step Functions

Post Syndicated from Lukasz Budnik original https://aws.amazon.com/blogs/big-data/develop-an-amazon-redshift-etl-serverless-framework-using-rsql-aws-batch-and-aws-step-functions/

Amazon Redshift RSQL is a command-line client for interacting with Amazon Redshift clusters and databases. You can connect to an Amazon Redshift cluster, describe database objects, query data, and view query results in various output formats. You can use enhanced control flow commands to replace existing extract, transform, load (ETL) and automation scripts.

This post explains how you can create a fully serverless and cost-effective Amazon Redshift ETL orchestration framework. To achieve this, you can use Amazon Redshift RSQL and AWS services such as AWS Batch and AWS Step Functions.

Overview of solution

When you’re migrating from existing data warehouses to Amazon Redshift, your existing ETL processes are implemented as proprietary scripts. These scripts contain SQL statements and complex business logic such as if-then-else control flow logic, error reporting, and error handling. You can convert all these features to Amazon Redshift RSQL, which you can use to replace existing ETL and other automation scripts. To learn more about Amazon Redshift RSQL features, examples, and use cases, see Accelerate your data warehouse migration to Amazon Redshift – Part 4.

AWS Schema Conversion Tool (AWS SCT) can convert proprietary scripts to Amazon Redshift RSQL. AWS SCT can automatically convert Teradata BTEQ scripts to Amazon Redshift RSQL. To learn more about how to use AWS SCT, see Converting Teradata BTEQ scripts to Amazon Redshift RSQL with AWS SCT.

The goal of the solution presented in this post is to run complex ETL jobs implemented in Amazon Redshift RSQL scripts in the AWS Cloud without having to manage any infrastructure. In addition to meeting functional requirements, this solution also provides full auditing and traceability of all ETL processes that you run.

The following diagram shows the final architecture.

The deployment is fully automated using the AWS Cloud Development Kit (AWS CDK) and comprises the following stacks:

  • EcrRepositoryStack – Creates a private Amazon Elastic Container Registry (Amazon ECR) repository that hosts our Docker image with Amazon Redshift RSQL
  • RsqlDockerImageStack – Builds our Docker image asset and uploads it to the ECR repository
  • VpcStack – Creates a VPC with isolated subnets, creates an Amazon Simple Storage Service (Amazon S3) VPC endpoint gateway, as well as Amazon ECR, Amazon Redshift, and Amazon CloudWatch VPC endpoint interfaces
  • RedshiftStack – Creates an Amazon Redshift cluster, enables encryption, enforces encryption in-transit, enables auditing, and deploys the Amazon Redshift cluster in isolated subnets
  • BatchStack – Creates a compute environment (using AWS Fargate), job queue, and job definition (using our Docker image with RSQL)
  • S3Stack – Creates data, scripts, and logging buckets; enables encryption at-rest; enforces secure transfer; enables object versioning; and disables public access
  • SnsStack – Creates an Amazon Simple Notification Service (Amazon SNS) topic and email subscription (email is passed as a parameter)
  • StepFunctionsStack – Creates a state machine to orchestrate serverless RSQL ETL jobs
  • SampleDataDeploymentStack – Deploys sample RSQL ETL scripts and sample TPC benchmark datasets

Prerequisites

You should have the following prerequisites:

Deploy AWS CDK stacks

To deploy the serverless RSQL ETL framework solution, use the following code. Replace 123456789012 with your AWS account number, eu-west-1 with the AWS Region to which you want to deploy the solution, and [email protected] with the email address to which ETL success and failure notifications should be sent.

git clone https://github.com/aws-samples/amazon-redshift-serverless-rsql-etl-framework
cd amazon-redshift-serverless-rsql-etl-framework
npm install
./cdk.sh 123456789012 eu-west-1 bootstrap
./cdk.sh 123456789012 eu-west-1 deploy --all --parameters SnsStack:EmailAddressSubscription=[email protected]

The whole process takes a few minutes. While AWS CDK creates all the stacks, you can continue reading this post.

Create the RSQL container image

AWS CDK creates an RSQL Docker image. This Docker image is the basic building block of our solution. All ETL processes run inside it. AWS CDK creates the Docker image locally using Docker Engine and then uploads it to the Amazon ECR repository.
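
For reference, the image asset step roughly corresponds to the following manual build-and-push commands, shown here only as a sketch: the repository name is a placeholder, and the account ID and Region are the example values used elsewhere in this post.

# Authenticate Docker to the private ECR registry, then build, tag, and push.
aws ecr get-login-password --region eu-west-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker build -t amazonlinux-rsql .
docker tag amazonlinux-rsql:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/amazonlinux-rsql:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/amazonlinux-rsql:latest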

The Docker image is based on an Amazon Linux 2 Docker image. It has the following tools installed: the AWS Command Line Interface (AWS CLI), unixODBC, the Amazon Redshift ODBC driver, and Amazon Redshift RSQL. It also contains an .odbc.ini file, which defines the etl profile used to connect to the Amazon Redshift cluster. See the following code:

FROM amazonlinux:2

ENV AMAZON_REDSHIFT_ODBC_VERSION=1.4.52.1000
ENV AMAZON_REDSHIFT_RSQL_VERSION=1.0.4

RUN yum install -y openssl gettext unixODBC awscli && \
yum clean all

RUN rpm -i \
https://s3.amazonaws.com/redshift-downloads/drivers/odbc/${AMAZON_REDSHIFT_ODBC_VERSION}/AmazonRedshiftODBC-64-bit-${AMAZON_REDSHIFT_ODBC_VERSION}-1.x86_64.rpm \
https://s3.amazonaws.com/redshift-downloads/amazon-redshift-rsql/${AMAZON_REDSHIFT_RSQL_VERSION}/AmazonRedshiftRsql-${AMAZON_REDSHIFT_RSQL_VERSION}-1.x86_64.rpm

COPY .odbc.ini .odbc.ini
COPY fetch_and_run.sh /usr/local/bin/fetch_and_run.sh

ENV ODBCINI=.odbc.ini
ENV ODBCSYSINI=/opt/amazon/redshiftodbc/Setup
ENV AMAZONREDSHIFTODBCINI=/opt/amazon/redshiftodbc/lib/64/amazon.redshiftodbc.ini

ENTRYPOINT ["/usr/local/bin/fetch_and_run.sh"]

The following code example shows the .odbc.ini file. It defines an etl profile, which uses an AWS Identity and Access Management (IAM) role to get temporary cluster credentials to connect to Amazon Redshift. AWS CDK creates this role for us. Because of this, we don’t need to hard-code credentials in a Docker image. The Database, DbUser, and ClusterID parameters are set in AWS CDK. Also, AWS CDK replaces the Region parameter at runtime with the Region to which you deploy the stacks.

[ODBC]
Trace=no

[etl]
Driver=/opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
Database=demo
DbUser=etl
ClusterID=redshiftblogdemo
Region=eu-west-1
IAM=1

For more information about connecting to Amazon Redshift clusters with RSQL, see Connect to a cluster with Amazon Redshift RSQL.

Our Docker image implements a well-known fetch and run integration pattern. To learn more about this pattern, see Creating a Simple “Fetch & Run” AWS Batch Job. The Docker image fetches the ETL script from an external repository, and then runs it. AWS CDK passes the information about the ETL script to run to the Docker container at runtime as an AWS Batch job parameter. The job parameter is exposed to the container as an environment variable called BATCH_SCRIPT_LOCATION. Our job also expects two other environment variables: DATA_BUCKET_NAME, which is the name of the S3 data bucket, and COPY_IAM_ROLE_ARN, which is the Amazon Redshift IAM role used for the COPY command to load the data into Amazon Redshift. All environment variables are set automatically by AWS CDK. The fetch_and_run.sh script is the entry point of the Docker container. See the following code:

#!/bin/bash

# This script expects the following env variables to be set:
# BATCH_SCRIPT_LOCATION - full S3 path to RSQL script to run
# DATA_BUCKET_NAME - S3 bucket name with the data
# COPY_IAM_ROLE_ARN - IAM role ARN that will be used to copy the data from S3 to Redshift

PATH="/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin"

if [ -z "${BATCH_SCRIPT_LOCATION}" ] || [ -z "${DATA_BUCKET_NAME}" ] || [ -z "${COPY_IAM_ROLE_ARN}" ]; then
    echo "BATCH_SCRIPT_LOCATION/DATA_BUCKET_NAME/COPY_IAM_ROLE_ARN not set. No script to run."
    exit 1
fi

# download script to a temp file
TEMP_SCRIPT_FILE=$(mktemp)
aws s3 cp ${BATCH_SCRIPT_LOCATION} ${TEMP_SCRIPT_FILE}

# execute script
# envsubst will replace the ${DATA_BUCKET_NAME} and ${COPY_IAM_ROLE_ARN} placeholders with actual values
envsubst < ${TEMP_SCRIPT_FILE} | rsql -D etl

exit $?

Create AWS Batch resources

Next, AWS CDK creates the AWS Batch compute environment, job queue, and job definition. As a fully managed service, AWS Batch helps you run batch computing workloads of any scale. AWS CDK creates a Fargate serverless compute environment for us. The compute environment deploys inside the same VPC as the Amazon Redshift cluster, inside the isolated subnets. The job definition uses our Docker image with Amazon Redshift RSQL.

This step turns Amazon Redshift RSQL into a serverless service. You can build complex ETL workflows based on this generic job.
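
For example, a single RSQL script can be run as a one-off job straight from the AWS CLI. This is only a sketch: the job queue and job definition names are placeholders for the resources created by BatchStack, and the parameter key is assumed to be the one the job definition maps to the BATCH_SCRIPT_LOCATION environment variable.

# Submit one RSQL ETL script as an AWS Batch job (names are placeholders).
aws batch submit-job \
    --job-name copy-customer-manual-run \
    --job-queue <rsql-etl-job-queue> \
    --job-definition <rsql-etl-job-definition> \
    --parameters batchScriptLocation=s3://<scripts-bucket>/copy_customer.sql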

Create a Step Functions state machine

AWS CDK then moves to the deployment of the Step Functions state machine. Step Functions enables you to build complex workflows in a visual way directly in your browser. This service supports over 9,000 API actions from over 200 AWS services.

You can use Amazon States Language to create a state machine on the Step Functions console. The Amazon States Language is a JSON-based, structured language used to define your state machine. You can also build them programmatically using AWS CDK, as I have done for this post.

After AWS CDK finishes, a new state machine is created in your account called ServerlessRSQLETLFramework. To run it, complete the following steps:

  1. Navigate to the Step Functions console.
  2. Choose the function to open the details page.
  3. Choose Edit, and then choose Workflow Studio New.
    The following screenshot shows our state machine.
  4. Choose Cancel to leave Workflow Studio, then choose Cancel again to leave the edit mode.
    You will be brought back to the details page.
  5. Choose Start execution.
    A dialog box appears. By default, the Name parameter is set to a random identifier, and the Input parameter is set to a sample JSON document.
  6. Delete the Input parameter and choose Start execution to start the state machine.
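
You can also start the same run from the AWS CLI; a minimal sketch, assuming the example account ID and Region used earlier in this post.

# Start the ServerlessRSQLETLFramework state machine with an empty input,
# mirroring step 6 above.
aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:eu-west-1:123456789012:stateMachine:ServerlessRSQLETLFramework \
    --input '{}'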

The Graph view on the details page updates in real time. The state machine starts with a parallel state with two branches. In the left branch, the first job loads customer data into the staging table, then the second job merges new and existing customer records. In the right branch, two smaller tables for regions and nations are loaded and then merged one after another. The parallel state waits until all branches are complete before moving to the vacuum-analyze state, which runs VACUUM and ANALYZE commands on Amazon Redshift. The sample state machine also uses the Amazon SNS Publish API action to send notifications about success or failure.

From the Graph view, you can check the status of each state by choosing it. Every state that uses an external resource has a link to it on the Details tab. In our example, next to every AWS Batch Job state, you can see a link to the AWS Batch Job details page. Here, you can view the status, runtime, parameters, IAM roles, link to Amazon CloudWatch Logs with the logs produced by ETL scripts, and more.

Clean up

To avoid ongoing charges for the resources that you created, delete them. AWS CDK deletes all resources except data resources such as S3 buckets and Amazon ECR repositories.

  1. First, delete all AWS CDK stacks. In the following code, provide your own AWS account and AWS Region:
    ./cdk.sh 123456789012 eu-west-1 destroy --all

  2. On the Amazon S3 console, empty and delete buckets with names starting with:
    1. s3stack-rsqletldemodata
    2. s3stack-rsqletldemoscripts
    3. s3stack-rsqletldemologging
  3. Finally, on the Amazon ECR console, delete repositories with names starting with:
    1. ecrrepositorystack-amazonlinuxrsql
    2. cdk-container-assets

Next steps

Here are some ideas for additional enhancements that you can add to the described solution.

You can break large complex state machines into smaller building blocks by creating self-contained state machines. In our example, you could create state machines for every pair of copy and merge jobs. You could create three such state machines: Copy and Merge Customer, Copy and Merge Region, and Copy and Merge Nation, and then call them from the main state machine. For complex workflows, a different team can work on each sub-state machine in parallel. Also, this pattern promotes reuse of existing components, best practices, and security mechanisms.

You can use Amazon S3 Event Notifications or Amazon S3 EventBridge notifications to start a state machine automatically after you upload a file to an S3 bucket. To learn more about Amazon S3 integration with Amazon EventBridge, see Use Amazon S3 Event Notifications with Amazon EventBridge. This way, you can achieve a fully event-driven serverless ETL orchestration framework.
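
As a rough sketch of that EventBridge route (the bucket name, rule name, and the IAM role that lets EventBridge start the state machine are all placeholders):

# Turn on EventBridge notifications for the data bucket.
aws s3api put-bucket-notification-configuration \
    --bucket <data-bucket-name> \
    --notification-configuration '{"EventBridgeConfiguration": {}}'

# Route "Object Created" events from that bucket to the state machine.
aws events put-rule \
    --name rsql-etl-on-upload \
    --event-pattern '{"source":["aws.s3"],"detail-type":["Object Created"],"detail":{"bucket":{"name":["<data-bucket-name>"]}}}'

aws events put-targets \
    --rule rsql-etl-on-upload \
    --targets '[{"Id":"1","Arn":"arn:aws:states:eu-west-1:123456789012:stateMachine:ServerlessRSQLETLFramework","RoleArn":"arn:aws:iam::123456789012:role/<eventbridge-invoke-stepfunctions-role>"}]'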

Summary

You can use Amazon Redshift RSQL, AWS Batch, and Step Functions to create modern, serverless, and cost-effective ETL workflows. There is no infrastructure to manage, and Amazon Redshift RSQL works as a serverless RSQL service. In this post, we demonstrated how to use this serverless RSQL service to build more complex ETL workflows with Step Functions.

Step Functions integrates natively with over 200 AWS services. This opens a new world of possibilities to AWS customers and partners, who can integrate their processes with other data, analytics, machine learning, and compute services such as Amazon S3, Amazon DynamoDB, AWS Glue, Amazon OpenSearch Service (successor to Amazon Elasticsearch Service), Amazon SageMaker, AWS Lambda, and more. The additional advantage of Step Functions and AWS Batch is that you have full traceability and auditing out of the box. Step Functions shows Graph or Event views together with a complete history for all state machine runs.

In this post, I used RSQL automation scripts as the building blocks of ETL workflows. Using RSQL is a common integration pattern that we see for customers migrating from Teradata BTEQ scripts. However, if you have simple ETL or ELT processes that can be written as plain SQL, you can invoke the Amazon Redshift Data API directly from Step Functions. To learn more about this integration pattern, see ETL orchestration using the Amazon Redshift Data API and AWS Step Functions with AWS SDK integration.


About the author

Lukasz is a Principal Software Dev Engineer working in the AWS DMA team. Lukasz helps customers move their workloads to AWS and specializes in migrating data warehouses and data lakes to AWS. In his free time, Lukasz enjoys learning new human languages.

Primary Arms PII Disclosure via IDOR

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2022/08/02/primary-arms-pii-disclosure-via-idor/

The Primary Arms website, a popular e-commerce site dealing in firearms and firearms-related merchandise, suffers from an insecure direct object reference (IDOR) vulnerability, which is an instance of CWE-639: Authorization Bypass Through User-Controlled Key.

Rapid7 is disclosing this vulnerability with the intent of providing information that has the potential to help protect the people who may be affected by it – in this case, Primary Arms users. Rapid7 regularly conducts vulnerability research and disclosure on a wide variety of technologies with the goal of improving cybersecurity. We typically disclose vulnerabilities to the vendor first, and in many cases, vulnerability disclosure coordinators like CERT/CC.  In situations where our previous disclosure through the aforementioned channels does not result in progress towards a solution or fix, we disclose unpatched vulnerabilities publicly. In this case, Rapid7 reached out to Primary Arms and federal and state agencies multiple times over a period of months (see “Disclosure Timeline,” below), but the vulnerability has yet to be addressed.

Vulnerabilities in specific websites are usually unremarkable, don’t usually warrant a CVE identifier, and are found and fixed every day. However, Rapid7 has historically publicized issues that presented an outsized risk to specific populations, were popularly mischaracterized, or remained poorly addressed by those most responsible. Some examples that leap to mind are the issues experienced by Ashley Madison and Grindr users, as well as a somewhat similar Yopify plugin issue for Shopify-powered e-commerce sites.

If exploited, this vulnerability has the potential to  allow an authorized user to view the personally identifiable information (PII) of Primary Arms customers, including their home address, phone number, and tracking information of purchases. Note that “authorized users” includes all Primary Arms customers, and user account creation is free and unrestricted.

Because this is a vulnerability on a single website, no CVE identifier has been assigned for this issue. We estimate the CVSSv3.1 calculation to be 4.3 (AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:N/A:N) or 5.3 (PR:N) if one considers this vulnerability is exploitable by any person able to complete a web form.

Product description

Primary Arms is an online firearms and firearms accessories retailer based in Houston, Texas. According to their website, they cater to “firearms enthusiasts, professional shooters, and servicemen and women” and ship firearms to holders of a Federal Firearms License (FFL). The website is built with NetSuite SuiteCommerce.

Discoverer

This issue was discovered by a Rapid7 security researcher and penetration tester through the normal course of personal business as a customer of Primary Arms. It is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

An authenticated user can inspect the purchase information of other Primary Arms customers by manually navigating to a known or guessed sales order record URL, as demonstrated in the series of screenshots below.

First, in order to demonstrate the vulnerability, I created an account with the username [email protected], which I call “FakeTod FakeBeardsley.”

[Screenshot omitted]

Note that FakeTod has no purchase history:

[Screenshot omitted]

Next, I’ll simply navigate to the URL of a real purchase, made under my “real” account. An actual attacker would need to learn or guess this URL, which may be easy or difficult (see Impact, below). The screenshot below is a (redacted) view of that sales order receipt.

[Screenshot omitted]

The redacted URL is hxxps://www.primaryarms.com/sca-dev-2019-2/my_account.ssp#purchases/view/salesorder/85460532, and the final 8-digit salesorder value is the insecure direct object reference. In this case, we can see:

  • Customer name
  • Purchased item
  • Last four digits and issuer of the credit card used
  • Billing address and phone number

Manipulating this value produces other sets of PII from other customers, though the distribution is non-uniform and currently unknown (see below, under Impact, for more information).

If a given salesorder reference includes a shipped item, that tracking information is also displayed, as shown in this redacted example:

[Screenshot omitted]

Depending on the carrier and the age of the ordered item, this tracking information could then be used to monitor and possibly intercept delivery of the shipped items.

Root cause

The landing page for primaryarms.com and other pages have this auto-generated comment in the HTML source:

<!-- SuiteCommerce [ prodbundle_id "295132" ] [ baselabel "SC_2019.2" ] [ version "2019.2.3.a" ] [ datelabel "2020.00.00" ] [ buildno "0" ] -->
<!-- 361 s: 25% #59 cache: 4% #17 -->
<!-- Host [ sh14.prod.bos ] App Version [ 2022.1.15.30433 ] -->
<!-- COMPID [ 3901023 ] URL [ /s.nl ] Time [ Mon Jul 11 09:33:51 PDT 2022 ] -->
<!-- Not logging slowest SQL -->

This indicates a somewhat old version of SuiteCommerce, from 2019, being run in production. It’s hard to say for sure that this is the culprit of the issue, or even if this comment is accurate, but our colleagues at CERT/CC noticed that NetSuite released an update in 2020 that addressed CVE-2020-14728, which may be related to this IDOR.

Outside of this hint, the root cause of this issue is unknown at the time of this writing. It may be as straightforward as updating the local NetSuite instance, or there may be more local configuration needed to ensure that sales order receipts require proper authentication in order to read them.

Post-authentication considerations

Note that becoming an authenticated user is trivial for the Primary Arms website. New users are invited to create an account, and while a validly formatted email address is required, it is not authenticated. In the example gathered here, the simulated attacker, FakeTod, has the nonexistent email address of [email protected]. Therefore, there is no practical difference between an unauthenticated user and an authenticated user for the purpose of exploitation.

Impact

By exploiting this vulnerability, an attacker can learn the PII of likely firearms enthusiasts. However, exploiting this vulnerability at a reasonable scale may prove somewhat challenging.

Possible valid IDOR values

It is currently unknown how the salesorder values are generated, as Rapid7 has conducted very limited testing in order to merely validate the existence of the IDOR issue. We’re left with two possibilities.

It is the likely case that the salesorder values are sequential, start at a fixed point in the 8-digit space, and increment with every new transaction in a predictable way. If this is the case, exhausting the possible space of valid IDOR values is fairly trivial — only a few seconds to automate the discovery of newly created sales order records, and a few minutes to gather all past records. While limited testing indicates salesorder values are sequential, there are gaps in the sequence, likely due to abandoned and partial orders. We have not fully explored the attack surface of this issue out of an abundance of caution and restraint.

In the worst case (for the attacker), the numbers may be purely random out of a space of 100 million possibles. This seems unlikely according to Rapid7’s limited testing. If this is the case, however, exhausting the entire space for all records would take about two years, assuming an average of 100 queries per second (this probing would be noticeable by the website operators assuming normal website instrumentation).

The truth of the salesorder value generation is probably somewhere closer to the former than the latter, given past experience with similar bugs of this nature, which leads us to this disclosure in the interest of public safety, documented in the next section.

Possible attacks

We can imagine a few scenarios where attackers might find this collection of PII useful. The most obvious attack would be a follow-on phishing attack, identity theft, or other confidence scam, since PII is often useful in executing successful social engineering attacks. An attacker could pose as Primary Arms, another related organization, or the customer and be very convincing in such identity (to a third-party) when armed with the name, address, phone number, last four digits of a credit card, and recent purchase history.

Additionally, typical Primary Arms customers are self-identified firearms owners and enthusiasts. A recent data breach in June of 2022 involving California Concealed Carry License holders caused a stir among firearms enthusiasts, who worry that the breach would "increase the risk criminals will target their homes for burglaries."

Indeed, if it is possible to see recent transactions (again, depending on how salesorder values are generated), especially those involving FFL holders, it may be possible for criminals to intercept firearms and firearms accessories in transit by targeting specific delivery addresses.

Finally, there is the potential that domestic terrorist organizations and foreign intelligence operations could use this highly specialized PII in recruiting, disinformation, and propaganda efforts.

Remediation

As mentioned above, it would appear that only Primary Arms is in a position to address this issue. We suspect this issue may be resolved by using a more current release of NetSuite SuiteCommerce. A similar e-commerce site, using similar technology but with a more updated version of SuiteCommerce, appears to not be subject to this specific attack technique, so it’s unlikely this is a novel vulnerability in the underlying web technology stack.

Customers affected by this issue are encouraged to try to contact Primary Arms, either by email to [email protected], or by calling customer service at +1 713.344.9600.

Disclosure timeline

At the time of this writing, Primary Arms has not been responsive to disclosure efforts by Rapid7, CERT/CC, or TX-ISAO.

  • May 2022 – Issue discovered by a Rapid7 security researcher
  • Mon, May 16, 2022 – Initial contact to Primary Arms at [email protected]
  • Wed, May 25, 2022 – Attempt to contact Primary Arms CTO via guessed email address
  • Wed, May 25, 2022 – Internal write-up of IDOR issue completed and validated
  • Thu, May 26, 2022 – Attempt to contact Primary Arms CEO via guessed email address
  • Tue, May 31, 2022 – Called customer support, asked for clarification on contact, reported issue
  • Thu, Jun 1, 2022 – Notified CERT/CC via [email protected] asking for advice
  • Fri, Jun 10, 2022  – Opened a case with CERT/CC, VRF#22-06-QFRZJ
  • Thu, Jun 16, 2022  – CERT/CC begins investigation and disclosure attempts, VU#615142
  • June-July 2022 – Collaboration with CERT/CC to validate and scope the issue
  • Mon, Jul 11, 2022 – Completed disclosure documentation presuming no contact from Primary Arms
  • Tue, Jul 12, 2022 – Sent a paper copy of this disclosure to Primary Arms via certified US mail, tracking number: 420770479514806664112193691642
  • Thu, Jul 14, 2022 – Disclosed details to the Texas Information Sharing and Analysis Organization (TX-ISAO), Report #ISAO-CT-0052
  • Mon, Jul 18, 2022 – Paper copy received by Primary Arms
  • Tue, Aug 2, 2022 – This public disclosure

GNU C Library 2.36 released

Post Syndicated from original https://lwn.net/Articles/903556/

Version 2.36 of the GNU C Library has been released. Changes include support for the new DT_RELR relocation format, wrappers for the process_madvise(), process_mrelease(), pidfd_open(), pidfd_getfd(), and pidfd_send_signal() system calls, wrappers for the new filesystem mounting API, a DNS stub resolver that only does IPv4 queries, support for the BSD arc4random() API (despite some last-minute discussion), LoongArch architecture support, and more.

Backblaze Drive Stats for Q2 2022

Post Syndicated from original https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2022/

As of the end of Q2 2022, Backblaze was monitoring 219,444 hard drives and SSDs in our data centers around the world. Of that number, 4,020 are boot drives, with 2,558 being SSDs, and 1,462 being HDDs. Later this quarter, we’ll review our SSD collection. Today, we’ll focus on the 215,424 data drives under management as we review their quarterly and lifetime failure rates as of the end of Q2 2022. Along the way, we’ll share our observations and insights on the data presented and, as always, we look forward to you doing the same in the comments section at the end of the post.

Lifetime Hard Drive Failure Rates

For this report, we’ll change things up a bit and start with the lifetime failure rates; we’ll cover the Q2 data later in this post. As of June 30, 2022, Backblaze was monitoring 215,424 hard drives used to store data. For our evaluation, we removed 413 drives from consideration because they were used for testing purposes or belonged to drive models with fewer than 60 drives. This leaves us with 215,011 hard drives, grouped into 27 different models, to analyze for the lifetime report.

Notes and Observations About the Lifetime Stats

The lifetime annualized failure rate for all the drives listed above is 1.39%. That is the same as last quarter and down from 1.45% one year ago (6/30/2021).

A quick glance down the annualized failure rate (AFR) column identifies the three drives with the highest failure rates:

  • The 8TB HGST (model: HUH728080ALE604) at 6.26%.
  • The Seagate 14TB (model: ST14000NM0138) at 4.86%.
  • The Toshiba 16TB (model: MG08ACA16TA) at 3.57%.

What’s common between these three models? The sample size, in our case drive days, is too small, and in these three cases leads to a wide range between the low and high confidence interval values. The wider the gap, the less confident we are about the AFR in the first place.

In the table above, we list all of the models for completeness, but it does make the chart more complex. We like to make things easy, so let’s remove those drive models that have wide confidence intervals and only include drive models that are generally available. We’ll set our parameters as follows: a 95% confidence interval gap of 0.5% or less, a minimum drive days value of one million to ensure we have a large enough sample size, and drive models that are 8TB or more in size. The simplified chart is below.

To summarize, in our environment, we are 95% confident that the AFR listed for each drive model is between the low and high confidence interval values.

Computing the Annualized Failure Rate

We use the term annualized failure rate, or AFR, throughout our Drive Stats reports. Let’s spend a minute to explain how we calculate the AFR value and why we do it the way we do. The formula for a given cohort of drives is:

AFR = ( drive_failures / ( drive_days / 365 )) * 100

Let’s define the terms used:

  • Cohort of drives: The selected set of drives (typically by model) for a given period of time (quarter, annual, lifetime).
  • AFR: Annualized failure rate, which is applied to the selected cohort of drives.
  • drive_failures: The number of failed drives for the selected cohort of drives.
  • drive_days: The number of days all of the drives in the selected cohort are operational during the defined period of time of the cohort (i.e., quarter, annual, lifetime).

For example, for the 16TB Seagate drive in the table above, we have calculated there were 117 drive failures and 4,117,553 drive days over the lifetime of this particular cohort of drives. The AFR is calculated as follows:

AFR = ( 117 / ( 4,117,553 / 365 )) * 100 = 1.04%
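
A quick way to sanity-check that arithmetic from a shell, using the failure count and drive days quoted above:

# Lifetime AFR for the 16TB Seagate cohort: 117 failures over 4,117,553 drive days.
echo "scale=6; (117 / (4117553 / 365)) * 100" | bc
# prints roughly 1.0371, which rounds to the 1.04% above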

Why Don’t We Use Drive Count?

Our environment is very dynamic when it comes to drives entering and leaving the system; a 12TB HGST drive fails and is replaced by a 12TB Seagate, a new Backblaze Vault is added and 1,200 new 14TB Toshiba drives are added, a Backblaze Vault of 4TB drives is retired, and so on. Using drive count is problematic because it assumes a stable number of drives in the cohort over the observation period. Yes, we will concede that with enough math you can make this work, but rather than going back to college, we keep it simple and use drive days as it accounts for the potential change in the number of drives during the observation period and apportions each drive’s contribution accordingly.

For completeness, let’s calculate the AFR for the 16TB Seagate drive using a drive count-based formula given there were 16,860 drives and 117 failures.

Drive Count AFR = ( 117 / 16,860 ) * 100 = 0.69%

While the drive count AFR is much lower, the assumption that all 16,860 drives were present the entire observation period (lifetime) is wrong. Over the last quarter, we added 3,601 new drives, and over the last year, we added 12,003 new drives. Yet, all of these were counted as if they were installed on day one. In other words, using drive count AFR in our case would misrepresent drive failure rates in our environment.

How We Determine Drive Failure

Today, we classify drive failure into two categories: reactive and proactive. Reactive failures are where the drive has failed and won’t or can’t communicate with our system. Proactive failures are where failure is imminent based on errors the drive is reporting which are confirmed by examining the SMART stats of the drive. In this case, the drive is removed before it completely fails.

Over the last few years, data scientists have used the SMART stats data we’ve collected to see if they can predict drive failure using various statistical methodologies, and more recently, artificial intelligence and machine learning techniques. The ability to accurately predict drive failure, with minimal false positives, will optimize our operational capabilities as we scale our storage platform.

SMART Stats

SMART stands for Self-monitoring, Analysis, and Reporting Technology and is a monitoring system included in hard drives that reports on various attributes of the state of a given drive. Each day, Backblaze records and stores the SMART stats that are reported by the hard drives we have in our data centers. Check out this post to learn more about SMART stats and how we use them.
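
If you want to look at the same kind of attributes on your own hardware, smartmontools exposes them from the command line; a minimal example, with a placeholder device path:

# Requires the smartmontools package; replace /dev/sda with your drive.
sudo smartctl --attributes /dev/sda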

Q2 2022 Hard Drive Failure Rates

For the Q2 2022 quarterly report, we tracked 215,011 hard drives broken down by drive model into 27 different cohorts using only data from Q2. The table below lists the data for each of these drive models.

Notes and Observations on the Q2 2022 Stats

Breaking news, the OG stumbles: The 6TB Seagate drives (model: ST6000DX000) finally had a failure this quarter—actually, two failures. Given this is the oldest drive model in our fleet with an average age of 86.7 months of service, a failure or two is expected. Still, this was the first failure by this drive model since Q3 of last year. At some point in the future we can expect these drives will be cycled out, but with their lifetime AFR at just 0.87%, they are not first in line.

Another zero for the next OG: The next oldest drive cohort in our collection, the 4TB Toshiba drives (model: MD04ABA400V) at 85.3 months, had zero failures for Q2. The last failure was recorded a year ago in Q2 2021. Their lifetime AFR is just 0.79%, although their lifetime confidence interval gap is 1.3%, which as we’ve seen means we are lacking enough data to be truly confident of the AFR number. Still, at one failure per year, they could last another 97 years—probably not.

More zeroes for Q2: Three other drives had zero failures this quarter: the 8TB HGST (model: HUH728080ALE604), the 14TB Toshiba (model: MG07ACA14TEY), and the 16TB Toshiba (model: MG08ACA16TA). As with the 4TB Toshiba noted above, these drives have very wide confidence interval gaps driven by a limited number of data points. For example, the 16TB Toshiba had the most drive days—32,064—of any of these drive models. We would need to have at least 500,000 drive days in a quarter to get to a 95% confidence interval. Still, it is entirely possible that any or all of these drives will continue to post great numbers over the coming quarters, we’re just not 95% confident yet.

Running on fumes: The 4TB Seagate drives (model: ST4000DM000) are starting to show their age, 80.3 months on average. Their quarterly failure rate has increased each of the last four quarters to 3.42% this quarter. We have deployed our drive cloning program for these drives as part of our data durability program, and over the next several months, these drives will be cycled out. They have served us well, but it appears they are tired after nearly seven years of constant spinning.

The AFR increases, again: In Q2, the AFR increased to 1.46% for all drive models combined. This is up from 1.22% in Q1 2022 and up from 1.01% a year ago in Q2 2021. The aging 4TB Seagate drives are part of the increase, but the failure rates of both the Toshiba and HGST drives have increased as well over the last year. This appears to be related to the aging of the entire drive fleet, and we would expect this number to go down as older drives are retired over the next year.

Four Thousand Storage Servers

In the opening paragraph, we noted there were 4,020 boot drives. What may not be obvious is that this equates to 4,020 storage servers. These are 4U servers with 45 or 60 drives each, with drives ranging in size from 4TB to 16TB. The smallest has 180TB of raw storage space (45 * 4TB drives) and the largest has 960TB of raw storage (60 * 16TB drives). These servers are a mix of Backblaze Storage Pods and third-party storage servers. It’s been a while since our last Storage Pod update, so look for something in late Q3 or early Q4.

Drive Stats at DEFCON

If you will be at DEFCON 30 in Las Vegas, I will be speaking live at the Data Duplication Village (DDV) at 1 p.m. on Friday, August 12th. The all-volunteer DDV is located in the lower level of the executive conference center of the Flamingo hotel. We’ll be talking about Drive Stats, SSDs, drive life expectancy, SMART stats, and more. I hope to see you there.

Never Miss the Drive Stats Report

Sign up for the Drive Stats Insiders newsletter and be the first to get Drive Stats data every quarter as well as the new Drive Stats SSD edition.

The Hard Drive Stats Data

The complete data set used to create the information used in this review is available on our Hard Drive Test Data page. You can download and use this data for free for your own purpose. All we ask are three things: 1) you cite Backblaze as the source if you use the data, 2) you accept that you are solely responsible for how you use the data, and 3) you do not sell this data to anyone; it is free.

If you want the tables and charts used in this report, you can download the .zip file from Backblaze B2 Cloud Storage which contains the .jpg and/or .xlsx files as applicable.
Good luck and let us know if you find anything interesting.

Want More Drive Stats Insights?

Check out our 2021 Year-end Drive Stats Report.

Interested in the SSD Data?

Read our first SSD-based Drive Stats Report.

The post Backblaze Drive Stats for Q2 2022 appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Collaboration Drives Secure Cloud Innovation: Insights From AWS re:Inforce

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/08/02/collaboration-drives-secure-cloud-innovation-insights-from-aws-re-inforce/

This year’s AWS re:Inforce conference brought together a wide range of organizations that are shaping the future of the cloud. Last week in Boston, cloud service providers (CSPs), security vendors, and other leading organizations gathered to discuss how we can go about building cloud environments that are both safe and scalable, driving innovation without sacrificing security.

This array of attendees looks a lot like the cloud landscape itself. Multicloud architectures are now the norm, and organizations have begun to search for ways to bring their lengthening lists of vendors together, so they can gain a more cohesive picture of what’s going on in their environment. It’s a challenge, to be sure — but also an opportunity.

These themes came to the forefront in one of Rapid7’s on-demand booth presentations at AWS re:Inforce, “Speeding Up Your Adoption of CSP Innovation.” In this talk, Chris DeRamus, VP of Technology – Cloud Security at Rapid7, sat down with Merritt Baer — Principal, Office of the CISO at AWS — and Nick Bialek — Lead Cloud Security Engineer at Northwestern Mutual — to discuss how organizations can create processes and partnerships that help them quickly and securely utilize new services that CSPs roll out. Here’s a closer look at what they had to say.

Building a framework

The first step in any security program is drawing a line for what is and isn’t acceptable — and for many organizations, compliance frameworks are a key part of setting that baseline. This holds true for cloud environments, especially in highly regulated industries like finance and healthcare. But as Merritt pointed out, what that framework looks like varies based on the organization.

“It depends on the shop in terms of what they embrace and how that works for them,” she said. Benchmarks like CIS and NIST can be a helpful starting point in moving toward “continuous compliance,” she noted, as you make decisions about your cloud architecture, but the journey doesn’t end there.

For example, Nick said he and his team at Northwestern Mutual use popular compliance benchmarks as a foundation, leveraging curated packs within InsightCloudSec to give them fast access to the most common compliance controls. But from there, they use multiple frameworks to craft their own rigorous internal standards, giving them the best of all worlds.

The key is to be able to leverage detective controls that can find noncompliant resources across your environment so you can take automated actions to remediate — and to be able to do all this from a single vantage point. For Nick’s team, that is InsightCloudSec, which provides them a “single engine to determine compliance with a single set of security controls, which is very powerful,” he said.

Evaluating new services

Consolidating your view of the cloud environment is critical — but when you want to bring on a new service and quickly evaluate it for risk, Merritt and Nick agreed on the importance of embracing collaboration and multiplicity. When it’s working well, a multicloud approach can allow this evaluation process to happen much more quickly and efficiently than a single organization working on their own.

“We see success when customers are embracing this deliberate multi-account architecture,” Merritt said of her experience working with AWS users.

At Northwestern Mutual, Nick and his team use a group evaluation approach when onboarding a new cloud service. They’ll start the process with the provider, such as AWS, then ask Rapid7 to evaluate the service for risks. Finally, the Northwestern Mutual team will do an assessment that pays close attention to the factors that matter most to them, like disaster recovery and identity and access management.

This model helps Nick and his team realize the benefits of the cloud. They want to be able to consume new services quickly so they can innovate at scale, but their team alone can’t keep up the work needed to fully vet each new resource for risks. They need a partner that can help them keep pace with the speed and elasticity of the cloud.

“You need someone who can move fast with you,” Nick said.

Automating at scale

Another key component of operating quickly and at scale is automation. “Reducing toil and manual work,” as Nick put it, is essential in the context of fast-moving and complex cloud environments.

“The only way to do anything at scale is to leverage automation,” Merritt insisted. Shifting security left means weaving it into all decisions about IT architecture and application development — and that means innovation and security are no longer separate ideas, but simultaneous parts of the same process. When security needs to keep pace with development, being able to detect configuration drift and remediate it with automated actions can be the difference between success and stalling out.

Plus, who actually likes repetitive, manual tasks anyway?

“You can really put a lot of emphasis on narrowing that gray area of human decision-making down to decisions that are truly novel or high-stakes,” Merritt said.

This leveling-up of decision-making is the real opportunity for security in the age of cloud, Merritt believes. Security teams get to be freed from their former role as “the shop of no” and instead work as innovators to creatively solve next-generation problems. Instead of putting up barriers, security in the age of cloud means laying down new roads — and it’s collaboration across internal teams and with external vendors that makes this new model possible.

Additional reading:

Load Balancing with Weighted Pools

Post Syndicated from Brian Batraski original https://blog.cloudflare.com/load-balancing-with-weighted-pools/

Anyone can take advantage of Cloudflare’s far-reaching network to protect and accelerate their online presence. Our vast number of data centers, and their proximity to Internet users around the world, enables us to secure and accelerate our customers’ Internet applications, APIs and websites. Even a simple service with a single origin server can leverage the massive scale of the Cloudflare network in 270+ cities. Using the Cloudflare cache, you can support more requests and users without purchasing new servers.

Whether it is to guarantee high availability through redundancy, or to support more dynamic content, an increasing number of services require multiple origin servers. The Cloudflare Load Balancer keeps our customer’s services highly available and makes it simple to spread out requests across multiple origin servers. Today, we’re excited to announce a frequently requested feature for our Load Balancer – Weighted Pools!

What’s a Weighted Pool?

Before we can answer that, let’s take a quick look at how our load balancer works and define a few terms:

Origin Servers – Servers which sit behind Cloudflare and are often located in a customer-owned datacenter or at a public cloud provider.

Origin Pool – A logical collection of origin servers. Most pools are named after data centers or cloud providers, like "us-east," "las-vegas-bldg1," or "phoenix-bldg2". We recommend using a pool to represent a collection of servers in the same physical location.

Traffic Steering Policy – A policy that specifies how a load balancer should steer requests across origin pools. Depending on the steering policy, requests may be sent to the nearest pool as defined by latitude and longitude, to the origin pool with the lowest latency, or to a pool chosen based on the location of the Cloudflare data center that received the request.

Pool Weight – A numerical value to describe what percentage of requests should be sent to a pool, relative to other pools.

When a request from a visitor arrives at the Cloudflare network for a hostname with a load balancer attached to it, the load balancer must decide where the request should be forwarded. Customers can configure this behavior with traffic steering policies.

The Cloudflare Load Balancer already supports Standard Steering, Geo Steering, Dynamic Steering, and Proximity Steering. Each of these traffic steering policies controls how requests are distributed across origin pools. Weighted Pools extend our standard (random) steering policy by letting you specify the relative percentage of requests that should be sent to each pool.

As an example, consider a load balancer with two origin pools: "las-vegas-bldg1" (a customer-operated data center) and "us-east-cloud" (a public cloud provider with multiple virtual servers). Each pool has a weight of 0.5, so 50% of requests should be sent to each pool.
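
To make the behavior concrete, here is a minimal Python sketch of how relative weights translate into selection probabilities. This is our own illustration of the idea, not Cloudflare's implementation, and the pool names are just the ones from the example above.

# A minimal sketch (our illustration, not Cloudflare's implementation) of how
# relative pool weights turn into selection probabilities under random steering.
import random

pool_weights = {
    "las-vegas-bldg1": 0.5,
    "us-east-cloud": 0.5,
}

def pick_pool(weights):
    """Pick one pool at random, proportionally to its weight."""
    pools = list(weights)
    return random.choices(pools, weights=[weights[p] for p in pools], k=1)[0]

# Over many requests, each pool ends up with roughly 50% of the traffic.
counts = {pool: 0 for pool in pool_weights}
for _ in range(10_000):
    counts[pick_pool(pool_weights)] += 1
print(counts)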

Why would someone assign weights to origin pools?

Weighted Pools was one of the features our customers requested most often before we built it. Part of the reason we're so excited about it is that it can be used to solve many kinds of problems.

Unequally Sized Origin Pools

Suppose the amount of dynamic and uncacheable traffic has significantly increased due to a large sales promotion. Administrators notice that the load on their Las Vegas data center is too high, so they elect to dynamically increase the number of origins within their public cloud provider. The two pools, "las-vegas-bldg1" and "us-east-cloud", are no longer equally sized. The pool representing the public cloud provider is now much larger, so the administrators change the pool weights so that the cloud pool receives 0.8 (80%) of the traffic and the Las Vegas pool receives 0.2 (20%). Pool weights let the administrators quickly fine-tune the distribution of requests across unequally sized pools.

Data center kill switch

In addition to balancing out unequally sized pools, Weighted Pools may also be used to take a data center (an origin pool) completely out of rotation by setting the pool's weight to 0. This is particularly useful when a data center needs to be removed quickly, whether during troubleshooting or ahead of proactive maintenance where power may be unavailable. Even with a weight of 0, Cloudflare will still monitor the pool's health so that administrators can tell when it is safe to return traffic to it.
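
As a rough sketch of what that might look like against the REST API: the zone ID, load balancer ID, pool UUID, and API token below are placeholders, and the exact verb and payload shape should be checked against the API documentation linked at the end of this post.

# Hypothetical sketch: drain one pool by setting its weight to 0.
# ZONE_ID, LB_ID, POOL_ID, and the API token are placeholders.
import requests

ZONE_ID = "your-zone-id"
LB_ID = "your-load-balancer-id"
POOL_ID = "pool-uuid-to-drain"

resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/load_balancers/{LB_ID}",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json={"random_steering": {"pool_weights": {POOL_ID: 0}}},
    timeout=10,
)
resp.raise_for_status()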

Network A/B testing

One final use case we're excited about is using weights to send a very small share of requests to a pool. Did the team just stand up a brand-new data center, or perhaps upgrade all the servers to a new software version? Using weighted pools, administrators can effectively A/B test their network: send only 0.05 (5%) of requests to the new pool to verify that its origins are functioning properly, then gradually increase the load.

How do I get started?

When setting up a load balancer, you need to configure one or more origin pools, and then place origins into your respective pools. Once you have more than one pool, the relative weights of the respective pools will be used to distribute requests.

To set up a weighted pool using the Dashboard, create a load balancer in the Traffic > Load Balancing area.

Once you have set up the load balancer, you'll be taken to the Origin Pools setup page. Under Traffic Steering Policy, select Random, and then assign a relative weight to each pool.

If your weights do not add up to 1.00 (100%), that's fine! We do the math behind the scenes to work out how much traffic each pool should receive relative to the others.
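
For example, here is a sketch of the normalization we would expect (our arithmetic, not a documented formula): with weights of 0.8 and 0.6, the first pool's share is 0.8 / (0.8 + 0.6), roughly 57%, and the second's is roughly 43%.

# Sketch: weights need not sum to 1; each pool's share is its weight
# divided by the sum of all weights.
weights = {"las-vegas-bldg1": 0.8, "us-east-cloud": 0.6}
total = sum(weights.values())
shares = {pool: weight / total for pool, weight in weights.items()}
print(shares)  # {'las-vegas-bldg1': 0.571..., 'us-east-cloud': 0.428...}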

Weighted Pools may also be configured via the API. We’ve edited an example illustrating the relevant parts of the REST API.

  • The load balancer should use a "steering_policy" of "random".
  • Each pool is identified by a UUID, which can be assigned an explicit "pool_weight"; pools without an explicit entry fall back to the "default_weight" (0.2 in this example).

{
    "description": "Load Balancer for www.example.com",
    "name": "www.example.com",
    "enabled": true,
    "proxied": true,
    "fallback_pool": "9290f38c5d07c2e2f4df57b1f61d4196",
    "default_pools": [
        "9290f38c5d07c2e2f4df57b1f61d4196",
        "17b5962d775c646f3f9725cbc7a53df4"
    ],
    "steering_policy": "random",
    "random_steering": {
        "pool_weights": {
            "9290f38c5d07c2e2f4df57b1f61d4196": 0.8
        },
        "default_weight": 0.2
    }
}
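
For completeness, here is a hedged Python sketch of sending a payload like the one above to create a load balancer. The zone ID and API token are placeholders, and the authoritative request format is in the API documentation linked below.

# Hypothetical sketch: create the load balancer shown above via the v4 API.
# ZONE_ID and the API token are placeholders; see the API docs linked below.
import requests

ZONE_ID = "your-zone-id"

payload = {
    "description": "Load Balancer for www.example.com",
    "name": "www.example.com",
    "enabled": True,
    "proxied": True,
    "fallback_pool": "9290f38c5d07c2e2f4df57b1f61d4196",
    "default_pools": [
        "9290f38c5d07c2e2f4df57b1f61d4196",
        "17b5962d775c646f3f9725cbc7a53df4",
    ],
    "steering_policy": "random",
    "random_steering": {
        "pool_weights": {"9290f38c5d07c2e2f4df57b1f61d4196": 0.8},
        "default_weight": 0.2,
    },
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/load_balancers",
    headers={"Authorization": "Bearer <API_TOKEN>"},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])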

We’re excited to launch this simple, yet powerful and capable feature. Weighted pools may be utilized in tons of creative new ways to solve load balancing challenges. It’s available for all customers with load balancers today!

Developer Docs:
https://developers.cloudflare.com/load-balancing/how-to/create-load-balancer/#create-a-load-balancer

API Docs:
https://api.cloudflare.com/#load-balancers-create-load-balancer

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/903555/

Security updates have been issued by Debian (curl and jetty9), Fedora (dovecot), Gentoo (vault), Scientific Linux (java-1.8.0-openjdk, java-11-openjdk, and squid), SUSE (booth, dovecot22, dwarves and elfutils, firefox, gimp, java-11-openjdk, kernel, and oracleasm), and Ubuntu (linux, linux-hwe-5.15, linux-lowlatency, linux-lowlatency-hwe-5.15, net-snmp, and samba).
