Building Resilient and High Performing Cloud-based Applications in Hawaii

Post Syndicated from Marie Yap original https://aws.amazon.com/blogs/architecture/building-resilient-and-high-performing-cloud-based-applications-in-hawaii/

Hawaii is building a digital economy for a sustainable future. Many local businesses are already embarking on their journey to the cloud to meet their customers’ growing demand for digital services. To access Amazon Web Services (AWS) on the US mainland, customers’ data must traverse submarine fiber-optic cable networks approximately 2,800 miles across the Pacific Ocean. As a result, organizations have two primary concerns:

  • Resiliency concerns about multiple outage events that could arise from breaks in the submarine cables.
  • Latency concerns for mission-critical applications driven by physical distance.

These problems can be solved by architecting the workloads for reliability, secure connectivity, and high performance.

Designing network connectivity that is reliable, secure, and highly performant

A typical workload in AWS can be broken down into three layers – Network, Infrastructure, and Application. For each layer, we can design for resiliency and latency concerns. Starting at the network layer, there are two recommended options for connecting the on-premises network within the island to AWS.

  • Use of AWS Direct Connect over a physical connection. AWS Direct Connect is a dedicated network connection that connects your on-premises environment to AWS Regions. In this case, the connection traverses the fiber-optic cables across the Pacific Ocean to the mainland’s meet-me-point facilities. It can be provisioned from 50 Mbps up to 100 Gbps. You can connect through an AWS Direct Connect location, a third-party colocation facility, or an Internet Service Provider (ISP) that provides last-mile connectivity to AWS. In addition, the Direct Connect location establishes dedicated connectivity to your Amazon Virtual Private Clouds (VPCs). This improves application performance and addresses latency concerns by connecting directly to AWS and bypassing the public internet.
  • Use of AWS VPN over an internet connection. As a secondary option to Direct Connect, AWS Site-to-Site VPN provides connectivity into AWS over the public internet using VPN encryption technologies. Site-to-Site VPN connects on-premises sites to AWS resources in an Amazon VPC, so you can securely connect your on-premises network to AWS using an internet connection. A minimal provisioning sketch follows this list.
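
If you choose the Site-to-Site VPN option, the connection can also be provisioned programmatically. The following is a minimal, hypothetical boto3 sketch; the BGP ASN, public IP address, and VPC ID are placeholders for illustration and not values from this post.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Register the on-premises VPN device (customer gateway).
cgw = ec2.create_customer_gateway(
    BgpAsn=65000,                 # assumed on-premises ASN
    PublicIp="203.0.113.10",      # placeholder public IP of the VPN device
    Type="ipsec.1",
)["CustomerGateway"]

# Create a virtual private gateway and attach it to the target VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpcId="vpc-0123456789abcdef0", VpnGatewayId=vgw["VpnGatewayId"])

# Create the Site-to-Site VPN connection (AWS provisions two IPsec tunnels).
vpn = ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGatewayId"],
    Type="ipsec.1",
)
print(vpn["VpnConnection"]["VpnConnectionId"])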

We recommend choosing the us-west-2 AWS Region in Oregon to build highly performant connectivity closest to Hawaii. The us-west-2 Region generally provides more AWS services at a lower cost than us-west-1. In addition, there are various options for AWS Direct Connect locations on the US West Coast. Many of these locations support up to 100 Gbps and support MACsec, an IEEE standard for security encryption in wired Ethernet LANs. Typically, customers use multiple 10-Gbps connections for higher throughput and redundancy.

Subsea Cable | Hawaii Cable Landing Station | Mainland Cable Landing Station | Nearest Direct Connect Location
Southern Cross Cable Network (SCCN) | Kahe Point (Oahu) | Morro Bay, CA | CoreSite, Equinix
Southern Cross Cable Network (SCCN) | Kahe Point (Oahu) | Hillsboro, OR | Equinix, EdgeConnex, Pittock Block, CoreSite, T5, TierPoint
Hawaiki | Kapolei (Oahu) | Hillsboro, OR | Equinix, EdgeConnex, Pittock Block, CoreSite, T5, TierPoint
Asia-America Gateway (AAG) | Keawaula (Oahu) | San Luis Obispo, CA | CoreSite, Equinix
Japan-US Cable Network (JUS) | Makaha (Oahu) | Morro Bay, CA | CoreSite, Equinix
SEA-US | Makaha (Oahu) | Hermosa Beach, CA | CoreSite, Equinix, T5

Table 1. Subsea fiber-optic cables connecting Hawaii to the US mainland

(Source: Submarine Cable Map from TeleGeography)

To build resilient connectivity, six cables connect Hawaii to the mainland US: Hawaiki, SEA-US, Asia-America Gateway (AAG), Japan-US (JUS), and two Southern Cross (SCCN) cables. These cables land at various locations on the US West Coast. If you require high resiliency, we recommend a minimum of two physically redundant Direct Connect connections into AWS, and for maximum resiliency, four Direct Connect connections that span two Direct Connect locations. If you build your architecture following these recommendations, AWS offers a published service level agreement (SLA) for Direct Connect.
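
To sanity-check how your existing connections are spread out, you can group them by location. This is a small, hedged boto3 sketch; it assumes credentials with permission to call the Direct Connect DescribeConnections API and simply counts connections per location.

from collections import defaultdict
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

# Group Direct Connect connections by their colocation facility.
by_location = defaultdict(list)
for conn in dx.describe_connections()["connections"]:
    by_location[conn["location"]].append(conn["connectionId"])

total = sum(len(ids) for ids in by_location.values())
print(f"{total} connection(s) across {len(by_location)} Direct Connect location(s)")
if len(by_location) < 2:
    print("Warning: connections do not span two Direct Connect locations")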

Figure 1. Redundant direct connection from Hawaii to the US mainland

Most customers select an ISP to provide connectivity across the Pacific Ocean to an AWS Direct Connect location. The Direct Connect locations are third-party colocation providers that act as meet-me points between AWS customers and the AWS Regions. For example, our local AWS Partner DRFortress connects multiple ISPs in a data center in Hawaii to the AWS US West Region. We recommend having at least two ISPs for resilient applications, each providing connectivity across a separate subsea cable from Hawaii to the mainland. If one cable should fail for any reason, connectivity to AWS would still be available. The red links in Figure 2 are the ISP-provided connectivity that spans the Pacific Ocean. This is a minimum starting point for business-critical applications and should be designed with additional Direct Connect links for greater resiliency.

Architecting for high performance and resiliency

Moving from the network to the infrastructure and application layers, organizations have the option of building their application entirely in the cloud or in combination with an on-premises environment. An example of an application built entirely in the cloud is the LumiSight platform, built in AWS by local AWS Partner DataHouse. LumiSight has helped dozens of organizations quickly and safely reopen during the pandemic.

Other customers need a hybrid cloud architecture. These organizations require that their data processing and locally hosted application analysis stay close to other components within the island’s data center. With this proximity, they can deliver near real-time responses to their end users. The AWS Outposts family extends the capabilities of an AWS Region to the island. This enables local businesses to build and run low-latency applications on premises on fully managed AWS infrastructure. You can now deploy compute, storage, containers, data analytics clusters, and relational and cache databases on high-performance, redundant, and secure infrastructure maintained by AWS. Outposts can be shipped to Hawaii and connect to the us-west-1 or us-west-2 Regions.

Another option for improving application performance is providing an efficient virtual desktop that lets users access their applications from anywhere. Amazon WorkSpaces provides a secure, managed, cloud-based virtual desktop experience. Many workers who bring their own device (BYOD) or work remotely use WorkSpaces to access their corporate applications securely. WorkSpaces uses streaming protocols that provide a secure and responsive desktop experience to end users in remote locations, like Hawaii. WorkSpaces can quickly provide a virtual desktop without you having to manage the infrastructure, OS versions, and patches. You can test your connection to WorkSpaces from Hawaii, or anywhere else in the world, at the Connection Health Check page.

Architecting for resiliency in the infrastructure and application stack is vital for Business Continuity and Disaster Recovery (BCDR) plans. Organizations in Hawaii that are already using VMware can create a recovery site using VMware Cloud on AWS as their solution for disaster recovery. VMware Cloud on AWS is a fully managed VMware software-defined data center (SDDC) running on AWS, which provides access to native AWS services. Organizations can pair their on-premises vCenter and virtual machines with the fully managed vCenter and virtual machines residing in the cloud. Site Recovery Manager automates failing applications over from on-premises to the cloud DR site and failing them back. Additionally, organizations can deploy their SDDC in the us-west-2 Region and use AWS Direct Connect to minimize the latency of replicating data to and from the data centers in the islands.

Conclusion

Organizations in Hawaii can build resilient and high-performing cloud-based workloads with the help of AWS services in each layer of their workloads. Starting with the network layer, you can establish reliable, lower-latency connectivity through redundant AWS Direct Connect connections. Next, for low-latency hybrid applications, we extend infrastructure capabilities locally through AWS Outposts. We also improve the user experience of accessing cloud-based applications by providing Amazon WorkSpaces as the virtual desktop. Finally, we build resilient infrastructure and applications using a familiar solution, VMware Cloud on AWS.

To start learning the fundamentals and building on AWS, visit the Getting Started Resource Center.

Metasploit Weekly Wrap-Up

Post Syndicated from Brendan Watters original https://blog.rapid7.com/2022/01/21/metasploit-wrap-up-145/

Image Credit: https://upload.wikimedia.org/wikipedia/commons/c/c7/Logs.jpg without change

while (j==shell); Log4j;

The Log4j loop continues as we release a module targeting vulnerable vCenter releases. This is a good time to suggest that you check your vCenter releases and maybe even increase the protection surrounding them, as it’s been a rough year-plus for vCenter.

Let your shell do the walking

bcoles sent us a module that targets Grandstream GXV3175 IP phones and allows remote code execution. It’s always fun to get a shell on a phone.

New module content (2)

  • Grandstream GXV3175 ‘settimezone’ Unauthenticated Command Execution by Brendan Scarvell, alhazred, and bcoles, which exploits CVE-2019-10655 – A new module has been added that exploits CVE-2019-10655, an unauthenticated remote code execution bug in Grandstream GXV3175 IP phones. Authentication is bypassed via a buffer overflow in the way the phonecookie cookie is parsed, after which a command injection vulnerability in the ‘settimezone’ action’s ‘timezone’ parameter is exploited to gain RCE as the root user.
  • VMware vCenter Server Unauthenticated JNDI Injection RCE (via Log4Shell) by RageLtMan, Spencer McIntyre, jbaines-r7, and w3bd3vil, which exploits CVE-2021-44228 – This PR adds a vCenter-specific exploit leveraging the Log4Shell vulnerability to achieve unauthenticated RCE as root / SYSTEM. This exploit has been tested on both Windows and Linux targets.

Enhancements and features

  • #16075 from bcoles – The post/multi/manage/sudo module has been enhanced to print a warning message and exit early if the session type being upgraded via sudo is Meterpreter, since Meterpreter does not support sudo elevation at present.

Bugs fixed

  • #16029 from cdelafuente-r7 – A bug existed in the normalize function of lib/msf/core/opt_path.rb whereby the path parameter passed in wasn’t checked to see if it was empty prior to calling File.expand_path on it. In these cases the path returned would be that of the current directory, which could lead to unexpected results. This has been fixed with improved validation to ensure that the path parameter is not an empty string prior to expanding the path.
  • #16058 from bcoles – This change fixes a bug where a stack trace was printed in post/multi/recon/local_exploit_suggester when an invalid session option was specified.
  • #16063 from bcoles – A bug has been fixed in the local_admin_search_enum module whereby a typo was causing the module to crash on an undefined variable. The typo has been corrected, and the module now accesses the correct variable.
  • #15727 from NeffIsBack – This PR adds more robust NTLM message parsing with better error handling and messaging when pulling out the NTLM hashes.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the
binary installers (which also include the commercial edition).

[$] Raw photo development with darktable

Post Syndicated from original https://lwn.net/Articles/881853/rss

One of your editor’s long-time hobbies is photography; it is an activity
that can be rewarding even with the lack of any particular talent — a useful
attribute. Photography has changed greatly over the years; as a result,
those hard-earned darkroom skills are of little use, and photo processing
has become yet another software problem. This is a field that supports a
lot of proprietary software, but there is also no shortage of free software
available. The time has come to combine work and pleasure and catch up
with the state of free software for photography, starting with the darktable raw photo editor.

How to enrich AWS Security Hub findings with account metadata

Post Syndicated from Siva Rajamani original https://aws.amazon.com/blogs/security/how-to-enrich-aws-security-hub-findings-with-account-metadata/

In this blog post, we’ll walk you through how to deploy a solution to enrich AWS Security Hub findings with additional account-related metadata, such as the account name, the Organizational Unit (OU) associated with the account, security contact information, and account tags. Account metadata can help you search findings, create insights, and better respond to and remediate findings.

AWS Security Hub ingests findings from multiple AWS services, including Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, AWS Identity and Access Management (IAM) Access Analyzer, and AWS Systems Manager Patch Manager. Findings from each service are normalized into the AWS Security Finding Format (ASFF), so you can review findings in a standardized format and take action quickly. You can use AWS Security Hub to provide a single view of all security-related findings, and to set up alerts, automate remediation, and export specific findings to third‑party incident management systems.

The Security or DevOps teams responsible for investigating, responding to, and remediating Security Hub findings may need additional account metadata beyond the account ID to determine what to do about the finding or where to route it. For example, determining whether the finding originated from a development or production account can be key to determining the priority of the finding and the type of remediation action needed. Having this metadata in the finding allows customers to create custom insights in Security Hub to track which OUs or applications (based on account tags) have the most open security issues. This blog post demonstrates a solution to enrich your findings with account metadata to help your Security and DevOps teams better understand and improve their security posture.

Solution Overview

In this solution, you will use a combination of AWS Security Hub, Amazon EventBridge, and AWS Lambda to ingest the findings and automatically enrich them with account-related metadata by querying the AWS Organizations and AWS Account Management service APIs. The solution architecture is shown in Figure 1 below:

Figure 1: Solution Architecture and workflow for metadata enrichment

The solution workflow includes the following steps:

  1. New findings and updates to existing Security Hub findings from all the member accounts flow into the Security Hub administrator account. Security Hub generates Amazon EventBridge events for the findings.
  2. An EventBridge rule created as part of the solution in the Security Hub administrator account will trigger a Lambda function configured as a target every time an EventBridge notification for a new or updated finding imported into Security Hub matches the EventBridge rule shown below:
    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "findingEnriched": [{
              "exists": false
            }]
          }
        }
      }
    }

  3. The Lambda function uses the account ID from the event payload to retrieve both the account information and the alternate contact information from the AWS Organizations and AWS Account Management APIs. The following code from helper.py constructs the account_details object representing the account information used to enrich the finding:
    def get_account_details(account_id, role_name):
        account_details ={}
        organizations_client = AwsHelper().get_client('organizations')
        response = organizations_client.describe_account(AccountId=account_id)
        account_details["Name"] = response["Account"]["Name"]
        response = organizations_client.list_parents(ChildId=account_id)
        ou_id = response["Parents"][0]["Id"]
        if ou_id and response["Parents"][0]["Type"] == "ORGANIZATIONAL_UNIT":
            response = organizations_client.describe_organizational_unit(OrganizationalUnitId=ou_id)
            account_details["OUName"] = response["OrganizationalUnit"]["Name"]
        elif ou_id:
            account_details["OUName"] = "ROOT"
        if role_name:
            account_client = AwsHelper().get_session_for_role(role_name).client("account")
        else:
            account_client = AwsHelper().get_client('account')
        try:
            response = account_client.get_alternate_contact(
                AccountId=account_id,
                AlternateContactType='SECURITY'
            )
            if response['AlternateContact']:
                print("contact :{}".format(str(response["AlternateContact"])))
                account_details["AlternateContact"] = response["AlternateContact"]
        except account_client.exceptions.AccessDeniedException as error:
            #Potentially due to calling alternate contact on Org Management account
            print(error.response['Error']['Message'])
        
        response = organizations_client.list_tags_for_resource(ResourceId=account_id)
        results = response["Tags"]
        while "NextToken" in response:
            response = organizations_client.list_tags_for_resource(ResourceId=account_id, NextToken=response["NextToken"])
            results.extend(response["Tags"])
        
        account_details["tags"] = results
        AccountHelper.logger.info("account_details: %s" , str(account_details))
        return account_details

  4. The Lambda function updates the finding using the Security Hub BatchUpdateFindings API to add the account-related data into the Note and UserDefinedFields attributes of the Security Hub finding (a minimal handler sketch that ties steps 2 through 4 together appears after this list):
    #lookup and build the finding note and user defined fields  based on account Id
    enrichment_text, tags_dict = enrich_finding(account_id, assume_role_name)
    logger.debug("Text to post: %s" , enrichment_text)
    logger.debug("User defined Fields %s" , json.dumps(tags_dict))
    #add the Note to the finding and add a userDefinedField to use in the event bridge rule and prevent repeat lookups
    response = secHubClient.batch_update_findings(
        FindingIdentifiers=[
            {
                'Id': enrichment_finding_id,
                'ProductArn': enrichment_finding_arn
            }
        ],
        Note={
            'Text': enrichment_text,
            'UpdatedBy': enrichment_author
        },
        UserDefinedFields=tags_dict
    )

    Note: All state change events published by AWS services through Amazon EventBridge are free of charge. The AWS Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month at the time of publication of this post. If you process 2M requests per month, the estimated cost for this solution would be approximately $7.20 USD per month.
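
    To make the flow concrete, here is a minimal, assumed handler sketch showing how the EventBridge event from step 2 drives the enrichment in steps 3 and 4. The module path for the helper, the environment variable name, and the UpdatedBy value are illustrative assumptions, not part of the published solution.

    import json
    import os

    import boto3

    from helper import get_account_details  # helper.py shown in step 3 (assumed module path)

    sec_hub_client = boto3.client("securityhub")

    def lambda_handler(event, context):
        # "Security Hub Findings - Imported" events carry findings in event["detail"]["findings"]
        for finding in event["detail"]["findings"]:
            account_id = finding["AwsAccountId"]

            # Step 3: look up account metadata (role name passed via an assumed env var)
            account_details = get_account_details(account_id, os.environ.get("ORG_ROLE_NAME"))

            # Step 4: write the metadata back and set findingEnriched so the
            # EventBridge rule from step 2 no longer matches this finding
            user_defined_fields = {"findingEnriched": "true"}
            user_defined_fields.update({t["Key"]: t["Value"] for t in account_details.get("tags", [])})

            sec_hub_client.batch_update_findings(
                FindingIdentifiers=[{"Id": finding["Id"], "ProductArn": finding["ProductArn"]}],
                Note={"Text": json.dumps(account_details, default=str),
                      "UpdatedBy": "SecurityHubFindingsEnrichment"},
                UserDefinedFields=user_defined_fields,
            )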

    Prerequisites

    1. Your AWS organization must have all features enabled.
    2. This solution requires that you have AWS Security Hub enabled in an AWS multi-account environment which is integrated with AWS Organizations. The AWS Organizations management account must designate a Security Hub administrator account, which can view data from and manage configuration for its member accounts. Follow these steps to designate a Security Hub administrator account for your AWS organization.
    3. All the member accounts are tagged per your organization’s tagging strategy, and their security alternate contact is filled in. If the tags or alternate contacts are not available, the enrichment will be limited to the Account Name and the Organizational Unit name.
    4. Trusted access must be enabled with AWS Organizations for the AWS Account Management service. This enables the AWS Organizations management account to call the AWS Account Management API operations (such as GetAlternateContact) for other member accounts in the organization. Trusted access can be enabled either by using the AWS Management Console or by using the AWS CLI and SDKs.

      The following AWS CLI example enables trusted access for AWS Account Management in the calling account’s organization.

      aws organizations enable-aws-service-access --service-principal account.amazonaws.com

    5. An IAM role with read-only access to look up the GetAlternateContact details must be created in the Organizations management account, with a trust policy that allows the Security Hub administrator account to assume the role.

    Solution Deployment

    This solution consists of two parts:

    1. Create an IAM role in your Organizations management account, giving it necessary permissions as described in the Create the IAM role procedure below.
    2. Deploy the Lambda function and the other associated resources to your Security Hub administrator account

    Create the IAM role

    Using console, AWS CLI or AWS API

    Follow the Creating a role to delegate permissions to an IAM user instructions to create an IAM role using the console, AWS CLI, or AWS API in the AWS Organizations management account, with the role name account-contact-readonly, based on the trust and permission policy templates provided below. You will need the account ID of your Security Hub administrator account.

    The IAM trust policy allows the Security Hub administrator account to assume the role in your Organization management account.

    IAM Role trust policy

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<SH administrator Account ID>:root"
          },
          "Action": "sts:AssumeRole",
          "Condition": {}
        }
      ]
    }

    Note: Replace <SH administrator Account ID> with the account ID of your Security Hub administrator account. Once the solution is deployed, you should update the principal in the trust policy shown above to use the new IAM role created for the solution.

    IAM Permission Policy

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "account:GetAlternateContact"
                ],
                "Resource": "arn:aws:account::<Org. Management Account id>:account/o-*/*"
            }
        ]
    }

    The IAM permission policy allows the Security Hub administrator account to look up the alternate contact information for the member accounts.

    Make a note of the Role ARN for the IAM role similar to this format:

    arn:aws:iam::<Org. Management Account id>:role/account-contact-readonly. 
    			

    You will need this while deploying the solution in the next procedure.

    Using AWS CloudFormation

    Alternatively, you can use the provided CloudFormation template to create the role in the management account. The IAM role ARN is available in the Outputs section of the created CloudFormation stack.

    Deploy the Solution to your Security Hub administrator account

    You can deploy the solution using either the AWS Management Console, or from the GitHub repository using the AWS SAM CLI.

    Note: If you have designated an aggregation Region within the Security Hub administrator account, you can deploy this solution only in the aggregation Region; otherwise, you need to deploy this solution separately in each Region of the Security Hub administrator account where Security Hub is enabled.

    To deploy the solution using the AWS Management Console

    1. In your Security Hub administrator account, launch the template by choosing the Launch Stack button below, which creates the stack in the us-east-1 Region.

      Note: If your Security Hub aggregation Region is different from us-east-1, or you want to deploy the solution in a different AWS Region, you can deploy the solution from the GitHub repository as described in the next section.


    2. On the Quick create stack page, for Stack name, enter a unique stack name for this account; for example, aws-security-hub-findings-enrichment-stack, as shown in Figure 2 below.
      Figure 2: Quick Create CloudFormation stack for the Solution

    3. For ManagementAccount, enter the AWS Organizations management account ID.
    4. For OrgManagementAccountContactRole, enter the role ARN of the role you created previously in the Create IAM role procedure.
    5. Choose Create stack.
    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      Figure 3 Update Management Account Role’s Trust

    To deploy the solution from the GitHub Repository and AWS SAM CLI

    1. Install the AWS SAM CLI
    2. Download or clone the GitHub repository using the following commands:
      $ git clone https://github.com/aws-samples/aws-security-hub-findings-account-data-enrichment.git
      $ cd aws-security-hub-findings-account-data-enrichment

    3. Update the content of the profile.txt file with the profile name you want to use for the deployment
    4. To create a new bucket for deployment artifacts, run create-bucket.sh, specifying the Region as an argument, as shown below.
      $ ./create-bucket.sh us-east-1

    5. Deploy the solution to the account by running the deploy.sh script, specifying the Region as an argument.
      $ ./deploy.sh us-east-1

    6. Once the stack is created, go to the Resources tab and take note of the name of the IAM Role which was created.
    7. Update the principal element of the IAM role trust policy which you previously created in the Organization management account in the Create the IAM role procedure above, replacing it with the role name you noted down, as shown below.
      "AWS": "arn:aws:iam::<SH Delegated Account ID>: role/<Role Name>"

    Using the enriched attributes

    To test that the solution is working as expected, you can create a standalone security group with an ingress rule that allows traffic from the internet. This will trigger a finding in Security Hub, which will be populated with the enriched attributes. You can then use these enriched attributes to filter and create custom insights, or take specific response or remediation actions.

    To generate a sample Security Hub finding using AWS CLI

    1. Create a Security Group using following AWS CLI command:
      aws ec2 create-security-group --group-name TestSecHubEnrichmentSG --description "Test Security Hub enrichment function"

    2. Make a note of the security group ID from the output, and use it in Step 3 below.
    3. Add an ingress rule to the security group which allows unrestricted traffic on port 100:
      aws ec2 authorize-security-group-ingress --group-id <Replace Security group ID> --protocol tcp --port 100 --cidr 0.0.0.0/0

    Within a few minutes, a new finding will be generated in Security Hub, warning about the unrestricted ingress rule in the TestSecHubEnrichmentSG security group. For any new or updated findings which do not have the UserDefinedFields attribute findingEnriched set to true, the solution will enrich the finding with account-related fields in both the Note and UserDefinedFields sections of the Security Hub finding.
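
    If you prefer the API over the console, the same filtering can be done programmatically. The following is a hedged boto3 sketch; the filter keys mirror the fields described above, and the resource-type value matches the security group finding generated in the previous procedure.

    import boto3

    securityhub = boto3.client("securityhub")

    # Pull active findings for EC2 security groups that the solution has already enriched.
    response = securityhub.get_findings(
        Filters={
            "ResourceType": [{"Value": "AwsEc2SecurityGroup", "Comparison": "EQUALS"}],
            "UserDefinedFields": [
                {"Key": "findingEnriched", "Value": "true", "Comparison": "EQUALS"}
            ],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=10,
    )
    for finding in response["Findings"]:
        print(finding["Title"], finding.get("UserDefinedFields", {}))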

    To see and filter the enriched finding

    1. Go to Security Hub and choose Findings in the left-hand navigation.
    2. Click in the filter field at the top to add additional filters. Choose a filter field of AWS Account ID, a filter match type of is, and a value of the AWS Account ID where you created the TestSecHubEnrichmentSG security group.
    3. Add one more filter. Choose a filter field of Resource type, a filter match type of is, and the value of AwsEc2SecurityGroup.
    4. Identify the finding for security group TestSecHubEnrichmentSG with updates to Note and UserDefinedFields, as shown in Figures 4 and 5 below:
      Figure 4: Account metadata enrichment in Security Hub finding’s Note field

      Figure 5: Account metadata enrichment in Security Hub finding’s UserDefinedFields field

      Note: The actual attributes you will see as part of the UserDefinedFields may be different from the above screenshot. Attributes shown will depend on your tagging configuration and the alternate contact configuration. At a minimum, you will see the AccountName and OU fields.

    5. Once you confirm that the solution is working as expected, delete the stand-alone security group TestSecHubEnrichmentSG, which was created for testing purposes.

    Create custom insights using the enriched attributes

    You can use the attributes available in the UserDefinedFields of the Security Hub finding to filter findings. This lets you generate custom Security Hub insights and reports tailored to your organization’s needs. The example shown in Figure 6 below creates a custom Security Hub insight for findings grouped by severity for a specific owner, using the Owner attribute within the UserDefinedFields object of the Security Hub finding.

    Figure 6: Custom Insight with Account metadata filters
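
    The equivalent insight can also be created through the Security Hub CreateInsight API. This is a hedged sketch; the Owner tag value shown here is an assumed example of your own account tags, not a value from this post.

    import boto3

    securityhub = boto3.client("securityhub")

    # Custom insight: active findings for a specific owner, grouped by severity.
    insight = securityhub.create_insight(
        Name="Findings by severity for owner team-alpha",
        Filters={
            "UserDefinedFields": [
                {"Key": "Owner", "Value": "team-alpha", "Comparison": "EQUALS"}
            ],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        GroupByAttribute="SeverityLabel",
    )
    print(insight["InsightArn"])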

    EventBridge rule for response or remediation action using enriched attributes

    You can also use the attributes in the UserDefinedFields object of the Security Hub finding within an EventBridge rule to take specific response or remediation actions based on the values of those attributes. In the example below, you can see how the Environment attribute can be used within the EventBridge rule configuration to trigger specific actions only when the value matches PROD.

    {
      "detail-type": ["Security Hub Findings - Imported"],
      "source": ["aws.securityhub"],
      "detail": {
        "findings": {
          "RecordState": ["ACTIVE"],
          "UserDefinedFields": {
            "Environment": "PROD"
          }
        }
      }
    }
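
    If you manage such rules outside of CloudFormation, a rule with this pattern could be registered roughly as follows. This is a hedged sketch; the rule name and the remediation target ARN are placeholders, and the target (for example, a Lambda function) still needs permission to be invoked by EventBridge.

    import json
    import boto3

    events = boto3.client("events")

    pattern = {
        "detail-type": ["Security Hub Findings - Imported"],
        "source": ["aws.securityhub"],
        "detail": {
            "findings": {
                "RecordState": ["ACTIVE"],
                "UserDefinedFields": {"Environment": "PROD"},
            }
        },
    }

    # Create the rule and point it at a (placeholder) remediation function.
    events.put_rule(Name="prod-findings-remediation", EventPattern=json.dumps(pattern))
    events.put_targets(
        Rule="prod-findings-remediation",
        Targets=[{
            "Id": "remediation-lambda",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:ProdRemediation",
        }],
    )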

    Conclusion

    This blog post walks you through a solution to enrich AWS Security Hub findings with AWS account-related metadata using Amazon EventBridge notifications and AWS Lambda. By enriching the Security Hub findings with account-related information, your security teams gain better visibility, additional insights, and an improved ability to create targeted reports for specific accounts or business teams, helping them prioritize and improve the overall security response. To learn more, see:

 
If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security news? Follow us on Twitter.

Siva Rajamani

Siva Rajamani is a Boston-based Enterprise Solutions Architect at AWS. Siva enjoys working closely with customers to accelerate their AWS cloud adoption and improve their overall security posture.

Prashob Krishnan

Prashob Krishnan is a Denver-based Technical Account Manager at AWS. Prashob is passionate about security. He enjoys working with customers to solve their technical challenges and help build a secure scalable architecture on the AWS Cloud.

Amazon GuardDuty Enhances Detection of EC2 Instance Credential Exfiltration

Post Syndicated from Sébastien Stormacq original https://aws.amazon.com/blogs/aws/amazon-guardduty-enhances-detection-of-ec2-instance-credential-exfiltration/

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon Simple Storage Service (Amazon S3). Informed by a multitude of public and AWS-generated data feeds and powered by machine learning, GuardDuty analyzes billions of events in pursuit of trends, patterns, and anomalies that are recognizable signs that something is amiss. You can enable it with a click and see the first findings within minutes.

Today, we are adding to GuardDuty the ability to detect when your Amazon Elastic Compute Cloud (Amazon EC2) instance credentials are being used from another AWS Account. EC2 instance credentials are the temporary credentials made available through the EC2 metadata service to any applications running on an instance, when an AWS Identity and Access Management (IAM) role is attached to it.

What Are the Risks?
When your workloads deployed on EC2 instances access AWS services, they use an access key, a secret access key, and a session token. The secure mechanism to pass access key credentials to your workloads is to define the permissions required by your workload, create one or several IAM policies with the permissions, attach the policies to an IAM role and, finally, attach the role to the instance.
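
As an illustration of that flow, here is a minimal boto3 sketch. The role name, the choice of an AWS managed policy, and the instance ID are placeholders; in practice you would attach a custom policy scoped to exactly the permissions your workload needs.

import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="app-read-s3", AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="app-read-s3",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# Wrap the role in an instance profile and attach it to the instance.
iam.create_instance_profile(InstanceProfileName="app-read-s3")
iam.add_role_to_instance_profile(InstanceProfileName="app-read-s3", RoleName="app-read-s3")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-read-s3"},
    InstanceId="i-0123456789abcdef0",
)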

Any process running on an EC2 instance with a role attached can retrieve the security credentials by calling the EC2 metadata service:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

These credentials are limited in time and in scope. They are valid for a maximum of six hours. They are limited to the scope of the permissions attached to the IAM role associated with the EC2 instance.

All AWS SDKs are able to retrieve and renew such credentials automatically. No additional code is necessary in your application.
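
For example, with boto3 on an instance that has a role attached, a plain client call needs no explicit keys; the SDK resolves the credentials from the instance metadata service and refreshes them before they expire (a minimal illustration):

import boto3

s3 = boto3.client("s3")  # no access key or secret configured anywhere
print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])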

Now imagine that your application running on the EC2 instance is compromised and a malicious actor managed to access the instance’s metadata service. The malicious actor would extract the credentials. These credentials have the permissions you defined in the IAM role attached to the instance. Depending on your application, attackers might have the possibility to exfiltrate data from S3 or DynamoDB, to start or terminate EC2 instances, or even to create new IAM users or roles.

Since the launch of GuardDuty, it has detected when such credentials are used from IP addresses outside of AWS. Smart attackers might therefore use the stolen credentials from another AWS account to hide their activity from GuardDuty. Starting today, GuardDuty also detects when the credentials are used from other AWS accounts, inside the AWS network.

What Alerts Are Generated?
There are legitimate reasons why the source IP address communicating with AWS service APIs might be different from the EC2 instance IP address. Think about complex network topologies that route traffic to one or multiple VPCs, for example with AWS Transit Gateway or AWS Direct Connect. In addition, multi-Region configurations, or not using AWS Organizations, make it non-trivial to detect whether the AWS account using the credentials belongs to you or not. Large companies have implemented their own solutions to detect such security compromises, but these types of solutions are not easy to build and maintain. Only a handful of organizations have the resources required to tackle this challenge. When they do so, they distract their engineering efforts from their core business. This is why we decided to address this.

Starting today, GuardDuty generates alerts when it detects a misuse of EC2 instance credentials. When the credentials are used from an affiliated account, the alert is labeled as medium-severity. Otherwise, a high-severity alert is generated. Affiliated accounts are accounts monitored by the same GuardDuty administrator account, also known as GuardDuty member accounts. They might be part of your organization or not.

In Practice
To see how it works, let’s capture and exfiltrate a set of EC2 credentials from one of my EC2 instances. I use SSH to connect to one of my instances, and I use curl to retrieve the credentials, as shown earlier:

curl 169.254.169.254/latest/meta-data/iam/security-credentials/role_name
{
  "Code" : "Success",
  "LastUpdated" : "2021-09-05T18:24:45Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AS...J5",
  "SecretAccessKey" : "r1...9m",
  "Token" : "IQ...z5Q==",
  "Expiration" : "2021-09-06T00:44:06Z"
}

The instance has an IAM role with permissions allowing it to read S3 buckets in this AWS account. I copy and paste the credentials. Then I connect to another EC2 instance running in a different AWS account, not affiliated with the same GuardDuty administrator account. I use SSH to connect to that other instance, and then I configure the AWS CLI with the compromised credentials. I attempt to access a private S3 bucket.


# first verify I do not have access 
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket

An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied

# then I configure the CLI using the compromised credentials
[ec2-user@ip-1-1-0-79 ~]$ aws configure
AWS Access Key ID [None]: AS...J5
AWS Secret Access Key [None]: r1...9m
Default region name [None]: us-east-1
Default output format [None]:

[ec2-user@ip-1-1-0-79 ~]$ aws configure set aws_session_token IQ...z5Q==

# Finally, I attempt to access S3 again
[ec2-user@ip-1-1-0-79 ~]$ aws s3 ls s3://my-private-bucket
                     PRE folder1/
                     PRE folder2/
                     PRE folder3/
2021-01-22 16:37:48 6148 .DS_Store

Shortly after, I use the AWS Management Console to access GuardDuty in the AWS account where I stole the credentials. I can verify a high-severity alert was generated.

GuardDuty EC2 credentials exfiltration alarm

And So What?
Attackers may extract credentials when they have remote code execution (RCE), local presence on the instance, or by exploiting application-level vulnerabilities like Server Side Request Forgery (SSRF) and XML External Entity (XXE) injection. There are multiple methods to mitigate RCE or local access, including rebuilding the instances from a secured and patched AMI to eliminate remote access, rotating access credentials, and so on. When the vulnerability is at the application level, you or the application vendor are required to patch the application code to eliminate the vulnerability.

When you receive an alert indicating a risk of compromised credentials, the first thing to do is to verify the account ID. Is it one of your company accounts or not? During the analysis, when the business case allows, you may terminate the compromised instances or shut down the application. This prevents the attacker from extracting renewed instance credentials upon expiration. When in doubt, contact the AWS Trust & Safety team using the Report Amazon AWS abuse form or by contacting [email protected]. Provide all the necessary information, including the suspicious AWS account ID, logs in plaintext, and so on, when you submit your request.

Availability
This new ability is available in all AWS Regions at no additional cost. It is enabled by default when GuardDuty is already enabled on your AWS account.

Otherwise, enable GuardDuty now, and start the 30-day trial period.

— seb

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/882119/rss

Security updates have been issued by Debian (aide, flatpak, kernel, libspf2, and usbview), Fedora (kernel, libreswan, nodejs, texlive-base, and wireshark), openSUSE (aide, cryptsetup, grafana, permissions, rust1.56, and stb), SUSE (aide, apache2, cryptsetup, grafana, permissions, rust1.56, and webkit2gtk3), and Ubuntu (aide, thunderbird, and usbview).

Securing Zabbix 6.0 LTS by Kārlis Saliņš / Zabbix Summit Online 2021

Post Syndicated from Kārlis Saliņš original https://blog.zabbix.com/securing-zabbix-6-0-lts-by-karlis-salins-zabbix-summit-online-2021/18962/

Security is an essential dimension of any tool in your IT infrastructure, and Zabbix is no exception. With Zabbix 6.0 LTS, our users will be able to secure their Zabbix instance on multiple layers – from encrypting your network communication to flexible user access control,  API token provisioning, and custom user password policies. Let’s take a look at the full set of security features that Zabbix 6.0 LTS provides to its users.

The full recording of the speech is available on the official Zabbix Youtube channel.

Why do we need security?

Over time the security standards for IT infrastructures and software have greatly developed. We now have a robust set of standards that we have to follow to ensure the uninterrupted and secure delivery of internal and external services. We also need to ensure that any sensitive information stays confidential and minimize the risk for any possible data breaches. In the case of Zabbix, we have the ability to define a flexible set of security measures to ensure our data stays secure:

  • We can encrypt our connections between every Zabbix component to protect our data
    • Multiple encryption methods are supported
    • Zabbix administrators can configure supported cipher suites based on their company policy
  • Role-based access enables Zabbix administrators to define a flexible set of roles to restrict access to confidential information
    • This is especially vital for multi-tenant environments, where a single Zabbix instance is shared between multiple customers
  • Audit logging adds a layer of visibility that can help us detect potential security or configuration problems before they have a real impact

Security in Zabbix

Zabbix consists of multiple components – some of which are mandatory while others are optional. In the diagram below, you can see how the connection between every component can be encrypted. Users have the ability to choose which parts of their Zabbix infrastructure should be encrypted.

In addition to encrypting the communication between all of the Zabbix components, Zabbix administrators can also create flexible user roles for granular access control. The user passwords are stored in the database, encrypted by using bcrypt. With Zabbix 6.0 LTS, we can also define Zabbix password complexity policies – I’m going to cover that specific feature a bit further in this blog post.

As for your secrets – authentication credentials for your devices, SSH usernames and passwords, and other sensitive information – these can be stored in the HashiCorp Vault, which is an optional component specifically for storing your secrets in a very secure fashion.

Zabbix user types

You can have 3 types of users in Zabbix. We need to understand the access restrictions that these user types enforce, before we can talk more about user roles:

  • Zabbix Super Admin
    • Unlimited access to everything
  • Zabbix Admin
    • Can create hosts and templates
    • Permission-based access to Zabbix entities
  • Zabbix User
    • Permission-based access to Zabbix entities
    • Has access only to the monitoring information
    • Has no access to configuration sections in Zabbix GUI

User roles can be used to create a user role of a particular type and further restrict the access of all users that belong to this role. For example, we can have a user role that is based on the Super Admin type but, instead of having access to every Zabbix section, we can restrict the members of this role to have access only to parts of the Zabbix Administration or Configuration sections.

Obtaining API tokens

If you plan on using the Zabbix API, the first step to executing your API workflows is issuing a user.login API call and obtaining an API token, which will be used in all future API calls until the user is logged out.

Now we have a better way of generating the API tokens. You can now generate the API tokens by accessing Administration – General – API tokens. Here you can generate new API tokens, select the user who can use the particular token, and also define an expiration date for the token.

Once the token has been created, we will see a confirmation screen with the generated token information. You should copy and save the token here since you will not be able to obtain the token later! If you forget to save it, you can always generate a new token.

Below we can see a list of our API tokens, their expiration dates, names, users, and other information:

Using the API tokens works exactly the same way as with the old approach. Simply place your token in the auth field of your API call and, if the token is valid and the user has the necessary permissions, the API call will succeed:
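
As a minimal sketch of such a call, assuming a Zabbix frontend reachable at a placeholder URL and a previously saved API token:

import requests

ZABBIX_API = "https://zabbix.example.com/api_jsonrpc.php"   # placeholder URL
API_TOKEN = "paste-the-token-you-saved-at-creation-time"    # placeholder token

payload = {
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {"output": ["hostid", "host"]},
    "auth": API_TOKEN,   # the generated API token goes into the auth field
    "id": 1,
}

response = requests.post(ZABBIX_API, json=payload, timeout=10)
print(response.json())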

Secret macros

Macros can be used in many different parts of Zabbix – from using them in your triggers to create dynamic trigger thresholds to using them to define user names and passwords for different types of checks, such as SNMP, SSH, Zabbix agent checks, and many others. Now we can add an extra layer of obfuscation for our macros by creating secret macros:

  • The value of a secret macro is not displayed in the frontend
  • The value of a secret macro does not get cloned or exported with host/template export
  • Secret macros are stored in the Zabbix database
  • Database connections and access must be secured to prevent the potential attackers from obtaining these values

Zabbix + HashiCorp vault

 

The HashiCorp Vault is an optional component that we can use for storing Zabbix secrets, thus providing an additional layer of security. The secrets are stored in the vault, and every time Zabbix reloads its configuration cache, the secrets are picked up from the vault (every minute by default):

  • HashiCorp Vault can be used for storing secrets such as authentication information and  Zabbix database credentials
  • The HashiCorp Vault provides a unified interface to access any secret while ensuring tight access control and keeping a detailed audit log
  • Initially, the vault is sealed and must be unsealed using unseal keys
  • Secrets are stored in the Zabbix configuration cache
  • The values of secrets are retrieved on every Zabbix configuration cache update

Initially, we have to provide the vault token and vault URL in the Zabbix server configuration file. If we choose to store the Zabbix database credentials in the secret vault, we also have to provide the path to the location of the secret:

### Option: VaultToken
# Vault authentication token that should have been generated
# exclusively for Zabbix server with read only permission
VaultToken=verysecretrandomlygeneretedvaultstring

### Option: VaultURL
# Vault server HTTP[S] URL. System wide CA certificates directory
# will be used if SSLCALocation is not specified.
VaultURL=https://my.organization.vault:8200

### Option: VaultDBPath
# Vault path from where credentials for database will be retrieved
# by keys 'password' and 'username'.
VaultDBPath=my/secret/location

Security improvements in Zabbix 6.0 LTS

The security improvements that we discussed previously have been available in previous non-LTS versions. If you’re migrating from Zabbix 5.0 LTS to Zabbix 6.0 LTS, all of these will be new to you. Next, let’s cover the improvements that have been added specifically in Zabbix 6.0 LTS. If you’re using the latest major Zabbix version, Zabbix 5.4, then all of the following features await you in Zabbix 6.0 LTS.

Improved Audit log

The audit log has seen some major improvements. Not only is the audit log capable of logging every operation performed by the Zabbix server and Zabbix frontend, but it has also received an internal re-design to ensure minimal possible performance impact and improved UX:

  • Improved API operation logging
  • Scalability improvements when logging a large number of audit log entries
  • Quality of life improvements when it comes to filtering the audit log
  • More types of operations are now logged in the audit log
    • Script execution
    • Global macro changes
    • Changes resulting from the execution of low-level discovery rules
    • And much more

The above image shows that not only new types of operations are being logged in the audit log, but we can also see the result of those operations, such as – results of script executions, macro changes, new users being created in Zabbix, and more.

User password complexity settings

One of the major security improvements that are introduced in Zabbix 6.0 LTS is the ability to define custom password complexity requirements. Zabbix administrators can select between multiple password complexity requirements and apply them for their Zabbix instance:

  • Prevent using simple passwords like password
  • Set the minimum password length
  • Prevent the usage of name/last name/username in the password
  • Prevent usage of easy-to-guess passwords, such as abcd1234, asdf1234
    • The password analysis is based on a dictionary of ~1 million most common passwords
  • The default Admin password on a fresh Zabbix instance is not changed – you have to update it after the new Zabbix instance is deployed!

Monitoring of certificate attributes

Another dimension of security that has improved with the release of Zabbix 6.0 LTS – security monitoring. Now we can use Zabbix Agent 2 to monitor SSL certificate attributes:

  • Zabbix Agent 2 built-in plugin for certificate monitoring available starting from Zabbix Agent 2 5.0.15
  • Collects information about the website certificate
  • Official templates are available out of the box and can be downloaded on our git page
  • web.certificate.get[hostname,<port>,<address>]

Below you can see an example of the certificate information obtained from the www.zabbix.com website. Here’s what the item key looks like for our example:

zabbix_agent2 -t web.certificate.get[www.zabbix.com]

And the output that is received by Zabbix:

As we can see, the resulting information tells us that the certificate is valid and has been verified successfully. Next, let’s take a look at a different kind of result:

Here we can see that the certificate is still valid, but we also receive a warning that it’s a self-signed certificate.

You may have noticed that the information obtained by this item comes in a JSON containing multiple values. That’s because most of the .get items obtain information in bulk. We can use this item as a master item and create multiple dependent items, each of which will collect an individual value. This can be configured by using JSONPath preprocessing.

Let’s take a look at some trigger examples that we can use together with the certificate monitoring item:

  • Check if the certificate is invalid
    • {HOST:cert.validation.str("invalid")} = 1
  • Check if the certificate expires in less than 7 days
    • ({HOST:cert.not_after.last()} - {HOST:cert.not_after.now()}) / 86400 < 7
  • Check if the Certificate fingerprint has changed
    • {TEMPLATE_NAME:cert.sha1_fingerprint.diff()}=1

User permissions in service trees

With the reworked business service monitoring, we have also added the ability to define access permissions for a particular service or multiple services that have a matching tag. Permissions can be defined for Read-only access or Read-write access.

In the example below, we can see three child services. Our user has read-only access for the first two services while having read-write permissions for the third service.

If you wish to read more about the re-designed Services section and Zabbix business service monitoring, you can find a blog post based on the Zabbix Summit 2021 speech on the topic here.

Zabbix security best practices

Finally, let’s go through a checklist of best practices that will help you to secure your environment on all levels.

  • Define proper user roles for different types of Zabbix users in your organization
    • If you have a multi-tenant Zabbix environment, make sure that each tenant can access only the information related to their organization!
  • Enable HTTPS on your Zabbix web frontend
    • Unencrypted connections to the web frontend mean that an attacker could potentially obtain sensitive information
  • Encrypt your connections to the Zabbix database
  • Use the secret vault to harden your security and store your secrets in a vault
  • Encrypt the traffic between the Zabbix server, Zabbix proxies, and Zabbix agents
    • Here you can choose between PSK and Certificate-based encryptions
    • Connections between command-line tools like zabbix-sender can and should also be encrypted!
  • Agentless checks should also be configured in a secure manner
    • HTTP authentication and SSL for HTTP checks
    • Public key authentication for SSH checks
  • Define custom password complexity settings to prevent weak passwords for your Zabbix users
  • Define allow and deny lists for the item keys in your Zabbix agent configuration
    • This way you can restrict the collection of sensitive metrics by potential attackers
  • Use secret macros instead of displaying your macro values in plain-text
  • Keep Zabbix updated and update to the latest minor Zabbix release
    • Minor releases contain critical bug and security fixes
    • The upgrade process is extremely simple since no changes are made to the database schema in the minor version updates

Advanced Zabbix Security Administration course

If you wish to learn more about securing your Zabbix instance and get some practical experience in a controlled environment while being guided by Zabbix certified trainers – then the Advanced Zabbix Security Administration course is for you! In this course you will have the chance to learn and practice the following topics:

You can find more information about the Advanced Zabbix Security Administration course as well as the course pricing on the Zabbix website.

Questions

Q: Can we encrypt individual Zabbix components, or is it an all-or-nothing approach?

A: Connections between each component can be encrypted individually. You can, for example, encrypt only connections between Zabbix server and Zabbix proxies if you wish to do so. You can even leave connections between some proxies unencrypted, while others can be fully secured.

 

Q: Can I mix different Zabbix authentication approaches – for example, can I use LDAP together with internal authentication?

A: It is not only possible, but it’s also recommended. If you’re using LDAP authentication – always have a user with internal authentication configured for it. This will save you in situations where something goes wrong with the LDAP servers or connectivity.

The post Securing Zabbix 6.0 LTS by Kārlis Saliņš / Zabbix Summit Online 2021 appeared first on Zabbix Blog.

Internet outage in Yemen amid airstrikes

Post Syndicated from João Tomé original https://blog.cloudflare.com/internet-outage-in-yemen-amid-airstrikes/

The early hours of Friday, January 21, 2022, started in Yemen with a country-wide Internet outage. According to local and global news reports, airstrikes are happening in the country, and the outage is likely related: there are reports that a telecommunications building in Al-Hudaydah, where the FALCON undersea cable lands, was hit.

Cloudflare Radar shows that Internet traffic dropped close to zero between 21:30 UTC and 22:00 UTC on January 20, 2022 (01:00 local time on January 21).


The outage affected the main state-owned ISP, Public Telecommunication Corporation (AS30873 in blue in the next chart), which represents almost all the Internet traffic in the country.


Looking at BGP (Border Gateway Protocol) updates from Yemen’s ASNs around the time of the outage, we see a clear spike at around 21:55 UTC on January 20, 2022, when the main ASN was affected. These update messages signal that Yemen’s main ASN was no longer routable, similar to what we saw happening in The Gambia and Kazakhstan, but for very different reasons.


So far, 2022 has started with a few significant Internet disruptions for different reasons:

1. An Internet outage in The Gambia because of a cable problem.
2. An Internet shutdown in Kazakhstan because of unrest.
3. A mobile Internet shutdown in Burkina Faso because of a coup plot.
4. An Internet outage in Tonga because of a volcanic eruption (still ongoing).

You can keep an eye on Cloudflare Radar to monitor this situation as it unfolds.

China’s Olympics App Is Horribly Insecure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/01/chinas-olympics-app-is-horribly-insecure.html

China is mandating that athletes download and use a health and travel app when they attend the Winter Olympics next month. Citizen Lab examined the app and found it riddled with security holes.

Key Findings:

  • MY2022, an app mandated for use by all attendees of the 2022 Olympic Games in Beijing, has a simple but devastating flaw where encryption protecting users’ voice audio and file transfers can be trivially sidestepped. Health customs forms which transmit passport details, demographic information, and medical and travel history are also vulnerable. Server responses can also be spoofed, allowing an attacker to display fake instructions to users.
  • MY2022 is fairly straightforward about the types of data it collects from users in its public-facing documents. However, as the app collects a range of highly sensitive medical information, it is unclear with whom or which organization(s) it shares this information.
  • MY2022 includes features that allow users to report “politically sensitive” content. The app also includes a censorship keyword list, which, while presently inactive, targets a variety of political topics including domestic issues such as Xinjiang and Tibet as well as references to Chinese government agencies.
  • While the vendor did not respond to our security disclosure, we find that the app’s security deficits may not only violate Google’s Unwanted Software Policy and Apple’s App Store guidelines but also China’s own laws and national standards pertaining to privacy protection, providing potential avenues for future redress.

News article:

It’s not clear whether the security flaws were intentional or not, but the report speculated that proper encryption might interfere with some of China’s ubiquitous online surveillance tools, especially systems that allow local authorities to snoop on phones using public wireless networks or internet cafes. Still, the researchers added that the flaws were probably unintentional, because the government will already be receiving data from the app, so there wouldn’t be a need to intercept the data as it was being transferred.

[…]

The app also included a list of 2,422 political keywords, described within the code as “illegalwords.txt,” that worked as a keyword censorship list, according to Citizen Lab. The researchers said the list appeared to be a latent function that the app’s chat and file transfer function was not actively using.

The US government has already advised athletes to leave their personal phones and laptops home and bring burners.

Fall 2021 PCI DSS report now available with 7 services added to compliance scope

Post Syndicated from Michael Oyeniya original https://aws.amazon.com/blogs/security/fall-2021-pci-dss-report-now-available-with-7-services-added-to-compliance-scope/

We’re continuing to expand the scope of our assurance programs at Amazon Web Services (AWS) and are pleased to announce that seven new services have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification. These new services provide our customers with more options to process and store their payment card data and to architect their cardholder data environment (CDE) securely in AWS.

You can see the full list of services on our Services in Scope by Compliance Program page, which includes the seven newly added services.

The Asia Pacific (Jakarta) Region was newly added to scope and was assessed as PCI compliant as part of the Fall 2021 PCI assessment.

We were evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). The Attestation of Compliance (AOC) that shows AWS PCI compliance status is available through AWS Artifact.

We value your feedback and questions—feel free to reach out to our team or give feedback about this post through our Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Author

Michael Oyeniya

Michael is a Compliance Program Manager at AWS on the Global Audits team, managing the PCI compliance program. He holds a Master’s degree in management and has over 18 years of experience in information technology security risk and control.

Introducing AWS Lambda batching controls for message broker services

Post Syndicated from Julian Wood original https://aws.amazon.com/blogs/compute/introducing-aws-lambda-batching-controls-for-message-broker-services/

This post is written by Mithun Mallick, Senior Specialist Solutions Architect.

AWS Lambda now supports configuring a maximum batch window for instance-based message broker services to fine-tune when Lambda invocations occur. This feature gives you additional control over batching behavior when processing data. It applies to Amazon Managed Streaming for Apache Kafka (Amazon MSK), self-hosted Apache Kafka, and Amazon MQ for Apache ActiveMQ and RabbitMQ.

Apache Kafka is an open source event streaming platform used to support workloads such as data pipelines and streaming analytics. It is conceptually similar to Amazon Kinesis. Amazon MSK is a fully managed, highly available service that simplifies the setup, scaling, and management of clusters running Kafka.

Amazon MQ is a managed, highly available message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you.

Amazon MSK, self-hosted Apache Kafka and Amazon MQ for ActiveMQ and RabbitMQ are all available as event sources for AWS Lambda. You configure an event source mapping to use Lambda to process items from a stream or queue. This allows you to use these message broker services to store messages and asynchronously integrate them with downstream serverless workflows.

In this blog, I explain how message batching works. I show how to use the new maximum batching window control for the managed message broker services and self-managed Apache Kafka.

Understanding batching

For event source mappings, the Lambda service internally polls for new records or messages from the event source, and then synchronously invokes the target Lambda function. Lambda reads the messages in batches and provides these to your function as an event payload. Batching allows higher throughput message processing, up to 10,000 messages in a batch. The payload limit of a single invocation is 6 MB.
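To make the batch payload concrete, below is a minimal Python handler sketch for an Amazon MSK or self-managed Apache Kafka event source. It assumes the documented event shape, in which records arrive base64-encoded and grouped by topic partition, and it assumes (hypothetically) that producers send JSON messages.

import base64
import json

def lambda_handler(event, context):
    """Process a batch of Kafka records delivered by a Lambda event source mapping."""
    processed = 0
    # event["records"] maps "topic-partition" keys (for example, "orders-0") to lists of records
    for partition_key, records in event.get("records", {}).items():
        for record in records:
            # The record value is base64-encoded by the event source mapping
            payload = base64.b64decode(record["value"]).decode("utf-8")
            message = json.loads(payload)  # assumes the producer sent JSON
            print(f"{partition_key} offset={record['offset']}: {message}")
            processed += 1
    return {"batchSize": processed}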

Previously, you could only use batch size to configure the maximum number of messages Lambda would poll for. Once the defined batch size is reached, the poller invokes the function with the entire set of messages. The new batching window is ideal when handling a low volume of messages or batches of data that take time to build up.

Batching window

The new Batch Window control allows you to set the maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. This brings the batching functionality that AWS already supports for Amazon SQS to Amazon MQ, Amazon MSK, and self-managed Apache Kafka. The Lambda event source mapping batching behavior can be described as follows.

Batching controls with Lambda event source mapping

Using MaximumBatchingWindowInSeconds, you can set your function to wait up to 300 seconds for a batch to build before processing it. This allows you to create bigger batches if there are enough messages, and to manage the average number of records processed by the function with each invocation. This increases the efficiency of each invocation and reduces the invocation frequency.

Setting MaximumBatchingWindowInSeconds to 0 invokes the target Lambda function as soon as the Lambda event source receives a message from the broker.

Message broker batching behavior

For ActiveMQ, the Lambda event source mapping uses the Java Message Service (JMS) API to receive messages. For RabbitMQ, Lambda uses a RabbitMQ client library to get messages from the queue.

The Lambda event source mapping acts as a consumer when polling the queue. The batching pattern for all instance-based message broker services is the same. As soon as a message is received, the batching window timer starts. If there are more messages, the consumer makes additional calls to the broker and adds them to a buffer, keeping a count of the number of messages and the total size of the payload.

The batch is considered complete when the addition of a new message makes the batch size equal to or greater than 6 MB, or when the batch window timeout is reached. If the batch size exceeds 6 MB, the last message is returned to the broker.

Lambda then invokes the target Lambda function synchronously and passes on the batch of messages to the function. The Lambda event source continues to poll for more messages and as soon as it retrieves the next message, the batching window starts again. Polling and invocation of the target Lambda function occur in separate processes.

Kafka uses a distributed append log architecture to store messages. This works differently from ActiveMQ and RabbitMQ as messages are not removed from the broker once they have been consumed. Instead, consumers must maintain an offset to the last record or message that was consumed from the broker. Kafka provides several options in the consumer API to simplify the tracking of offsets.

Amazon MSK and Apache Kafka store data in multiple partitions to provide higher scalability. Lambda reads the messages sequentially for each partition and a batch may contain messages from different partitions.  Lambda then commits the offsets once the target Lambda function is invoked successfully.

Configuring the maximum batching window

To reduce Lambda function invocations for existing or new functions, set the MaximumBatchingWindowInSeconds value close to 300 seconds. A longer batching window can introduce additional latency, so for latency-sensitive workloads keep the MaximumBatchingWindowInSeconds value low or set it to 0.

To configure the maximum batching window on a function in the AWS Management Console, navigate to the function in the Lambda console. Create a new trigger, or edit an existing one. Along with the Batch size, you can configure a Batch window. The Trigger Configuration page is similar across the broker services.

Max batching trigger window

You can also use the AWS CLI to configure the --maximum-batching-window-in-seconds parameter.

For example, with Amazon MQ:

aws lambda create-event-source-mapping --function-name my-function \
--maximum-batching-window-in-seconds 300 --batch-size 100 \
--queues ExampleQueue \
--source-access-configurations '[{"Type":"BASIC_AUTH","URI":"arn:aws:secretsmanager:us-east-1:123456789012:secret:ExampleMQSecret"}]' \
--event-source-arn arn:aws:mq:us-east-1:123456789012:broker:ExampleMQBroker:b-24cacbb4-b295-49b7-8543-7ce7ce9dfb98
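If you prefer to manage event source mappings programmatically, the same parameters can be set with an AWS SDK. The following boto3 sketch updates an existing mapping; the mapping UUID is a placeholder.

import boto3

lambda_client = boto3.client("lambda")

# Gather records for up to 5 minutes or until 100 records are buffered,
# whichever comes first, before invoking the function.
response = lambda_client.update_event_source_mapping(
    UUID="14e0db71-0000-0000-0000-000000000000",  # placeholder mapping UUID
    BatchSize=100,
    MaximumBatchingWindowInSeconds=300,
)
print(response["State"])  # for example, "Updating"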

You can use AWS CloudFormation to configure the parameter. The following example configures the MaximumBatchingWindowInSeconds as part of the AWS::Lambda::EventSourceMapping resource for Amazon MQ:

  LambdaFunctionEventSourceMapping:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      BatchSize: 10
      MaximumBatchingWindowInSeconds: 300
      Enabled: true
      Queues:
        - "MyQueue"
      EventSourceArn: !GetAtt MyBroker.Arn
      FunctionName: !GetAtt LambdaFunction.Arn
      SourceAccessConfigurations:
        - Type: BASIC_AUTH
          URI: !Ref secretARNParameter

You can also use AWS Serverless Application Model (AWS SAM) to configure the parameter as part of the Lambda function event source.

MQReceiverFunction:
      Type: AWS::Serverless::Function 
      Properties:
        FunctionName: MQReceiverFunction
        CodeUri: src/
        Handler: app.lambda_handler
        Runtime: python3.9
        Events:
          MQEvent:
            Type: MQ
            Properties:
              Broker: !Ref brokerARNParameter
              BatchSize: 10
              MaximumBatchingWindowInSeconds: 300
              Queues:
                - "workshop.queueC"
              SourceAccessConfigurations:
                - Type: BASIC_AUTH
                  URI: !Ref secretARNParameter

Error handling

If your function times out or returns an error for any of the messages in a batch, Lambda retries the whole batch until processing succeeds or the messages expire.

When a function encounters an unrecoverable error, the event source mapping is paused and the consumer stops processing records. Any other consumers can continue processing, provided that they do not encounter the same error.  If your Lambda event records exceed the allowed size limit of 6 MB, they can go unprocessed.

For Amazon MQ, you can redeliver messages when there’s a function error. You can configure dead-letter queues (DLQs) for both Apache ActiveMQ and RabbitMQ. For RabbitMQ, you can set a per-message TTL to move failed messages to a DLQ.

Since the same event may be received more than once, functions should be designed to be idempotent. This means that receiving the same event multiple times does not change the result beyond the first time the event was received.
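One common way to achieve idempotency, sketched below, is to claim each message ID with a conditional write to a DynamoDB table before doing any work, so a redelivered message is skipped. The table name and key attribute are hypothetical.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
TABLE = "ProcessedMessages"  # hypothetical table with partition key "messageId"

def already_processed(message_id: str) -> bool:
    """Atomically claim a message ID; return True if it was processed before."""
    try:
        dynamodb.put_item(
            TableName=TABLE,
            Item={"messageId": {"S": message_id}},
            ConditionExpression="attribute_not_exists(messageId)",
        )
        return False  # first time this message is seen
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return True  # duplicate delivery; safe to skip
        raise

def handle_message(message_id: str, body: str) -> None:
    if already_processed(message_id):
        return
    # ... business logic that must run at most once per message ...
    print(f"processing {message_id}: {body}")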

Conclusion

Lambda supports a number of event sources including message broker services like Amazon MQ and Amazon MSK. This post explains how batching works with the event sources and how messages are sent to the Lambda function.

Previously, you could only control the batch size. The new Batch Window control allows you to set the maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. This can increase the overall throughput of message processing and reduce the number of Lambda invocations, which may lower cost.

For more serverless learning resources, visit Serverless Land.

Gain insights into your Amazon Kinesis Data Firehose delivery stream using Amazon CloudWatch

Post Syndicated from Alon Gendler original https://aws.amazon.com/blogs/big-data/gain-insights-into-your-amazon-kinesis-data-firehose-delivery-stream-using-amazon-cloudwatch/

The volume of data being generated globally is growing at an ever-increasing pace. Data is generated to support an increasing number of use cases, such as IoT, advertisement, gaming, security monitoring, machine learning (ML), and more. The growth of these use cases drives both the volume and velocity of streaming data and requires companies to capture, process, transform, analyze, and load the data into various data stores in near-real time.

Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes, data stores, and analytics services. As the volume of the data you stream into Kinesis Data Firehose grows, you should gain insights and monitor the health of your data ingestion, transformation, and delivery.

In this post, we review the capabilities of using Firehose delivery stream metrics and the Amazon CloudWatch dashboard located on your Kinesis Data Firehose console. These capabilities allow you to create alerts when, for example, the destination you configured in Kinesis Data Firehose is missing privileges or is misconfigured; Firehose detects such issues and reports them as failures. Other errors might occur if you configured data transformation using Lambda and your Lambda function invocation fails, or if you have reached the Kinesis Data Firehose quota limits associated with your AWS account. In these cases, the data delivery from Kinesis Data Firehose to its destination may be delayed or fail. The CloudWatch alerts described in this post help you identify such cases in a timely manner.

This post also covers the different proactive actions that you can take when alarms are being triggered, such as submitting a request to increase quota or adding exponential backoff to your data producers.

Monitoring the delivery streams and taking these actions makes sure that data is delivered to your destinations without interruptions, enabling your business to gain insights in near-real time.

Monitor data ingestion to Kinesis Data Firehose

You can deliver data from your data producers to Kinesis Data Firehose through Amazon Kinesis Data Streams (as described later in this post), using Kinesis Agent, or directly using the Kinesis Data Firehose API operations PutRecord and PutRecordBatch. When you use Kinesis Data Streams as a data source, Kinesis Data Firehose scales automatically as your Kinesis data stream scales. When using the API operations for direct ingestion, you need to check the quota limits associated with your AWS account to avoid API request throttling. Depending on your data producer behavior, this throttling can cause your data producers to retry the operation, which results in a delay of the data delivery to your destination. This throttling can also result in data loss if your data producers don’t implement a retry mechanism.

To gain deeper insights into Firehose delivery stream usage, we provide additional CloudWatch metrics that help you monitor and proactively scale quota limits: ThrottledRecords, RecordsPerSecondLimit, BytesPerSecondLimit, and PutRequestsPerSecondLimit. You can use the CloudWatch metrics dashboard (on the Monitoring tab on your Kinesis Data Firehose console) to easily visualize current usage and the quota limits.

When ingesting data directly to your delivery stream using PutRecord or PutRecordBatch, you should monitor the ThrottledRecords metric. This metric represents the number of records that were actually throttled because data ingestion exceeded one of the delivery stream limits. Kinesis Data Firehose calculates the throttling rates during the ingestion at a 1-second granularity, but the data ingestion metrics we mentioned are aggregated and emitted to CloudWatch every 5 minutes. Because of that, you can get throttled within that 5-minute window even if the data ingestion metrics don’t show that you reached the limit.

To receive alerts before your data producers are actually throttled, you can use additional CloudWatch metrics to alert you when you’re about to reach one of the delivery stream limits. You can achieve this by using the CloudWatch metrics IncomingRecords, IncomingBytes, and IncomingPutRequests. To check the limits of these metrics, refer to Amazon Kinesis Data Firehose Quota.

You can use the following ingestion metrics and their corresponding limit metrics to create a CloudWatch alarm:

  • RecordsPerSecondLimit – The maximum number of records that can be ingested in a second (IncomingRecords)
  • BytesPerSecondLimit – The maximum volume of data that can be ingested in a second (IncomingBytes)
  • PutRequestsPerSecondLimit – The maximum number of successful PutRecord and PutRecordBatch API requests that can be performed in a second (IncomingPutRequests)

To set up an alarm that alerts you when your ingestion rates are close to a quota, you should look at the percentage relationship between the ingestion rate and its corresponding limit. Because Kinesis Data Firehose emits metrics to CloudWatch every 5 minutes, you need to divide your metric by the 5-minute aggregation period, expressed as seconds (300). For example, to generate an alert when the incoming records per second rate breaches 80% of your quota, your CloudWatch alarm should compare 100*(IncomingRecords/300)/RecordsPerSecondLimit against a threshold of 80.
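As one possible illustration of that alarm definition, the boto3 sketch below creates a metric math alarm for the records-per-second quota. The delivery stream name, alarm name, and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")
STREAM = "my-delivery-stream"  # placeholder delivery stream name

cloudwatch.put_metric_alarm(
    AlarmName="firehose-records-per-second-80pct",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:firehose-alarms"],  # placeholder topic
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    Metrics=[
        {   # m1: records ingested during each 5-minute period
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Firehose",
                    "MetricName": "IncomingRecords",
                    "Dimensions": [{"Name": "DeliveryStreamName", "Value": STREAM}],
                },
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {   # m2: the current records-per-second quota for the stream
            "Id": "m2",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Firehose",
                    "MetricName": "RecordsPerSecondLimit",
                    "Dimensions": [{"Name": "DeliveryStreamName", "Value": STREAM}],
                },
                "Period": 300,
                "Stat": "Maximum",
            },
            "ReturnData": False,
        },
        {   # e1: percentage of the quota currently in use
            "Id": "e1",
            "Expression": "100*(m1/300)/m2",
            "Label": "Percentage of records per second quota",
            "ReturnData": True,
        },
    ],
)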

This gives you a way to proactively understand how close your ingestion rates are to your delivery stream limits, and the flexibility to modify the percentage levels based on your use case. To prevent a throttling bottleneck, you should separately monitor the three delivery stream ingestion rate metrics we discussed.

Define alerts using CloudWatch alarms

You can define CloudWatch alarms manually through the AWS Management Console or by using AWS CloudFormation. In this post we cover both methods, starting with the CloudFormation template.

The CloudFormation template creates your CloudWatch alarms, which you can review and customize to suit your needs.

During the stack creation process, you provide the Firehose delivery stream name that you want to monitor, and the quota percentage where you want to be notified when it’s being breached, such as 80%. After the stack creation is successful, you have four CloudWatch alarms ready.

To create your CloudWatch alarms manually through the console, complete the following steps:

  1. On the Kinesis Data Firehose console, find your delivery stream.
  2. On the Monitoring tab, choose the more options icon of the metric you want to monitor (for this example, we monitor incoming records per second).
  3. On the options menu, choose View in metrics.

On the CloudWatch console, you can see a graph that represents your current API operations (blue line) and the quota limit (red line).

  1. To create an alarm, choose Math expression.
  2. Select Common and choose Percentage.
  3. For the metric name, enter Percentage of records per second quota.
  4. We use the metric expression 100*(e1/m2), which represents the formula 100*(RecordsPerSecond/RecordsPerSecondLimit) described earlier and reflects, as a percentage, how close you are to your quota.
  5. Change the expression of the metric e1 from METRICS("m1")/300 to m1/300.

You can also change the Y axis label.

  1. On the Graph options tab, under Left Y Axis, for Label, enter Percentage.
  2. Now that you have the expression to use for the alarm, deselect every other expression and metric on the page.

The only expression selected should be the one you just created. You should now see the desired percentage, as in the following screenshot.

Create a CloudWatch alarm

You have now created an expression on your IncomingRecords and RecordsPerSecond quota, which you can use as a base for the alarm. With this, you can configure the tolerance level that your business use case requires.

  1. Choose the alarm icon next to your expression.
  2. In the Specify metric and conditions section, choose to receive an alert when the metric breaches the 75% threshold.
  3. In the Configure actions section, specify how to forward this alarm.

You can forward this alarm to your monitoring systems or to an email address through an Amazon Simple Notification Service (Amazon SNS) topic. For this post, we create a new SNS topic and subscribe an email address to it.

Actions you can take when approaching the limits

When you’re getting close to your limits, you can take several different actions, which we describe in this section.

Request a service quota increase

One action you can take when seeing an alert is to request an increase in quota using the Amazon Kinesis Data Firehose Limits form. The three quotas scale proportionally. For example, if you increase the throughput quota in US East (N. Virginia), US West (Oregon), or Europe (Ireland) from 5 MiB/second to 10 MiB/second, the other two quotas increase from 2,000 requests/second to 4,000 requests/second and from 500,000 records/second to 1 million records/second. For more information about the service quota limits by AWS Region, see Amazon Kinesis Data Firehose Quota.

Use the PutRecordBatch API

If you use the PutRecord API call to deliver events to a Firehose delivery stream and you’re reaching the requests-per-second quota limit, consider using the PutRecordBatch API operation. PutRecordBatch writes multiple data records into a delivery stream in a single call to achieve higher throughput per producer than writing single records, and it reduces the number of requests per second made to your delivery stream.

Implement exponential backoff

As we mentioned before, even when you’re monitoring your delivery stream, you can still have bursts in your data stream. This could be caused by sudden spikes in usage of your system or external events like high trading activity in financial markets. To protect the producers from multiple throttled records, you should implement an exponential backoff. Exponential backoff is a commonly used algorithm that you can use to decrease the rate of submitting records to Kinesis Data Firehose when being throttled, so that the producer can slowly retry in order to successfully send the records.

The following are the Kinesis Data Firehose API responses when records are throttled:

  • If you’re using the API operation PutRecord, the returned error from the service is ServiceUnavailableException with HTTP status code 500.
  • If you’re using PutRecordBatch, you should iterate through the RequestResponses array and look for individual PutRecordBatchResponseEntry entries with an ErrorCode of ServiceUnavailableException. Also make sure to check the value of FailedPutCount in the response, even when the API call succeeds.

In both cases, you should use exponential backoff and retry the operation. For more information about implementing exponential backoff, see Error retries and exponential backoff in AWS.
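The sketch below shows one way to implement this pattern in Python with boto3, retrying only the records that failed in a PutRecordBatch call and backing off exponentially with jitter. The delivery stream name, attempt count, and base delay are arbitrary placeholders.

import random
import time
import boto3

firehose = boto3.client("firehose")

def put_with_backoff(stream_name, records, max_attempts=5, base_delay=0.2):
    """Send records to Kinesis Data Firehose, retrying throttled records with backoff."""
    pending = [{"Data": r} for r in records]
    for attempt in range(max_attempts):
        response = firehose.put_record_batch(
            DeliveryStreamName=stream_name, Records=pending
        )
        if response["FailedPutCount"] == 0:
            return
        # Keep only the records whose individual responses contain an error
        pending = [
            record
            for record, result in zip(pending, response["RequestResponses"])
            if "ErrorCode" in result
        ]
        # Exponential backoff with jitter before retrying the failed subset
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError(f"{len(pending)} records still failing after {max_attempts} attempts")

# Example usage with a placeholder stream name
put_with_backoff("my-delivery-stream", [b'{"event":"click"}\n', b'{"event":"view"}\n'])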

Use Kinesis Data Streams with Kinesis Data Firehose

Kinesis Data Streams is a massively scalable and durable real-time data streaming service. Your data producers can produce data directly to Kinesis Data Streams, and you can configure Kinesis Data Firehose to consume the data from Kinesis Data Streams and deliver it to your destination. When you use Kinesis Data Streams as the source for the Firehose delivery stream, the throughput limits mentioned before don’t apply. You don’t need to worry about throughput limits because Kinesis Data Firehose scales automatically to match the number of shards your Kinesis data stream has.

If you’re attaching a Firehose delivery stream as a consumer to your Kinesis data stream, and you have multiple consumer applications that read data from that stream, such as AWS Lambda (see Using AWS Lambda with Amazon Kinesis), make sure that the combined consumer applications aren’t exceeding each shard’s 2 MB/second total read throughput. Otherwise, the Kinesis data stream throttles your consumer applications’ reads, including Kinesis Data Firehose.

If more read capacity is required, some consumers, such as Lambda (see AWS Lambda supports Kinesis Data Streams Enhanced Fan-Out and HTTP/2 for faster streaming) or custom consumers developed with the Kinesis Client Library (KCL), can use enhanced fan-out to get dedicated throughput from Kinesis Data Streams; enhanced fan-out currently isn’t supported by Kinesis Data Firehose. This feature provides these consumer applications with an isolated connection to the stream and 2 MB/second of outbound throughput per shard, so they don’t impact other consumer applications reading from the shards.

If you need more ingest capacity, you can easily scale up the number of shards in the stream using the console or the UpdateShardCount API.
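For example, a minimal boto3 sketch for scaling a stream might look like this (the stream name and target shard count are placeholders):

import boto3

kinesis = boto3.client("kinesis")

# Double the capacity of a stream by scaling it uniformly to 4 shards
kinesis.update_shard_count(
    StreamName="my-data-stream",      # placeholder stream name
    TargetShardCount=4,
    ScalingType="UNIFORM_SCALING",
)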

Monitor data delivery of Kinesis Data Firehose

In the case of network timeouts, missing privileges, or misconfigurations of your delivery stream, such as an incorrect destination configuration or AWS Key Management Service (AWS KMS) key ARN, the delivery of your data from Kinesis Data Firehose to its destination may be delayed or fail. Errors might also occur if you configured data transformation using Lambda and your Lambda function invocation fails.

When Kinesis Data Firehose encounters delivery or processing errors, it retries until the configured retry duration expires. If the retry duration ends and the data still hasn’t been delivered successfully, Kinesis Data Firehose retains the data internally for up to a maximum of 24 hours. If the issue continues beyond the 24-hour maximum retention period, Kinesis Data Firehose discards the data, resulting in data loss.

When such data delivery issues persist, the data freshness metric, which is the age of the oldest record in Kinesis Data Firehose that hasn’t been delivered yet, constantly increases. To be alerted in such cases, you should create a CloudWatch alarm for when the data freshness metric exceeds the threshold of 4 hours. We also recommend setting an alarm to observe the historical p90 of the data freshness metric value. For example, set a certain tolerance level (such as 50% above the observed value) as an alarm threshold to detect data freshness variations.

You should monitor the data freshness metric that is relevant to your Kinesis Data Firehose destination, such as DeliveryToS3.DataFreshness, DeliveryToAmazonOpenSearchService.DataFreshness, DeliveryToSplunk.DataFreshness, or DeliveryToHttpEndpoint.DataFreshness. For more information, see Monitoring Kinesis Data Firehose Using CloudWatch Metrics.
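As a hedged sketch, the following boto3 call creates a 4-hour data freshness alarm for an Amazon S3 destination; the delivery stream name, alarm name, and SNS topic ARN are placeholders, and you would swap in the metric name for your destination type.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="firehose-s3-data-freshness",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:firehose-alarms"],  # placeholder topic
    Namespace="AWS/Firehose",
    MetricName="DeliveryToS3.DataFreshness",
    Dimensions=[{"Name": "DeliveryStreamName", "Value": "my-delivery-stream"}],
    Statistic="Maximum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=14400,          # 4 hours, expressed in seconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)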

If this alarm is triggered, you should take action to understand the root cause of the data freshness variation. A reason for such a variation could be a change in your Lambda transformation logic or configuration change of Lambda concurrency when using Kinesis Data Firehose data transformation. It could also be a result of change in the configuration parameters, format conversion schema, or ingested record type. For more information, see Data Freshness Metric Increasing or Not Emitted or you can submit a technical support request if needed.

When data delivery fails because of data transformation or an issue at the destination, in some cases you can find detailed failure logs in CloudWatch Logs, which can help you troubleshoot the problem.

We also recommend monitoring the data delivery byte rate to your destination (for example, DeliveryToS3.Byte), which must match or exceed your data ingestion byte rate (IncomingBytes) on a sustained average basis to avoid increase of the data freshness metric and possible eventual data loss. If the observed delivery data rates are lower than the ingestion rates, consider tuning bottlenecks such as Lambda concurrency levels or your Lambda transformation logic if used with Kinesis Data Firehose data transformation.

To gain additional insights on the delivery of your data to its destination, we provide CloudWatch metrics you can monitor. For example, you can monitor the number of records delivered to keep track of data ingested into your destinations from Kinesis Data Firehose. For more information and additional metrics per destination, see Monitoring Kinesis Data Firehose Using CloudWatch Metrics.

Conclusion

In this post, we discussed the capabilities of using the Firehose delivery stream metrics and the CloudWatch dashboard located on your Kinesis Data Firehose console. This allows you to gain operational insights into the data ingestion and data delivery of your Firehose delivery stream, and also create CloudWatch alerts to be notified when one of your thresholds is breached. We also covered the different actions that you can take when these alarms are triggered, such as submitting a request to increase your quota or adding exponential backoff to your data producers.

Monitor your delivery streams and take these actions to make sure that your business data is delivered to your destinations without interruptions, enabling your business to gain insights in near-real time.


About the Author

Alon Gendler is a Startup Solutions Architect Manager at Amazon Web Services. He works with AWS customers to help them solve complex problems and architect secure, resilient, scalable and high performance applications in the cloud. Alon is passionate about Data and helping customers get the most out of it.

How to Download and Back Up YouTube Videos

Post Syndicated from Barry Kaufman original https://www.backblaze.com/blog/how-to-download-and-back-up-youtube-videos/

We like to think of our YouTube videos as being eternal, that somehow once we upload this little clip of our life, it will remain there safe in its URL forever.

The fact is, nothing lasts forever online except for those embarrassing pictures someone posted of you 10 years ago and the 1996 Space Jam website. Content is deleted every day, whether because a website shutters its operations or because the content gets caught up in the vagaries of copyright law. Your YouTube videos are no different.

If you’ve got a bunch of content living on YouTube and nowhere else, it’s time to download and back up your videos so you can control your content’s digital fate. In this post, learn how to download videos from YouTube and make sure they’re backed up safely.

How to Back Up Your Digital Life

Check out our series of guides to help you protect content across many different platforms—including social media, sync services, and more. This list is always a work in progress—please comment below if you’d like to see another platform covered.

Why Back Up Your YouTube Videos?

Aside from the simple fact that having a solid data backup plan can help you avoid the fallout from all manner of tragedies like hardware loss, theft, or damage, keeping your YouTube videos backed up protects you from the ups and downs of an ever-changing YouTube ecosystem. Google’s side project has a bit of a troubled history of deleting videos without the owner’s knowledge or consent. After all, when you have terms of service that border on labyrinthine, enforced by an algorithm to strip spam, fraud, hate speech, copyright infringement, and all manner of ickiness from 30,000 hours of video uploaded every hour, there are bound to be some casualties.

So how can you protect your precious memories from being dissolved in the digital ether? How can you ensure that your skillfully edited masterpiece doesn’t become a casualty of the algorithm? What if, let’s just say for example, you went up in a biplane one time and the camera on which you filmed this adventure has long been lost to the scrap heap of your junk drawer? What if a YouTube video is the only evidence you have of that time you forgot you had a cargo topper on your minivan and almost wrecked at the Mall of America? Hypothetically speaking?

The answer? Just as you upload the video to YouTube, it’s time to back it up both locally and in the cloud. And if you have a whole library of videos on YouTube, it’s time to download them so you can back those up, too.

A Short History of Downloading YouTube Content

There was a time not too long ago when downloading YouTube videos, even your own, meant delving into some of the darker corners of the internet. Often hosted on foreign servers to avoid Digital Millennium Copyright Act enforcement, these sites still exist. But now there’s a far easier native solution for downloading your content.

While they have done their level best to obscure this option, it’s right there for anyone to use. Just follow these simple steps below.

How to Download YouTube Videos

First, open the YouTube Creator Studio. YouTube Creator Studio is a terrific tool the site offers for managing your videos, customizing your channel, viewing analytics, and even monetizing your content. It’s also pretty well hidden, for reasons that aren’t immediately obvious.

To access YouTube Creator Studio on a desktop, click the hamburger menu at the top left of your screen and select “Your Videos.”

In this screenshot, my subscriptions have been blurred so you don’t judge me.

This will bring you to the content page of YouTube, with all of your cinematic achievements laid out before you. Select the video you want, click the kebab menu (the three vertical dots), and then select download. It’s just that easy!

If you’re curious, the video below the one I’m downloading is my dog riding an invisible bicycle.

You can also select multiple videos, click more actions, and download your videos.

Downloading Your Videos on Mobile

To download your videos on mobile, use your phone’s “phone” function to call up someone who has a desktop computer, because YouTube Creator Studio on mobile doesn’t let you download videos.

Backing Up Your Videos

Now that you’ve saved all of these videos from being potentially lost forever, how do you make sure they’re stored safely? By saving them locally, you haven’t really addressed the problem that they could be easily lost. Your computer and your external hard drives are, after all, probably more susceptible to data loss than YouTube is.

Which brings us to the 3-2-1 cloud backup strategy. Make sure to have three copies of your data on two different media (read: devices) with one stored off-site (typically in the cloud). Having two backups of your newly downloaded data on-site helps you recover quickly if you ever lose those videos you spent time capturing. And storing a copy in the cloud keeps one copy of your data geographically separated from the others in case of a major disaster like hardware loss, theft, or damage. But how you plan on using these videos will have an impact on which cloud storage method you pick.

If you want to keep copies of your YouTube archive locally, Backblaze Personal Backup is your best bet. It runs silently in the background of your computer. As soon as those YouTube videos hit your hard drive, it will automatically begin backing them up to the cloud, giving you a local copy and a copy on the cloud. If you create a second local copy on an external hard drive, you’re fully backed up and following a good 3-2-1 strategy.

If space is limited locally, and you don’t necessarily need the files on your own computer, Backblaze B2 Cloud Storage gives you plenty of space in the cloud to stash them until they’re needed. Say, when you have to prove to someone that you went up in a biplane that one time. Paired with local copies elsewhere, you could also use this method to achieve a 3-2-1 strategy without taking up a huge amount of space on your machine.
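If you script your uploads, one option is Backblaze B2’s S3-compatible API. The sketch below assumes a hypothetical bucket and endpoint region, and application key credentials stored in environment variables.

import os
import boto3

# Backblaze B2 exposes an S3-compatible endpoint; the region and credentials here are assumptions
b2 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-west-004.backblazeb2.com",   # your bucket's endpoint
    aws_access_key_id=os.environ["B2_APPLICATION_KEY_ID"],
    aws_secret_access_key=os.environ["B2_APPLICATION_KEY"],
)

# Upload a downloaded YouTube video into a hypothetical bucket
b2.upload_file(
    Filename="biplane-ride.mp4",
    Bucket="my-youtube-backups",
    Key="youtube/biplane-ride.mp4",
)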

Do you have any techniques on how you download your data from YouTube or other social sites? Share them in the comments section below!

The post How to Download and Back Up YouTube Videos appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

