AWS achieves ISO/IEC 27701:2019 certification

Post Syndicated from Anastasia Strebkova original https://aws.amazon.com/blogs/security/aws-achieves-iso-iec-27701-2019-certification/

We’re excited to announce that Amazon Web Services (AWS) has achieved ISO/IEC 27701:2019 certification with no findings. This certification is a rigorous, independent third-party assessment of the Privacy Information Management System (PIMS) of a cloud service provider.

ISO/IEC 27701:2019 specifies requirements and guidelines to establish and continuously improve a PIMS, including processing of Personally Identifiable Information (PII), and is an extension of the ISO/IEC 27001 and ISO/IEC 27002 standards for information security management. It provides a set of additional controls and associated guidance that is intended to address public cloud PIMS and PII management requirements that aren’t addressed by the existing ISO/IEC 27002 control set, for both processors and controllers.

The certification demonstrates that a cloud service provider has an effective PIMS in place to support customers who may be working toward compliance with the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and other data privacy regulations. The independent third-party assessment of AWS alignment to this internationally recognized code of practice demonstrates that AWS is committed to the privacy and protection of customers’ content and can help customers pursue their international and local compliance objectives.

Ernst & Young CertifyPoint issued the certificate on August 11, 2021. The covered AWS Regions are included on the ISO/IEC 27701:2019 certificate, and the full list of AWS services in scope for ISO/IEC 27701:2019 is available on our ISO and CSA STAR Certified webpage. You can view and download our ISO/IEC 27701:2019 certificate online, and in the AWS Management Console through AWS Artifact.


Author

Anastasia Strebkova

Anastasia is a Security Assurance Manager at Amazon Web Services on the Global Audits team, managing the AWS ISO portfolio. She has previously worked on IT audits, governance, risk, privacy, business continuity, and information security program management for cloud enterprises. Anastasia holds a Bachelor of Arts degree in Civil Law from Moscow Law Academy.

Karkinos – Beginner Friendly Penetration Testing Tool

Post Syndicated from original https://www.darknet.org.uk/2021/08/karkinos-beginner-friendly-penetration-testing-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Karkinos is a lightweight, beginner-friendly penetration testing tool, which is basically a ‘Swiss Army Knife’ for pen testing and/or hacking CTFs.

Karkinos Beginner Friendly Penetration Testing Tool Features

  • Encoding/Decoding characters
  • Encrypting/Decrypting text or files
  • Reverse shell handling
  • Cracking and generating hashes

How to Install Karkinos Beginner Friendly Penetration Testing Tool

Dependencies are:

  • Any server capable of hosting PHP (tested with PHP 7.4.9)
  • Python 3 (tested with Python 3.8)
    • Make sure it is in your path as python (Windows) or python3 (Linux)
    • If it is not, please change the commands in includes/pid.php
  • pip3
  • Raspberry Pi Zero friendly 🙂 (crack hashes at your own risk)

Then:

  1. git clone https://github.com/helich0pper/Karkinos.git
  2. cd Karkinos
  3. pip3 install -r requirements.txt
  4. cd wordlists && unzip passlist.zip (you can also unzip it manually using your file explorer)


Scaling Data Analytics Containers with Event-based Lambda Functions

Post Syndicated from Brian Maguire original https://aws.amazon.com/blogs/architecture/scaling-data-analytics-containers-with-event-based-lambda-functions/

The marketing industry collects and uses data from various stages of the customer journey. When they analyze this data, they establish metrics and develop actionable insights that are then used to invest in customers and generate revenue.

If you’re a data scientist or developer in the marketing industry, you likely often use containers for services like collecting and preparing data, developing machine learning models, and performing statistical analysis. Because the types and amount of marketing data collected are quickly increasing, you’ll need a solution to manage the scale, costs, and number of required data analytics integrations.

In this post, we provide a solution that scales with dynamic traffic and is cost optimized for on-demand consumption. It refactors synchronous, container-based data science applications into asynchronous, container-based architectures on AWS Lambda. This serverless architecture automates data analytics workflows using event-based triggers.

Synchronous container applications

Data science applications are often deployed to dedicated container instances, and their requests are routed by Amazon API Gateway or a load balancer. Typically, API Gateway routes HTTP requests as synchronous invocations to instance-based container hosts.

The target of the requests is a container-based application running a machine learning service (SciLearn). The service container is configured with the required dependency packages such as scikit-learn, pandas, NumPy, and SciPy.

Containers are commonly deployed on different targets such as on-premises, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Compute Cloud (Amazon EC2), and AWS Elastic Beanstalk. These services run synchronously, scale through Amazon EC2 Auto Scaling groups, and are billed on a time-based consumption pricing model.

Figure 1. Synchronous container applications diagram

Challenges with synchronous architectures

When using a synchronous architecture, you will likely encounter some challenges related to scale, performance, and cost:

  • Operation blocking. The sender does not proceed until the request returns, and failure processing, such as retries, must be handled by the sender.
  • No native AWS service integrations. You cannot use several native integrations with other AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon EventBridge, and Amazon Simple Queue Service (Amazon SQS).
  • Increased expense. A 24/7 environment can result in increased expenses if resources are idle, not sized appropriately, and cannot automatically scale.

The New for AWS Lambda – Container Image Support blog post offers a serverless, event-based architecture to address these challenges. This approach is explained in detail in the following section.

Benefits of using Lambda Container Image Support

In our solution, we refactored the synchronous SciLearn application that was deployed on instance-based hosts as an asynchronous event-based application running on Lambda. This solution includes the following benefits:

  • Easier dependency management with Dockerfile. A Dockerfile allows you to install native operating system packages and language-compatible dependencies, or use enterprise-ready container images.
  • Similar tooling. The asynchronous and synchronous solutions use Amazon Elastic Container Registry (Amazon ECR) to store application artifacts, so they share the same build and deployment pipeline tools for inspecting Dockerfiles. This means your team will likely spend less time and effort learning how to use a new tool.
  • Performance and cost. Lambda provides sub-second autoscaling that’s aligned with demand. This results in higher availability, lower operational overhead, and cost efficiency.
  • Integrations. Lambda integrates with more than 200 AWS services, so you don’t have to develop those integrations yourself and can deploy faster.
  • Larger application artifacts, up to 10 GB. This includes larger application dependency support, giving you more room for your files and packages compared to the 250 MB hard limit on unzipped deployment packages.

Scaling with asynchronous events

AWS offers two ways to asynchronously scale processing independently and automatically: Elastic Beanstalk worker environments and asynchronous invocation with Lambda. Both options offer the following:

  • They put events in an SQS queue.
  • They can be designed to take items from the queue only when they have the capacity available to process a task. This prevents them from becoming overwhelmed.
  • They offload tasks from one component of your application by sending them to a queue and processing them asynchronously.
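
As a minimal sketch only (using the AWS SDK for JavaScript v2; the queue ARN and function name are placeholders rather than names from this post), wiring an SQS queue to a Lambda function as an event source can look like the following:

// Sketch: subscribe a Lambda function to an SQS queue as an event source
// The queue ARN and function name are placeholders for illustration
const AWS = require('aws-sdk')
const lambda = new AWS.Lambda({ region: 'us-east-1' })

const createMapping = async () => {
	const result = await lambda.createEventSourceMapping({
		EventSourceArn: 'arn:aws:sqs:us-east-1:111122223333:scilearn-requests',
		FunctionName: 'scilearn-processor',
		BatchSize: 10, // Lambda receives up to 10 messages per invocation
		Enabled: true
	}).promise()
	console.log('Event source mapping UUID: ', result.UUID)
}

createMapping().catch(console.error)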

These asynchronous invocations add default, tunable failure processing and retry mechanisms through “on failure” and “on success” event destinations, as described in the following section.

Integrations with multiple destinations

“On failure” and “on success” events can be logged in an SQS queue, Amazon Simple Notification Service (Amazon SNS) topic, EventBridge event bus, or another Lambda function. All four are integrated with most AWS services.

“On failure” events that cannot be delivered to their destination queues are sent to an SQS dead-letter queue, where they can be reprocessed as needed and any problems with message processing are isolated.
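
As a rough sketch (again using the AWS SDK for JavaScript v2, with a placeholder function name and destination ARNs), configuring “on success” and “on failure” destinations for a function’s asynchronous invocations could look like this:

// Sketch: route the results of asynchronous invocations to destinations
// The function name and destination ARNs are placeholders for illustration
const AWS = require('aws-sdk')
const lambda = new AWS.Lambda({ region: 'us-east-1' })

const configureDestinations = async () => {
	await lambda.putFunctionEventInvokeConfig({
		FunctionName: 'scilearn-processor',
		MaximumRetryAttempts: 2,
		DestinationConfig: {
			OnSuccess: { Destination: 'arn:aws:sqs:us-east-1:111122223333:scilearn-success' },
			OnFailure: { Destination: 'arn:aws:sqs:us-east-1:111122223333:scilearn-dlq' }
		}
	}).promise()
}

configureDestinations().catch(console.error)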

Figure 2 shows an asynchronous Amazon API Gateway that has placed an HTTP request as a message in an SQS queue, thus decoupling the components.

The messages within the SQS queue then trigger a Lambda function. This runs the machine learning service SciLearn container in Lambda for data analysis workflows, which are integrated with another SQS dead-letter queue for failure processing.

Figure 2. Example asynchronous-based container applications diagram

When you deploy Lambda functions as container images, they benefit from the same operational simplicity, automatic scaling, high availability, and native integrations. This makes it an appealing architecture for our data analytics use cases.

Design considerations

The following can be considered when implementing Docker Container Images with Lambda:

  • Lambda supports container images that have manifest files that follow these formats:
    • Docker image manifest V2, schema 2 (Docker version 1.10 and newer)
    • Open Container Initiative (OCI) specifications (v1.0.0 and up)
  • Storage in Lambda:
    • On create or update, Lambda caches the image to speed up the cold start of functions during execution. Cold starts occur on an initial request to Lambda, which can lead to longer startup times. The first request maintains an instance of the function for only a short time period. If Lambda has not been called during that period, the next invocation creates a new instance.
  • Fine-grained role policies are highly recommended for security purposes.
  • Container images can use the Lambda Extensions API to integrate monitoring, security, and other tools with the Lambda execution environment.

Conclusion

We were able to take this synchronous service, previously deployed on instance-based hosts, and redesign it to run asynchronously on AWS Lambda.

By using the new support for container-based images in Lambda and converting our workload into an asynchronous event-based architecture, we were able to overcome these challenges:

  • Performance and security. With batch requests, you can scale asynchronous workloads and handle failure records using SQS Dead Letter Queues and Lambda destinations. Using Lambda to integrate with other services (such as EventBridge and SQS) and using Lambda roles simplifies maintaining a granular permission structure. When Lambda uses an Amazon SQS queue as an event source, it can scale up to 60 more instances per minute, with a maximum of 1,000 concurrent invocations.
  • Cost optimization. Compute resources are a critical component of any application architecture. Overprovisioning computing resources and operating idle resources can lead to higher costs. Because Lambda is serverless, you only incur costs when you invoke a function, based on the resources allocated for each request.

Create a custom Amazon S3 Storage Lens metrics dashboard using Amazon QuickSight

Post Syndicated from Jignesh Gohel original https://aws.amazon.com/blogs/big-data/create-amazon-s3-storage-lens-metrics-dashboard-amazon-quicksight/

Companies use Amazon Simple Storage Service (Amazon S3) for its flexibility, durability, scalability, and ability to perform many things besides storing data. This has led to an exponential rise in the usage of S3 buckets across numerous AWS Regions, across tens or even hundreds of AWS accounts. To optimize costs and analyze security posture, Amazon S3 Storage Lens provides a single view of object storage usage and activity across your entire Amazon S3 storage. S3 Storage Lens includes an embedded dashboard to understand, analyze, and optimize storage with over 29 usage and activity metrics, aggregated for your entire organization, with drill-downs for specific accounts, Regions, buckets, or prefixes. In addition to being accessible in a dashboard on the Amazon S3 console, the raw data can also be scheduled for export to an S3 bucket.

For most customers, the S3 Storage Lens dashboard will cover all your needs. However, you may require specialized views of your S3 Storage Lens metrics, including combining data across multiple AWS accounts, or with external data sources. For such cases, you can use Amazon QuickSight, which is a scalable, serverless, embeddable, machine learning (ML)-powered business intelligence (BI) service built for the cloud. QuickSight lets you easily create and publish interactive BI dashboards that include ML-powered insights.

In this post, you learn how to use QuickSight to create simple customized dashboards to visualize S3 Storage Lens metrics. Specifically, this solution demonstrates two customization options:

  • Combining S3 Storage Lens metrics with external sources and being able to filter and visualize the metrics based on one or multiple accounts
  • Restricting users to view Amazon S3 metrics data only for specific accounts

You can further customize these dashboards based on your needs.

Solution architecture

The following diagram shows the high-level architecture of this solution. In addition to S3 Storage Lens and QuickSight, we use other AWS Serverless services like AWS Glue and Amazon Athena.

Solution Architecture for Amazon S3 Storage Lens custom metrics

The data flow includes the following steps:

  1. S3 Storage Lens collects the S3 metrics and exports them daily to a user-defined S3 bucket. Note that first we need to activate S3 Storage Lens from the Amazon S3 console and configure it to export the file either in CSV or Apache Parquet format.
  2. An AWS Glue crawler scans the data from the S3 bucket and populates the AWS Glue Data Catalog with tables. It automatically infers schema, format, and data types from the S3 bucket.
  3. You can schedule the crawler to run at regular intervals to keep metadata, table definitions, and schemas in sync with data in the S3 bucket. It automatically detects new partitions in Amazon S3 and adds the partition’s metadata to the AWS Glue table.
  4. Athena performs the following actions:
    • Uses the table populated by the crawler in Data Catalog to fetch the schema.
    • Queries and analyzes the data in Amazon S3 directly using standard SQL.
  5. QuickSight performs the following actions:
    • Uses the Athena connector to import the Amazon S3 metrics data.
    • Fetches the external data from a custom CSV file.

To demonstrate this, we have a sample CSV file that contains the mapping of AWS account numbers to team names owning these accounts. QuickSight combines these datasets using the data source join feature.

  6. When the combined data is available in QuickSight, users can create custom analysis and dashboards, apply appropriate QuickSight permissions, and share dashboards with other users.

At a high level, this solution requires you to complete the following steps:

  1. Enable S3 Storage Lens in your organization’s payer account or designate a member account. For instructions to have a member account as a delegated administrator, see Enabling a delegated administrator account for Amazon S3 Storage Lens.
  2. Set up an AWS Glue crawler, which populates the Data Catalog to query S3 Storage Lens data using Athena.
  3. Use QuickSight to import data (using the Athena connector) and create custom visualizations and dashboards that can be shared across multiple QuickSight users or groups.

Enable and configure the S3 Storage Lens dashboard

S3 Storage Lens includes an interactive dashboard available on the Amazon S3 console. It shows organization-wide visibility into object storage usage, activity trends, and makes actionable recommendations to improve cost-efficiency and apply data protection best practices. First you need to activate S3 Storage Lens via the Amazon S3 console. After it’s enabled, you can access an interactive dashboard containing preconfigured views to visualize storage usage and activity trends, with contextual recommendations. Most importantly, it also provides the ability to export metrics in CSV or Parquet format to an S3 bucket of your choice for further use. We use this export metrics feature in our solution. The following steps provide details on how you can enable this feature in your account.

  1. On the Amazon S3 console, under Storage Lens in the navigation pane, choose Dashboards.
  2. Choose Create dashboard.

Create S3 Storage Dashboard

  3. Provide the appropriate details on the Create dashboard page.
    • Make sure to select Include all accounts in your organization, Include Regions, and Include all Regions.

S3 Storage Lens Dashboard Configure

S3 Storage Lens has two tiers: Free metrics, which is free of charge, automatically available for all Amazon S3 customers, and contains 15 usage-related metrics; and Advanced metrics and recommendations, which has an additional charge, but includes all 29 usage and activity metrics with 15-month data retention, and contextual recommendations. For this solution, we select Free metrics. If you need additional metrics, you may select Advanced metrics.

  4. For Metrics export, select Enable.
  5. For the output format, select Apache Parquet.
  6. For Destination bucket, select This account.
  7. For Destination, enter your S3 bucket path.

S3 Storage Lens Metrics Export Configuration

We highly recommend following security best practices for the S3 bucket you use, along with server-side encryption available with export. You can use an Amazon S3 key (SSE-S3) or AWS Key Management Service key (SSE-KMS) as encryption key types.

  8. Choose Create dashboard.

The data population process can take up to 48 hours. Proceed to the next steps only after the dashboard is available.

Set up the AWS Glue crawler

AWS Glue is a serverless, fully managed extract, transform, and load (ETL) service that makes it simple and cost-effective to categorize your data, clean it, enrich it, and move it reliably between various data stores and data streams. AWS Glue consists of a central metadata repository known as the AWS Glue Data Catalog, an ETL engine, and a flexible scheduler that handles dependency resolution, job monitoring, and retries. We can use AWS Glue to discover data, transform it, and make it available for search and querying. The AWS Glue Data Catalog is an index to the location, schema, and runtime metrics of your data. Athena uses this metadata definition to query data available in Amazon S3 using simple SQL statements.

The AWS Glue crawler populates the Data Catalog with tables from various sources, including Amazon S3. When the crawler runs, it classifies data to determine the format, schema, and associated properties of the raw data, performs grouping of data into tables or partitions, and writes metadata to the Data Catalog. You can configure the crawler to run at specific intervals to make sure the Data Catalog is in sync with the underlying source data.

For our solution, we use these services to catalog and query exported S3 Storage Lens metrics data. First, we create the crawler via the AWS Glue console. For the purpose of this example, we provide an AWS CloudFormation template that creates a stack with the required AWS Glue resources in your AWS account.

When you create your stack with the CloudFormation template, provide the following information:

  • AWS Glue database name
  • AWS Glue crawler name
  • S3 URL path pointing to the reports folder where S3 Storage Lens has exported metrics data. For example, s3://[Name of the bucket]/StorageLens/o-lcpjprs6wq/s3-storage-lense-parquet-v1/V_1/reports/.

After the stack is complete, navigate to the AWS Glue console and confirm that a new crawler job is listed on the Crawlers page. When the crawler runs for the first time, it creates the table reports in the Data Catalog. The Data Catalog may need to be periodically refreshed, so this job is configured to run every day at midnight to sync the data. You can change this configuration to your desired schedule.
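
For reference, a minimal AWS::Glue::Crawler resource similar to the one this template creates might look like the following sketch (the names, role ARN, and S3 path are placeholders rather than the values used by the actual template):

  StorageLensCrawler:
    Type: AWS::Glue::Crawler
    Properties:
      Name: s3-storage-lens-crawler
      Role: arn:aws:iam::111122223333:role/storage-lens-crawler-role
      DatabaseName: s3_storage_lens_db
      Targets:
        S3Targets:
          - Path: s3://DOC-EXAMPLE-BUCKET/StorageLens/reports/
      Schedule:
        ScheduleExpression: cron(0 0 * * ? *) # run daily at midnight UTC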

After the crawler job runs, we can confirm that the data is accessible using the following query in Athena (make sure to run this query in the database provided in the CloudFormation template):

select * from reports limit 10

Running this query should return results similar to the following screenshot.

Query Results
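
Building on this, and assuming the exported schema exposes metric_name and metric_value columns alongside the aws_account_number and report_date columns used later in this post, a query like the following sketch could summarize storage per account for a given day:

select aws_account_number,
       sum(cast(metric_value as bigint)) as total_storage_bytes
from reports
where metric_name = 'StorageBytes'
  and report_date = '2021-08-01'
group by aws_account_number
order by total_storage_bytes desc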

Create a QuickSight dashboard

When the data is available to access using Athena, we can use QuickSight to create customized analytics and publish dashboards across multiple users. This process involves creating a new QuickSight dataset, creating the analysis using this dataset, creating the dashboard, and configuring user permissions and security.

To get started, you must be signed in to QuickSight using the same payer account. If you’re signing into QuickSight for the first time, you’re prompted to complete the initial signup process (for example, choosing QuickSight Enterprise Edition). You’re also required to provide QuickSight access to your S3 bucket and Athena. For instructions on adding permissions, see Insufficient Permissions When Using Athena with Amazon QuickSight.

  1. In the QuickSight navigation pane, choose Datasets.
  2. Choose New dataset and select Athena.

QuickSight Create Dataset

  1. For Data source name, enter a name.
  2. Choose Create data source.

QuickSight create Athena Dataset

  1. For Catalog, choose AwsDataCatalog.
  2. For Database, choose the AWS Glue database that contains the table for S3 Storage Lens.
  3. For Tables, select your table (for this post, reports).

QuickSight table selection

  1. Choose Edit dataset and choose the query mode SPICE.
  2. Change the format of report_date and dt to Date.
  3. Choose Save.

We can use the cross data source join feature in QuickSight to connect external data sources to the S3 Storage Lens dataset. For this example, let’s say we want to visualize the number of S3 buckets mapped to the internal teams. This data is external to S3 Storage Lens and stored in a CSV file, which contains the mapping between the account numbers and internal team names.

Account to Team Mapping

  1. To import this data into our existing dataset, choose Add data.

QuickSight add external data

  1. Choose Upload a file to import the CSV file to the dataset.

QuickSight Upload External File

We’re redirected to the join configuration screen. From here, we can configure the join type and join clauses to connect these two data sources. For more information about the cross data source join functionality, see Joining across data sources on Amazon QuickSight.

  1. For this example, we need to perform the left join on columns aws_account_number (from the reports table) and Account (from the Account-to-Team-mapping table). This left join returns all records from the reports table and matching records from Account-to-Team-mapping.
  2. Choose Apply after selecting this configuration.

QuickSight DataSet Join

  1. Choose Save & visualize.

From here, you can create various analyses and visualizations on the imported datasets. For instructions on creating visualizations, see Creating an Amazon QuickSight Visual. We provide a sample template you can use to get the basic dashboard. This dashboard provides metrics for total Amazon S3 storage size, object count, S3 bucket by internal team, and more. It also allows authorized users to filter the metrics based on accounts and report dates. This is a simple report that can be further customized based on your needs.

Quicksight Final Dashboard

S3 Storage Lens IAM security policies don’t apply to data imported into QuickSight. So before you share this dashboard with anyone, you might want to restrict access according to the security requirements and business role of each user. For a comprehensive set of security features, see AWS Security in Amazon QuickSight. For implementation examples, see Applying row-level and column-level security on Amazon QuickSight dashboards. In our example, instead of all users having access to view S3 Storage Lens data for all accounts, you might want to restrict user access to only specific accounts.

QuickSight provides a feature called row-level security that can restrict user access to only a subset of table rows (or records). You can base the selection of these subsets of rows on filter conditions defined on specific columns.

For our current example, we want to allow user access to view the Amazon S3 metrics dashboard only for a few accounts. For this, we can use the column aws_account_number as filter criteria with account number values. We can implement this by creating a CSV file with columns named UserName and aws_account_number, and adding the rows for users and a list of account numbers (comma-separated). In the following example file, we have added a sample value for the user awslabs-qs-1 with a specific account. This means that user awslabs-qs-1 can only see the rows (or records) that match with the corresponding aws_account_number values specified in the permission CSV.

QuickSight Permissions file
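
As plain CSV, a hypothetical permissions file for two users might look like the following (the account numbers are placeholders; a user mapped to multiple accounts gets a quoted, comma-separated list):

UserName,aws_account_number
awslabs-qs-1,111122223333
awslabs-qs-2,"111122223333,444455556666"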

For instructions on applying a permission rule file, see Using Row-Level Security (RLS) to Restrict Access to a Dataset.

You can further customize this QuickSight analysis to produce additional visualizations, apply additional permissions, and publish it to enterprise users and groups with various levels of security.

Conclusion

Combining S3 Storage Lens metrics with other custom data enables you to discover anomalies and identify cost efficiencies across accounts. In this post, we used serverless components to build a workflow to use this data for real-time visualization. You can use this workflow to scale up and design an enterprise-level solution with a multi-account strategy and control fine-grained access to its data using the QuickSight row-level security feature.


About the Authors

Jignesh Gohel is a Technical Account Manager at AWS. In this role, he provides advocacy and strategic technical guidance to help plan and build solutions using best practices, and proactively keep customers’ AWS environments operationally healthy. He is passionate about building modular and scalable enterprise systems on AWS using serverless technologies. Besides work, Jignesh enjoys spending time with family and friends, traveling and exploring the latest technology trends.

 

Suman Koduri is a Global Category Lead for the Data & Analytics category in AWS Marketplace. He is focused on business development activities to further expand the presence and success of Data & Analytics ISVs in AWS Marketplace. In this role, he leads the scaling and evolution of new and existing ISVs, as well as field enablement and strategic customer advisement for the same. In his spare time, he loves running half marathons and riding his motorcycle.

Authenticate AWS Client VPN users with AWS Single Sign-On

Post Syndicated from Sylvia Qi original https://aws.amazon.com/blogs/security/authenticate-aws-client-vpn-users-with-aws-single-sign-on/

AWS Client VPN is a managed client-based VPN service that enables users to use an OpenVPN-based client to securely access their resources in Amazon Web Services (AWS) and in their on-premises network from any location. In this blog post, we show you how you can integrate Client VPN with your existing AWS Single Sign-On (AWS SSO) via a custom SAML 2.0 application to authenticate and authorize your Client VPN connections and traffic.

Maintaining a separate set of credentials to authenticate users and authorize access for each resource is not only tedious, it’s not scalable. A common way to solve this challenge is to use a central identity store such as AWS SSO, which functions as your identity provider (IdP). You can then use Security Assertion Markup Language 2.0 (SAML 2.0) to integrate AWS SSO with each of your resources or applications, also known as service providers (SPs). The IdP authenticates users and passes their identity and security information to the SP via SAML. With SAML, you can enable a single sign-on experience for your users across many SAML-enabled applications and services. Users authenticate with the IdP once using a single set of credentials, and then have access to multiple applications and services without additional sign-ins.

Client VPN supports identity federation with SAML 2.0 for Client VPN endpoints. Deploying custom SAML applications can present some challenges, specifically around the mapping of attributes between what the SP expects to receive and what the IdP can provide. We’ve taken the guesswork out of the process and show you the exact mappings needed for the Client VPN to AWS SSO integration. The integration lets you use AWS SSO groups to not only grant access to create a Client VPN connection, but also to allow access to specific network ranges based upon group membership. We walk you through setting up all of the components required to implement the authentication workflow described in Figure 1. This consists of creating the custom SAML applications and tying them into AWS Identity and Access Management (IAM), creating and configuring the Client VPN endpoint, creating a Client VPN connection with an AWS SSO user, and testing your connectivity.
 

Figure 1: Authentication workflow

The steps illustrated in Figure 1 are:

  1. The user opens the AWS-provided VPN client on their device and initiates a connection to the Client VPN endpoint.
  2. The Client VPN endpoint sends an IdP URL and authentication request back to the client, based on the information that was provided in the IAM SAML provider.
  3. The AWS provided VPN client opens a new browser window on the user’s device. The browser makes a request to the IdP and displays a sign-in page. This is the same sign-in experience as the AWS SSO user portal, as the IdP URL points to a custom SAML application created within AWS SSO.
  4. The user enters their credentials on the sign-in page, and the IdP sends a signed SAML assertion back to the client in the form of an HTTP POST to the AWS provided VPN client.
  5. The SAML assertion is passed from the AWS provided VPN client to the Client VPN endpoint.
  6. The endpoint validates the assertion and either allows or denies access to the user.

Prerequisites

Here are the requirements to complete the VPN and SSO setup:

  • AWS SSO is configured to use the internal AWS SSO identity store. Refer to the AWS Single Sign-On Getting Started guide for help configuring AWS SSO. AWS SSO can exist in a different AWS account than the account where you deploy Client VPN endpoints. The steps outlined in this blog post are specific to the internal AWS SSO identity store, however they could be adapted to support other identity stores that support SAML 2.0.
  • Two AWS SSO users and two AWS SSO groups for testing. Each user should be a member of only one of the SSO groups. The purpose of this configuration is to demonstrate how access can be allowed or denied based upon group membership.
  • An Amazon Virtual Private Cloud (Amazon VPC) with an Amazon Elastic Compute Cloud (Amazon EC2) instance for connectivity testing.
  • An x.509 certificate imported into AWS Certificate Manager (ACM). You can generate a self-signed certificate for this walkthrough, however you should review the prerequisites for importing certificates into ACM. This certificate will be used for encrypted communication between the client VPN software and the client VPN endpoint.
  • Administrative access to your AWS environment, or at least sufficient access to create AWS SSO applications, ACM certificates, EC2 Instances, and Client VPN endpoints.
  • A client device running Windows or macOS with the latest version of the Client VPN software installed. You can download it from the AWS Client VPN download page.

Solution walkthrough

For this solution, you’ll complete the following steps:

  1. Establish trust with your IdP
  2. Create and configure Client VPN SAML applications in AWS SSO.
  3. Integrate the Client VPN SAML applications with IAM.
  4. Create and configure the Client VPN endpoint.
  5. Test the solution.
  6. Clean up the test environment.

Establish trust with your IdP

In this walkthrough, Client VPN is the SAML SP and AWS SSO is the SAML IdP. One of the key steps to deploying this solution is to establish trust between the SP and IdP. This one-time configuration is done by creating custom SAML applications within AWS SSO and exporting application-specific metadata information from the applications. This metadata is then uploaded—in the form of IAM IdPs—into your AWS account where the Client VPN endpoint is created. IAM IdPs let you manage your user identities in a centralized identity store, such as AWS SSO, and grant those user identities permissions to AWS resources within your account. For organizations with multiple AWS accounts, the use of IAM IdPs resolves the management, scalability, and security issues associated with creating IAM users directly within each account.

Create and configure the Client VPN SAML applications in AWS SSO

Create two custom SAML 2.0 applications in AWS SSO. One will be the IdP for the Client VPN software; the other will be a self-service portal that allows users to download their Client VPN software and client configuration file.

To create the VPN client SAML application:

  1. In the AWS SSO console, select Applications from the left pane and select Add a new application.
  2. Select Add a custom SAML 2.0 application to use as the IdP for the Client VPN software.
     
    Figure 2: Add a SAML application

  3. In the Details section, set Display name to VPN Client.
  4. In the Application Metadata section, select If you don’t have a metadata file, you can manually type your metadata values and enter the following values:
    • Application ACS URL: http://127.0.0.1:35001
    • Application SAML audience: urn:amazon:webservices:clientvpn
  5. Accept the default values for all other fields.
  6. Choose Save Changes.
  7. Select the Attribute mappings tab and configure the mappings as shown in the table and Figure 3 below.

    Note: For production environments, you should grant access to these applications via an AWS SSO group instead of individual users as shown in this walkthrough.

    User attribute in the application | Maps to this string value or user attribute in AWS SSO | Format
    Subject                           | ${user:email}                                          | emailAddress
    Name                              | ${user:email}                                          | unspecified
    FirstName                         | ${user:givenName}                                      | unspecified
    LastName                          | ${user:familyName}                                     | unspecified
    memberOf                          | ${user:groups}                                         | unspecified
    Figure 3: VPN client attribute mappings

  8. On the Assign users tab, add your two test user accounts.
  9. On the application configuration page, choose the download link for AWS SSO SAML metadata. Save the file to use in a later step.

To create the VPN client self-service SAML application

  1. In the AWS SSO console, select Applications from the left pane and select Add a new application.
  2. Select Add a custom SAML 2.0 application to use as the self-service portal for the Client VPN software.
     
    Figure 4: Add a SAML application

  3. In the Details section, set Display name to VPN Client Self Service.
  4. In the Application Metadata section, select If you don’t have a metadata file, you can manually type your metadata values and enter the following values:
    • Application ACS URL: https://self-service.clientvpn.amazonaws.com/api/auth/sso/saml
    • Application SAML audience: urn:amazon:webservices:clientvpn
  5. Accept the default values for all other fields.
  6. Choose Save Changes.
  7. Choose the Attribute mappings tab and configure the mappings as shown in the following table and in Figure 5.

    Note: For production environments you should grant access to these applications via an AWS SSO group instead of individual users as shown in this walkthrough. For the purposes of this walkthrough, you grant individual users access to the SAML applications but grant network access via group membership. This is done to allow easier demonstration of the ability to grant or deny network specific access via groups when testing the solution.

    User attribute in the application | Maps to this string value or user attribute in AWS SSO | Format
    Subject                           | ${user:email}                                          | emailAddress
    Name                              | ${user:email}                                          | unspecified
    FirstName                         | ${user:givenName}                                      | unspecified
    LastName                          | ${user:familyName}                                     | unspecified
    memberOf                          | ${user:groups}                                         | unspecified
    Figure 5: VPN Client self-service attribute mappings

  8. On the Assign users tab, add your two test user accounts.
  9. On the application’s Configuration page, choose the download link for AWS SSO SAML metadata. Save the file to use in a later step.

Integrate the Client VPN SAML applications with IAM

Client VPN requires a unique IdP definition in IAM. You must set up the IdP in the same AWS account where the Client VPN endpoint will be created.

To create the IAM IdP:

  1. In the IAM console, select Identity providers and Add provider. Name the provider aws-client-vpn and upload the metadata document that you downloaded from the VPN Client SAML application.
  2. Add a second provider, name the provider aws-client-vpn-self-service and upload the metadata document that you downloaded from the VPN Client Self Service SAML application.
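
If you prefer the AWS CLI, the equivalent calls look roughly like the following (the metadata file names are placeholders for the two files you downloaded earlier):

aws iam create-saml-provider --name aws-client-vpn --saml-metadata-document file://vpn-client-metadata.xml
aws iam create-saml-provider --name aws-client-vpn-self-service --saml-metadata-document file://vpn-client-self-service-metadata.xml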

Create and configure the Client VPN endpoint

All Client VPN sessions end at the Client VPN endpoint. You configure the Client VPN endpoint to manage and control all Client VPN sessions. In the following steps, you create a Client VPN endpoint and configure it to use the newly added IAM IdPs. You then associate the endpoint with a VPC and configure authorization rules to allow traffic into the VPC, then set up the Client VPN self-service portal.

To create the Client VPN endpoint

  1. Open the AWS VPC console and select Client VPN Endpoints and then select Create Client VPN endpoint.
  2. Enter a Name Tag and Description for the endpoint.
  3. Enter 172.16.0.0/22 for the Client IPv4 CIDR. This is the IP range that will be allocated to your VPN clients. It shouldn’t overlap the CIDR of your AWS VPCs or of the network that your client device is connected to and must be at least a /22 bitmask. You can adjust this value as needed for your specific network requirements. The Client IPv4 CIDR value can only be set during endpoint creation.

    Note: For production environments you should review the Client VPN documentation for scaling considerations before you create the endpoint.

  4. In the Server certificate ARN drop down menu, select the ACM certificate that you created for your VPN clients.
  5. Set the Authentication Options to Use user-based authentication with Federated authentication. Select the aws-client-vpn IAM IdP for the SAML provider ARN, and select the aws-client-vpn-self-service IAM IdP as the Self-service SAML provider ARN.
     
    Figure 6: Authentication settings

  6. For this walkthrough, set Connection Logging to No. Connection logging is a feature of Client VPN that enables you to capture connection logs for your Client VPN endpoint. Those logs are published to an Amazon CloudWatch Logs log group in your account. For production environments or for troubleshooting purposes, you can enable connection logging while or after you create the endpoint.
  7. Select the VPC ID to associate with the endpoint. This should be the VPC with an EC2 instance deployed that can be used to test connectivity. You can select an existing security group, or create a new one for the VPN endpoint. The only requirement for this walkthrough is that it has outbound rules that allow access to your test EC2 instance. For additional flexibility, you can create and apply multiple security groups that use different rulesets to the endpoint to provide fine-grained control of which resources can be accessed within the VPC.
  8. Select Enable self-service portal and—if desired—select Enable split-tunnel. Split tunneling is designed to ensure that only client traffic destined for the IP ranges configured on the Client VPN endpoint is routed to your VPC. By default, all traffic, including internet bound traffic, is routed through your VPC.
  9. Choose Create Client VPN endpoint.
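
The preceding steps use the console; a roughly equivalent AWS CLI call is sketched below (the certificate ARN, SAML provider ARNs, VPC ID, and security group ID are placeholders):

aws ec2 create-client-vpn-endpoint \
    --client-cidr-block 172.16.0.0/22 \
    --server-certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/example \
    --authentication-options 'Type=federated-authentication,FederatedAuthentication={SAMLProviderArn=arn:aws:iam::111122223333:saml-provider/aws-client-vpn,SelfServiceSAMLProviderArn=arn:aws:iam::111122223333:saml-provider/aws-client-vpn-self-service}' \
    --connection-log-options Enabled=false \
    --vpc-id vpc-0abc1234 \
    --security-group-ids sg-0abc1234 \
    --self-service-portal enabled \
    --split-tunnel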

To configure the Client VPN endpoint

  1. On the Client VPN endpoint Associations tab, select Associate. Select the same VPC that you chose when you set up the endpoint and select a subnet to associate. This creates an elastic network interface (ENI) in the selected subnet that will be the ingress point from VPN clients into your AWS VPC. For production environments, you should select at least two subnets based upon your redundancy requirements.
  2. Authorizing VPN ingress traffic from your users can be done either globally for all users or via group membership. When granting access via an AWS SSO group, you must use the group ID of the AWS SSO group, not the friendly name of the group. After selecting a group in the AWS SSO management console, you can find group ID in the Details section. You can also obtain the group ID by using AWS Command Line Interface (AWS CLI) to issue the following command, replacing the <AWSRegion>, <Identity Store ID>, and <AWS SSO Group Display Name> variables with your information. This command should be issued within the same AWS account where AWS SSO is configured. The identity store ID can be found in the AWS SSO console under Settings.
    aws identitystore list-groups --region <AWSRegion> --identity-store-id <Identity Store ID> --filter AttributePath=DisplayName,AttributeValue=<AWS SSO Group Display Name>
    

  3. Create an ingress authorization rule by selecting Authorize Ingress on the Authorization tab. For Destination network to enable, enter 0.0.0.0/0; for Grant access to, select Allow access to users in a specific access group and enter the access group ID that you discovered in the previous step (a CLI sketch for this step follows this list). This should be the group that contains one of your test user accounts. For production environments, you should follow the principle of least privilege and narrow the destination network range to only what is required. Ingress authorization rules can be used to restrict network access to specific network ranges based upon IdP group membership. You can use a client connection handler to enforce additional security policies on Client VPN connections. Refer to the Client VPN documentation for additional details.
    Figure 7: Add authorization rule

  4. From the Client VPN Endpoint Summary tab, copy the Self-service portal URL to use in the next step.
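
The association and ingress authorization steps can also be performed with the AWS CLI, as sketched below (the endpoint ID, subnet ID, and access group ID are placeholders):

aws ec2 associate-client-vpn-target-network \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --subnet-id subnet-0abc1234

aws ec2 authorize-client-vpn-ingress \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --target-network-cidr 0.0.0.0/0 \
    --access-group-id <AWS SSO group ID>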

To set up the Client VPN self-service portal

  1. Open the Client VPN self-service SAML application in the AWS SSO management console to edit the configuration.
  2. In the Application start URL textbox, paste the Client VPN endpoint self-service portal URL that you copied in the previous section. This ties the Client VPN self-service SAML application to the self-service portal URL for the specific Client VPN endpoint that you created, allowing users to download their AWS VPN Client configuration file.
     
    Figure 8: Client VPN self-service portal

Test the solution

During the testing phase, you download the VPN client configuration file and configure the VPN client application. You then create a Client VPN connection and validate that you have access to your target VPC. You also test the Client VPN connection with multiple user accounts in order to confirm that the ingress authorization rules are functioning as expected.

To test the Client VPN solution:

  1. Open an internet browser and sign in to your AWS SSO user portal as a user who has access to the VPN Client SAML applications and is a member of the AWS SSO group defined in the VPN endpoint ingress authorization rule. You should see two new SAML applications. Select the VPN client self-service application.
  2. In the VPN Client Self Service portal, you can download the AWS VPN Client software if you haven’t already done so. Select Download client configuration and save the file on your local device. Close the browser window that you used to sign in to the AWS SSO user portal.
  3. Open the AWS VPN Client application and configure a new profile, selecting the client configuration file that you downloaded in the previous step. Once your client profile has been created, select Connect.
     
    Figure 9: VPN Client ready to connect

  4. A new browser window should open automatically to an AWS SSO sign-in page. Enter the credentials of your test user who is a member of the AWS SSO group defined in your ingress authorization rule.
  5. Upon a successful connection through the VPN client, you can make a management connection (RDP, SSH, HTTP, or other) to one of the EC2 instances within your VPC. Connect to the private IPv4 address (RFC 1918) of your EC2 instance; you should not attempt to connect to your EC2 instance through an Elastic IP address. You might need to adjust the security group rules on your EC2 instance to allow traffic from the subnets that you selected when you created the VPN endpoint associations.
  6. Once you have a successful connection to your test EC2 instance and you know that your Client VPN connectivity is working, you should also validate that access is denied for users who aren’t a member of the group specified in your ingress authorization rule.
    1. Disconnect from your Client VPN connection and close all browser windows.
    2. Depending upon your internet browser and its configuration, you might need to delete any cookies associated with your AWS SSO user portal in order to sign in as a different AWS SSO user.
    3. Initiate a new Client VPN connection and sign in as the test user account that is not a member of the AWS SSO group specified in the ingress authorization rule.
    4. You should be able to successfully establish the Client VPN connection, but not to access your test EC2 instance. This validates that the ingress authorization rule isn’t allowing Client VPN traffic from users who aren’t a member of the AWS SSO group to enter your VPC.

Troubleshooting

If you have any issues completing the walkthrough and testing, here are some things that you can check:

  • In the AWS VPC management console, review the Connections tab to verify that you see a connection from your test user account and that it’s active.
  • Confirm that your test user account is in the group that was defined in your ingress authorization rule.
  • Confirm that the access group ID specified in the ingress authorization rule is for the AWS SSO group that your test user is a member of.
  • Confirm that the AWS SSO group still exists and hasn’t been deleted. You might encounter an error message similar to the one shown in Figure 10 if you attempt a Client VPN connection but the AWS SSO group no longer exists.
     
    Figure 10: Error message

  • If you receive a credential error when attempting to sign in to the AWS SSO browser window that’s launched by the VPN Client application, you might have an issue with the ACM certificate that you’re using. There can be authentication related issues if the root CA certificates aren’t correct or if any part of the certificate chain is missing.
  • Validate your EC2 instance security group rules and VPC route table configuration. From a routing perspective, your test EC2 instance must be accessible from the subnet that you selected when you created the Client VPN endpoint association.
  • If you want to see the SAML assertion that’s being sent to the AWS VPN client application, sign in to the AWS SSO user portal and hold down the Shift key while selecting the VPN client SAML application. A new browser tab will open with the SAML assertion visible. The SAML assertion contains the access group IDs of all groups that your test user is a member of. You can use this information to validate that the correct group memberships and group IDs are defined in your ingress authorization rules.
  • Make sure that TCP port 35001 is available on your client device. It shouldn’t be used by any other process or blocked by a firewall. Port 35001 only needs to be open on your localhost interface. The SAML assertion is sent to localhost on port 35001 as an HTTP POST from the browser window opened by the AWS VPN client application after a successful sign-in.
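
On macOS or Linux, one quick way to confirm that nothing else is listening on that port before you connect is the following command; if it returns no output, the port is free:

sudo lsof -nP -iTCP:35001 -sTCP:LISTEN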

Clean up the test environment

To avoid charges for the use of Amazon EC2, Client VPN, AWS SSO, or ACM services, remove any components that were created as part of this walkthrough. Components that can be deleted, if applicable, are:

  1. The Client VPN endpoint. You must first remove all associations that were created for the endpoint.
  2. The EC2 instance and VPC.
  3. The test IdPs from IAM.
  4. The VPN client custom SAML applications from AWS SSO.
  5. AWS SSO users and groups.
  6. The ACM certificate.

Conclusion

In this blog post, we’ve shown how you can integrate Client VPN and AWS SSO to provide a familiar and seamless VPN connection experience to your users. By adding the Client VPN self-service portal, you can reduce the effort needed to deploy the solution by allowing users to perform their own VPN client application installation and configuration. We demonstrated the creation of IdPs using AWS SSO custom applications and then showed you how to configure a Client VPN endpoint to use SAML-based federated authentication and associate it with the IdPs. Client VPN users can then use their centralized credentials to connect to the Client VPN endpoint and access specific network ranges based upon their group membership or further refined through a client connection handler.


Author

Drew Marumoto

Drew is a DevOps Consultant with AWS Professional Services. A long-time system administrator with a passion for automation and orchestration, he enjoys solving difficult problems for customers and helping them achieve their business goals.

Author

Sylvia Qi

Sylvia is a DevOps Consultant focusing on architecting and automating DevOps processes, helping customers through their DevOps transformation journey, and achieving their goals. In her spare time, she enjoys biking, swimming, painting, and photography.

Building a serverless GIF generator with AWS Lambda: Part 1

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-gif-generator-with-aws-lambda-part-1/

Many video streaming services show GIF animations in the frontend when users fast forward and rewind throughout a video. This helps customers see a preview and makes the user interface more intuitive.

Generating these GIF files is a compute-intensive operation that becomes more challenging to scale when there are many videos. Over a typical 2-hour movie with GIF previews every 30 seconds, you need 240 separate GIF animations to support this functionality.

In a server-based solution, you can use a library like FFmpeg to load the underlying MP4 file and create the GIF exports. However, this may be a serial operation that slows down for longer videos or when there are many different videos to process. A service providing thousands of videos to customers needs a solution where the encoding process keeps pace with the number of videos.

In this two-part blog post, I show how you can use a serverless approach to create a scalable GIF generation service. I explain how you can use parallelization in AWS Lambda-based workloads to reduce processing time significantly.

The example application uses the AWS Serverless Application Model (AWS SAM), enabling you to deploy the application more easily in your own AWS account. This walkthrough creates some resources covered in the AWS Free Tier but others incur cost. To set up the example, visit the GitHub repo and follow the instructions in the README.md file.

Overview

To show the video player functionality, visit the demo front end. This loads the assets for an example video that is already processed. In the example, there are GIF animations every 30 seconds and freeze frames for every second of video. Move the slider to load the frame and GIF animation at that point in the video:

Serverless GIF generator demo

  1. The demo site defaults to an existing video. After deploying the backend application, you can test your own videos here.
  2. Move the slider to select a second in the video playback. This simulates a typical playback bar used in video application frontends.
  3. This displays the frame at the chosen second, which is a JPG file created by the backend application.
  4. The GIF animation for the selected playback point is a separate GIF file, created by the backend application.

Comparing server-based and serverless solutions

The example application uses FFmpeg, an open source application to record and process video. FFmpeg creates the GIF animations and individual frames for each second of video. You can use FFmpeg directly from a terminal or in scripts and applications. In comparing the performance, I use the AWS re:Invent 2019 keynote video, which is almost 3 hours long.

To show the server-based approach, I use a Node.js application in the GitHub repo. This loops through the video length in 30-second increments, calling FFmpeg with the necessary command line parameters:

// Assumes FFmpeg is installed locally; execPromise and ffmpegPath are defined here for completeness
const { exec } = require('child_process')
const { promisify } = require('util')
const execPromise = promisify(exec)
const ffmpegPath = process.env.FFMPEG_PATH || 'ffmpeg' // path to a local FFmpeg binary

const main = async () => {
	const length = 10323
	const inputVideo = 'test.mp4'
	const ffTmp = './output'
	const snippetSize = 30
 	const baseFilename = inputVideo.split('.')[0]

	console.time('task')
	for (let start = 0; start < length; start += snippetSize) {
		const gifName = `${baseFilename}-${start}.gif`
		const end = start + snippetSize -1

		console.log('Now creating: ', gifName)
		// Generates gif in local tmp
		await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -y -i "${inputVideo}" -vf "fps=10,scale=240:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 ${ffTmp}/${gifName}`)
		await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${inputVideo}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)
	}
	console.timeEnd('task')
}

main()

Running this script from my local development machine, it takes almost 21 minutes to complete the operation and creates over 10,000 output files.

Script output

After deploying the example serverless application in the GitHub repo, I upload the video to the source Amazon S3 bucket using the AWS CLI:

aws s3 cp ./reinvent.mp4 s3://source-bucket-name --acl public-read

The completed S3 PutObject operation starts the GIF generation process. The Monitor tab in the Lambda console shows the following metrics for the function that processes the GIF files:

Lambda function metrics

  1. There are 345 invocations to process the source file into 30-second GIF animations.
  2. The average duration for each invocation is 4,311 ms. The longest is 9,021 ms.
  3. No errors occurred in processing the video.
  4. Lambda scaled up to 344 concurrent execution environments.

After approximately 10 seconds, the conversion is complete and there are over 10,000 objects in the application’s output S3 bucket:

Test results

The main reason that the task duration is reduced from nearly 21 minutes to around 10 seconds is parallelization. In the server-based approach, each 30-second GIF is processed sequentially. In the Lambda-based solution, all of the 30-second clips are generated in parallel, at around the same time:

Server-based vs Lambda-based
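
To put rough numbers on this: 345 invocations at an average duration of about 4.3 seconds represent close to 25 minutes of total compute, broadly in line with the serial runtime on my development machine. When those invocations run concurrently across 344 execution environments, the wall-clock time collapses to approximately the duration of the slowest single invocation, around 9 seconds.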

Solution architecture

The example application uses the following serverless architecture:

Solution architecture

  1. When the original MP4 video is put into the source S3 bucket, it invokes the first Lambda function.
  2. The Snippets Lambda function detects the length of the video. It determines the number of 30-second snippets and then puts events onto the default event bus. There is one event for each snippet.
  3. An Amazon EventBridge rule matches for events created by the first Lambda function. It invokes the second Lambda function.
  4. The Process MP4 Lambda function receives the event as a payload. It loads the original video using FFmpeg to generate the GIF and per-second frames.
  5. The resulting files are stored in the output S3 bucket.

The first Lambda function uses the following code to detect the length of the video and create events for EventBridge:

const createSnippets = async (record) => {
	// Get signed URL for source object
	const params = {
		Bucket: record.s3.bucket.name, 
		Key: record.s3.object.key, 
		Expires: 300
	}
	const url = s3.getSignedUrl('getObject', params)

	// Get length of source video
	const metadata = await ffProbe(url)
	const length = metadata.format.duration
	console.log('Length (seconds): ', length)

	// Build one event payload per snippet
	const items = []
	const snippetSize = parseInt(process.env.SnippetSize)

	for (let start = 0; start < length; start += snippetSize) {
		items.push({
			key: record.s3.object.key,
			start,
			end: (start + snippetSize - 1),
			length,
			tsCreated: Date.now()
		})
	}
	// Send events to EventBridge
	await writeBatch(items)
}
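
The createSnippets function receives a single S3 record. A minimal sketch of the Lambda handler that wraps it might look like the following (the handler name and structure here are assumptions, not the repo's exact code):

// Sketch only, not the repo's exact code: S3 invokes the handler with an event
// containing one or more Records, each describing an uploaded object.
exports.handler = async (event) => {
    for (const record of event.Records) {
        // Note: object keys in S3 notifications arrive URL-encoded, so the real
        // code may need to decode record.s3.object.key before using it.
        await createSnippets(record)
    }
}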

The eventbridge.js file contains a function that sends the event array to the default bus in EventBridge. It uses the putEvents method in the EventBridge JavaScript SDK to send events in batches of 10:

// Module-level setup, shown here for completeness: an EventBridge client from the
// AWS SDK and the batch size. A single putEvents call accepts at most 10 entries.
const AWS = require('aws-sdk')
const eventbridge = new AWS.EventBridge()
const BATCH_SIZE = 10

const writeBatch = async (items) => {

    console.log('writeBatch items: ', items.length)

    for (let i = 0; i < items.length; i += BATCH_SIZE ) {
        const tempArray = items.slice(i, i + BATCH_SIZE)

        // Map each item to an EventBridge entry
        const paramsArray = tempArray.map((item) => {
            return {
                DetailType: 'newVideoCreated',
                Source: 'custom.gifGenerator',
                Detail: JSON.stringify ({
                    ...item
                })
            }
        })

        // Create the params object for putEvents
        const params = {
            Entries: paramsArray
        }

        // Send this batch of events to the default event bus
        const result = await eventbridge.putEvents(params).promise()
        console.log('Result: ', result)
    }
}

The second Lambda function is invoked by an EventBridge rule, matching on the Source and DetailType values. Both the Lambda function and the rule are defined in the AWS SAM template:

  GifsFunction:
    Type: AWS::Serverless::Function 
    Properties:
      CodeUri: gifsFunction/
      Handler: app.handler
      Runtime: nodejs14.x
      Timeout: 30
      MemorySize: 4096
      Layers:
        - !Ref LayerARN
      Environment:
        Variables:
          GenerateFrames: !Ref GenerateFrames
          GifsBucketName: !Ref GifsBucketName
          SourceBucketName: !Ref SourceBucketName
      Policies:
        - S3ReadPolicy:
            BucketName: !Ref SourceBucketName
        - S3CrudPolicy:
            BucketName: !Ref GifsBucketName
      Events:
        Trigger:
          Type: EventBridgeRule 
          Properties:
            Pattern:        
              source:
                - custom.gifGenerator
              detail-type:
                - newVideoCreated        

This Lambda function receives an event payload specifying the start and end time for the video clip it should process. The function then calls the FFmpeg application to generate the output files. It stores these in the local /tmp storage before uploading to the output S3 bucket:

const processMP4 = async (event) => {
    // Get settings from the incoming event
    const originalMP4 = event.detail.key
    const start = event.detail.start
    const end = event.detail.end

    // Get signed URL for source object
    const params = {
        Bucket: process.env.SourceBucketName, 
        Key: originalMP4, 
        Expires: 300
    }
    const url = s3.getSignedUrl('getObject', params)
    console.log('processMP4: ', { url, originalMP4, start, end })

    const baseFilename = params.Key.split('.')[0]

    // Create GIF for this 30-second snippet
    console.log('Create GIF')
    const gifName = `${baseFilename}-${start}.gif`
    // Generates gif in local tmp
    await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -y -i "${url}" -vf "fps=10,scale=240:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" -loop 0 ${ffTmp}/${gifName}`)

    // Extract frames from MP4 (1 per second), if enabled
    if (process.env.GenerateFrames === 'true') {    
        console.log('Capturing frames')
        await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${url}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)
    }

    // Upload all generated files to the output bucket
    await uploadFiles(`${baseFilename}/`)

    // Cleanup temp files
    await tmpCleanup()
}
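
The uploadFiles and tmpCleanup helpers are part of the example repo and not shown here. As a rough sketch of what the upload step involves (the actual implementation and names may differ), the function reads the generated files from the temporary directory and puts each one into the output bucket under a prefix named after the source video:

// Hypothetical sketch of the upload helper, assuming the files were written to
// ffTmp (a directory under /tmp) and GifsBucketName names the output bucket.
const fs = require('fs').promises
const path = require('path')

const uploadFiles = async (prefix) => {
    const files = await fs.readdir(ffTmp)
    await Promise.all(files.map(async (file) => {
        const Body = await fs.readFile(path.join(ffTmp, file))
        // Store each GIF/JPG under a folder named after the source video
        await s3.putObject({
            Bucket: process.env.GifsBucketName,
            Key: `${prefix}${file}`,
            Body
        }).promise()
    }))
}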

Creating and using the FFmpeg Lambda layer

FFmpeg uses operating system-specific binaries that may differ between your development machine and the Lambda execution environment. The easiest way to test the code on a local machine and deploy to Lambda with the appropriate binaries is to use a Lambda layer.

As described in the example application’s README file, you can create the FFmpeg Lambda layer by deploying the ffmpeg-lambda-layer application in the AWS Serverless Application Repository. After deployment, the Layers menu in the Lambda console shows the new layer. Copy the version ARN and use this as a parameter in the AWS SAM deployment:

ffmpeg Lambda layer

On your local machine, download and install the FFmpeg binaries for your operating system. The package.json file for the Lambda functions uses two npm installer packages to help ensure the Node.js code uses the correct binaries when tested locally:

{
  "name": "gifs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "James Beswick",
  "license": "MIT-0",
  "dependencies": {
  },
  "devDependencies": {
    "@ffmpeg-installer/ffmpeg": "^1.0.20",
    "@ffprobe-installer/ffprobe": "^1.1.0",
    "aws-sdk": "^2.927.0"
  }
}

Any npm packages specified in the devDependencies section of package.json are not deployed to Lambda by AWS SAM. As a result, local testing uses local ffmpeg binaries and Lambda uses the previously deployed Lambda layer. This approach also helps to reduce the size of deployment uploads and can help make the testing and deployment cycle faster in development.
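
One way to keep a single code path working in both environments, sketched below under the assumption that the deployed layer exposes its binaries under /opt/bin (check your layer for the exact location), is to resolve the FFmpeg path at runtime:

// Sketch only: choose the FFmpeg binary depending on where the code is running.
// The /opt/bin/ffmpeg path is an assumption about the layer's layout.
const isLambda = !!process.env.AWS_LAMBDA_FUNCTION_NAME
const ffmpegPath = isLambda
    ? '/opt/bin/ffmpeg'                          // provided by the Lambda layer
    : require('@ffmpeg-installer/ffmpeg').path   // local devDependency binary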

Using FFmpeg from a Lambda function

FFmpeg is an application that’s generally used via a terminal. It takes command line parameters as options to determine input files, output locations, and the type of processing needed. To use terminal commands from Lambda, both functions use the asynchronous child_process module in Node.js. The execPromise function wraps this module in a Promise so the main function can use async/await syntax:

const { exec } = require('child_process')

const execPromise = async (command) => {
    console.log(command)
    return new Promise((resolve, reject) => {
        const ls = exec(command, function (error, stdout, stderr) {
          if (error) {
            console.error('Error: ', error)
            reject(error)
          }
          if (stdout) console.log('stdout: ', stdout)
          if (stderr) console.error('stderr: ' ,stderr)
        })
        
        ls.on('exit', (code) => {
          console.log('execPromise finished with code ', code)
          resolve()
        })
    })
}

As a result, you can then call FFmpeg by constructing a command line with option parameters and passing it to execPromise:

await execPromise(`${ffmpegPath} -loglevel error -ss ${start} -to ${end} -i "${url}" -vf fps=1 ${ffTmp}/${baseFilename}-${start}-frame-%d.jpg`)

Alternatively, you can also use the fluent-ffmpeg npm library, which exposes the command line options as methods. The example application uses this in the ffmpeg-promisify.js file:

const ffmpeg = require("fluent-ffmpeg")

ffmpeg(source)
    .noAudio()
    .size(`${IMG_WIDTH}x${IMG_HEIGHT}`)
    .setStartTime(startFormatted)
    .setDuration(snippetSize - 1)
    .output(outputFile)
    .on("end", async (err) => {
        // do work
    })
    .on("error", function (err) {
        console.error('FFMPEG error: ', err)
    })
    .run()

Deploying the application

In the GitHub repository, there are detailed deployment instructions for the example application. The repo contains separate directories for the demo frontend application, server-based script, and the two versions of backend service.

After deployment, you can test the application by uploading an MP4 video to the source S3 bucket. The output GIF and JPG files are written to the application’s destination S3 bucket. The files from each MP4 file are grouped in a folder in the bucket:

S3 bucket contents

Frontend application

The frontend application allows you to visualize the outputs of the backend application. There is also a hosted version of this application. This accepts custom parameters to load graphics resources from S3 buckets in your AWS account. Alternatively, you can run the frontend application locally.

To launch the frontend application:

  1. After cloning the repo, change to the frontend directory.
  2. Run npm install to install Vue.js and all the required npm modules from package.json.
  3. Run npm run serve to start the development server. After building the modules in the project, the terminal shows the local URL where the application is running:
    Terminal output
  4. Open a web browser and navigate to http://localhost:8080 to see the application:
    Localhost browser application

Conclusion

In part 1 of this blog post, I explain how a GIF generation service can support a front-end application for video streaming. I compare the performance of a server-based and serverless approach and show how parallelization can significantly improve processing time. I walk through the solution architecture used in the example application and show how you can use FFmpeg in Lambda functions.

Part 2 covers advanced topics around this implementation. It explains the scaling behavior, considers alternative approaches, and looks at the cost of using this service.

For more serverless learning resources, visit Serverless Land.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/867791/rss

Security updates have been issued by Debian (exiv2, grilo, gthumb, and redis), Fedora (krb5, nbdkit, and rubygem-addressable), Mageia (libass and opencontainers-runc), openSUSE (cacti, cacti-spine, go1.15, opera, qemu, and spectre-meltdown-checker), Red Hat (java-1.7.1-ibm, java-1.8.0-ibm, libsndfile, and libX11), SUSE (389-ds, qemu, and spectre-meltdown-checker), and Ubuntu (grilo).

CDK Corner – August 2021

Post Syndicated from Richard H Boyd original https://aws.amazon.com/blogs/devops/cdk-corner-august-2021/

We’re now well into the dog days of summer but that hasn’t slowed down the community one bit. In the past few months the team has delivered 3 big features that I think the community will love. The biggest new feature is the Construct Hub Developer Preview. Alex Pulver describes it as “a one-stop destination for finding, reusing, and sharing constructs”. The Construct Hub features constructs created by AWS, AWS Partner Network (APN) Partners, and the community. While the Construct Hub only includes TypeScript and Python constructs today, we will be adding support for the remaining jsii-supported languages in the coming months.

The next big release is the General Availability of CDK Pipelines. Instead of writing a new blog post for the General Availability launch, Rico updated the original announcement post to reflect updates made for GA. CDK Pipelines is a high-level construct library that makes it easy to set up a continuous deployment pipeline for your CDK-based applications. CDK Pipelines was announced last year and has seen several improvements, such as support for Docker registry credentials, cached builds, and getting started templates. To get started with your own CDK Pipelines, refer to the original launch blog post, which Rico has kindly updated for the GA announcement.

Since the launch of AWS CDK, customers have asked for a way to test their CDK constructs. The CDK assert library was previously only available for constructs written in TypeScript. With this new release, in which we have re-written the assert library to support jsii and renamed it the assertions library, customers can now test CDK constructs written in any supported language. The assertions library enables customers to test the generated AWS CloudFormation templates that a construct produces to ensure that the construct is designed and implemented properly. In June, Niranjan delivered an initial version of the assert library available in all supported languages. The RFC for “polyglot assert” describes the motivation for this feature and some examples of using it.
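
As a rough illustration of the idea (a sketch using Jest; the construct under test and its import path are hypothetical, and the assertions module is named @aws-cdk/assertions in CDK v1 and aws-cdk-lib/assertions in v2), a test can synthesize a stack and assert on the generated template:

// Hypothetical example: MyVideoBucket stands in for your own construct.
const { Template } = require('@aws-cdk/assertions')
const { Stack } = require('@aws-cdk/core')
const { MyVideoBucket } = require('../lib/my-video-bucket')

test('bucket is versioned', () => {
    const stack = new Stack()
    new MyVideoBucket(stack, 'TestBucket')

    // Synthesize the construct to CloudFormation and check the resource properties
    const template = Template.fromStack(stack)
    template.hasResourceProperties('AWS::S3::Bucket', {
        VersioningConfiguration: { Status: 'Enabled' }
    })
})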

Some key new constructs for AWS services include:

  - New L1 Constructs for AWS Location Services
  - New L2 Constructs for AWS CodeStar Connections
  - New L2 Constructs for Amazon CloudFront Functions
  - New L2 Constructs for AWS Service Catalog App Registry

This edition’s featured contribution comes from AWS Community Hero Philipp Garbe. Philipp added the ability to load Docker images from an existing tarball instead of rebuilding it from a Dockerfile.

[The Lost Bots] Episode 4: Deception Technology

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/08/30/the-lost-bots-episode-4-deception-technology/


Welcome back to The Lost Bots, a vlog series where Rapid7 Detection and Response Practice Advisor Jeffrey Gardner talks all things security with fellow industry experts. This episode is a little different, as it’s Jeffrey talking one-on-one with you about one of his favorite subjects: deception technology! Watch below to learn about the history, special characteristics, goals, and possible roadblocks (with counterpoints!) of what he likes to call “HoneyThings,” and to get practical advice on applying this technology.




Stay tuned for future episodes of The Lost Bots! Coming soon: Jeffrey tackles insider threats where the threat is definitely inside your organization, but maybe not in the way you think.

A national site for cronies, parking lots, and parties

Post Syndicated from original https://yurukov.net/blog/2021/nsb/

A case involving the construction of a car wash at a sports facility of national importance got me interested in other such businesses, unrelated to sport, that have casually and permanently settled into these sites. The reason is that under the old Law on Physical Education and Sports, no activity other than sport may take place at these facilities. In other words, they should not be rented out for anything other than sport. In the new law from 2019, the term “national importance” has conveniently been removed, along with the clear restrictions that came with it. In substance, however, the requirement remains that only sports activity be carried out there.

That is not what we see at the Diana National Sports Complex. It is managed by NSB EAD, which is entirely state-owned and answerable to the Minister of Sports. As such, it lacks any transparency and routinely refuses to provide information under the excuse that it is a private company and “can do whatever it wants with these properties.” The latter, incidentally, is a quote from a private conversation with a member of the state company’s board, who tried fervently to convince me that everything was actually fine and to get me to stop looking into NSB’s activities and what they are doing with Staliyski.

At the end of last year I sent a series of requests to NSB EAD and was told they were under no obligation to answer me. I then asked the ministry and again got no answer. Unfortunately, I was unable to follow up at the time. To my last inquiry, about some of the sites, I received no answer for a long time. Only after I filed a complaint with the court over the tacit refusal did I receive responses claiming they had sent me a letter that never reached me. Never mind that I had explicitly stated the reply should be delivered electronically, and that the application, the complaint, and all of my communication went through the public information access portal and via electronic service. After several reminders and another letter to the court, they finally sent the documents to me electronically.

Restaurant 101, the pools, and Staliyski

This is where Borisov, his favorite Staliyski, and the so-called “Kotaratsite” (the Tomcats) circle enter the story. Along with the restaurant, Staliyski has also taken over the three outdoor pools and turned the whole place into his unofficial office. You can read more about his activities in this detailed article by Georgi Filipov. In another response, the sports ministry told me they keep no information on how this particular site is used, whether it is rented out for sports activities to clubs or federations, whether any competitions have been held there, or what is happening with it at all. By law, it is precisely the Minister of Sports who is responsible for ensuring that sports activity takes place there. In reality, it has been turned into a beach and a venue for private parties.

The area used by the restaurant, together with the parking lots, the canopies, and the garden, is 2,500 sq m. The building is 750 sq m. The pools and the stands total 8,500 sq m. The parking lots are not actually part of the contract, but they are neatly marked off with posts and guarded by security. When asked, the Izgrev district administration replies that the outer parking lot is part of the sports facility’s property and that it has no authority to impose sanctions there. It does not, however, answer for the parking on what is, according to the cadastral sketch, a sidewalk two meters from the fence.

All of this, roughly 11,000 sq m, Staliyski gets for 1,500 euros a month in rent, for a term of 10 years. If we add the investments stipulated in the contract, that comes to another 1,600 euros a month. He built the restaurant himself, even though no one but NSB EAD has the right to build on this property; the building permit is also theirs alone. That is also how the latter explain why there is not a single public procurement procedure, either for this or for the other properties I asked about.

For comparison, 3,100 euros a month will rent you a decent house with a yard in Boyana, close in size to the area of the restaurant alone, without the huge pools, the parking lots, and the prime location. They insist there was an open tender, but that only this one candidate showed interest. There is no trace of such a tender on their website, though. In fact, the entire page with tender documentation has not loaded for years. And yes, the domain of their “tender” documentation is “nsb.smiah.net”. Announcements of open tenders, like the one for the car wash, are either not published at all or disappear after a few days.

You can read NSB EAD’s full response for yourself here. Of course, a small detail is that “Em Food and Catering EOOD”, with company registration number (EIK) 202605181, has no activity related to sport, yet it has been rented a sports facility of national importance where no other activity may take place.

Yellow Taxi and the stands above the pools

My next question was why the head office of the Yellow taxi company sits in the stands of a sports facility. In general, this and other complexes contain many apartments and offices that are provided to sports clubs and federations for their administrative work. In this case, however, as with the restaurant, the taxis have nothing to do with sport. Here you can see photos of the place from 2012, when they were still there, as well as more recent satellite images.

NSB EAD’s explanation is that while the stands are a sports facility, the space underneath them can be used for anything. So they first rented the taxi operators 10 offices with a total area of 374 sq m for 2,400 leva, and since March, half of those offices for 1,860 leva a month. For comparison, apartments and offices in other buildings nearby cost twice as much per square meter. The parking lot used by the taxis they explain away with additional arrangements between the various tenants in the complex.

Here, again, is NSB EAD’s detailed response.

A transformer substation, or not

There are several curious private businesses on the grounds of the Diana complex. One example is the parking lot behind the wrestling halls, which no one seems to know who runs, but which is well known not to issue receipts. Another is the large plumbing supply store. Next to the Diana 2 hotel there are large warehouses and a private parking lot, again used as a store. There is also a transformer substation that has long been used as a repair shop.

It was this substation that caught my interest. According to the cadastre, it is specifically a “building for energy production”. According to NSB’s response, however, under the notarial deed it is simply a one-story building and can be used for anything. By law it cannot be used for anything unrelated to sport, but as we have seen, NSB, Kralev, and his predecessors do not care much.

The rent is 426 leva a month, again three to four times less than the rent for a similar property in the area. Interestingly, the contract lists the premises as 98 sq m, the cadastre as 89 sq m, and the notarial deed presented as 92 sq m. Apparently a technical error. Another error in their response is the claim that the building in question has lot number 151137. In fact, the apparent former substation has lot number 151141. What they pointed me to, whether inadvertently or because they remember what I started with, is the already demolished building on the site of their planned car wash.

A definition of mismanagement

These are just three examples from a single sports complex. There are plenty more wherever you turn. The situation with NSB EAD is the same as with Avtomagistrali EAD, even if the sums have one or two fewer zeros. In fact, the story is sometimes so petty that Kralev went to extraordinary lengths to defend a contract for 500 leva a month that was supposedly going to help sport. In reality, it would have handed yet another piece of land to Staliyski, along with an invaluable vehicle for money laundering.

In this particular complex, Staliyski now holds 22.5% of the land for 1,500 euros a month. 44% of the complex is untended greenery and roads. Of the remainder, 54% is actually used for sport: halls, courts, the indoor pool, the pitch where Borisov regularly plays, and so on. Another 14% is parking. 9.5% is rented out for stores and offices unrelated to sport. Do the math yourselves. Beyond the fact that everything is done in the dark and they do everything possible to hide tenders and orders, no one along the chain sees a problem with renting out a third of a sports complex, for a pittance, for purposes it was never meant to serve.

The post “A national site for cronies, parking lots, and parties” first appeared on Yurukov’s blog.

The 5.14 kernel has been released

Post Syndicated from original https://lwn.net/Articles/867706/rss

Linus has released the 5.14 kernel.

So I realize you must all still be busy with all the galas and
fancy balls and all the other 30th anniversary events, but at some
point you must be getting tired of the constant glitz, the
fireworks, and the champagne. That ball gown or tailcoat isn’t the
most comfortable thing, either. The celebrations will go on for a
few more weeks yet, but you all may just need a breather from them.

And when that happens, I have just the thing for you – a new kernel
release to test and enjoy.


Headline features in 5.14 include: core scheduling (at last), the burstable CFS bandwidth controller, some initial infrastructure for BPF program loaders, the rq_qos I/O priority policy, some improvements to the SO_REUSEPORT networking option, the control-group “kill” button, the memfd_secret() system call, the quotactl_fd() system call, and much more. See the LWN merge-window summaries (part 1, part 2) for more details.

A wide world, with salvation lurking everywhere

Post Syndicated from original https://yurukov.net/blog/2021/istoriata-na-zueka/

Yesterday’s interview with Zueka was widely discussed. The comments were mixed. Some congratulated him on the courage to make such an abrupt change at his age. Others discussed politics and tried to draw a connection to recent events. Still others had rather unflattering things to say.

The truth is that it is nobody’s business whether and how Vasil’s family has decided to change their life. Changes often bring other changes in their wake. The interview was an obvious attempt to head off the inevitable awkward questions about his absence from the air, and it made clear that he wants his personal space to be respected.

That is why it is pointless to dig deeper and look for hidden meaning in exactly what he said. It was a boilerplate story aimed largely at getting us to leave him alone and stop prying. It is equally pointless to look for causes in the events of the past year. A decision like this is not made overnight; it takes months of discussion and as many again of preparation.

We can only wish him success with the change. I know very well what it is like to change your surroundings, all the more so with a child about to start first grade. It is good that he keeps in touch with the Bulgarian community, because, contrary to the legends, that always helps.

I only hope his expectations are not disappointed. We should not forget that Vasil, from the stage and through his participation in the country’s media and public life, has experienced Bulgaria in a particular way. He will experience life abroad differently too. That is actually true for everyone: the experience of those who decide to leave depends on the resources they take with them and the environment they land in, on whether they have children, whether and where they work, and whether they have the need, the desire, and the ability to integrate. The sense of discrimination also depends a great deal on age, status, ethnicity, one’s experience of Bulgaria, and one’s situation wherever they end up.

Let us simply congratulate him on the courage to change his life and devote himself to something he enjoys doing, far from the spotlight. Although many recognized themselves in the story, delivered as a farewell note in front of that very spotlight, it seems to me that the only deeper message in it is that what he does is his own business, and we should respect his decision without unnecessary drama.

The post “A wide world, with salvation lurking everywhere” first appeared on Yurukov’s blog.
