
Introducing AWS Deadline Cloud: Set up a cloud-based render farm in minutes

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/introducing-aws-deadline-cloud-set-up-a-cloud-based-render-farm-in-minutes/

Customers in industries such as architecture, engineering, & construction (AEC) and media & entertainment (M&E) generate the final frames for film, TV, games, industrial design visualizations, and other digital media with a process called rendering, which takes 2D/3D digital content data and computes an output, such as an image or video file. Rendering requires significant compute power, especially to generate 3D graphics and visual effects (VFX) at resolutions as high as 16K for film and TV. This constrains the number of rendering projects that customers can take on at once.

To address this growing demand for rendering high-resolution content, customers often build what are called “render farms” which combine the power of hundreds or thousands of computing nodes to process their rendering jobs. Render farms can traditionally take weeks or even months to build and deploy, and they require significant planning and upfront commitments to procure hardware.

As a result, customers are increasingly transitioning to scalable, cloud-based render farms for efficient production instead of dedicated on-premises render farms, which can require extremely high fixed costs. But rendering in the cloud still requires customers to manage their own infrastructure, build bespoke tooling to manage costs on a project-by-project basis, and monitor software licensing costs with their preferred partners themselves.

Today, we are announcing the general availability of AWS Deadline Cloud, a new fully managed service that enables creative teams to easily set up a render farm in minutes, scale to run more projects in parallel, and pay only for the resources they use. AWS Deadline Cloud provides a web-based portal with the ability to create and manage render farms, preview in-progress renders, view and analyze render logs, and easily track costs.

With Deadline Cloud, you can go from zero to render faster because integrations with digital content creation (DCC) tools and customization tools are built in. You can reduce the effort and development time required to tailor your rendering pipeline to the needs of each job. You also have the flexibility to use licenses you already own or licenses provided by the service for third-party DCC software and renderers such as Maya, Nuke, and Houdini.

Concepts of AWS Deadline Cloud
AWS Deadline Cloud allows you to create and manage rendering projects and jobs on Amazon Elastic Compute Cloud (Amazon EC2) instances directly from DCC pipelines and workstations. You can create a render farm, which is a collection of queues and fleets. A queue is where your submitted jobs are located and scheduled to be rendered. A fleet is a group of worker nodes that can support multiple queues, and a queue can be processed by multiple fleets.

Before you can work on a project, you should have access to the required resources, and the associated farm must be integrated with AWS IAM Identity Center to manage workforce authentication and authorization. IT administrators can create and grant access permissions to users and groups at different levels, such as viewers, contributors, managers, or owners.

Here are four key components of Deadline Cloud:

  • Deadline Cloud monitor – Provides real-time visibility into job progress, with access to statuses, logs, and other troubleshooting metrics for jobs, steps, and tasks. You can also browse multiple farm, fleet, and queue listings to view system utilization.
  • Deadline Cloud submitter – You can submit a rendering job directly using AWS SDK or AWS Command Line Interface (AWS CLI). You can also submit from DCC software using a Deadline Cloud submitter, which is a DCC-integrated plugin that supports Open Job Description (OpenJD), an open source template specification. With it, artists can submit rendering jobs from a third-party DCC interface they are more familiar with, such as Maya or Nuke, to Deadline Cloud, where project resources are managed and jobs are monitored in one location.
  • Deadline Cloud budget manager – You can create and edit budgets to help manage project costs and view how many AWS resources are used and the estimated costs for those resources.
  • Deadline Cloud usage explorer – You can use the usage explorer to track approximate compute and licensing costs based on public pricing rates in Amazon EC2 and Usage-Based Licensing (UBL).
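The usage explorer's estimates boil down to simple arithmetic over public rates: instance-hours times an hourly compute rate, plus per-minute usage-based licensing. Here is a minimal sketch of that idea; the function, rate tables, and all numbers are placeholders for illustration, not actual AWS prices:

```python
# Hypothetical sketch of usage-explorer-style cost estimation.
# All rates below are placeholders, not real AWS or UBL prices.

EC2_HOURLY_RATE = {"c5.4xlarge": 0.68}   # placeholder USD per instance-hour
UBL_PER_MINUTE = {"maya": 0.006}         # placeholder USD per worker-minute

def estimate_cost(instance_type, instance_count, hours, ubl_products=()):
    """Approximate compute + licensing cost for a fleet over a period."""
    compute = EC2_HOURLY_RATE[instance_type] * instance_count * hours
    licensing = sum(
        UBL_PER_MINUTE[p] * instance_count * hours * 60 for p in ubl_products
    )
    return round(compute + licensing, 2)

print(estimate_cost("c5.4xlarge", instance_count=10, hours=2,
                    ubl_products=["maya"]))  # compute 13.60 + licensing 7.20
```

Estimates like these are guides, not bills; actual charges depend on published pricing and on connected services such as Amazon S3.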

Get started with AWS Deadline Cloud
To get started with AWS Deadline Cloud, define and create a farm with Deadline Cloud monitor, download the Deadline Cloud submitter, and install plugins for your favorite DCC applications with just a few clicks. You can define your rendering jobs in your DCC application and submit them to your created farm within the plugin’s user interfaces.

The DCC plugins detect the necessary input scene data and build a job bundle that is uploaded to the Amazon Simple Storage Service (Amazon S3) bucket in your account and transferred to Deadline Cloud for rendering; the completed frames are then delivered to the S3 bucket for your customers to access.
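The upload step can be pictured as content-addressed storage: hash each input file and upload only content that is not already in the bucket, so unchanged assets are not re-transferred between submissions. This is an illustrative sketch of that idea, not the actual job-attachments implementation:

```python
# Illustrative sketch (not the actual Deadline Cloud implementation) of
# content-hashing scene inputs so unchanged files are not re-uploaded to S3.
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_upload(paths, already_uploaded):
    """Return (s3_key, path) pairs for files whose content is not yet stored."""
    out = []
    for p in paths:
        digest = hash_file(p)
        if digest not in already_uploaded:
            out.append((f"job-attachments/{digest}", p))
    return out
```

Two submissions that reference the same unchanged texture would hash to the same key, so the second submission skips the upload.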

1. Define a farm with Deadline Cloud monitor
Let’s create your Deadline Cloud monitor infrastructure and define your farm first. In the Deadline Cloud console, choose Set up Deadline Cloud to define a farm with a guided experience, including queues and fleets, adding groups and users, choosing a service role, and adding tags to your resources.

In this step, to accept all the default settings for your Deadline Cloud resources, choose Skip to Review in Step 3 after monitor setup. Otherwise, choose Next and customize your Deadline Cloud resources.

Set up your monitor’s infrastructure and enter your Monitor display name. This name becomes part of the monitor URL, a web portal to manage your farms, queues, fleets, and usage. You can’t change the monitor URL after you finish setting up. The AWS Region is the physical location of your render farm, so choose the Region closest to your studio to reduce latency and improve data transfer speeds.

To access the monitor, you can create new users and groups, manage users (such as by assigning them groups, permissions, and applications), or delete users from your monitor. Users, groups, and permissions can also be managed in IAM Identity Center. If you haven’t set up IAM Identity Center in your Region, you need to enable it first. To learn more, visit Managing users in Deadline Cloud in the AWS documentation.

In Step 2, you can define farm details such as the name and description of your farm. In Additional farm settings, you can set an AWS Key Management Service (AWS KMS) key to encrypt your data, and tags to assign to AWS resources for filtering resources or tracking your AWS costs. Your data is encrypted by default with a key that AWS owns and manages for you. To choose a different key, customize your encryption settings.

You can choose Skip to Review and Create to finish the quick setup process with the default settings.

Let’s look at more optional configurations! In the step for defining queue details, you can set up an S3 bucket for your queue. Job assets are uploaded as job attachments during the rendering process. Job attachments are stored in your defined S3 bucket. Additionally, you can set up the default budget action, service access roles, and environment variables for your queue.

In the step for defining fleet details, set the fleet name, description, Instance option (either Spot or On-Demand Instance), and Auto scaling configuration to define the number of instances and the fleet’s worker requirements. We set conservative worker requirements by default. These values can be updated at any time after setting up your render farm. To learn more, visit Manage Deadline Cloud fleets in the AWS documentation.

Worker instances are defined by EC2 instance type, vCPU count, and memory size, for example, c5.large, c5a.large, and c6i.large. You can filter up to 100 EC2 instance types by either allowing or excluding types of worker instances.
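The allow/exclude filtering can be sketched as a simple list filter capped at 100 types; the exact matching semantics below are illustrative, not the service's rules:

```python
# Illustrative sketch of allow/exclude filtering over candidate worker
# instance types; semantics are simplified, not Deadline Cloud's exact rules.

def filter_instance_types(candidates, allowed=None, excluded=()):
    """Keep candidates in `allowed` (all, if None) that are not in `excluded`,
    capped at 100 types as in the console."""
    kept = [t for t in candidates
            if (allowed is None or t in allowed) and t not in excluded]
    return kept[:100]

print(filter_instance_types(
    ["c5.large", "c5a.large", "c6i.large", "t3.micro"],
    excluded=["t3.micro"],
))  # → ['c5.large', 'c5a.large', 'c6i.large']
```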

Review all of the information entered to create your farm and choose Create farm.

The progress of your Deadline Cloud onboarding is displayed, and a success message displays when your monitor and farm are ready for use. To learn more details about the process, visit Set up a Deadline Cloud monitor in the AWS documentation.

In the Dashboard in the left pane, you can visit the overview of the monitor, farms, users, and groups that you created.

Choose Monitor to visit a web portal to manage your farms, queues, fleets, usages, and budgets. After signing in to your user account, you can enter a web portal and explore the Deadline Cloud resources you created. You can also download a Deadline Cloud monitor desktop application with the same user experiences from the Downloads page.

To learn more about using the monitor, visit Using the Deadline Cloud monitor in the AWS documentation.

2. Set up a workstation and submit your render job to Deadline Cloud
Let’s set up a workstation for artists on their desktops by installing the Deadline Cloud submitter application so they can easily submit render jobs from within Maya, Nuke, and Houdini. Choose Downloads in the left menu pane and download the proper submitter installer for your operating system to test your render farm.

This program installs the latest Deadline Cloud submitter plugins for Maya, Nuke, and Houdini.

For example, open Maya on your desktop and open your asset. I have an example wrench file I’m going to test with. Choose Windows in the menu bar and Settings/Preferences in the submenu. In the Plug-in Manager, search for DeadlineCloudSubmitter. Select Loaded to load the Deadline Cloud submitter plugin.

If you are not already authenticated in the Deadline Cloud submitter, the Deadline Cloud Status tab will display. Choose Login and sign in with your user credentials in a browser sign-in window.

Now, select the Deadline Cloud shelf, then choose the orange Deadline Cloud logo on the Deadline shelf to launch the submitter. From the submitter window, choose the farm and queue you want your render submitted to. If desired, in the Scene Settings tab, you can override the frame range, change the Output Path, or both.

If you choose Submit, the wrench turntable Maya file, along with all of the necessary textures and alembic caches, will be uploaded to Deadline Cloud and rendered on the farm. You can monitor rendering jobs in your Deadline Cloud monitor.

When your render is finished, as indicated by the Succeeded status in the job monitor, choose the job, Job Actions, and Download Output. To learn more about scheduling and monitoring jobs, visit Deadline Cloud jobs in the AWS documentation.

View your rendered image with an image viewing application such as DJView.

To learn more in detail about the developer-side setup process using the command line, visit Setting up a developer workstation for Deadline Cloud in the AWS documentation.

3. Manage budgets and usage for Deadline Cloud
To help you manage costs for Deadline Cloud, you can use a budget manager to create and edit budgets. You can also use a usage explorer to view how many AWS resources are used and the estimated costs for those resources.

Choose Budgets on the Deadline Cloud monitor page to create your budget for your farm.

You can create budget amounts and limits and set automated actions to help reduce or stop additional spend against the budget.
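The idea behind automated budget actions can be sketched as threshold checks against the budget amount; the threshold percentages and action names below are hypothetical, not the service's actual defaults:

```python
# Hypothetical sketch of automated budget actions: as spend crosses
# percentage thresholds of the budget, warn or stop new work.
# Thresholds and action names are illustrative.

def budget_action(spend: float, budget: float) -> str:
    pct = spend / budget * 100
    if pct >= 100:
        return "STOP_SCHEDULING"   # halt new work against this budget
    if pct >= 75:
        return "NOTIFY"            # warn owners spend is approaching the cap
    return "NONE"

print(budget_action(80.0, 100.0))  # → NOTIFY
```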

Choose Usage in the Deadline Cloud monitor page to find real-time metrics on the activity happening on each farm. You can look at the farm’s costs by different variables, such as queue, job, or user. Choose various time frames to find usage during a specific period and look at usage trends over time.

The costs displayed in the usage explorer are approximate. Use them as a guide for managing your resources. There may be other costs from using other connected AWS resources, such as Amazon S3, Amazon CloudWatch, and other services that are not accounted for in the usage explorer.

To learn more, visit Managing budgets and usage for Deadline Cloud in the AWS documentation.

Now available
AWS Deadline Cloud is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions.

Give AWS Deadline Cloud a try in the Deadline Cloud console. For more information, visit the Deadline Cloud product page, Deadline Cloud User Guide in the AWS documentation, and send feedback to AWS re:Post for AWS Deadline Cloud or through your usual AWS support contacts.


New — Deliver Interactive Real-Time Live Streams with Amazon IVS

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-deliver-interactive-real-time-live-streams-with-amazon-ivs/

Live streaming is becoming an increasingly popular way to connect customers with their favorite influencers and brands through interactive live video experiences. Our customers, DeNA and Rooter, rely on Amazon Interactive Video Service (Amazon IVS), a fully managed live streaming solution, to build engaging live stream and interactive video experiences for their audiences.

In March, we introduced Amazon IVS support for multiple hosts in live streams to provide further flexibility in building interactive experiences, using a resource called a stage. A stage is a virtual space where participants can exchange audio and video in real time.

However, latency is still a critical component of engaging audiences and enriching the overall experience. The lower the latency, the more directly and personally you can connect with live audiences. Previously, Amazon IVS supported real-time streaming among up to 12 hosts via stages, but viewers watching via channels experienced around 3–5 seconds of latency. This latency gap restricts the ability to build interactive experiences with direct engagement for wider audiences.

Introducing Amazon IVS Real-Time Streaming
Today, I’m excited to share that with Amazon IVS Real-Time Streaming, you now can deliver real-time live streams to 10,000 viewers with up to 12 hosts from a stage, with latency that can be under 300 milliseconds from host to viewer.

This feature unlocks the opportunity for you to build interactive video experiences for social media applications or for latency-sensitive use cases like auctions.

Now you no longer have to compromise to achieve real-time latency for viewers. You can avoid workarounds such as using multiple AWS services or external tools. Instead, you can simply use Amazon IVS as a centralized service to deliver real-time interactive live streams, and you don’t even need to enable anything on your account to start using this feature.

Deliver Real-Time Streams with the Amazon IVS Broadcast SDK
To deliver real-time streams, you need to interact with a stage resource and use the Amazon IVS Broadcast SDK available on iOS, Android, and web. With a stage, you can create a virtual space for participants to join as either viewers or hosts with real-time latency that can be under 300 ms.

You can use a stage to build an experience where hosts and viewers can go live together. For example, you might invite viewers to become hosts and join other hosts in a Q&A session, deliver a singing competition, or run a talk show with multiple guests.

We published an overview on how to get started with a stage resource on the Add multiple hosts to live streams with Amazon IVS page. Let me do a quick refresher for the overall flow and how to interact with a stage resource.

First, you need to create a stage. You can do this via the console or programmatically using the Amazon IVS API. The following command is an example of how to create a stage using the create-stage API and AWS CLI.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime
{
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

A key concept for a stage resource that enables participants to join as a host or a viewer is the participant token. A participant token is an authorization token that lets your participants publish or subscribe to a stage. When you’re using the create-stage API, you can also generate a participant token and add additional information by using attributes, including custom user IDs and their display names. The API responds with stage details and participant tokens.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime \
    --participant-token-configurations userId=test-1,capabilities=PUBLISH,SUBSCRIBE,attributes={demo-attribute=test-1}
{
    "participantTokens": [
        {
            "attributes": {
                "demo-attribute": "test-1"
            },
            "capabilities": [
                "PUBLISH",
                "SUBSCRIBE"
            ],
            "participantId": "p7HIfs3v9GIo",
            "token": "TOKEN",
            "userId": "test-1"
        }
    ],
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

In addition to the create-stage API, you can also programmatically generate participant tokens using the API. Currently, there are two capability values for a participant token: PUBLISH and SUBSCRIBE. To invite a participant to be a host, add the PUBLISH capability when creating their participant token. With the PUBLISH capability, the participant can include their video and audio in the stream.

Here is an example on how you can generate a participant token.

$ aws ivs-realtime create-participant-token \
    --region us-east-1 \
    --capabilities PUBLISH \
    --stage-arn ARN \
    --user-id test-2
{
    "participantToken": {
        "capabilities": [
            "PUBLISH"
        ],
        "expirationTime": "2023-07-23T23:48:57+00:00",
        "participantId": "86KGafGbrXpK",
        "token": "TOKEN",
        "userId": "test-2"
    }
}

Once you have generated a participant token, you need to distribute it to the respective client using, for example, a WebSocket message. Then, within your client applications using the Amazon IVS Broadcast SDK, you can use this participant token to let your users join the stage as hosts or viewers. To learn more about how to interact with a stage resource, see the sample demos for iOS and Android, and the supporting serverless application for real-time demos.
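On the client side, the token's capabilities determine what a participant can do. Here is a small sketch of how an application might map capabilities to a role; the role names are illustrative, while the capability values (PUBLISH, SUBSCRIBE) come from the API:

```python
# Illustrative sketch: derive a participant's role from the capabilities
# carried by their participant token. Role names are made up for the example.

def role_from_capabilities(capabilities):
    caps = set(capabilities)
    if "PUBLISH" in caps:
        return "host"      # can send audio/video to the stage
    if "SUBSCRIBE" in caps:
        return "viewer"    # can only receive from the stage
    raise ValueError("token grants no stage capabilities")

print(role_from_capabilities(["PUBLISH", "SUBSCRIBE"]))  # → host
print(role_from_capabilities(["SUBSCRIBE"]))             # → viewer
```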

At this point, you’re able to deliver real-time live streams using a stage to 10,000 viewers. If you need to extend the stream to a wider audience, you can use your stage as the input for a channel and use the Amazon IVS Low-Latency Streaming capability. With a channel, you can deliver high concurrency video from a single source with low latency that can be under 5 seconds to millions of viewers. You can learn more on how to publish a stage to a channel on the Amazon IVS Broadcast SDK documentation page, which includes information for iOS, Android, and web.

Layered Encoding Feature for Amazon IVS Real-Time Streaming Capability
End users prefer a live stream with good quality. However, the quality of the live stream depends on various factors, such as the health of their network connections and device performance.

The most common scenario is that all viewers receive a single version of the video, regardless of their optimal viewing configuration. For example, if the host produces high-quality video, viewers with good connections can enjoy the live stream, but viewers with slower connections experience loading delays or may be unable to watch at all. Conversely, if the host produces only low-quality video, viewers with good connections get lower-quality video than their connections could handle, while viewers with slower connections have a better experience.

To address this issue, with this announcement we also released the layered encoding feature for the Amazon IVS Real-Time Streaming capability. With layered encoding (also known as simulcast), when you publish to a stage, Amazon IVS automatically sends multiple variations of the video and audio. This ensures your viewers can continue to enjoy the stream at the best quality their network conditions allow.
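The receiver-side effect of simulcast can be sketched as picking the highest-bitrate layer that fits the viewer's measured bandwidth; the layer names and bitrates below are made up for illustration:

```python
# Illustrative sketch of simulcast layer selection on the receiving side.
# Layer names and bitrates are invented for the example.

LAYERS = [  # (name, bitrate in kbps), highest quality first
    ("720p", 2500),
    ("360p", 800),
    ("180p", 200),
]

def pick_layer(available_kbps: float) -> str:
    """Return the best layer that fits, falling back to the lowest one."""
    for name, kbps in LAYERS:
        if kbps <= available_kbps:
            return name
    return LAYERS[-1][0]

print(pick_layer(1000))  # → 360p (2500 kbps doesn't fit, 800 kbps does)
```

In the real service this adaptation happens automatically; the sketch only shows why publishing multiple layers lets each viewer get the best stream their connection supports.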

Customer Voices
During the private preview period, we heard lots of feedback from our customers about Amazon IVS Real-Time Streaming.

Whatnot is a live stream shopping platform and marketplace that allows collectors and enthusiasts to connect with their community to buy and sell products they’re passionate about. “Scaling live video auctions to our global community is one of our major engineering challenges. Ensuring real-time latency is fundamental to maintaining the integrity and excitement of our auction experience. By leveraging Amazon IVS Real-Time Streaming, we can confidently scale our operations worldwide, assuring a seamless and high-quality real-time video experience across our entire user base, whether on web or mobile platforms,” says Ludo Antonov, VP of Engineering.

Available Now
Amazon IVS Real-Time Streaming is available in all AWS Regions where Amazon IVS is available. To use Amazon IVS Real-Time Streaming, you pay an hourly rate for the duration that you have hosts or viewers connected to the stage resource as a participant.

Learn more about benefits, use cases, how to get started, and pricing details for Amazon IVS’s Real-Time Streaming and Low-Latency Streaming capabilities on the Amazon IVS page.

Happy streaming!

New – Amazon EC2 VT1 Instances for Live Multi-stream Video Transcoding

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-vt1-instances-for-live-multi-stream-video-transcoding/

Global demand for video content has been rapidly increasing, and video now accounts for the majority of internet and mobile network traffic. Over-the-top streaming services such as Twitch continue to see an explosion of content creators who are seeking live delivery with great image quality, while live event broadcasters are increasingly looking to embrace agile cloud infrastructure to reduce costs without sacrificing reliability, and to scale efficiently with demand.

Today, I am happy to announce the general availability of Amazon EC2 VT1 instances that are designed to provide the best price performance for multi-stream video transcoding with resolutions up to 4K UHD. These VT1 instances feature Xilinx® Alveo™ U30 media accelerator transcoding cards with accelerated H.264/AVC and H.265/HEVC codecs and provide up to 30% better price per stream compared to the latest GPU-based EC2 instances and up to 60% better price per stream compared to the latest CPU-based EC2 instances.

Customers with their own live broadcast and streaming video pipelines can use VT1 instances to transcode video streams with resolutions up to 4K UHD. VT1 instances feature networking interfaces of up to 25 Gbps that can ingest multiple video streams over IP with low latency and low jitter. This capability makes it possible for these customers to fully embrace scalable, cost-effective, and resilient infrastructure.

Amazon EC2 VT1 Instance Type
EC2 VT1 instances are available in three sizes. The accelerated H.264/AVC and H.265/HEVC codecs are integrated into Xilinx Zynq ZU7EV SoCs. Each Xilinx® Alveo™ U30 media transcoding accelerator card contains two Zynq SoCs.

Instance size | vCPUs | Xilinx U30 cards | Memory | Network bandwidth | EBS-optimized bandwidth | 1080p60 streams per instance
vt1.3xlarge   | 12    | 1                | 24 GB  | Up to 25 Gbps     | Up to 4.75 Gbps         | 8
vt1.6xlarge   | 24    | 2                | 48 GB  | 25 Gbps           | 4.75 Gbps               | 16
vt1.24xlarge  | 96    | 8                | 192 GB | 25 Gbps           | 19 Gbps                 | 64

The VT1 instances are suitable for transcoding multiple streams per instance. The streams can be processed independently in parallel or mixed (picture-in-picture, side-by-side, transitions). The vCPU cores help with implementing image processing, audio processing, and multiplexing. The Xilinx® Alveo™ U30 card can simultaneously output multiple streams at different resolutions (1080p, 720p, 480p, and 360p) and in both H.264 and H.265.

Each VT1 instance can be configured to produce parallel encodings with different settings, resolutions, and transmission bit rates (“ABR ladders”). For example, a 4K UHD stream can be encoded at 60 frames per second with H.265 for high-resolution displays, while multiple lower resolutions are encoded with H.264 for delivery to standard displays.
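You can reason about how many ABR ladders fit on an instance by counting each rung in 1080p60-equivalent units against the streams-per-instance figures in the table above. The per-rung costs below are a rough illustration, not Xilinx's actual capacity model:

```python
# Rough capacity sketch for ABR ladders on VT1 instances. Rung costs are
# illustrative 1080p60-equivalent units, not Xilinx's real accounting.

LADDER = [  # (rendition, codec, cost in 1080p60-equivalent units)
    ("1080p60", "h265", 1.0),
    ("720p30",  "h264", 0.25),
    ("480p30",  "h264", 0.125),
    ("360p30",  "h264", 0.0625),
]

# Nominal 1080p60 streams per instance, from the VT1 size table.
VT1_CAPACITY_1080P60 = {"vt1.3xlarge": 8, "vt1.6xlarge": 16, "vt1.24xlarge": 64}

def max_parallel_ladders(instance: str, ladder=LADDER) -> int:
    """How many full ladders fit within the instance's nominal capacity."""
    cost = sum(units for _, _, units in ladder)  # 1.4375 units per ladder
    return int(VT1_CAPACITY_1080P60[instance] // cost)

print(max_parallel_ladders("vt1.3xlarge"))  # → 5
```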

Get Started with EC2 VT1 Instances
You can now launch VT1 instances in the Amazon EC2 console, AWS Command Line Interface (AWS CLI), or using an SDK with the Amazon EC2 API.

We provide a number of sample video processing pipelines for the VT1 instances. There are tutorials and code examples in the GitHub repository that cover how to tune the codecs for image quality and transcoding latency, call the runtime for the U30 cards directly from your own applications, incorporate video filters such as titling and watermarking, and deploy with container orchestration frameworks.

Xilinx provides the “Xilinx Video Transcoding SDK” with libraries and tools for developing against the U30 cards.

VT1 instances can be coupled with Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) to efficiently scale transcoding workloads and with Amazon CloudFront to deliver content globally. VT1 instances can also be launched with Amazon Machine Images (AMIs) and containers developed by AWS Marketplace partners, such as Nginx for supplemental video processing functionality.

You can complement VT1 instances with AWS Media Services for reliable packaging and origination of transcoded content. To learn more, you can use the Live Streaming on AWS solution library to build a live video workflow using these AWS services.

Available Now
Amazon EC2 VT1 instances are now available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. To learn more, visit the EC2 VT1 instance page. Please send feedback to the AWS forum for Amazon EC2 or through your usual AWS Support contacts.

– Channy