Tag Archives: Media Services

New — Deliver Interactive Real-Time Live Streams with Amazon IVS

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-deliver-interactive-real-time-live-streams-with-amazon-ivs/

Live streaming is becoming an increasingly popular way to connect customers with their favorite influencers and brands through interactive live video experiences. Our customers, DeNA and Rooter, rely on Amazon Interactive Video Service (Amazon IVS), a fully managed live streaming solution, to build engaging live stream and interactive video experiences for their audiences.

In March, we introduced Amazon IVS support for multiple hosts in live streams to provide more flexibility in building interactive experiences, using a resource called a stage. A stage is a virtual space where participants can exchange audio and video in real time.

However, latency is still a critical component of engaging audiences and enriching the overall experience. The lower the latency, the more direct and personal the connection with live audiences feels. Previously, Amazon IVS supported real-time latency only among the up to 12 hosts on a stage, while viewers watched via channels with around 3–5 seconds of latency. This latency gap restricted the ability to build interactive experiences with direct engagement for wider audiences.

Introducing Amazon IVS Real-Time Streaming
Today, I’m excited to share that with Amazon IVS Real-Time Streaming, you now can deliver real-time live streams to 10,000 viewers with up to 12 hosts from a stage, with latency that can be under 300 milliseconds from host to viewer.

This feature unlocks the opportunity for you to build interactive video experiences for social media applications or for latency-sensitive use cases like auctions.

Now you no longer have to compromise to achieve real-time latency for viewers. You can avoid workarounds such as stitching together multiple AWS services or external tools. Instead, you can use Amazon IVS as a single, centralized service to deliver real-time interactive live streams, and you don’t need to enable anything on your account to start using this feature.

Deliver Real-Time Streams with the Amazon IVS Broadcast SDK
To deliver real-time streams, you need to interact with a stage resource and use the Amazon IVS Broadcast SDK available on iOS, Android, and web. With a stage, you can create a virtual space for participants to join as either viewers or hosts with real-time latency that can be under 300 ms.

You can use a stage to build an experience where hosts and viewers can go live together. For example, you can invite viewers to become hosts and join other hosts in a Q&A session, run a singing competition, or bring multiple guests into a talk show.

We published an overview of how to get started with a stage resource on the Add multiple hosts to live streams with Amazon IVS page. Let me give a quick refresher on the overall flow and how to interact with a stage resource.

First, you need to create a stage. You can do this via the console or programmatically using the Amazon IVS API. The following command is an example of how to create a stage using the create-stage API and AWS CLI.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime
{
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

A key concept that enables participants to join a stage as a host or a viewer is the participant token. A participant token is an authorization token that lets your participants publish or subscribe to a stage. When you call the create-stage API, you can also generate participant tokens and attach additional information using attributes, such as custom user IDs and display names. The API responds with the stage details and the participant tokens.

$ aws ivs-realtime create-stage \
    --region us-east-1 \
    --name demo-realtime \
    --participant-token-configurations userId=test-1,capabilities=PUBLISH,SUBSCRIBE,attributes={demo-attribute=test-1}

{
    "participantTokens": [
        {
            "attributes": {
                "demo-attribute": "test-1"
            },
            "capabilities": [
                "PUBLISH",
                "SUBSCRIBE"
            ],
            "participantId": "p7HIfs3v9GIo",
            "token": "TOKEN",
            "userId": "test-1"
        }
    ],
    "stage": {
        "arn": "arn:aws:ivs:us-east-1:xyz:stage/mEvTj9PDyBwQ",
        "name": "demo-realtime",
        "tags": {}
    }
}

In addition to the create-stage API, you can also generate participant tokens programmatically using the create-participant-token API. Currently, there are two capability values for a participant token: PUBLISH and SUBSCRIBE. If you want to invite a participant to host, you need to add the PUBLISH capability when creating their participant token. With the PUBLISH capability, the host’s video and audio are included in the stream.

Here is an example of how you can generate a participant token.

$ aws ivs-realtime create-participant-token \
    --region us-east-1 \
    --capabilities PUBLISH \
    --stage-arn ARN \
    --user-id test-2

{
    "participantToken": {
        "capabilities": [
            "PUBLISH"
        ],
        "expirationTime": "2023-07-23T23:48:57+00:00",
        "participantId": "86KGafGbrXpK",
        "token": "TOKEN",
        "userId": "test-2"
    }
}
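
Viewers only need to receive media, so their tokens can carry the SUBSCRIBE capability alone. As a minimal sketch, here is how you could generate a viewer token with the same API (the user ID test-3 is just an example):

$ aws ivs-realtime create-participant-token \
    --region us-east-1 \
    --capabilities SUBSCRIBE \
    --stage-arn ARN \
    --user-id test-3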

Once you have generated participant tokens, you need to distribute them to the respective clients using, for example, a WebSocket message. Then, within your client applications using the Amazon IVS Broadcast SDK, you can use these participant tokens to let your users join the stage as hosts or viewers. To learn more about how to interact with a stage resource, you can review the sample demos for iOS and Android and the supporting serverless applications for the real-time demo.

At this point, you’re able to deliver real-time live streams using a stage to up to 10,000 viewers. If you need to extend the stream to a wider audience, you can use your stage as the input for a channel and use the Amazon IVS Low-Latency Streaming capability. With a channel, you can deliver high-concurrency video from a single source to millions of viewers with latency that can be under 5 seconds. You can learn more about how to publish a stage to a channel on the Amazon IVS Broadcast SDK documentation page, which includes information for iOS, Android, and web.
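
For example, here is a minimal sketch of creating a low-latency channel with the AWS CLI; the channel name demo-channel is just an example, and you would then publish the stage output to this channel using the Broadcast SDK, as described in the documentation mentioned above:

$ aws ivs create-channel \
    --region us-east-1 \
    --name demo-channel \
    --latency-mode LOW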

Layered Encoding Feature for Amazon IVS Real-Time Streaming Capability
End users prefer a live stream with good quality. However, the quality of the live stream depends on various factors, such as the health of their network connections and device performance.

The most common scenario is that viewers receive a single version of the video, which may not match their optimal viewing configuration. For example, if the host produces high-quality video, viewers with good connections can enjoy the live stream, but viewers with slower connections will experience loading delays or may be unable to watch the video at all. Conversely, if the host produces only low-quality video, viewers with good connections will get less than optimal video, while viewers with slower connections will have a better experience.

To address this issue, with this announcement we also released the layered encoding feature for the Amazon IVS Real-Time Streaming capability. With layered encoding (also known as simulcast), when you publish to a stage, Amazon IVS automatically sends multiple variations of the video and audio. This ensures your viewers can continue to enjoy the stream at the best quality their network conditions can support.

Customer Voices
During the private preview period, we heard lots of feedback from our customers about Amazon IVS Real-Time Streaming.

Whatnot is a live stream shopping platform and marketplace that allows collectors and enthusiasts to connect with their community to buy and sell products they’re passionate about. “Scaling live video auctions to our global community is one of our major engineering challenges. Ensuring real-time latency is fundamental to maintaining the integrity and excitement of our auction experience. By leveraging Amazon IVS Real-Time Streaming, we can confidently scale our operations worldwide, assuring a seamless and high-quality real-time video experience across our entire user base, whether on web or mobile platforms,” said Ludo Antonov, VP of Engineering.

Available Now
Amazon IVS Real-Time Streaming is available in all AWS Regions where Amazon IVS is available. To use Amazon IVS Real-Time Streaming, you pay an hourly rate for the duration that you have hosts or viewers connected to the stage resource as a participant.

Learn more about benefits, use cases, how to get started, and pricing details for Amazon IVS’s Real-Time Streaming and Low-Latency Streaming capabilities on the Amazon IVS page.

Happy streaming!
Donnie

New – Amazon EC2 VT1 Instances for Live Multi-stream Video Transcoding

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-vt1-instances-for-live-multi-stream-video-transcoding/

Global demand for video content has been rapidly increasing, and video now accounts for the majority of internet and mobile network traffic. Over-the-top streaming services such as Twitch continue to see an explosion of content creators who are seeking live delivery with great image quality, while live event broadcasters are increasingly looking to embrace agile cloud infrastructure to reduce costs without sacrificing reliability and to scale efficiently with demand.

Today, I am happy to announce the general availability of Amazon EC2 VT1 instances that are designed to provide the best price performance for multi-stream video transcoding with resolutions up to 4K UHD. These VT1 instances feature Xilinx® Alveo™ U30 media accelerator transcoding cards with accelerated H.264/AVC and H.265/HEVC codecs and provide up to 30% better price per stream compared to the latest GPU-based EC2 instances and up to 60% better price per stream compared to the latest CPU-based EC2 instances.

Customers with their own live broadcast and streaming video pipelines can use VT1 instances to transcode video streams with resolutions up to 4K UHD. VT1 instances feature networking interfaces of up to 25 Gbps that can ingest multiple video streams over IP with low latency and low jitter. This capability makes it possible for these customers to fully embrace scalable, cost-effective, and resilient infrastructure.

Amazon EC2 VT1 Instance Type
EC2 VT1 instances are available in three sizes. The accelerated H.264/AVC and H.265/HEVC codecs are integrated into Xilinx Zynq ZU7EV SoCs. Each Xilinx® Alveo™ U30 media transcoding accelerator card contains two Zynq SoCs.

Instance size | vCPUs | Xilinx U30 cards | Memory | Network bandwidth | EBS-optimized bandwidth | 1080p60 streams per instance
vt1.3xlarge   | 12    | 1                | 24 GB  | Up to 25 Gbps     | Up to 4.75 Gbps         | 8
vt1.6xlarge   | 24    | 2                | 48 GB  | 25 Gbps           | 4.75 Gbps               | 16
vt1.24xlarge  | 96    | 8                | 192 GB | 25 Gbps           | 19 Gbps                 | 64

The VT1 instances are suitable for transcoding multiple streams per instance. The streams can be processed independently in parallel or mixed (picture-in-picture, side-by-side, transitions). The vCPU cores help with implementing image processing, audio processing, and multiplexing. The Xilinx® Alveo™ U30 card can simultaneously output multiple streams at different resolutions (1080p, 720p, 480p, and 360p) and in both H.264 and H.265.

Each VT1 instance can be configured to produce parallel encodings with different settings, resolutions, and transmission bit rates (“ABR ladders”). For example, a 4K UHD stream can be encoded at 60 frames per second with H.265 for high-resolution displays, while multiple lower resolutions can be encoded with H.264 for delivery to standard displays.
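
As an illustration only, the following sketch shows what such a ladder might look like with the FFmpeg build included in the Xilinx Video SDK. The encoder names (mpsoc_vcu_hevc, mpsoc_vcu_h264) and the file names are assumptions, not taken from this post, so check the SDK documentation for the exact options:

# Sketch only: encoder names and file names below are assumed, not from this post.
$ ffmpeg -i input_4k60.mp4 \
    -map 0:v -c:v mpsoc_vcu_hevc -b:v 16M out_2160p60_hevc.mp4 \
    -map 0:v -vf scale=1920:1080 -c:v mpsoc_vcu_h264 -b:v 6M out_1080p60_h264.mp4 \
    -map 0:v -vf scale=1280:720 -c:v mpsoc_vcu_h264 -b:v 3M out_720p60_h264.mp4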

Get Started with EC2 VT1 Instances
You can now launch VT1 instances using the Amazon EC2 console, the AWS Command Line Interface (AWS CLI), or an AWS SDK with the Amazon EC2 API.
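
For example, here is a minimal sketch of launching a vt1.3xlarge instance with the AWS CLI; the AMI ID, key pair, and subnet shown here are placeholders that you would replace with your own values:

$ aws ec2 run-instances \
    --region us-east-1 \
    --image-id ami-0123456789abcdef0 \
    --instance-type vt1.3xlarge \
    --count 1 \
    --key-name my-key-pair \
    --subnet-id subnet-0123456789abcdef0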

We provide a number of sample video processing pipelines for the VT1 instances. There are tutorials and code examples in the GitHub repository that cover how to tune the codecs for image quality and transcoding latency, call the runtime for the U30 cards directly from your own applications, incorporate video filters such as titling and watermarking, and deploy with container orchestration frameworks.

Xilinx provides the "Xilinx Video Transcoding SDK," which includes an enhanced build of FFmpeg that can communicate with the hardware-accelerated transcode pipeline, as well as a C-based API for integrating the U30 accelerators into your own applications.

VT1 instances can be coupled with Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) to efficiently scale transcoding workloads and with Amazon CloudFront to deliver content globally. VT1 instances can also be launched with Amazon Machine Images (AMIs) and containers developed by AWS Marketplace partners, such as Nginx for supplemental video processing functionality.

You can complement VT1 instances with AWS Media Services for reliable packaging and origination of transcoded content. To learn more, you can use the Live Streaming on AWS solution to build a live video workflow with these AWS services.

Available Now
Amazon EC2 VT1 instances are now available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions. To learn more, visit the EC2 VT1 instance page. Please send feedback to the AWS forum for Amazon EC2 or through your usual AWS support contacts.

– Channy