Tag Archives: election

Give Your WordPress Blog a Voice With Our New Amazon Polly Plugin

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/give-your-wordpress-blog-a-voice-with-our-new-amazon-polly-plugin/

I first told you about Polly in late 2016 in my post Amazon Polly – Text to Speech in 47 Voices and 24 Languages. After that AWS re:Invent launch, we added support for Korean and five new voices, and made Polly available in all Regions in the aws partition. We also added whispering, speech marks, a timbre effect, and dynamic range compression.

New WordPress Plugin
Today we are launching a WordPress plugin that uses Polly to create high-quality audio versions of your blog posts. You can access the audio from within the post or in podcast form using a feature that we call Amazon Pollycast! Both options make your content more accessible and can help you to reach a wider audience. This plugin was a joint effort between the AWS team and our friends at AWS Advanced Technology Partner WP Engine.

As you will see, the plugin is easy to install and configure. You can use it with installations of WordPress that you run on your own infrastructure or on AWS. Either way, you have access to all of Polly’s voices along with a wide variety of configuration options. The generated audio (an MP3 file for each post) can be stored alongside your WordPress content, or in Amazon Simple Storage Service (S3), with optional support for content distribution via Amazon CloudFront.

Installing the Plugin
I do not have an existing WordPress-powered blog, so I begin by launching a Lightsail instance using the WordPress 4.8.1 blueprint:

Then I follow these directions to access my login credentials:

Credentials in hand, I log in to the WordPress Dashboard:

The plugin makes calls to AWS, and needs to have credentials in order to do so. I hop over to the IAM Console and create a new policy. The policy allows the plugin to access a carefully selected set of S3 and Polly functions (find the full policy in the README):

Then I create an IAM user (wp-polly-user). I enter the name and indicate that it will be used for Programmatic Access:

Then I attach the policy that I just created, and click on Review:

I review my settings (not shown) and then click on Create User. Then I copy the two values (Access Key ID and Secret Access Key) into a secure location. Possession of these keys allows the bearer to make calls to AWS so I take care not to leave them lying around.

Now I am ready to install the plugin! I go back to the WordPress Dashboard and click on Add New in the Plugins menu:

Then I click on Upload Plugin and locate the ZIP file that I downloaded from the WordPress Plugins site. After I find it I click on Install Now to proceed:

WordPress uploads and installs the plugin. Now I click on Activate Plugin to move ahead:

With the plugin installed, I click on Settings to set it up:

I enter my keys and click on Save Changes:

The General settings let me control the sample rate, voice, player position, the default setting for new posts, and the autoplay option. I can leave all of the settings as-is to get started:

The Cloud Storage settings let me store audio in S3 and to use CloudFront to distribute the audio:

The Amazon Pollycast settings give me control over the iTunes parameters that are included in the generated RSS feed:

Finally, the Bulk Update button lets me regenerate all of the audio files after I change any of the other settings:

With the plugin installed and configured, I can create a new post. As you can see, the plugin can be enabled and customized for each post:

I can see how much it will cost to convert to audio with a click:

When I click on Publish, the plugin breaks the text into multiple blocks on sentence boundaries, calls the Polly SynthesizeSpeech API for each block, and accumulates the resulting audio in a single MP3 file. The published blog post references the file using the <audio> tag. Here’s the post:

I can’t seem to use an <audio> tag in this post, but you can download and play the MP3 file yourself if you’d like.
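Under the hood, the synthesis step described above amounts to a loop over the Polly API. Here is a minimal Python sketch of the general approach, assuming boto3 with configured credentials; the voice, the example text blocks, and the file name are illustrative, not the plugin's actual values:

import boto3

# Create a Polly client (assumes AWS credentials are configured)
polly = boto3.client('polly')

def synthesize_blocks(blocks, voice_id='Joanna', out_path='post.mp3'):
    """Synthesize each text block and accumulate the audio in one MP3 file."""
    with open(out_path, 'wb') as mp3:
        for block in blocks:
            response = polly.synthesize_speech(
                OutputFormat='mp3',
                Text=block,
                VoiceId=voice_id)
            # AudioStream is a streaming body; append its bytes to the file
            mp3.write(response['AudioStream'].read())
    return out_path

synthesize_blocks(['First sentence of the post.', 'Second sentence.'])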

The Pollycast feature generates an RSS file with links to an MP3 file for each post:

Pricing
The plugin will make calls to Amazon Polly each time the post is saved or updated. Pricing is based on the number of characters in the speech requests, as described on the Polly Pricing page. Also, the AWS Free Tier lets you process up to 5 million characters per month at no charge, for a period of one year that starts when you make your first call to Polly.
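For a rough, illustrative sense of scale: at Polly's standard rate at the time of writing (about $4.00 per million characters), a 1,500-word post weighing in at roughly 9,000 characters would cost around 9,000 / 1,000,000 × $4.00 ≈ $0.04 to synthesize, and the free tier alone covers more than 500 posts of that size per month.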

Going Further
The plugin is available on GitHub in source code form and we are looking forward to your pull requests! Here are a couple of ideas to get you started:

Voice Per Author – Allow selection of a distinct Polly voice for each author.

Quoted Text – For blogs that make frequent use of embedded quotes, use a distinct voice for the quotes.

Translation – Use Amazon Translate to translate the texts into another language, and then use Polly to generate audio in that language.

Other Blogging Engines – Build a similar plugin for your favorite blogging engine.

SSML Support – Figure out an interesting way to use Polly’s SSML tags to add additional character to the audio.

Let me know what you come up with!

Jeff;

 

MagPi 66: Raspberry Pi media projects for your home

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-66-media-pi/

Hey folks, Rob from The MagPi here! Issue 66 of The MagPi is out right now, with the ultimate guide to powering your home media with Raspberry Pi. We think the Pi is the perfect replacement or upgrade for many media devices, so in this issue we show you how to build a range of Raspberry Pi media projects.

MagPi 66

Yes, it does say Pac-Man robotics on the cover. They’re very cool.

The article covers file servers for sharing media across your network, music streaming boxes that connect to Spotify, a home theatre PC to make your TV-watching more relaxing, a futuristic Pi-powered moving photoframe, and even an Alexa voice assistant to control all these devices!

More to see

That’s not all though — The MagPi 66 also shows you how to build a Raspberry Pi cluster computer, how to control LEGO robots using the GPIO, and why your Raspberry Pi isn’t affected by Spectre and Meltdown.

In addition, you’ll also find our usual selection of product reviews and excellent project showcases.

Get The MagPi 66

Issue 66 is available today from WHSmith, Tesco, Sainsbury’s, and Asda. If you live in the US, head over to your local Barnes & Noble or Micro Center in the next few days. You can also get the new issue online from our store, or digitally via our Android and iOS apps. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Want to support the Raspberry Pi Foundation and the magazine, and get some cool free stuff? If you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

I hope you enjoy this issue! See you next month.

The post MagPi 66: Raspberry Pi media projects for your home appeared first on Raspberry Pi.

Netflix, Amazon and Hollywood Sue Kodi-Powered Dragon Box Over Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/netflix-amazon-and-hollywood-sue-kodi-powered-dragon-box-over-piracy-180111/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, sellers who ship devices with unauthorized add-ons give it a bad reputation.

In recent months these boxes have become the prime target for copyright enforcers, including the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

After suing Tickbox last year, a group of key ACE members has now filed a similar lawsuit against Dragon Media Inc, which sells the popular Dragon Box. The complaint, filed at a California federal court, also lists the company’s owner Paul Christoforo and reseller Jeff Williams among the defendants.

According to ACE, these types of devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content. That also applies to Dragon Box, they inform the court.

“Defendants market and sell ‘Dragon Box,’ a computer hardware device that Defendants urge their customers to use as a tool for the mass infringement of the copyrighted motion pictures and television shows,” the complaint, picked up by HWR, reads.

The movie companies note that the defendants distribute and promote the Dragon Box as a pirate tool, using phrases such as “Watch your Favourites Anytime For FREE” and “stop paying for Netflix and Hulu.”

Dragon Box

When users follow the instructions Dragon provides they get free access to copyrighted movies, TV-shows and live content, ACE alleges. The complaint further points out that the device uses the open source Kodi player paired with pirate addons.

“The Dragon Media application provides Defendants’ customers with a customized configuration of the Kodi media player and a curated selection of the most popular addons for accessing infringing content,” the movie companies write.

“These addons are designed and maintained for the overarching purpose of scouring the Internet for illegal sources of copyrighted content and returning links to that content. When Dragon Box customers click those links, those customers receive unauthorized streams of popular motion pictures and television shows.”

One of the addons that are included with the download and installation of the Dragon software is Covenant.

This addon can be accessed through a preinstalled shortcut which is linked under the “Videos” menu. Users are then able to browse through a large library of curated content, including a separate category of movies that are still in theaters.

In theaters

According to a statement from Dragon owner Christoforo, business is going well. The company claims to have “over 250,000 customers in 50 states and 4 countries and growing” as well as “374 sellers” across the world.

With this lawsuit, however, the company’s future has suddenly become uncertain.

The movie companies ask the California District Court for an injunction to shut down the infringing service and impound all Dragon Box devices. In addition, they’re requesting statutory damages which can go up to several million dollars.

At the time of writing, the Dragon Box website is still online and the company has yet to comment on the allegations.

A copy of the complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Combine Transactional and Analytical Data Using Amazon Aurora and Amazon Redshift

Post Syndicated from Re Alvarez-Parmar original https://aws.amazon.com/blogs/big-data/combine-transactional-and-analytical-data-using-amazon-aurora-and-amazon-redshift/

A few months ago, we published a blog post about capturing data changes in an Amazon Aurora database and sending it to Amazon Athena and Amazon QuickSight for fast analysis and visualization. In this post, I want to demonstrate how easy it can be to take the data in Aurora and combine it with data in Amazon Redshift using Amazon Redshift Spectrum.

With Amazon Redshift, you can build petabyte-scale data warehouses that unify data from a variety of internal and external sources. Because Amazon Redshift is optimized for complex queries (often involving multiple joins) across large tables, it can handle large volumes of retail, inventory, and financial data without breaking a sweat.

In this post, we describe how to combine data in Aurora with data in Amazon Redshift. Here’s an overview of the solution:

  • Use AWS Lambda functions with Amazon Aurora to capture data changes in a table.
  • Save the data in an Amazon S3 bucket.
  • Query data using Amazon Redshift Spectrum.

We use the following services: Amazon Aurora, AWS Lambda, Amazon Kinesis Data Firehose, Amazon S3, Amazon Redshift (with Redshift Spectrum), and Amazon QuickSight.

Serverless architecture for capturing and analyzing Aurora data changes

Consider a scenario in which an e-commerce web application uses Amazon Aurora for a transactional database layer. The company has a sales table that captures every single sale, along with a few corresponding data items. This information is stored as immutable data in a table. Business users want to monitor the sales data and then analyze and visualize it.

In this example, you take the changes in data in an Aurora database table and save it in Amazon S3. After the data is captured in Amazon S3, you combine it with data in your existing Amazon Redshift cluster for analysis.

By the end of this post, you will understand how to capture data events in an Aurora table and push them out to other AWS services using AWS Lambda.

The following diagram shows the flow of data as it occurs in this tutorial:

The starting point in this architecture is a database insert operation in Amazon Aurora. When the insert statement is executed, a custom trigger calls a Lambda function and forwards the inserted data. Lambda writes the data that it received from Amazon Aurora to a Kinesis data delivery stream. Kinesis Data Firehose writes the data to an Amazon S3 bucket. Once the data is in an Amazon S3 bucket, it is queried in place using Amazon Redshift Spectrum.

Creating an Aurora database

First, create a database by following these steps in the Amazon RDS console (a scripted sketch of the same setup follows the list):

  1. Sign in to the AWS Management Console, and open the Amazon RDS console.
  2. Choose Launch a DB instance, and choose Next.
  3. For Engine, choose Amazon Aurora.
  4. Choose a DB instance class. This example uses a small instance class, since this is not a production database.
  5. In Multi-AZ deployment, choose No.
  6. Configure DB instance identifier, Master username, and Master password.
  7. Launch the DB instance.
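If you would rather script this setup, the console steps above map to two boto3 calls. This is a minimal, hypothetical sketch; the identifiers, credentials, and instance class are placeholders, and default networking is assumed:

import boto3

rds = boto3.client('rds')

# An Aurora deployment is a cluster plus at least one instance
rds.create_db_cluster(
    DBClusterIdentifier='aurora-cdc-demo',
    Engine='aurora',                       # Aurora MySQL
    MasterUsername='DBUSER',
    MasterUserPassword='DBPASSWORD')

rds.create_db_instance(
    DBInstanceIdentifier='aurora-cdc-demo-instance',
    DBClusterIdentifier='aurora-cdc-demo',
    DBInstanceClass='db.t2.small',         # small class; not for production
    Engine='aurora')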

After you create the database, use MySQL Workbench to connect to the database using the CNAME from the console. For information about connecting to an Aurora database, see Connecting to an Amazon Aurora DB Cluster.

The following screenshot shows the MySQL Workbench configuration:

Next, create a table in the database by running the following SQL statement:

Create Table
CREATE TABLE Sales (
InvoiceID int NOT NULL AUTO_INCREMENT,
ItemID int NOT NULL,
Category varchar(255),
Price double(10,2), 
Quantity int not NULL,
OrderDate timestamp,
DestinationState varchar(2),
ShippingType varchar(255),
Referral varchar(255),
PRIMARY KEY (InvoiceID)
)

You can now populate the table with some sample data. To generate sample data in your table, copy and run the following script. Ensure that the connection variables (AURORA_CNAME, DBUSER, DBPASSWORD, and DB) are replaced with appropriate values.

#!/usr/bin/python
import MySQLdb
import random
import datetime

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")

states = ("AL","AK","AZ","AR","CA","CO","CT","DE","FL","GA","HI","ID","IL","IN",
"IA","KS","KY","LA","ME","MD","MA","MI","MN","MS","MO","MT","NE","NV","NH","NJ",
"NM","NY","NC","ND","OH","OK","OR","PA","RI","SC","SD","TN","TX","UT","VT","VA",
"WA","WV","WI","WY")

shipping_types = ("Free", "3-Day", "2-Day")

product_categories = ("Garden", "Kitchen", "Office", "Household")
referrals = ("Other", "Friend/Colleague", "Repeat Customer", "Online Ad")

cursor = db.cursor()

for i in range(0,10):
    item_id = random.randint(1,100)
    state = states[random.randint(0,len(states)-1)]
    shipping_type = shipping_types[random.randint(0,len(shipping_types)-1)]
    product_category = product_categories[random.randint(0,len(product_categories)-1)]
    quantity = random.randint(1,4)
    referral = referrals[random.randint(0,len(referrals)-1)]
    price = random.randint(1,100)
    # Use days 1-28 so randomly chosen months like February are always valid
    order_date = datetime.date(2016,random.randint(1,12),random.randint(1,28)).isoformat()

    data_order = (item_id, product_category, price, quantity, order_date, state,
    shipping_type, referral)

    add_order = ("INSERT INTO Sales "
                   "(ItemID, Category, Price, Quantity, OrderDate, DestinationState, \
                   ShippingType, Referral) "
                   "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)")

    cursor.execute(add_order, data_order)

    db.commit()

cursor.close()
db.close() 

The following screenshot shows how the table appears with the sample data:

Sending data from Amazon Aurora to Amazon S3

There are two methods available to send data from Amazon Aurora to Amazon S3:

  • Using a Lambda function
  • Using SELECT INTO OUTFILE S3

To demonstrate the ease of setting up integration between multiple AWS services, we use a Lambda function to send data to Amazon S3 using Amazon Kinesis Data Firehose.

Alternatively, you can use a SELECT INTO OUTFILE S3 statement to query data from an Amazon Aurora DB cluster and save it directly in text files that are stored in an Amazon S3 bucket. However, with this method, there is a delay between the time that the database transaction occurs and the time that the data is exported to Amazon S3 because the default file size threshold is 6 GB.
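For reference, here is a hypothetical sketch of that alternative, issuing the statement from Python with the same MySQLdb connection style used elsewhere in this post. The bucket URI is a placeholder, and the cluster must already have an IAM role that allows it to write to the bucket:

import MySQLdb

db = MySQLdb.connect(host="AURORA_CNAME",
                     user="DBUSER",
                     passwd="DBPASSWORD",
                     db="DB")
cursor = db.cursor()

# Export the whole Sales table as comma-separated text files in S3
cursor.execute("""
    SELECT * FROM Sales
    INTO OUTFILE S3 's3-us-east-1://my-example-bucket/sales-export'
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
""")

cursor.close()
db.close()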

Creating a Kinesis data delivery stream

The next step is to create a Kinesis data delivery stream, since it’s a dependency of the Lambda function.

To create a delivery stream (a scripted sketch follows these steps):

  1. Open the Kinesis Data Firehose console
  2. Choose Create delivery stream.
  3. For Delivery stream name, type AuroraChangesToS3.
  4. For Source, choose Direct PUT.
  5. For Record transformation, choose Disabled.
  6. For Destination, choose Amazon S3.
  7. In the S3 bucket drop-down list, choose an existing bucket, or create a new one.
  8. Enter a prefix if needed, and choose Next.
  9. For Data compression, choose GZIP.
  10. In IAM role, choose either an existing role that has access to write to Amazon S3, or choose to generate one automatically. Choose Next.
  11. Review all the details on the screen, and choose Create delivery stream when you’re finished.
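If you prefer to create the stream programmatically, the console steps above roughly correspond to a single boto3 call. A minimal sketch, with placeholder bucket and role ARNs:

import boto3

firehose = boto3.client('firehose')

firehose.create_delivery_stream(
    DeliveryStreamName='AuroraChangesToS3',
    DeliveryStreamType='DirectPut',        # matches the Direct PUT source
    S3DestinationConfiguration={
        'RoleARN': 'arn:aws:iam::XXXXXXXXXXXX:role/firehose-s3-role',
        'BucketARN': 'arn:aws:s3:::my-example-bucket',
        'Prefix': 'CDC/',
        'CompressionFormat': 'GZIP'})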

 

Creating a Lambda function

Now you can create a Lambda function that is called every time there is a change that needs to be tracked in the database table. This Lambda function passes the data to the Kinesis data delivery stream that you created earlier.

To create the Lambda function:

  1. Open the AWS Lambda console.
  2. Ensure that you are in the AWS Region where your Amazon Aurora database is located.
  3. If you have no Lambda functions yet, choose Get started now. Otherwise, choose Create function.
  4. Choose Author from scratch.
  5. Give your function a name and select Python 3.6 for the Runtime.
  6. Choose an existing role or create a new one; the role needs permission to call firehose:PutRecord.
  7. Choose Next on the trigger selection screen.
  8. Paste the following code in the code window. Change the stream_name variable to the Kinesis data delivery stream that you created in the previous step.
  9. Choose File -> Save in the code editor and then choose Save.
import boto3
import json

firehose = boto3.client('firehose')
stream_name = 'AuroraChangesToS3'


def Kinesis_publish_message(event, context):
    
    firehose_data = (("%s,%s,%s,%s,%s,%s,%s,%s\n") %(event['ItemID'], 
    event['Category'], event['Price'], event['Quantity'],
    event['OrderDate'], event['DestinationState'], event['ShippingType'], 
    event['Referral']))
    
    firehose_data = {'Data': str(firehose_data)}
    print(firehose_data)
    
    firehose.put_record(DeliveryStreamName=stream_name,
    Record=firehose_data)

Note the Amazon Resource Name (ARN) of this Lambda function.
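Before wiring up the database trigger, you can verify the function end to end by invoking it with a test record. A sketch with arbitrary field values, using the function name from the stored procedure example later in this post:

import boto3
import json

lambda_client = boto3.client('lambda')

test_event = {
    'ItemID': 1, 'Category': 'Garden', 'Price': 25.00, 'Quantity': 2,
    'OrderDate': '2016-05-01 00:00:00', 'DestinationState': 'WA',
    'ShippingType': 'Free', 'Referral': 'Online Ad'}

lambda_client.invoke(
    FunctionName='CDCFromAuroraToKinesis',
    InvocationType='Event',               # asynchronous, like lambda_async
    Payload=json.dumps(test_event))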

Giving Aurora permissions to invoke a Lambda function

To give Amazon Aurora permissions to invoke a Lambda function, you must attach an IAM role with appropriate permissions to the cluster. For more information, see Invoking a Lambda Function from an Amazon Aurora DB Cluster.

Once you are finished, the Amazon Aurora database has access to invoke a Lambda function.
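At the API level, that setup boils down to two calls: associating the role with the cluster, and pointing the cluster's aws_default_lambda_role parameter at it. A rough sketch with placeholder names, assuming the cluster uses a custom parameter group (see the linked documentation for the authoritative steps):

import boto3

rds = boto3.client('rds')

# Associate the IAM role with the Aurora cluster
rds.add_role_to_db_cluster(
    DBClusterIdentifier='aurora-cdc-demo',
    RoleArn='arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambda')

# Reference the role in the cluster parameter group
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName='aurora-cdc-demo-params',
    Parameters=[{
        'ParameterName': 'aws_default_lambda_role',
        'ParameterValue': 'arn:aws:iam::XXXXXXXXXXXX:role/AuroraInvokeLambda',
        'ApplyMethod': 'immediate'}])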

Creating a stored procedure and a trigger in Amazon Aurora

Now, go back to MySQL Workbench, and run the following command to create a new stored procedure. When this stored procedure is called, it invokes the Lambda function you created. Change the ARN in the following code to your Lambda function’s ARN.

DROP PROCEDURE IF EXISTS CDC_TO_FIREHOSE;
DELIMITER ;;
CREATE PROCEDURE CDC_TO_FIREHOSE (IN ItemID VARCHAR(255), 
									IN Category varchar(255), 
									IN Price double(10,2),
                                    IN Quantity int(11),
                                    IN OrderDate timestamp,
                                    IN DestinationState varchar(2),
                                    IN ShippingType varchar(255),
                                    IN Referral  varchar(255)) LANGUAGE SQL 
BEGIN
  CALL mysql.lambda_async('arn:aws:lambda:us-east-1:XXXXXXXXXXXXX:function:CDCFromAuroraToKinesis', 
     CONCAT('{ "ItemID" : "', ItemID, 
            '", "Category" : "', Category,
            '", "Price" : "', Price,
            '", "Quantity" : "', Quantity, 
            '", "OrderDate" : "', OrderDate, 
            '", "DestinationState" : "', DestinationState, 
            '", "ShippingType" : "', ShippingType, 
            '", "Referral" : "', Referral, '"}')
     );
END
;;
DELIMITER ;

Create a trigger TR_Sales_CDC on the Sales table. When a new record is inserted, this trigger calls the CDC_TO_FIREHOSE stored procedure.

DROP TRIGGER IF EXISTS TR_Sales_CDC;
 
DELIMITER ;;
CREATE TRIGGER TR_Sales_CDC
  AFTER INSERT ON Sales
  FOR EACH ROW
BEGIN
  SELECT  NEW.ItemID , NEW.Category, New.Price, New.Quantity, New.OrderDate
  , New.DestinationState, New.ShippingType, New.Referral
  INTO @ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral;
  CALL  CDC_TO_FIREHOSE(@ItemID , @Category, @Price, @Quantity, @OrderDate
  , @DestinationState, @ShippingType, @Referral);
END
;;
DELIMITER ;

If a new row is inserted in the Sales table, the Lambda function that is mentioned in the stored procedure is invoked.

Verify that data is being sent from the Lambda function through Kinesis Data Firehose to Amazon S3 successfully. You might have to insert a few records, depending on the size of your data, before new records appear in Amazon S3. This is due to Kinesis Data Firehose buffering. To learn more about Kinesis Data Firehose buffering, see the “Amazon S3” section in Amazon Kinesis Data Firehose Data Delivery.

Every time a new record is inserted in the sales table, a stored procedure is called, and it updates data in Amazon S3.

Querying data in Amazon Redshift

In this section, you use the data you produced from Amazon Aurora and consume it as-is in Amazon Redshift. In order to allow you to process your data as-is, where it is, while taking advantage of the power and flexibility of Amazon Redshift, you use Amazon Redshift Spectrum. You can use Redshift Spectrum to run complex queries on data stored in Amazon S3, with no need for loading or other data prep.

Just create a data source and issue your queries to your Amazon Redshift cluster as usual. Behind the scenes, Redshift Spectrum scales to thousands of instances on a per-query basis, ensuring that you get fast, consistent performance even as your dataset grows beyond an exabyte! Being able to query data that is stored in Amazon S3 means that you can scale your compute and your storage independently. You have the full power of the Amazon Redshift query model and all the reporting and business intelligence tools at your disposal. Your queries can reference any combination of data stored in Amazon Redshift tables and in Amazon S3.

Redshift Spectrum supports open, common data formats, including CSV/TSV, Apache Parquet, SequenceFile, and RCFile. Files can be compressed using gzip or Snappy, with other formats and compression methods in the works.

First, create an Amazon Redshift cluster. Follow the steps in Launch a Sample Amazon Redshift Cluster.

Next, create an IAM role that has access to Amazon S3 and Athena. By default, Amazon Redshift Spectrum uses the Amazon Athena data catalog. Your cluster needs authorization to access your external data catalog in AWS Glue or Athena and your data files in Amazon S3.

In the demo setup, I attached AmazonS3FullAccess and AmazonAthenaFullAccess. In a production environment, the IAM roles should follow the standard security of granting least privilege. For more information, see IAM Policies for Amazon Redshift Spectrum.

Attach the newly created role to the Amazon Redshift cluster. For more information, see Associate the IAM Role with Your Cluster.
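The association can also be done with a single boto3 call; the cluster identifier here is a placeholder:

import boto3

redshift = boto3.client('redshift')

redshift.modify_cluster_iam_roles(
    ClusterIdentifier='redshift-spectrum-demo',
    AddIamRoles=['arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'])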

Next, connect to the Amazon Redshift cluster, and create an external schema and database:

create external schema if not exists spectrum_schema
from data catalog 
database 'spectrum_db' 
region 'us-east-1'
IAM_ROLE 'arn:aws:iam::XXXXXXXXXXXX:role/RedshiftSpectrumRole'
create external database if not exists;

Don’t forget to replace the IAM role in the statement.

Then create an external table within the database:

 CREATE EXTERNAL TABLE IF NOT EXISTS spectrum_schema.ecommerce_sales(
  ItemID int,
  Category varchar,
  Price DOUBLE PRECISION,
  Quantity int,
  OrderDate TIMESTAMP,
  DestinationState varchar,
  ShippingType varchar,
  Referral varchar)
ROW FORMAT DELIMITED
      FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
LOCATION 's3://{BUCKET_NAME}/CDC/'

Query the table, and it should contain data. This is a fact table.

select top 10 * from spectrum_schema.ecommerce_sales

 

Next, create a dimension table. For this example, we create a date/time dimension table. Create the table:

CREATE TABLE date_dimension (
  d_datekey           integer       not null sortkey,
  d_dayofmonth        integer       not null,
  d_monthnum          integer       not null,
  d_dayofweek                varchar(10)   not null,
  d_prettydate        date       not null,
  d_quarter           integer       not null,
  d_half              integer       not null,
  d_year              integer       not null,
  d_season            varchar(10)   not null,
  d_fiscalyear        integer       not null)
diststyle all;

Populate the table with data:

copy date_dimension from 's3://reparmar-lab/2016dates' 
iam_role 'arn:aws:iam::XXXXXXXXXXXX:role/redshiftspectrum'
DELIMITER ','
dateformat 'auto';
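The copy command above points at the author's bucket. If you need to produce an equivalent dates file yourself, here is a sketch that writes a CSV matching the table's column order; the season and fiscal-year rules are assumptions, so adjust them to your needs before uploading the file to your own bucket:

#!/usr/bin/python
import csv
import datetime

# Map month number to a season name (an assumption, not a standard)
seasons = {12: 'Winter', 1: 'Winter', 2: 'Winter',
           3: 'Spring', 4: 'Spring', 5: 'Spring',
           6: 'Summer', 7: 'Summer', 8: 'Summer',
           9: 'Fall', 10: 'Fall', 11: 'Fall'}

with open('2016dates', 'w') as f:
    writer = csv.writer(f)
    day = datetime.date(2016, 1, 1)
    while day.year == 2016:
        writer.writerow([
            int(day.strftime('%Y%m%d')),   # d_datekey
            day.day,                       # d_dayofmonth
            day.month,                     # d_monthnum
            day.strftime('%A'),            # d_dayofweek
            day.isoformat(),               # d_prettydate
            (day.month - 1) // 3 + 1,      # d_quarter
            1 if day.month <= 6 else 2,    # d_half
            day.year,                      # d_year
            seasons[day.month],            # d_season
            day.year])                     # d_fiscalyear (assumed = calendar year)
        day += datetime.timedelta(days=1)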

The date dimension table should look like the following:

Querying data in local and external tables using Amazon Redshift

Now that you have the fact and dimension tables populated with data, you can combine the two and run analysis. For example, if you want to query the total sales amount by season, you can run the following:

select sum(quantity*price) as total_sales, date_dimension.d_season
from spectrum_schema.ecommerce_sales 
join date_dimension on spectrum_schema.ecommerce_sales.orderdate = date_dimension.d_prettydate 
group by date_dimension.d_season

You get the following results:

Similarly, you can replace d_season with d_dayofweek to get sales figures by weekday:

With Amazon Redshift Spectrum, you pay only for the queries you run against the data that you actually scan. We encourage you to use file partitioning, columnar data formats, and data compression to significantly minimize the amount of data scanned in Amazon S3. This is important for data warehousing because it dramatically improves query performance and reduces cost.

Partitioning your data in Amazon S3 by date, time, or any other custom keys enables Amazon Redshift Spectrum to dynamically prune nonrelevant partitions to minimize the amount of data processed. If you store data in a columnar format, such as Parquet, Amazon Redshift Spectrum scans only the columns needed by your query, rather than processing entire rows. Similarly, if you compress your data using one of the supported compression algorithms in Amazon Redshift Spectrum, less data is scanned.

Analyzing and visualizing Amazon Redshift data in Amazon QuickSight

Modify the Amazon Redshift security group to allow an Amazon QuickSight connection. For more information, see Authorizing Connections from Amazon QuickSight to Amazon Redshift Clusters.

After modifying the Amazon Redshift security group, go to Amazon QuickSight. Create a new analysis, and choose Amazon Redshift as the data source.

Enter the database connection details, validate the connection, and create the data source.

Choose the schema to be analyzed. In this case, choose spectrum_schema, and then choose the ecommerce_sales table.

Next, we add a custom field for Total Sales = Price*Quantity. In the drop-down list for the ecommerce_sales table, choose Edit analysis data sets.

On the next screen, choose Edit.

In the data prep screen, choose New Field. Add a new calculated field Total Sales $, which is the product of the Price*Quantity fields. Then choose Create. Save and visualize it.

Next, to visualize total sales figures by month, create a graph with Total Sales on the x-axis and Order Date formatted as month on the y-axis.

After you’ve finished, you can use Amazon QuickSight to add different columns from your Amazon Redshift tables and perform different types of visualizations. You can build operational dashboards that continuously monitor your transactional and analytical data. You can publish these dashboards and share them with others.

Final notes

Amazon QuickSight can also read data in Amazon S3 directly. However, with the method demonstrated in this post, you have the option to manipulate, filter, and combine data from multiple sources or Amazon Redshift tables before visualizing it in Amazon QuickSight.

In this example, we dealt with data being inserted, but triggers can be activated in response to INSERT, UPDATE, or DELETE statements.

Keep the following in mind:

  • Be careful when invoking a Lambda function from triggers on tables that experience high write traffic. This would result in a large number of calls to your Lambda function. Although calls to the lambda_async procedure are asynchronous, triggers are synchronous.
  • A statement that results in a large number of trigger activations does not wait for the call to the AWS Lambda function to complete. But it does wait for the triggers to complete before returning control to the client.
  • Similarly, you must account for Amazon Kinesis Data Firehose limits. By default, Kinesis Data Firehose is limited to a maximum of 5,000 records/second. For more information, see Monitoring Amazon Kinesis Data Firehose.

In certain cases, it may be optimal to use AWS Database Migration Service (AWS DMS) to capture data changes in Aurora and use Amazon S3 as a target. For example, AWS DMS might be a good option if you don’t need to transform data from Amazon Aurora. The method used in this post gives you the flexibility to transform data from Aurora using Lambda before sending it to Amazon S3. Additionally, the architecture has the benefits of being serverless, whereas AWS DMS requires an Amazon EC2 instance for replication.

For design considerations while using Redshift Spectrum, see Using Amazon Redshift Spectrum to Query External Data.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Capturing Data Changes in Amazon Aurora Using AWS Lambda and 10 Best Practices for Amazon Redshift Spectrum.


About the Authors

Re Alvarez-Parmar is a solutions architect for Amazon Web Services. He helps enterprises achieve success through technical guidance and thought leadership. In his spare time, he enjoys spending time with his two kids and exploring outdoors.


timeShift(GrafanaBuzz, 1w) Issue 28

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/01/05/timeshiftgrafanabuzz-1w-issue-28/

Happy new year! Grafana Labs is getting back in the swing of things after taking some time off to celebrate 2017, and spending time with family and friends. We’re diligently working on the new Grafana v5.0 release (planning v5.0 beta release by end of January), which includes a ton of new features, a new layout engine, and a polished UI. We’d love to hear your feedback!


Latest Stable Release

Grafana 4.6.3 is now available. Latest bugfixes include:

  • Gzip: Fixes a bug with Gravatar images when gzip was enabled #5952
  • Alert list: Now shows alert state changes even after adding manual annotations on dashboard #9951
  • Alerting: Fixes a bug where rules evaluated as firing when all conditions were false and using the OR operator #9318
  • Cloudwatch: CloudWatch no longer displays metrics’ default alias #10151, thx @mtanda

Download Grafana 4.6.3 Now


From the Blogosphere

Why Observability Matters – Now and in the Future: Our own Carl Bergquist teamed up with Neil Gehani, Director of Product at Weaveworks to discuss best practices on how to get started with monitoring your application and infrastructure. This video focuses on modern containerized applications instrumented to use Prometheus to generate metrics and Grafana to visualize them.

How to Install and Secure Grafana on Ubuntu 16.04: In this tutorial, you’ll learn how to install and secure Grafana with an SSL certificate and an Nginx reverse proxy, then you’ll modify Grafana’s default settings for even tighter security.

Monitoring Informix with Grafana: Ben walks us through how to use Grafana to visualize data from IBM Informix and offers a practical demonstration using Docker containers. He also talks about his philosophy of sharing dashboards across teams, important metrics to collect, and how he would like to improve his monitoring stack.

Monitor your hosts with Glances + InfluxDB + Grafana: Glances is a cross-platform system monitoring tool written in Python. This article takes you step by step through the pieces of the stack, installation, and configuration, and provides a sample dashboard to get you up and running.


GrafanaCon Tickets are Going Fast!

Lock in your seat for GrafanaCon EU while there are still tickets available! Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

We have some exciting talks lined up from Google, CERN, Bloomberg, eBay, Red Hat, Tinder, Fastly, Automattic, Prometheus, InfluxData, Percona and more! You can see the full list of speakers below, but be sure to get your ticket now.

Get Your Ticket Now

GrafanaCon EU will feature talks from:

“Google Bigtable”
Misha Brukman
PROJECT MANAGER,
GOOGLE CLOUD
GOOGLE

“Monitoring at Bloomberg”
Stig Sorensen
HEAD OF TELEMETRY
BLOOMBERG

“Monitoring at Bloomberg”
Sean Hanson
SOFTWARE DEVELOPER
BLOOMBERG

“Monitoring Tinder’s Billions of Swipes with Grafana”
Utkarsh Bhatnagar
SR. SOFTWARE ENGINEER
TINDER

“Grafana at CERN”
Borja Garrido
PROJECT ASSOCIATE
CERN

“Monitoring the Huge Scale at Automattic”
Abhishek Gahlot
SOFTWARE ENGINEER
Automattic

“Real-time Engagement During the 2016 US Presidential Election”
Anna MacLachlan
CONTENT MARKETING MANAGER
Fastly

“Real-time Engagement During the 2016 US Presidential Election”
Gerlando Piro
FRONT END DEVELOPER
Fastly

“Grafana v5 and the Future”
Torkel Odegaard
CREATOR | PROJECT LEAD
GRAFANA

“Prometheus for Monitoring Metrics”
Brian Brazil
FOUNDER
ROBUST PERCEPTION

“What We Learned Integrating Grafana with Prometheus”
Peter Zaitsev
CO-FOUNDER | CEO
PERCONA

“The Biz of Grafana”
Raj Dutt
CO-FOUNDER | CEO
GRAFANA LABS

“What’s New In Graphite”
Dan Cech
DIR, PLATFORM SERVICES
GRAFANA LABS

“The Design of IFQL, the New Influx Functional Query Language”
Paul Dix
CO-FOUNDER | CTO
INFLUXDATA

“Writing Grafana Dashboards with Jsonnet”
Julien Pivotto
OPEN SOURCE CONSULTANT
INUITS

“Monitoring AI Platform at eBay”
Deepak Vasthimal
MTS-2 SOFTWARE ENGINEER
EBAY

“Running a Power Plant with Grafana”
Ryan McKinley
DEVELOPER
NATEL ENERGY

“Performance Metrics and User Experience: A “Tinder” Experience”
Susanne Greiner
DATA SCIENTIST
WÜRTH PHOENIX S.R.L.

“Analyzing Performance of OpenStack with Grafana Dashboards”
Alex Krzos
SENIOR SOFTWARE ENGINEER
RED HAT INC.

“Storage Monitoring at Shell Upstream”
Arie Jan Kraai
STORAGE ENGINEER
SHELL TECHNICAL LANDSCAPE SERVICE

“The RED Method: How To Instrument Your Services”
Tom Wilkie
FOUNDER
KAUSAL

“Grafana Usage in the Quality Assurance Process”
Andrejs Kalnacs
LEAD SOFTWARE DEVELOPER IN TEST
EVOLUTION GAMING

“Using Prometheus and Grafana for Monitoring my Power Usage”
Erwin de Keijzer
LINUX ENGINEER
SNOW BV

“Weather, Power & Market Forecasts with Grafana”
Max von Roden
DATA SCIENTIST
ENERGY WEATHER

“Weather, Power & Market Forecasts with Grafana”
Steffen Knott
HEAD OF IT
ENERGY WEATHER

“Inherited Technical Debt – A Tale of Overcoming Enterprise Inertia”
Jordan J. Hamel
HEAD OF MONITORING PLATFORMS
AMGEN

“Grafanalib: Dashboards as Code”
Jonathan Lange
VP OF ENGINEERING
WEAVEWORKS

“The Journey of Shifting the MQTT Broker HiveMQ to Kubernetes”
Arnold Bechtoldt
SENIOR SYSTEMS ENGINEER
INOVEX

“Graphs Tell Stories”
Blerim Sheqa
SENIOR DEVELOPER
NETWAYS

“Graphite@Scale or How to Store Millions of Metrics per Second”
Vladimir Smirnov
SYSTEM ADMINISTRATOR
Booking.com


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. There is no need to register; all are welcome.

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Carl Bergquist – Quickie: Monitoring? Not OPS Problem

Why should we monitor our system? Why can’t we just rely on the operations team anymore? They used to be able to do that. What’s currently changing? Presentation content:

  • Why do we monitor our system
  • How did it use to work?
  • What’s changing
  • Why do we need to shift focus
  • Everyone should be on call
  • Resilience is the goal (the best way of having someone care about quality is to make them responsible)

Register Now

Jfokus | Stockholm, Sweden – Feb 5-7, 2018:
Leonard Gram – Presentation: DevOps Deconstructed

What’s a Site Reliability Engineer and how’s that role different from the DevOps engineer my boss wants to hire? I really don’t want to be on call, should I? Is Docker the right place for my code or am I better off just going straight to Serverless? And why should I care about any of it? I’ll try to answer some of these questions while looking at what DevOps really is about and how commoditisation of servers through “the cloud” ties into it all. This session will be an opinionated piece from a developer who’s been on-call for the past 6 years and would like to convince you to do the same, at least once.

Register Now

Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Awesome! Let us know if you have any questions – we’re happy to help out. We also have a bunch of screencasts to help you get going.


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

That’s a wrap! Let us know what you think about timeShift. Submit a comment on this article below, or post something at our community forum. See you next year!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

AWS Architecture Monthly for Kindle

Post Syndicated from Jamey Tisdale original https://aws.amazon.com/blogs/architecture/aws-architecture-monthly-for-kindle/

We recently launched AWS Architecture Monthly, a new subscription service on Kindle that will push a selection of the best content around cloud architecture from AWS, with a few pointers to other content you might also enjoy.

From building a simple website to crafting an AI-based chat bot, the choices of technologies and the best practices in how to apply them are constantly evolving. Our goal is to supply you each month with a broad selection of the best new tech content from AWS — from deep-dive tutorials to industry-trend articles.

With your free subscription, you can look forward to fresh content delivered directly to your Kindle device or Kindle app, including:
– Technical whitepapers
– Reference architectures
– New solutions and implementation guides
– Training and certification opportunities
– Industry trends

The January issue is now live. This month includes:
– AWS Architecture Blog: Glenn Gore’s Take on re:Invent 2017 (Chief Architect for AWS)
– AWS Reference Architectures: Java Microservices Deployed on EC2 Container Service; Node.js Microservices Deployed on EC2 Container Service
– AWS Training & Certification: AWS Certified Solutions Architect – Associate
– Sample Code: aws-serverless-express
– Technical Whitepaper: Serverless Architectures with AWS Lambda – Overview and Best Practices

At this time, Architecture Monthly annual subscriptions are only available in France (new), the US, the UK, and Germany. As more countries become available, we’ll update you here on the blog. For Amazon.com countries not listed above, we are offering single-issue downloads — also accessible from our landing page. The content is the same as in the subscription but requires individual-issue downloads.

FAQ
Do I have to submit my credit card information for a free subscription?
While you do have to submit your card information at this time (as you would for a free book in the Kindle store), it won’t be charged. This will remain a free, annual subscription and includes all 10 issues for the year.

Why isn’t the subscription available everywhere?
As new countries get added to Kindle Newsstand, we’ll ensure we add them for Architecture Monthly. This month we added France but anticipate it will take some time for the new service to move into additional markets.

What countries are included in the Amazon.com list where the issues can be downloaded?
Andorra, Australia, Austria, Belgium, Brazil, Canada, Gibraltar, Guernsey, India, Ireland, Isle of Man, Japan, Jersey, Liechtenstein, Luxembourg, Mexico, Monaco, Netherlands, New Zealand, San Marino, Spain, Switzerland, Vatican City

Serverless @ re:Invent 2017

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/serverless-reinvent-2017/

At re:Invent 2014, we announced AWS Lambda, what is now the center of the serverless platform at AWS, and helped ignite the trend of companies building serverless applications.

This year, at re:Invent 2017, the topic of serverless was everywhere. We were incredibly excited to see the energy from everyone attending 7 workshops, 15 chalk talks, 20 skills sessions and 27 breakout sessions. Many of these sessions were repeated due to high demand, so we are happy to summarize and provide links to the recordings and slides of these sessions.

Over the course of the week leading up to and then the week of re:Invent, we also had over 15 new features and capabilities across a number of serverless services, including AWS Lambda, Amazon API Gateway, AWS Lambda@Edge, AWS SAM, and the newly announced AWS Serverless Application Repository!

AWS Lambda

Amazon API Gateway

  • Amazon API Gateway Supports Endpoint Integrations with Private VPCs – You can now provide access to HTTP(S) resources within your VPC without exposing them directly to the public internet. This includes resources available over a VPN or Direct Connect connection!
  • Amazon API Gateway Supports Canary Release Deployments – You can now use canary release deployments to gradually roll out new APIs. This helps you more safely roll out API changes and limit the blast radius of new deployments.
  • Amazon API Gateway Supports Access Logging – The access logging feature lets you generate access logs in different formats such as CLF (Common Log Format), JSON, XML, and CSV. The access logs can be fed into your existing analytics or log processing tools so you can perform more in-depth analysis or take action in response to the log data.
  • Amazon API Gateway Customize Integration Timeouts – You can now set a custom timeout for your API calls as low as 50ms and as high as 29 seconds (the default is 30 seconds).
  • Amazon API Gateway Supports Generating SDK in Ruby – This is in addition to support for SDKs in Java, JavaScript, Android and iOS (Swift and Objective-C). The SDKs that Amazon API Gateway generates save you development time and come with a number of prebuilt capabilities, such as working with API keys, exponential backoff, and exception handling.

AWS Serverless Application Repository

Serverless Application Repository is a new service (currently in preview) that aids in the publication, discovery, and deployment of serverless applications. With it you’ll be able to find shared serverless applications that you can launch in your account, while also sharing ones that you’ve created for others to do the same.

AWS Lambda@Edge

Lambda@Edge now supports content-based dynamic origin selection, network calls from viewer events, and advanced response generation. This combination of capabilities greatly increases the use cases for Lambda@Edge, such as allowing you to send requests to different origins based on request information, showing selective content based on authentication, and dynamically watermarking images for each viewer.

AWS SAM

Twitch Launchpad live announcements

Other service announcements

Here are some of the other highlights that you might have missed. We think these could help you make great applications:

AWS re:Invent 2017 sessions

Coming up with the right mix of talks for an event like this can be quite a challenge. The Product, Marketing, and Developer Advocacy teams for Serverless at AWS spent weeks reading through dozens of talk ideas to boil it down to the final list.

From feedback at other AWS events and webinars, we knew that customers were looking for talks that focused on concrete examples of solving problems with serverless, guidance on common tasks such as deployment, CI/CD, monitoring, and troubleshooting, and customer and partner examples of solving real-world problems. To that end, we tried to settle on a good mix based on attendee experience and provide a track full of rich content.

Below are the recordings and slides of breakout sessions from re:Invent 2017. We’ve organized them for those getting started, those who are already beginning to build serverless applications, and the experts out there already running them at scale. Some of the videos and slides haven’t been posted yet, and so we will update this list as they become available.

Find the entire Serverless Track playlist on YouTube.

Talks for people new to Serverless

Advanced topics

Expert mode

Talks for specific use cases

Talks from AWS customers & partners

Looking to get hands-on with Serverless?

At re:Invent, we delivered instructor-led skills sessions to help attendees new to serverless applications get started quickly. The content from these sessions is already online and you can do the hands-on labs yourself!
Build a Serverless web application

Still looking for more?

We also recently completely overhauled the main Serverless landing page for AWS. This includes a new Resources page containing case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!

MagPi 65: Newbies Guide, and something brand new!

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-65/

Hey folks, Rob from The MagPi here! We know many people might be getting their very first Raspberry Pi this Christmas, and excitedly wondering “what do I do with it?” While we can’t tell you exactly what to do with your Pi, we can show you how to immerse yourself in the world of Raspberry Pi and be inspired by our incredible community, and that’s the topic of The MagPi 65, out tomorrow (we’re a day early because we’re simply TOO excited about the special announcement below!).

The one, the only…issue 65!

Raspberry Pi for Newbies

Raspberry Pi for Newbies covers some of the very basics you should know about the world of Raspberry Pi. After a quick set-up tutorial, we introduce you to the Raspberry Pi’s free online resources, including Scratch and Python projects from Code Club, before guiding you through the wider Raspberry Pi and maker community.

Raspberry Pi MagPi 65 Newbie Guide

Pages and pages of useful advice and starter projects

The online community is an amazing place to learn about all the incredible things you can do with the Raspberry Pi. We’ve included some information on good places to look for tutorials, advice and ideas.

And that’s not all

Want to do more after learning about the world of Pi? The rest of the issue has our usual selection of expert guides to help you build some amazing projects: you can make a Christmas memory game, build a tower of bells to ring in the New Year, and even take your first steps towards making a game using C++.

Raspberry Pi MagPi 65

Midimutant, the synthesizer “that boinks endless strange sounds”

All this along with inspiring projects, definitive reviews, and tales from around the community.

Raspberry Pi Annual

Issue 65 isn’t the only new release to look out for. We’re excited to bring you the first ever Raspberry Pi Annual, and it’s free for MagPi subscribers – in fact, subscribers should be receiving it the same day as their issue 65 delivery!

If you’re not yet a subscriber of The MagPi, don’t panic: you can still bag yourself a copy of the Raspberry Pi Annual by signing up to a 12-month subscription of The MagPi before 24 January. You’ll also receive the usual subscriber gift of a free Raspberry Pi Zero W (with case and cable).  Click here to subscribe to The MagPi – The Official Raspberry Pi magazine.

Ooooooo…aaaaaahhhhh…

The Raspberry Pi Annual is aimed at young folk wanting to learn to code, with a variety of awesome step-by-step Scratch tutorials, games, puzzles, and comics, including a robotic Babbage.

Get your copy

You can get The MagPi 65 and the Raspberry Pi Annual 2018 from our online store, and the magazine can be found in the wild at WHSmith, Tesco, Sainsbury’s, and Asda. You’ll be able to get it in the US at Barnes & Noble and Micro Center in a few days’ time. The MagPi 65 is also available digitally on our Android and iOS apps. Finally, you can also download a free PDF of The MagPi 65 and The Raspberry Pi Annual 2018.

We hope you have a merry Christmas! We’re off until the New Year. Bye!

The post MagPi 65: Newbies Guide, and something brand new! appeared first on Raspberry Pi.

Introducing the New GDPR Center and “Navigating GDPR Compliance on AWS” Whitepaper

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/introducing-the-new-gdpr-center-and-navigating-gdpr-compliance-on-aws-whitepaper/

European Union flag

At AWS re:Invent 2017, the AWS Compliance team participated in excellent engagements with AWS customers about the General Data Protection Regulation (GDPR), including discussions that generated helpful input. Today, I am announcing resulting enhancements to our recently launched GDPR Center and the release of a new whitepaper, Navigating GDPR Compliance on AWS. The resources available on the GDPR Center are designed to give you GDPR basics, and provide some ideas as you work out the details of the regulation and find a path to compliance.

In this post, I focus on two of the GDPR’s requirements, in terms of specific articles of the regulation, and explain some of the AWS services and other resources that can help you meet them.

Background about the GDPR

The GDPR is a European privacy law that will become enforceable on May 25, 2018, and is intended to harmonize data protection laws throughout the European Union (EU) by applying a single data protection law that is binding throughout each EU member state. The GDPR not only applies to organizations located within the EU, but also to organizations located outside the EU if they offer goods or services to, or monitor the behavior of, EU data subjects. All AWS services will comply with the GDPR in advance of the May 25, 2018, enforcement date.

We are already seeing customers move personal data to AWS to help solve challenges in complying with the EU’s GDPR because of AWS’s advanced toolset for identifying, securing, and managing all types of data, including personal data. Steve Schmidt, the AWS CISO, has already written about the internal and external work we have been undertaking to help you use AWS services to meet your own GDPR compliance goals.

Article 25 – Data Protection by Design and by Default (Privacy by Design)

Privacy by Design is the integration of data privacy and compliance into the systems development process, enabling applications, systems, and accounts, among other things, to be secure by default. To secure your AWS account, we offer a script to evaluate your AWS account against the full Center for Internet Security (CIS) Amazon Web Services Foundations Benchmark 1.1. You can access this public benchmark on GitHub. Additionally, AWS Trusted Advisor is an online resource to help you improve security by optimizing your AWS environment. Among other things, Trusted Advisor lists a number of security-related controls you should be monitoring. AWS also offers AWS CloudTrail, a logging tool to track usage and API activity. Another example of tooling that enables data protection is Amazon Inspector, which includes a knowledge base of hundreds of rules (regularly updated by AWS security researchers) mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled or vulnerable software versions installed. These and other tools enable you to design an environment that protects customer data by design.

An accurate inventory of all the GDPR-impacting data is important but sometimes difficult to assess. AWS has some advanced tooling, such as Amazon Macie, to help you determine where customer data is present in your AWS resources. Macie uses advanced machine learning to automatically discover and classify data so that you can protect data, per Article 25.

Article 32 – Security of Processing

You can use many AWS services and features to secure the processing of data regulated by the GDPR. Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways. With Amazon VPC, you can make the Amazon Cloud a seamless extension of your existing on-premises resources.

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses hardware security modules (HSMs) to help protect your keys. Managing keys with AWS KMS allows you to choose to encrypt data either on the server side or the client side. AWS KMS is integrated with several other AWS services to help you protect the data you store with these services. AWS KMS is also integrated with CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs. You can also use the AWS Encryption SDK to correctly generate and use encryption keys, as well as protect keys after they have been used.
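As a small illustration of the server-side option, here is a minimal boto3 sketch; the key alias is a placeholder you would create beforehand, and KMS’s direct Encrypt call suits small payloads (for bulk data you would use data keys or the Encryption SDK):

import boto3

kms = boto3.client('kms')

# Encrypt a small piece of personal data under a customer master key
ciphertext = kms.encrypt(
    KeyId='alias/gdpr-demo-key',
    Plaintext=b'jane.doe@example.com')['CiphertextBlob']

# Decrypt it again; KMS identifies the key from the ciphertext itself
plaintext = kms.decrypt(CiphertextBlob=ciphertext)['Plaintext']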

We also recently announced new encryption and security features for Amazon S3, including default encryption and a detailed inventory report. Services of this type as well as additional GDPR enablers will be published regularly on our GDPR Center.
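For example, turning on S3 default encryption for a bucket is a single call. This sketch uses a placeholder bucket name and S3-managed keys; swap in aws:kms and a key ID to use AWS KMS instead:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_encryption(
    Bucket='my-example-bucket',
    ServerSideEncryptionConfiguration={
        'Rules': [{'ApplyServerSideEncryptionByDefault': {
            'SSEAlgorithm': 'AES256'}}]})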

Other resources

As you prepare for GDPR, you may want to visit our AWS Customer Compliance Center or Tools for Amazon Web Services to learn about options for building anything from small scripts that delete data to a full orchestration framework that uses AWS Code services.

-Chad

[$] Toward better CPU load estimation

Post Syndicated from corbet original https://lwn.net/Articles/741171/rss

“Load tracking” refers to the kernel’s attempts to track how much load each
running process will put on the system’s CPUs. Good load tracking can
yield reasonable predictions about the near-future demands on the system;
those, in turn, can be used to optimize the placement of processes and the
selection of CPU-frequency parameters. Obviously, poor load tracking will
lead to less-than-optimal results. While achieving perfection in load tracking
seems unlikely for now, it appears that it is possible to do better than
current kernels do. The utilization estimation
patch set
from Patrick Bellasi is the latest in a series of efforts to
make the scheduler’s load tracking work well with a wider variety of
workloads.
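
To illustrate the general idea, here is a toy model (not the kernel's actual PELT code) of the decaying average at the heart of load tracking: a task's history fades geometrically, so recent behavior dominates the estimate.

    # Toy illustration of PELT-style load tracking; the decay factor is chosen
    # to roughly echo the kernel's (where y^32 = 0.5), but this is not kernel code.
    DECAY = 0.9785

    def update_load(avg, ran_this_period):
        """Fold one period (1.0 = task ran, 0.0 = idle) into the running average."""
        return avg * DECAY + (1.0 - DECAY) * ran_this_period

    avg = 0.0
    for busy in [1, 1, 1, 0, 0, 1]:  # a short, made-up run/idle history
        avg = update_load(avg, busy)
    print(f"estimated utilization: {avg:.3f}")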

Resilient TVAddons Plans to Ditch Proactive ‘Piracy’ Screening

Post Syndicated from Ernesto original https://torrentfreak.com/resilient-tvaddons-plans-to-ditch-proactive-piracy-screening-171207/

After years of smooth sailing, this year TVAddons became a poster child for the entertainment industry’s war on illicit streaming devices.

The leading repository for unofficial Kodi addons was sued for copyright infringement in the US by satellite and broadcast provider Dish Network. Around the same time, a similar case was filed by Bell, TVA, Videotron, and Rogers in Canada.

The latter case has done the most damage thus far, as it caused the addon repository to lose its domain names and social media accounts. As a result, the site went dead and while many believed it would never return, it made a blazing comeback after a few weeks.

After the original TVAddons.ag domain was seized, the site returned on TVaddons.co. That was not the only difference: many of the old add-ons, for which it was unclear whether they linked to licensed content, were no longer listed in the repository either.

TVAddons previously relied on the DMCA to shield it from liability, but apparently that wasn’t enough. As a result, they took the drastic decision to check all submitted add-ons carefully.

“Since complying with the law is clearly not enough to prevent frivolous legal action from being taken against you, we have been forced to implement a more drastic code vetting process,” a TVAddons representative told us previously.

Despite the absence of several of the most-used add-ons, the repository has managed to regain many of its former users. Over the past month, TVAddons had more than 12 million unique users, all of whom manually installed the new repository on their devices.

“We’re not like one of those pirate sites that are shut down and opens on a new domain the next day, getting users to actually manually install a new repo isn’t an easy feat,” a TVAddons representative informs TorrentFreak.

While it’s still far from the 40 million unique users the site had earlier this year, before the trouble began, it remains a force to be reckoned with.

Interestingly, the vast majority of all TVAddons traffic comes from the United States. The UK is second at a respectable distance, followed by Canada, Germany, and the Netherlands.

While many former users have returned, the submission policy changes didn’t go unnoticed. The relatively small selection of add-ons is a major drawback for some, but that’s about to change as well, we are informed.

TVAddons plans to return to the old submission model, where developers can upload their code more freely. Instead of proactive screening, TVAddons will rely on a standard DMCA takedown policy, leaving it to copyright holders to flag potentially infringing content.

“We intend on returning to a standard DMCA compliant add-on submission policy shortly, there’s no reason why we should be held to a higher standard than Facebook, Twitter, YouTube or Reddit given the fact that we don’t even host any form of streaming content in the first place.

“Our interim policy isn’t pragmatic, it’s nearly impossible for us to verify the global licensing of all forms of protected content. When you visit a website, there’s no way of verifying licensing beyond trusting them based on reputation.”

The upcoming change doesn’t mean that TVAddons will ignore its legal requirements. If they receive a legitimate takedown notice, proper action will be taken, as always. In that respect, they will operate in the same fashion as other sites built on user-generated content.

“Right now our interim addon submission policy is akin to North Korea. We always followed the law and will always continue to do so. Anytime we’ve received a legitimate complaint we’ve acted upon it in an expedited manner.

“Facebook, Twitter, Reddit and other online communities would have never existed if they were required to approve the contents of each user’s submissions prior to public posting.”

The change takes place while the two court cases are still pending. TVAddons is determined to keep up this fight. Meanwhile, they are also asking the public to support the project financially.

While some copyright holders, including those who are fighting the service in court, might not like the change, TVAddons believes that this is well within their rights. And with support from groups such as the Electronic Frontier Foundation, they don’t stand alone in this.


timeShift(GrafanaBuzz, 1w) Issue 24

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/12/01/timeshiftgrafanabuzz-1w-issue-24/

Welcome to TimeShift

It’s hard to believe it’s already December. Here at Grafana Labs we’ve been spending a lot of time working on new features and enhancements for Grafana v5, and finalizing our selections for GrafanaCon EU. This week we have some interesting articles to share and a number of plugin updates. Enjoy!


Latest Release

Grafana 4.6.2 is now available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Monitoring Camel with Prometheus in Red Hat OpenShift: This in-depth walk-through will show you how to build an Apache Camel application from scratch, deploy it in a Kubernetes environment, gather metrics using Prometheus and display them in Grafana.

How to run Grafana with DeviceHive: We see more and more examples of people using Grafana in IoT. This article discusses how to gather data from the IoT platform, DeviceHive, and build useful dashboards.

How to Install Grafana on Linux Servers: Pretty self-explanatory, but this tutorial walks you through installing Grafana on Ubuntu 16.04 and CentOS 7. After installation, it covers configuration and plugin installation. This is the first article in an upcoming series about Grafana.

Monitoring your AKS cluster with Grafana: It’s important to know how your application is performing regardless of where it lives; the same applies to Kubernetes. This article focuses on aggregating data from Kubernetes with Heapster and feeding it to a backend for Grafana to visualize.

CoinStatistics: With the price of Bitcoin skyrocketing, more and more people are interested in cryptocurrencies. This is a cool dashboard that has a lot of stats about popular cryptocurrencies, and has a calculator to let you know when you can buy that lambo.

Using OpenNTI As A Collector For Streaming Telemetry From Juniper Devices: Part 1: This series will serve as a quick start guide for getting up and running with streaming real-time telemetry data from Juniper devices. This first article covers some high-level concepts and installation, while part 2 covers configuration options.

How to Get Metrics for Advance Alerting to Prevent Trouble: What good is performance monitoring if you’re never told when something has gone wrong? This article suggests ways to be more proactive about preventing issues and avoiding the scramble to troubleshoot them.

Thoughtworks: Technology Radar: We got a shout-out in the latest Technology Radar in the Tools section, as the dashboard visualization tool of choice for Prometheus!


GrafanaCon Tickets are Going Fast

Tickets are going fast for GrafanaCon EU, but we still have a seat reserved for you. Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

We have a number of plugin updates to highlight this week. Authors improve plugins regularly to fix bugs and improve performance, so it’s important to keep your plugins up to date. We’ve made updating easy; for on-prem Grafana, use the grafana-cli tool, or update with one click if you’re using Hosted Grafana.

UPDATED PLUGIN

Clickhouse Data Source – The ClickHouse data source received a substantial update this week. It now has support for Ace Editor, which provides a reformatting function for the query editor that automatically formats your SQL. If you’re using ClickHouse, you should also have a look at CHProxy – see the plugin readme for more details.


Update

UPDATED PLUGIN

Influx Admin Panel – This panel received a number of small fixes. A new version will be coming soon with some new features.

Some of the changes (see the release notes for more details):

  • Fix issue always showing query results
  • When there is only one row, swap rows/cols (ie: SHOW DIAGNOSTICS)
  • Improve auto-refresh behavior
  • Show ‘message’ response. (ie: please use POST)
  • Fix query time sorting
  • Show ‘status’ field (killed, etc)

Update

UPDATED PLUGIN

Gnocchi Data Source – The latest version of the Gnocchi Data Source adds support for dynamic aggregations.


Update

UPDATED PLUGINS

BT Plugins – All of the BT panel plugins received updates this week.


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We have some awesome talks and events coming soon. Hope to see you at one of these!

KubeCon | Austin, TX – Dec. 6-8, 2017: We’re sponsoring KubeCon 2017! This is the must-attend conference for cloud native computing professionals. KubeCon + CloudNativeCon brings together leading contributors in:

  • Cloud native applications and computing
  • Containers
  • Microservices
  • Central orchestration processing
  • And more

Buy Tickets

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and we’ve heard there were some great talks submitted. There is no need to register; all are welcome.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove



Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Let us know if you’re finding these weekly roundups valuable. Submit a comment on this article below, or post something at our community forum. Find an article I haven’t included? Send it my way. Help us make timeShift better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

AWS Cloud9 – Cloud Developer Environments

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/aws-cloud9-cloud-developer-environments/

One of the first things you learn when you start programming is that, just like any craftsperson, your tools matter. Notepad.exe isn’t going to cut it. A powerful editor and testing pipeline supercharge your productivity. I still remember learning to use Vim for the first time and being able to zip around systems and complex programs. Do you remember how hard it was to set up all your compilers and dependencies on a new machine? How many cycles have you wasted matching versions, tinkering with configs, and then writing documentation to onboard a new developer to a project?

Today we’re launching AWS Cloud9, an Integrated Development Environment (IDE) for writing, running, and debugging code, all from your web browser. Cloud9 comes prepackaged with essential tools for many popular programming languages (JavaScript, Python, PHP, etc.) so you don’t have to tinker with installing various compilers and toolchains. Cloud9 also provides a seamless experience for working with serverless applications, allowing you to quickly switch between local and remote testing or debugging. Based on the popular open source Ace Editor and c9.io IDE (which we acquired last year), AWS Cloud9 is designed to make collaborative cloud development easy, with extremely powerful pair programming features. There are more features than I could ever cover in this post, so to give a quick overview I’ll break the IDE into three components: the editor, the AWS integrations, and the collaboration features.

Editing


The Ace Editor at the core of Cloud9 is what lets you write code quickly, easily, and beautifully. It follows a UNIX philosophy of doing one thing and doing it well: writing code.

It has all the typical IDE features you would expect: live syntax checking, auto-indent, auto-completion, code folding, split panes, version control integration, multiple cursors and selections, and it also has a few unique features I want to highlight. First of all, it’s fast, even for large (100,000+ line) files. There’s no lag or other issues while typing. It has over two dozen themes built-in (solarized!) and you can bring all of your favorite themes from Sublime Text or TextMate as well. It has built-in support for 40+ language modes and customizable run configurations for your projects. Most importantly though, it has Vim mode (or emacs if your fingers work that way). It also has a keybinding editor that allows you to bend the editor to your will.

The editor supports powerful keyboard navigation and commands (similar to Sublime Text or vim plugins like ctrlp). On a Mac, with ⌘+P you can open any file in your environment with fuzzy search. With ⌘+. you can open up the command pane, which allows you to invoke any of the editor commands by typing its name. It also helpfully displays the keybindings for a command in the pane; for instance, to open a terminal you can press ⌥+T. Oh, did I mention there’s a terminal? It ships with the AWS CLI preconfigured for access to your resources.

The environment also comes with pre-installed debugging tools for many popular languages – but you’re not limited to what’s already installed. It’s easy to add in new programs and define new run configurations.

The editor is just one, admittedly important, component in an IDE though. I want to show you some other compelling features.

AWS Integrations

The AWS Cloud9 IDE is the first IDE I’ve used that is truly “cloud native”. The service is provided at no additional charge; you are charged only for the underlying compute and storage resources. When you create an environment, you’re prompted either for an instance type and an auto-hibernate time, or for SSH access to a machine of your choice.

If you’re running in AWS the auto-hibernate feature will stop your instance shortly after you stop using your IDE. This can be a huge cost savings over running a more permanent developer desktop. You can also launch it within a VPC to give it secure access to your development resources. If you want to run Cloud9 outside of AWS, or on an existing instance, you can provide SSH access to the service which it will use to create an environment on the external machine. Your environment is provisioned with automatic and secure access to your AWS account so you don’t have to worry about copying credentials around. Let me say that again: you can run this anywhere.

Serverless Development with AWS Cloud9

I spend a lot of time on Twitch developing serverless applications. I have hundreds of Lambda functions and APIs deployed. Cloud9 makes working with every single one of these functions delightful. Let me show you how it works.


If you look in the top right side of the editor you’ll see an AWS Resources tab. Opening this, you can see all of the Lambda functions in your region (you can see functions in other regions by adjusting your region preferences in the AWS preference pane).

You can import these remote functions to your local workspace just by double-clicking them. This allows you to edit, test, and debug your serverless applications all locally. You can create new applications and functions easily as well. If you click the Lambda icon in the top right of the pane you’ll be prompted to create a new Lambda function, and Cloud9 will automatically create a Serverless Application Model template for you as well. The IDE ships with support for the popular SAM Local tool pre-installed. This is what I use in most of my local testing and serverless development. Since you have a terminal, it’s easy to install additional tools and use other serverless frameworks.
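
For a sense of what that local edit-and-test loop looks like, here is a minimal sketch of a Lambda handler of the sort you might import into Cloud9; the API Gateway proxy event shape is assumed for illustration:

    # A hypothetical handler for local editing and testing in Cloud9 / SAM Local.
    import json

    def lambda_handler(event, context):
        # Assumes an API Gateway proxy event; "name" is an optional query parameter.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }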

 

Launching an Environment from AWS CodeStar

With AWS CodeStar you can easily provision an end-to-end continuous delivery toolchain for development on AWS. CodeStar provides a unified experience for building, testing, deploying, and managing applications using the AWS CodeCommit, CodeBuild, CodePipeline, and CodeDeploy services. Now, with a few simple clicks you can provision a Cloud9 environment to develop your application. Your environment will be pre-configured with the code for your CodeStar application already checked out and git credentials already configured.

You can easily share this environment with your coworkers which leads me to another extremely useful set of features.

Collaboration

One of the many things that sets AWS Cloud9 apart from other editors are the rich collaboration tools. You can invite an IAM user to your environment with a few clicks.

You can see what files they’re working on, where their cursors are, and even share a terminal. The chat feature is useful as well.

Things to Know

  • There are no additional charges for this service beyond the underlying compute and storage.
  • c9.io continues to run for existing users. You can continue to use all the features of c9.io and add new team members if you have a team account. In the future, we will provide tools for easy migration of your c9.io workspaces to AWS Cloud9.
  • AWS Cloud9 is available in the US West (Oregon), US East (Ohio), US East (N. Virginia), EU (Ireland), and Asia Pacific (Singapore) regions.

I can’t wait to see what you build with AWS Cloud9!

Randall

Our brand-new Christmas resources

Post Syndicated from Laura Sach original https://www.raspberrypi.org/blog/christmas-resources-2017/

It’s never too early for Christmas-themed resources — especially when you want to make the most of them in your school, Code Club or CoderDojo! So here’s the ever-wonderful Laura Sach with an introduction of our newest festive projects.

A cartoon of people singing Christmas carols - Raspberry Pi Christmas Resources

In the immortal words of Noddy Holder: “it’s Christmaaaaaaasssss!” Well, maybe it isn’t quite Christmas yet, but since the shops have been playing Mariah Carey on a loop since the last pumpkin lantern hit the bargain bin, you’re hopefully well prepared.

To get you in the mood with some festive fun, we’ve put together a selection of seasonal free resources for you. Each project has a difficulty level in line with our Digital Making Curriculum, so you can check which might suit you best. Why not try them out at your local Raspberry Jam, CoderDojo, or Code Club, at school, or even on a cold day at home with a big mug of hot chocolate?

Jazzy jumpers

A cartoon of someone remembering pairs of jumper designs - Raspberry Pi Christmas Resources

Jazzy jumpers (Creator level): as a child in the eighties, you’d always get an embarrassing and probably badly sized jazzy jumper at Christmas from some distant relative. Thank goodness the trend has gone hipster and dreadful jumpers are now cool!

This resource shows you how to build a memory game in Scratch where you must remember the colour and picture of a jazzy jumper before recreating it. How many jumpers can you successfully recall in a row?

Sense HAT advent calendar

A cartoon Sense HAT lit up in the design of a Christmas pudding - Raspberry Pi Christmas Resources

Sense HAT advent calendar (Builder level): put the lovely lights on your Sense HAT to festive use by creating an advent calendar you can open day by day. However, there’s strictly no cheating with this calendar — we teach you how to use Python to detect the current date and prevent would-be premature peekers!

Press the Enter key to open today’s door:

(Note: no chocolate will be dispensed from your Raspberry Pi. Sorry about that.)
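
If you’re wondering how the anti-peeking check works, a minimal sketch of the idea (our illustration, not the resource’s exact code) looks like this:

    # Only allow doors up to today's date, and only in December.
    from datetime import date

    def door_may_open(door_number):
        today = date.today()
        return today.month == 12 and door_number <= today.day

    for door in range(1, 25):
        state = "open!" if door_may_open(door) else "no peeking yet"
        print(f"Door {door}: {state}")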

Code a carol

A cartoon of people singing Christmas carols - Raspberry Pi Christmas Resources

Code a carol (Developer level): Have you ever noticed how much repetition there is in carols and other songs? This resource teaches you how to break down the Twelve Days of Christmas tune into its component parts and code it up in Sonic Pi the lazy way: get the computer to do all the repetition for you!

No musical knowledge required — just follow our lead, and you’ll have yourself a rocking doorbell tune in no time!

Naughty and nice

A cartoon of Santa judging people by their tweets - Raspberry Pi Christmas Resources

Naughty and nice (Maker level): Have you been naughty or nice? Find out by using sentiment analysis on your tweets to see what sort of things you’ve been talking about throughout the year. For added fun, why not use your program on the Twitter account of your sibling/spouse/arch nemesis and report their level of naughtiness to Santa with an @ mention?

raspberry_pi is 65.5 percent NICE, with an accuracy of 0.9046692607003891
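
For a flavor of how such a score might be computed, here is a minimal sketch (our own, not necessarily the libraries the resource uses); it assumes TextBlob is installed and that the tweets have already been fetched with a Twitter API client:

    # Score each tweet's sentiment polarity (-1 to 1) and report the nice ratio.
    from textblob import TextBlob

    tweets = [  # made-up examples standing in for fetched tweets
        "Loved helping at the jam today!",
        "This traffic is the absolute worst.",
    ]

    nice = sum(1 for t in tweets if TextBlob(t).sentiment.polarity >= 0)
    print(f"{nice / len(tweets) * 100:.1f} percent NICE")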

It’s Christmaaaaaasssss

With the festive season just around the corner, it’s time to get started on your Christmas projects! Whether you’re planning to run your Christmas lights via a phone app, install a home assistant inside an Elf on a Shelf, or work through our Christmas resources, we would like to see what you make. So do share your festive builds with us on social media, or by posting links in the comments.

The post Our brand-new Christmas resources appeared first on Raspberry Pi.

M5 – The Next Generation of General-Purpose EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/m5-the-next-generation-of-general-purpose-ec2-instances/

I always advise new EC2 users to start with our general-purpose instances, run some stress tests, and to get a really good feel for the compute, memory, and networking profile of their application before taking a look at other instance types. With a broad selection of instances optimized for compute, memory, and storage, our customers have many options and the flexibility to choose the instance type that is the best fit for their needs.

As you can see from my EC2 Instance History post, the general-purpose (M) instances go all the way back to 2006 when we launched the m1.small. We continued to evolve along this branch of our family tree, launching the M2 (2009), M3 (2012), and the M4 (2015) instances. Our customers use the general-purpose instances to run web & app servers, host enterprise applications, support online games, and build cache fleets.

New M5 Instances
Today we are taking the next step forward with the launch of our new M5 instances. These instances benefit from our commitment to continued innovation and offer even better price-performance than their predecessors. Based on Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, the M5 instances are designed for highly demanding workloads and will deliver 14% better price/performance than the M4 instances on a per-core basis. Applications that use the AVX-512 instructions will crank out twice as many FLOPS per core. We’ve also added a new size at the high-end, giving you even more options.

Here are the M5 instances (all VPC-only, HVM-only, and EBS-Optimized):

Instance Name   vCPUs   RAM       Network Bandwidth   EBS-Optimized Bandwidth
m5.large        2       8 GiB     Up to 10 Gbps       Up to 2120 Mbps
m5.xlarge       4       16 GiB    Up to 10 Gbps       Up to 2120 Mbps
m5.2xlarge      8       32 GiB    Up to 10 Gbps       Up to 2120 Mbps
m5.4xlarge      16      64 GiB    Up to 10 Gbps       2120 Mbps
m5.12xlarge     48      192 GiB   10 Gbps             5000 Mbps
m5.24xlarge     96      384 GiB   25 Gbps             10000 Mbps

At the top end of the lineup, the m5.24xlarge is second only to the X instances when it comes to vCPU count, giving you more room to scale up and to consolidate workloads. The instances support Enhanced Networking, and can deliver up to 25 Gbps when used within a Placement Group.

In addition to dedicated, EBS-Optimized bandwidth, access to EBS storage is enhanced by the use of NVMe (you’ll need to install the proper drivers if you are using older AMIs). The combination of more bandwidth and NVMe will increase the amount of data that your M5 instances can chew through.

Launch One Today
You can launch M5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.
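
If you script your launches, a minimal boto3 sketch might look like this; the AMI and subnet IDs are hypothetical placeholders for your own values:

    # Launch a single m5.large On-Demand instance (M5 is VPC-only).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical AMI with NVMe drivers
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet in your VPC
    )
    print(response["Instances"][0]["InstanceId"])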

One quick note: the current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of M5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

 

 

AWS Media Services – Process, Store, and Monetize Cloud-Based Video

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-media-services-process-store-and-monetize-cloud-based-video/

Do you remember what web video was like in the early days? Standalone players, video no larger than a postage stamp, slow & cantankerous connections, overloaded servers, and the ever-present buffering messages were the norm less than two decades ago.

Today, thanks to technological progress and a broad array of standards, things are a lot better. Video consumers are now in control. They use devices of all shapes, sizes, and vintages to enjoy live and recorded content that is broadcast, streamed, or sent over-the-top (OTT, as they say), and expect immediate access to content that captures and then holds their attention. Meeting these expectations presents a challenge for content creators and distributors. Instead of generating video in a one-size-fits-all format, they (or their media servers) must be prepared to produce video that spans a broad range of sizes, formats, and bit rates, taking care to be ready to deal with planned or unplanned surges in demand. In the face of all of this complexity, they must backstop their content with a monetization model that supports the content and the infrastructure to deliver it.

New AWS Media Services
Today we are launching an array of broadcast-quality media services, each designed to address one or more aspects of the challenge that I outlined above. You can use them together to build a complete end-to-end video solution or you can use one or more in building-block style. In true AWS fashion, you can spend more time innovating and less time setting up and running infrastructure, leaving you ready to focus on creating, delivering, and monetizing your content. The services are all elastic, allowing you to ramp up processing power, connections, and storage and giving you the ability to handle million-user (and beyond) spikes with ease.

Here are the services (all accessible from a set of interactive consoles as well as through a comprehensive set of APIs):

AWS Elemental MediaConvert – File-based transcoding for OTT, broadcast, or archiving, with support for a long list of formats and codecs. Features include multi-channel audio, graphic overlays, closed captioning, and several DRM options.

AWS Elemental MediaLive – Live encoding to deliver video streams in real time to both televisions and multiscreen devices. Allows you to deploy highly reliable live channels in minutes, with full control over encoding parameters. It supports ad insertion, multi-channel audio, graphic overlays, and closed captioning.

AWS Elemental MediaPackage – Video origination and just-in-time packaging. Starting from a single input, produces output for multiple devices representing a long list of current and legacy formats. Supports multiple monetization models, time-shifted live streaming, ad insertion, DRM, and blackout management.

AWS Elemental MediaStore – Media-optimized storage that enables high performance and low latency applications such as live streaming, while taking advantage of the scale and durability of Amazon Simple Storage Service (S3).

AWS Elemental MediaTailor – Monetization service that supports ad serving and server-side ad insertion, a broad range of devices, transcoding, and accurate reporting of server-side and client-side ad insertion.

Instead of listing out all of the features in the sections below, I’ve simply included as many screen shots as possible with the expectation that this will give you a better sense of the rich set of features, parameters, and settings that you get with this set of services.

AWS Elemental MediaConvert
MediaConvert allows you to transcode content that is stored in files. You can process individual files or entire media libraries, or anything in-between. You simply create a conversion job that specifies the content and the desired outputs, and submit it to MediaConvert. There’s no software to install or patch and the service scales to meet your needs without affecting turnaround time or performance.
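
If you prefer the API to the console, here is a heavily abbreviated boto3 sketch of job submission; the role ARN and bucket are hypothetical, and a real job needs at least one fully specified output group:

    # A minimal sketch of submitting a MediaConvert job via the API.
    import boto3

    # MediaConvert uses per-account endpoints that are discovered at runtime.
    endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
    mediaconvert = boto3.client("mediaconvert", endpoint_url=endpoint)

    job = mediaconvert.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # hypothetical
        Settings={
            "Inputs": [{"FileInput": "s3://my-input-bucket/source.mp4"}],
            "OutputGroups": [],  # elided; a real job defines its outputs here
        },
    )
    print(job["Job"]["Id"])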

The MediaConvert Console lets you manage Output presets, Job templates, Queues, and Jobs:

You can use a built-in system preset or you can make one of your own. You have full control over the settings when you make your own:

Job templates are named and produce one or more output groups. You can add a new group to a template with a click:

When everything is ready to go, you create a job and make some final selections, then click on Create:

Each account starts with a default queue for jobs, where incoming work is processed in parallel using all processing resources available to the account. Adding queues does not add processing resources, but does cause them to be apportioned across queues. You can temporarily pause one queue in order to devote more resources to the others. You can submit jobs to paused queues and you can also cancel any that have yet to start.

Pricing for this service is based on the amount of video that you process and the features that you use.

AWS Elemental MediaLive
This service is for live encoding, and can be run 24×7. MediaLive channels are deployed on redundant resources distributed in two physically separated Availability Zones in order to provide the reliability expected by our customers in the broadcast industry. You can specify your inputs and define your channels in the MediaLive Console:

After you create an Input, you create a Channel and attach it to the Input:

You have full control over the settings for each channel:

 

AWS Elemental MediaPackage
This service lets you deliver video to many devices from a single source. It focuses on protection and just-in-time packaging, giving you the ability to provide your users with the desired content on the device of their choice. You simply create a channel to get started:

Then you add one or more endpoints. Once again, plenty of options and full control, including a startover window and a time delay:

You find the input URL, user name, and password for your channel and route your live video stream to it for packaging:

AWS Elemental MediaStore
MediaStore offers the performance, consistency, and latency required for live and on-demand media delivery. Objects are written to and read from a new “temporal” tier of object storage for a limited amount of time, then move silently into S3 for long-lived durability. You simply create a storage container to group your media content:

The container is available within a minute or so:

Like S3 buckets, MediaStore containers have access policies and no limits on the number of objects or storage capacity.

MediaStore helps you to take full advantage of S3 by managing the object key names so as to maximize storage and retrieval throughput, in accordance with the Request Rate and Performance Considerations.
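
A minimal boto3 sketch of creating a container and writing to it through the separate data-plane client might look like this; the names are hypothetical, and the endpoint is only usable once the container is active:

    # A sketch of the MediaStore control plane (containers) and data plane (objects).
    import time
    import boto3

    mediastore = boto3.client("mediastore")
    mediastore.create_container(ContainerName="livestream")

    # Container creation is asynchronous; poll until it becomes ACTIVE.
    while True:
        container = mediastore.describe_container(ContainerName="livestream")["Container"]
        if container["Status"] == "ACTIVE":
            break
        time.sleep(5)

    data = boto3.client("mediastore-data", endpoint_url=container["Endpoint"])
    data.put_object(Path="/live/index.m3u8", Body=b"#EXTM3U\n")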

AWS Elemental MediaTailor
This service takes care of server-side ad insertion while providing a broadcast-quality viewer experience by transcoding ad assets on the fly. Your customer’s video player asks MediaTailor for a playlist. MediaTailor, in turn, calls your Ad Decision Server and returns a playlist that references the origin server for your original video and the ads recommended by the Ad Decision Server. The video player makes all of its requests to a single endpoint in order to ensure that client-side ad-blocking is ineffective. You simply create a MediaTailor Configuration:

Context information is passed to the Ad Decision Server in the URL:

Despite the length of this post I have barely scratched the surface of the AWS Media Services. Once AWS re:Invent is in the rear view mirror I hope to do a deep dive and show you how to use each of these services.

Available Now
The entire set of AWS Media Services is available now and you can start using them today! Pricing varies by service, but is built around a pay-as-you-go model.

Jeff;