
Now Available – AWS Serverless Application Repository

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-aws-serverless-application-repository/

Last year I suggested that you Get Ready for the AWS Serverless Application Repository and gave you a sneak peek. The Repository is designed to make it as easy as possible for you to discover, configure, and deploy serverless applications and components on AWS. It is also an ideal venue for AWS partners, enterprise customers, and independent developers to share their serverless creations.

Now Available
After a well-received public preview, the AWS Serverless Application Repository is now generally available and you can start using it today!

As a consumer, you will be able to tap into a thriving ecosystem of serverless applications and components that will be a perfect complement to your machine learning, image processing, IoT, and general-purpose work. You can configure and consume them as-is, or you can take them apart, add features, and submit pull requests to the author.

As a publisher, you can publish your contribution in the Serverless Application Repository with ease. You simply enter a name and a description, choose some labels to increase discoverability, select an appropriate open source license from a menu, and supply a README to help users get started. Then you enter a link to your existing source code repo, choose a SAM template, and designate a semantic version.

Let’s take a look at both operations…

Consuming a Serverless Application
The Serverless Application Repository is accessible from the Lambda Console. I can page through the existing applications or I can initiate a search:

A search for “todo” returns some interesting results:

I simply click on an application to learn more:

I can configure the application and deploy it right away if I am already familiar with the application:

I can expand each of the sections to learn more. The Permissions section tells me which IAM policies will be used:

And the Template section displays the SAM template that will be used to deploy the application:

I can inspect the template to learn more about the AWS resources that will be created when the template is deployed. I can also use the templates as a learning resource in preparation for creating and publishing my own application.

The License section displays the application’s license:

To deploy todo, I name the application and click Deploy:

Deployment starts immediately and is done within a minute (application deployment time will vary, depending on the number and type of resources to be created):
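
The console drives this deployment through AWS CloudFormation behind the scenes. If you'd rather script it, a minimal sketch of the equivalent calls using boto3 might look like this (the application ARN and stack name are illustrative placeholders, not real values):

import boto3

serverlessrepo = boto3.client('serverlessrepo')
cloudformation = boto3.client('cloudformation')

# Ask the Repository to prepare a CloudFormation change set for the
# selected application (placeholder ARN below).
change_set = serverlessrepo.create_cloud_formation_change_set(
    ApplicationId='arn:aws:serverlessrepo:us-east-1:123456789012:applications/todo',
    StackName='my-todo-app',
)

# Wait for the change set to become available, then execute it to
# create the application's resources.
waiter = cloudformation.get_waiter('change_set_create_complete')
waiter.wait(ChangeSetName=change_set['ChangeSetId'])
cloudformation.execute_change_set(ChangeSetName=change_set['ChangeSetId'])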

I can see all of my deployed applications in the Lambda Console:

There’s currently no way for a SAM template to indicate that an API Gateway endpoint returns binary media types, so I set this up by hand and then re-deploy the API:
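
For the hand-applied fix, one approach is a single JSON-Patch operation against the REST API, followed by a redeployment; a hedged sketch, with placeholder API ID and stage name:

import boto3

apigateway = boto3.client('apigateway')

# Register */* as a binary media type; '/' is escaped as '~1' in
# JSON-Patch paths. 'abc123defg' stands in for the real REST API ID.
apigateway.update_rest_api(
    restApiId='abc123defg',
    patchOperations=[{'op': 'add', 'path': '/binaryMediaTypes/*~1*'}],
)

# The change only takes effect once the API is redeployed to a stage.
apigateway.create_deployment(restApiId='abc123defg', stageName='Prod')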

Following the directions in the Readme, I open the API Gateway Console and find the URL for the app in the API Gateway Dashboard:

I visit the URL and enter some items into my list:

Publishing a Serverless Application
Publishing applications is a breeze! I visit the Serverless App Repository page and click on Publish application to get started:

Then I assign a name to my application, enter my own name, and so forth:

I can choose from a long list of open-source friendly SPDX licenses:

I can create an initial version of my application at this point, or I can do it later. Either way, I simply provide a version number, a URL to a public repository containing my code, and a SAM template:
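
The same publishing flow is available through the API. Here's a minimal boto3 sketch; every name, file, and URL below is an illustrative placeholder:

import boto3

serverlessrepo = boto3.client('serverlessrepo')

# Publish the application and its first version in one call.
app = serverlessrepo.create_application(
    Name='my-sample-app',
    Author='Jane Doe',
    Description='A sample serverless application.',
    SpdxLicenseId='MIT',
    Labels=['sample', 'todo'],
    ReadmeBody=open('README.md').read(),
    SemanticVersion='1.0.0',
    SourceCodeUrl='https://github.com/example/my-sample-app',
    TemplateBody=open('template.yaml').read(),
)
print(app['ApplicationId'])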

Available Now
The AWS Serverless Application Repository is available now and you can start using it today, paying only for the AWS resources consumed by the serverless applications that you deploy.

You can deploy applications in the US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), and South America (São Paulo) Regions. You can publish from the US East (N. Virginia) or US East (Ohio) Regions for global availability.

Jeff;


Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These provisions are renewed every three years after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously included game preservation exemptions to keep these games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year several gaming fans including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that their members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. It’s clear that the wishes of game preservation advocates, such as MADE, are hard to unite with the interests of the game companies, so one side will clearly be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Embedding a Tweet Can be Copyright Infringement, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/embedding-a-tweet-can-be-copyright-infringement-court-rules-180216/

Nowadays it’s fairly common for blogs and news sites to embed content posted by third parties, ranging from YouTube videos to tweets.

Although these publications don’t host the content themselves, they can be held liable for copyright infringement, a New York federal court has ruled.

The case in question was filed by Justin Goldman whose photo of Tom Brady went viral after he posted it on Snapchat. After being reposted on Reddit, it also made its way onto Twitter from where various news organizations picked it up.

Several of these news sites reported on the photo by embedding tweets from others. However, since Goldman never gave permission to display his photo, he went on to sue the likes of Breitbart, Time, Vox and Yahoo, for copyright infringement.

In their defense, the news organizations argued that they did nothing wrong as no content was hosted on their servers. They referred to the so-called “server test” that was applied in several related cases in the past, which determined that liability rests on the party that hosts the infringing content.

In an order that was just issued, US District Court Judge Katherine Forrest disagrees. She rejects the “server test” argument and rules that the news organizations are liable.

“[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result,” Judge Forrest writes.

Judge Forrest argues that the server test was established in the ‘Perfect 10 v. Amazon’ case, which dealt with the ‘distribution’ of content. This case is about ‘displaying’ an infringing work instead, an area where the jurisprudence is not as clear.

“The Court agrees with plaintiff. The plain language of the Copyright Act, the legislative history undergirding its enactment, and subsequent Supreme Court jurisprudence provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have “displayed” a work within the meaning of the Copyright Act.”

As a result, summary judgment was granted in favor of Goldman.

Rightsholders, including Getty Images which supported Goldman, are happy with the result. However, not everyone is pleased. The Electronic Frontier Foundation (EFF) says that if the current verdict stands it will put millions of regular Internet users at risk.

“Rejecting years of settled precedent, a federal court in New York has ruled that you could infringe copyright simply by embedding a tweet in a web page,” EFF comments.

“Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.”

Given what’s at stake, it’s likely that the news organizations will appeal this week’s order.

Interestingly, earlier this week a California district court dismissed Playboy’s copyright infringement complaint against Boing Boing, which embedded a YouTube video that contained infringing content.

A copy of Judge Forrest’s opinion can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Court Dismisses Playboy’s Copyright Claims Against Boing Boing

Post Syndicated from Ernesto original https://torrentfreak.com/court-dismisses-playboys-copyright-claims-against-boing-boing-180215/

Early 2016, Boing Boing co-editor Xeni Jardin published an article in which she linked to an archive of every Playboy centerfold image published up to that point.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin commented.

While the linked material undoubtedly appealed to many readers, Playboy itself took offense to the fact that infringing copies of their work were being shared in public. While Boing Boing didn’t upload or store the images in question, the publisher filed a lawsuit late last year.

The blog’s parent company Happy Mutants was accused of various counts of copyright infringement, with Playboy claiming that it exploited their playmates’ images for commercial purposes.

Boing Boing saw things differently. With help from the Electronic Frontier Foundation (EFF) it filed a motion to dismiss, arguing that hyperlinking is not copyright infringement. If Playboy had its way, millions of other Internet users could be sued for linking, too.

“This case merely has to survive a motion to dismiss to launch a thousand more expensive lawsuits, chilling a broad variety of lawful expression and reporting that merely adopts the common practice of linking to the material that is the subject of the report,” they wrote.

The article in question

Yesterday US District Court Judge Fernando Olguin ruled on the matter. In a brief order, he concluded that an oral argument is not needed and that, based on the arguments from both sides, the case should be dismissed with leave to amend.

This effectively means that Playboy’s complaint has been thrown out. However, the company is offered a lifeline and is allowed to submit a new one if they can properly back up their copyright infringement allegations.

“The court will grant defendant’s Motion and dismiss plaintiff’s First Amended Complaint with leave to amend. In preparing the Second Amended Complaint, plaintiff shall carefully evaluate the contentions set forth in defendant’s Motion.

“For example, the court is skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement,” Judge Olguin adds.

According to the order, it is not sufficient to argue that Boing Boing merely ‘provided the means’ to carry out copyright infringing activity. There also has to be a personal action that ‘assists’ the infringing activity.

Playboy has until the end of the month to submit a new complaint and if it chooses not to do so, the case will be thrown out.

The order is clearly a win for Boing Boing, which vehemently opposed Playboy’s claims. While the order is clear, it must come as a surprise to the magazine publisher, which won a similar ‘hyperlinking’ lawsuit in the European Court of Justice last year.

The EFF, which defends Boing Boing, is happy with the order and hopes that Playboy will leave it at this.

“From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website,” EFF comments.

“Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.”

A copy of US District Court Judge Fernando Olguin’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Tickbox Must Remove Pirate Streaming Addons From Sold Devices

Post Syndicated from Ernesto original https://torrentfreak.com/tickbox-remove-pirate-streaming-addons-180214/

Online streaming piracy is on the rise and many people now use dedicated media players to watch content through their regular TVs.

This is a thorn in the side of various movie companies, who have launched a broad range of initiatives to curb this trend.

One of these initiatives is the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

Last year, ACE filed a lawsuit against the Georgia-based company Tickbox TV, which sells Kodi-powered set-top boxes that stream a variety of popular media.

ACE sees these devices as nothing more than pirate tools, so the coalition asked the court for an injunction to prevent Tickbox from facilitating copyright infringement, demanding that it remove all pirate add-ons from previously sold devices.

Last month, a California federal court issued an initial injunction, ordering Tickbox to keep pirate addons out of its box and halt all piracy-inducing advertisements going forward. In addition, the court directed both parties to come up with a proper solution for devices that were already sold.

The movie companies wanted Tickbox to remove infringing addons from previously sold devices, but the device seller refused this initially, equating it to hacking.

This week, both parties were able to reach an ‘agreement’ on the issue. They drafted an updated preliminary injunction which replaces the previous order and will be in effect for the remainder of the lawsuit.

The new injunction prevents Tickbox from linking to any “build,” “theme,” “app,” or “addon” that can be indirectly used to transmit copyright-infringing material. Web browsers such as Internet Explorer, Google Chrome, Safari, and Firefox are specifically excluded.

In addition, Tickbox must also release a new software updater that will remove any infringing software from previously sold devices.

“TickBox shall issue an update to the TickBox launcher software to be automatically downloaded and installed onto any previously distributed TickBox TV device and to be launched when such device connects to the internet,” the injunction reads.

“Upon being launched, the update will delete the Subject [infringing] Software downloaded onto the device prior to the update, or otherwise cause the TickBox TV device to be unable to access any Subject Software downloaded onto or accessed via that device prior to the update.”

All tiles that link to copyright-infringing software from the box’s home screen also have to be stripped. Going forward, only tiles to the Google Play Store or to Kodi within the Google Play Store are allowed.

The agreement also allows ACE to report newly discovered infringing apps or addons to Tickbox, which the company will then have to remove within 24 hours, weekends excluded.

“This ruling sets an important precedent and reduces the threat from piracy devices to the legal market for creative content and a vibrant creative economy that supports millions of workers around the world,” ACE spokesperson Zoe Thorogood says, commenting on the news.

The new injunction is good news for the movie companies, but many Tickbox customers will not appreciate the forced changes. That said, the legal battle is far from over. The main question, whether Tickbox contributed to the alleged copyright infringements, has yet to be answered.

Ultimately, this case is likely to result in a landmark decision, determining what sellers of streaming boxes can and cannot do in the United States.

A copy of the new Tickbox injunction is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Amazon Relational Database Service – Looking Back at 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-relational-database-service-looking-back-at-2017/

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap
I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;


Hosting Provider Steadfast Maintains DMCA Safe Harbor Defense For Trial

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-maintains-dmca-safe-harbor-defense-for-trial-180212/

Two years ago, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

It targeted several companies, including CDN provider Cloudflare and the Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

The case against Steadfast is getting close to trial and to start with an advantage, ALS Scan recently asked the court for partial summary judgment, determining that the hosting company contributed to copyright infringement and that it has no safe harbor protection.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. ALS Scan described the site as a repeat offender, as it had been targeted with dozens of DMCA notices, and accused Steadfast of turning a blind eye to the situation.

Steadfast, for its part, fiercely denied the allegations. The hosting provider admitted that it leased servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements, other than shutting down the entire site, which would have been disproportionate in their view.

A few days ago California District Court Judge George Wu ruled on the matter, denying ALS’s motion for summary judgment.

Both sides made sensible arguments on the contributory infringement issue, but it is by no means undisputed that the hosting provider ‘contributed’ to the infringing activities. The court, therefore, left this question open for the jury to determine at trial.

“Ultimately, both sides have raised triable issues of fact with respect to material contribution. As a result, the Court would deny Plaintiff’s Motion,” Judge Wu writes.

ALS also sought summary judgment on the DMCA safe harbor protection issue, but the court denied this request as well. While it’s clear that the hosting company never terminated a customer for repeat infringements, it’s not clear whether it was ever in a situation where it needed to.

The DMCA requires Internet services to implement a meaningful repeat infringer policy, but in this case, Steadfast’s client Imagebam reportedly had a takedown policy of its own, which complicates the issue.

“While the fact Steadfast has never terminated one of its own customers for infringement is potentially damaging to its ability to fit the safe harbor, Plaintiff has not established that Steadfast faced a situation requiring it to terminate one of its users,” Judge Wu writes.

“Even in the present case it is unclear that Steadfast needed to terminate Flixya’s account given Flixya itself had a policy that was arguably successful at removing infringing images from imagebam.com.”

Judge Wu adds that safe harbor defenses are generally left to the jury, and this is what he decided as well.

As a result, ALS’s entire motion for summary judgment is denied. This is good news for Steadfast, who will have their safe harbor defense available at the upcoming trial. However, they will likely celebrate this win with caution, as the jury will make the ultimate decision.

A copy of the court’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have gone away entirely. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real-time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. Amazon Kinesis Data Firehose then routes it to be stored where I need it on Amazon S3. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console, as shown following.
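
The preprocessing function itself can stay small. Here's a rough sketch of the kind of per-record cleansing described above; the payload fields are invented for illustration:

import base64
import json

def handler(event, context):
    """Cleanse each record arriving from the Kinesis stream."""
    cleaned = []
    for record in event['Records']:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record['kinesis']['data']))
        # Illustrative cleansing step: keep only the fields we need.
        cleaned.append({
            'user_id': payload.get('user_id'),
            'event_type': payload.get('event_type'),
            'timestamp': payload.get('timestamp'),
        })
    # Kinesis Data Firehose batches the results into compressed objects on S3.
    return {'records_processed': len(cleaned)}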

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format I receive the data from our partners, I can send it to Kinesis as JSON data using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received but compressed, encrypted, and optimized for reading.

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.
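
An ad hoc Athena query can also be kicked off programmatically; a sketch, assuming the table has already been cataloged by Glue and using placeholder database and bucket names:

import boto3

athena = boto3.client('athena')

# Run a quick profiling query against a Glue-cataloged table.
response = athena.start_query_execution(
    QueryString='SELECT event_type, count(*) FROM events GROUP BY event_type',
    QueryExecutionContext={'Database': 'igaming_catalog'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},
)
print(response['QueryExecutionId'])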

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.
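
The final write into Amazon Redshift typically boils down to a COPY from S3. A minimal sketch, assuming a psycopg2 connection to the cluster; the table, bucket, and IAM role names are placeholders:

import psycopg2

# Connection details are placeholders.
conn = psycopg2.connect(
    host='my-cluster.abc123.eu-west-1.redshift.amazonaws.com',
    port=5439, dbname='analytics', user='etl_user', password='...',
)

with conn, conn.cursor() as cur:
    # COPY loads the compressed S3 output in parallel across the cluster;
    # the IAM role must be allowed to read from the bucket.
    cur.execute("""
        COPY events
        FROM 's3://my-data-bucket/cleaned/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS JSON 'auto' GZIP;
    """)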

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service or a matter of a simple API call, and crucially didn’t require me to change a single line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).
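
As one concrete example of those one-call scaling operations, resharding a Kinesis stream looks roughly like this (placeholder stream name):

import boto3

kinesis = boto3.client('kinesis')

# Double the stream's capacity from two shards to four.
kinesis.update_shard_count(
    StreamName='ingest-stream',
    TargetShardCount=4,
    ScalingType='UNIFORM_SCALING',
)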

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products.


Astro Pi celebrates anniversary of ISS Columbus module

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-celebrates-anniversary/

Right now, 400km above the Earth aboard the International Space Station, are two very special Raspberry Pi computers. They were launched into space on 6 December 2015 and are, most assuredly, the farthest-travelled Raspberry Pi computers in existence. Each year they run experiments that school students create in the European Astro Pi Challenge.

Raspberry Astro Pi units on the International Space Station

Left: Astro Pi Vis (Ed); right: Astro Pi IR (Izzy). Image credit: ESA.

The European Columbus module

Today marks the tenth anniversary of the launch of the European Columbus module. The Columbus module is the European Space Agency’s largest single contribution to the ISS, and it supports research in many scientific disciplines, from astrobiology and solar science to metallurgy and psychology. More than 225 experiments have been carried out inside it during the past decade. It’s also home to our Astro Pi computers.

Here’s a video from 7 February 2008, when Space Shuttle Atlantis went skywards carrying the Columbus module in its cargo bay.

STS-122 Launch NASA TV Coverage


Today, coincidentally, is also the deadline for the European Astro Pi Challenge: Mission Space Lab. Participating teams have until midnight tonight to submit their experiments.

Anniversary celebrations

At 16:30 GMT today there will be a live event on NASA TV for the Columbus module anniversary with NASA flight engineers Joe Acaba and Mark Vande Hei.

Our Astro Pi computers will be joining in the celebrations by displaying a digital birthday candle that the crew can blow out. It works by detecting an increase in humidity when someone blows on it. The video below demonstrates the concept.

AstroPi candle


Do try this at home

The exact Astro Pi code that will run on the ISS today is available for you to download and run on your own Raspberry Pi and Sense HAT. You’ll notice that the program includes code to make it stop automatically when the date changes to 8 February. This is just to save time for the ground control team.

If you have a Raspberry Pi and a Sense HAT, you can use the terminal commands below to download and run the code yourself:

wget http://rpf.io/colbday -O birthday.py
chmod +x birthday.py
./birthday.py

When you see a blank blue screen with the brightness increasing, the Sense HAT is measuring the baseline humidity. It does this every 15 minutes so it can recalibrate to take account of natural changes in background humidity. A humidity increase of 2% is needed to blow out the candle, so if the background humidity changes by more than 2% in 15 minutes, it’s possible to get a false positive. Press Ctrl + C to quit.
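
If you just want the gist of the detection logic, here's a simplified Python sketch of the idea, assuming a Sense HAT is attached; it's not the exact flight code:

from sense_hat import SenseHat
import time

sense = SenseHat()

# Take a baseline reading; the real program recalibrates every 15 minutes.
baseline = sense.get_humidity()

while True:
    # A rise of 2% over the baseline counts as someone blowing on the HAT.
    if sense.get_humidity() - baseline >= 2:
        sense.clear()  # blow out the candle
        break
    time.sleep(0.1)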

Please tweet pictures of your candles to @astro_pi – we might share yours! And if we’re lucky, we might catch a glimpse of the candle on the ISS during the NASA TV event at 16:30 GMT today.

The post Astro Pi celebrates anniversary of ISS Columbus module appeared first on Raspberry Pi.

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

Playboy’s Copyright Lawsuit Threatens Online Expression, Boing Boing Argues

Post Syndicated from Ernesto original https://torrentfreak.com/playboys-copyright-lawsuit-threatens-online-expression-boing-boing-argues-180202/

Early 2016, Boing Boing co-editor Xeni Jardin published an article in which she linked to an archive of every Playboy centerfold image published up to that point.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin commented.

While the linked material undoubtedly appealed to many readers, Playboy itself took offense to the fact that infringing copies of their work were being shared in public. While Boing Boing didn’t upload or store the images in question, the publisher filed a complaint.

Playboy accused the blog’s parent company Happy Mutants of various counts of copyright infringement, claiming that it exploited their playmates’ images for commercial purposes.

Last month Boing Boing responded to the allegations with a motion to dismiss. The case should be thrown out, it argued, noting that linking to infringing material for the purpose of reporting and commentary, is not against the law.

This prompted Playboy to fire back, branding Boing Boing a “clickbait” site. Playboy informed the court that the popular blog profits off the work of others and has no fair use defense.

Before the California District Court decides on the matter, Boing Boing took the opportunity to reply to Playboy’s latest response. According to the defense, Playboy’s case is an attack on people’s freedom of expression.

“Playboy claims this is an important case. It is partially correct: if the Court allows this case to go forward, it will send a dangerous message to everyone engaged in ordinary online commentary,” Boing Boing’s reply reads.

Referencing a previous Supreme Court decision, the blog says that the Internet democratizes access to speech, with websites as a form of modern-day pamphlets.

Links to source materials posted by third parties give these “pamphlets” more weight as they allow readers to form their own opinion on the matter, Boing Boing argues. If the court upholds Playboy’s arguments, however, this will become a risky endeavor.

“Playboy, however, would apparently prefer a world in which the ‘pamphleteer’ must ask for permission before linking to primary sources, on pain of expensive litigation,” the defense writes.

“This case merely has to survive a motion to dismiss to launch a thousand more expensive lawsuits, chilling a broad variety of lawful expression and reporting that merely adopts the common practice of linking to the material that is the subject of the report.”

The defense says that there are several problems with Playboy’s arguments. Among other things, Boing Boing argues that it did nothing to cause the unauthorized posting of Playboy’s work on Imgur and YouTube.

Another key argument is that linking to copyright-infringing material should be considered fair use, since it was for purposes of criticism, commentary, and news reporting.

“Settled precedent requires dismissal, both because Boing Boing did not induce or materially contribute to any copyright infringement and, in the alternative, because Boing Boing engaged in fair use,” the defense writes.

Instead of going after Boing Boing for contributory infringement, Playboy could actually try to uncover the people who shared the infringing material, they argue. There is nothing that prevents them from doing so.

After hearing the arguments from both sides it is now up to the court to decide how to proceed. Given what’s at stake, the eventual outcome in this case is bound to set a crucial precedent.

A copy of Boing Boing’s reply is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

[$] Mixed-criticality support in seL4

Post Syndicated from corbet original https://lwn.net/Articles/745946/rss

Linux tries to be useful for a wide variety of use cases, but there are
some situations where it may not be appropriate; safety-critical
deployments with tight timing constraints would be near the top of the list
for many people. On the other hand, systems that can run safety-critical
code in a provably correct manner tend to be restricted in functionality
and often have to be dedicated to a single task. In a linux.conf.au 2018
talk, Gernot Heiser presented work that is being done with the seL4 microkernel system to support
complex systems in a provably safe manner.

Cloudflare is Liable For Pirate Sites & Has No Safe Harbor, Publisher Says

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-is-liable-for-pirate-sites-and-has-no-safe-harbor-publisher-says-180201/

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

This includes thousands of “pirate” sites, including the likes of The Pirate Bay, which rely on the U.S.-based company to keep server loads down.

Many rightsholders have complained about Cloudflare’s involvement with these sites and last year adult entertainment publisher ALS Scan took it a step further by dragging the company to court.

ALS accused the CDN service of various types of copyright and trademark infringement, noting that several customers used Cloudflare’s servers to distribute pirated content. While Cloudflare managed to have several counts dismissed, the accusation of contributory copyright infringement remains.

An upcoming trial could determine whether Cloudflare is liable or not, but ALS believes that this isn’t needed. This week, the publisher filed a request for partial summary judgment, asking the court to rule over the matter in advance of a trial.

“The evidence is undisputed,” ALS writes. “Cloudflare materially assists website operators in reproduction, distribution and display of copyrighted works, including infringing copies of ALS works. Cloudflare also masks information about pirate sites and their hosts.”

ALS anticipates that Cloudflare may argue that the company or its clients are protected by the DMCA’s safe harbor provision, but contests this claim. The publisher notes that none of the customers registered the required paperwork at the US Copyright Office.

“Cloudflare may say that the Cloudflare Customer Sites are themselves service providers entitled to DMCA protections, however, none have qualified for safe harbors by submitting the required notices to the US Copyright Office.”

Cloudflare itself has no safe harbor protection either, they argue, because it operates differently than a service provider as defined in the DMCA. It’s a “smart system” which also modifies content, instead of a “dumb pipe,” they claim.

In addition, the CDN provider is accused of failing to implement a reasonable policy that will terminate repeat offenders.

“Cloudflare has no available safe harbors. Even if any safe harbors apply, Cloudflare has lost such safe harbors for failure to adopt and reasonably implement a policy including termination of repeat infringers,” ALS writes.

Previously, the court clarified that under U.S. law the company can be held liable for caching content of copyright infringing websites. Cloudflare’s “infrastructure-level caching” cannot be seen as fair use, it ruled.

ALS now asks the court to issue a partial summary judgment ruling that Cloudflare is liable for contributory copyright infringement. If this motion is granted, a trial would only be needed to establish the damages amount.

The lawsuit is a crucial matter for Cloudflare, and not only because of the potential damages it faces in this case. If Cloudflare loses, other rightsholders are likely to make similar demands, forcing the company to actively police potential pirate sites.

Cloudflare will undoubtedly counter ALS’ claims in a future filing, so this case is far from over.

A copy of ALS Scan’s memorandum in support of the motion for partial summary judgment can be found here (pdf).

Udemy Targets ‘Pirate’ Site Giving Away its Paid Courses For Free

Post Syndicated from Andy original https://torrentfreak.com/udemy-targets-pirate-site-giving-away-its-paid-courses-for-free-180129/

While there’s no shortage of people who advocate free sharing of movies and music, passions are often raised when it comes to the availability of educational information.

Significant numbers of people believe that learning should be open to all and that texts and associated materials shouldn’t be locked away by copyright holders trying to monetize knowledge. Of course, people who make a living creating learning materials see the position rather differently.

A clash of these ideals is brewing in the United States where online learning platform Udemy has been trying to have some of its courses taken down from FreeTutorials.us, a site that makes available premium tutorials and other learning materials for free.

In early December 2017, counsel acting for Udemy and a number of its individual and corporate instructors (Maximilian Schwarzmüller, Academind GmbH, Peter Dalmaris, Futureshock Enterprises, Jose Marcial Portilla, and Pierian Data) wrote to FreeTutorials.us with a DMCA takedown notice.

“Pursuant to 17 U.S.C. § 512(c)(3)(A) of the Digital Millennium Copyright Act (‘DMCA’), this communication serves as a notice of infringement and request for removal of certain web content available on freetutorials.us,” the letter reads.

“I hereby request that you remove or disable access to the material listed in Exhibit A in as expedient a fashion as possible. This communication does not constitute a waiver of any right to recover damages incurred by virtue of any such unauthorized activities, and such rights as well as claims for other relief are expressly retained.”

A small sample of Exhibit A

On January 10, 2018, the same law firm wrote to Cloudflare, which provides services to FreeTutorials. The DMCA notice asked Cloudflare to disable access to the same set of infringing content listed above.

It seems likely that whatever happened next wasn’t to Udemy’s satisfaction. On January 16, an attorney from the same law firm filed a DMCA subpoena at a district court in California. A DMCA subpoena can enable a copyright holder to obtain the identity of an alleged infringer without having to file a lawsuit and without needing a signature from a judge.

The subpoena ordered Cloudflare to hand over “all identifying information identifying the owner, operator and/or contact person(s) associated with the domain www.freetutorials.us, including but not limited to name(s), address(es), telephone number(s), email address(es), Internet protocol connection records, administrative records and billing records from the time the account was established to the present.”

On January 26, the date by which Cloudflare was ordered to hand over the information, Cloudflare wrote to FreeTutorials with a somewhat late-in-the-day notification.

“We received the attached subpoena regarding freetutorials.us, a domain managed through your Cloudflare account. The subpoena requires us to provide information in our systems related to this website,” the company wrote.

“We have determined that this is a valid subpoena, and we are required to provide the requested information. In accordance with our Privacy Policy, we are informing you before we provide any of the requested subscriber information. We plan to turn over documents in response to the subpoena on January 26th, 2018, unless you intervene in the case.”

With that deadline passing last Friday, it’s safe to say that Cloudflare has complied with the subpoena as the law requires. However, TorrentFreak spoke with FreeTutorials, who told us that Cloudflare doesn’t hold anything useful on them.

“No, they have nothing,” the team explained.

Noting that they’ll soon dispense with the services of Cloudflare, the team confirmed that they had received emails from Udemy and its instructors but hadn’t done a lot in response.

“How about a ‘NO’? was our answer to all the DMCA takedown requests from Udemy and its Instructors,” they added.

FreeTutorials (FTU) is affiliated with FreeCoursesOnline (FCO) and seems passionate about what it does. In common with others who distribute learning materials online, the team expresses a belief in free education for all, irrespective of financial resources.

“We, FTU and FCO, are a group of seven members assorted as a team from different countries and cities. We are JN, SRZ aka SunRiseZone, Letap, Lihua Google Drive, Kaya, Zinnia, Faiz MeemBazooka,” a spokesperson revealed.

“We’re all members and colleagues and we also have our own daily work and business stuff to do. We have been through that phase of life when we didn’t have enough money to buy books and get tuition or even apply for a good course that we always wanted to have, so FTU & FCO are just our vision to provide Free Education For Everyone.

“We would love to change our priorities towards our current and future projects, only if we manage to get some faithful FTU’ers to join in and help us to grow together and make FTU a place it should be.”

TorrentFreak requested comment from Udemy but at the time of publication, we had yet to hear back. However, we did manage to get in touch with Jonathan Levi, a Udemy instructor who sent this takedown notice to the site in October 2017:

“I’m writing to you on behalf of SuperHuman Enterprises, LLC. You are in violation of our copyright, using our images, and linking to pirated copies of our courses. Remove them IMMEDIATELY or face severe legal action….You have 48 hours to comply,” he wrote, adding:

“And in case you’re going to say I don’t have evidence that I own the files, it’s my fucking face in the videos.”

Levi says that the site had been non-responsive so now things are being taken to the next level.

“They don’t reply to takedowns, so we’ve joined a class action lawsuit against FTU lead by Udemy and a law firm specializing in this type of thing,” Levi concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Task Networking in AWS Fargate

Post Syndicated from Nathan Peck original https://aws.amazon.com/blogs/compute/task-networking-in-aws-fargate/

AWS Fargate is a technology that allows you to focus on running your application without needing to provision, monitor, or manage the underlying compute infrastructure. You package your application into a Docker container that you can then launch using your container orchestration tool of choice.

Fargate allows you to use containers without being responsible for Amazon EC2 instances, similar to how EC2 allows you to run VMs without managing physical infrastructure. Currently, Fargate provides support for Amazon Elastic Container Service (Amazon ECS). Support for Amazon Elastic Container Service for Kubernetes (Amazon EKS) will be made available in the near future.

Despite offloading the responsibility for the underlying instances, Fargate still gives you deep control over configuration of network placement and policies. This includes the ability to use many networking fundamentals such as Amazon VPC and security groups.

This post covers how to take advantage of the different ways of networking your containers in Fargate when using ECS as your orchestration platform, with a focus on how to do networking securely.

The first step to running any application in Fargate is defining an ECS task for Fargate to launch. A task is a logical group of one or more Docker containers that are deployed with specified settings. When running a task in Fargate, there are two different forms of networking to consider:

  • Container (local) networking
  • External networking

Container Networking

Container networking is often used for tightly coupled application components. Perhaps your application has a web tier that is responsible for serving static content as well as generating some dynamic HTML pages. To generate these dynamic pages, it has to fetch information from another application component that has an HTTP API.

One potential architecture for such an application is to deploy the web tier and the API tier together as a pair and use local networking so the web tier can fetch information from the API tier.

If you are running these two components as two processes on a single EC2 instance, the web tier application process could communicate with the API process on the same machine by using the local loopback interface. The local loopback interface has a special IP address of 127.0.0.1 and hostname of localhost.

A networking request to this local interface bypasses the network interface hardware; the operating system simply routes the call from one process to the other directly. This gives the web tier a fast and efficient way to fetch information from the API tier with almost no networking latency.

In Fargate, when you launch multiple containers as part of a single task, they can also communicate with each other over the local loopback interface. Fargate uses a special container networking mode called awsvpc, which gives all the containers in a task a shared elastic network interface to use for communication.

If you specify a port mapping for each container in the task, then the containers can communicate with each other on those ports. For example, the following task definition could be used to deploy the web tier and the API tier:

{
  "family": "myapp"
  "containerDefinitions": [
    {
      "name": "web",
      "image": "my web image url",
      "portMappings": [
        {
          "containerPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10,
      "esssential": true
    },
    {
      "name": "api",
      "image": "my api image url",
      "portMappings": [
        {
          "containerPort": 8080
        }
      ],
      "cpu": 10,
      "memory": 500,
      "essential": true
    }
  ]
}

ECS, with Fargate, is able to take this definition and launch two containers, each of which is bound to a specific static port on the elastic network interface for the task.
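As a rough sketch of how you might launch this yourself, the following AWS CLI commands register the definition and run it with the Fargate launch type. The cluster name, subnet, and security group IDs are placeholders, and a real Fargate task definition additionally needs the awsvpc network mode, task-level CPU and memory, and the FARGATE compatibility setting:

# Register the task definition (assumes myapp.json holds the JSON above,
# extended with the Fargate-required fields noted in the lead-in)
aws ecs register-task-definition --cli-input-json file://myapp.json

# Run the task on Fargate; the awsvpc configuration attaches the shared
# elastic network interface that the containers use to talk to each other
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition myapp \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345678],securityGroups=[sg-12345678],assignPublicIp=ENABLED}"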

Because each Fargate task has its own isolated networking stack, there is no need for dynamic ports to avoid port conflicts between different tasks as in other networking modes. The static ports make it easy for containers to communicate with each other. For example, the web container makes a request to the API container using its well-known static port:

curl 127.0.0.1:8080/my-endpoint

This sends a local network request, which goes directly from one container to the other over the local loopback interface without traversing the network. This deployment strategy allows for fast and efficient communication between two tightly coupled containers. But most application architectures require more than just internal local networking.

External Networking

External networking is used for network communications that go outside the task to other servers that are not part of the task, or network communications that originate from other hosts on the internet and are directed to the task.

Configuring external networking for a task is done by modifying the settings of the VPC in which you launch your tasks. A VPC is a fundamental tool in AWS for controlling the networking capabilities of resources that you launch on your account.

When setting up a VPC, you create one or more subnets, which are logical groups that your resources can be placed into. Each subnet resides in a single Availability Zone and has its own route table, which defines rules about how network traffic operates for that subnet. There are two main types of subnets: public and private.

Public subnets

A public subnet is a subnet that has an associated internet gateway. Fargate tasks in that subnet are assigned both private and public IP addresses:


A browser or other client on the internet can send network traffic to the task via the internet gateway using its public IP address. The tasks can also send network traffic to other servers on the internet because the route table can route traffic out via the internet gateway.

If tasks want to communicate with each other, they can use their private IP addresses to send traffic directly from one to the other, so that it stays inside the subnet without going out to the internet gateway and back in.

Private subnets

A private subnet does not have direct internet access. The Fargate tasks inside the subnet don’t have public IP addresses, only private IP addresses. Instead of an internet gateway, a network address translation (NAT) gateway is attached to the subnet:

 

There is no way for another server or client on the internet to reach your tasks directly, because they don’t even have an address or a direct route to reach them. This is a great way to add another layer of protection for internal tasks that handle sensitive data. Those tasks are protected and can’t receive any inbound traffic at all.

In this configuration, the tasks can still communicate to other servers on the internet via the NAT gateway. They would appear to have the IP address of the NAT gateway to the recipient of the communication. If you run a Fargate task in a private subnet, you must add this NAT gateway. Otherwise, Fargate can’t make a network request to Amazon ECR to download the container image, or communicate with Amazon CloudWatch to store container metrics.
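As a sketch of the plumbing involved, the following AWS CLI commands create a NAT gateway in a public subnet and route a private subnet’s outbound traffic through it; all of the resource IDs are placeholders:

# Allocate an Elastic IP address for the NAT gateway
aws ec2 allocate-address --domain vpc

# Create the NAT gateway in a public subnet, using the allocation ID returned above
aws ec2 create-nat-gateway --subnet-id subnet-12345678 --allocation-id eipalloc-12345678

# Point the private subnet's default route at the NAT gateway
aws ec2 create-route \
  --route-table-id rtb-12345678 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0123456789abcdef0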

Load balancers

If you are running a container that is hosting internet content in a private subnet, you need a way for traffic from the public to reach the container. This is generally accomplished by using a load balancer such as an Application Load Balancer or a Network Load Balancer.

ECS integrates tightly with AWS load balancers by automatically configuring a service-linked load balancer to send network traffic to containers that are part of the service. When each task starts, the IP address of its elastic network interface is added to the load balancer’s configuration. When the task is being shut down, network traffic is safely drained from the task before removal from the load balancer.

To get internet traffic to containers using a load balancer, the load balancer is placed into a public subnet. ECS configures the load balancer to forward traffic to the container tasks in the private subnet:

This configuration allows your tasks in Fargate to be safely isolated from the rest of the internet. They can still initiate network communication with external resources via the NAT gateway, and still receive traffic from the public via the Application Load Balancer that is in the public subnet.
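For illustration, here is a sketch of creating such a service from the AWS CLI, assuming an Application Load Balancer target group already exists; the cluster, names, IDs, and ARN are placeholders:

# Create a Fargate service behind an existing ALB target group; ECS registers
# each task's elastic network interface IP with the target group
aws ecs create-service \
  --cluster my-cluster \
  --service-name web \
  --task-definition myapp \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-12345678],securityGroups=[sg-12345678]}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef,containerName=web,containerPort=80"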

Another potential use case for a load balancer is internal communication from one service to another within the private subnet. This is typical of a microservice deployment, in which one service, such as a user account service, needs to communicate with an internal service, such as a password service. Obviously, it is undesirable for the password service to be directly accessible on the internet, so using an internet-facing load balancer would be a major security vulnerability. Instead, this can be accomplished by hosting an internal load balancer within the private subnet:

With this approach, one container can distribute requests across an Auto Scaling group of other private containers via the internal load balancer, ensuring that the network traffic stays safely protected within the private subnet.

Best Practices for Fargate Networking

Determine whether you should use local task networking

Local task networking is ideal for communicating between containers that are tightly coupled and require maximum networking performance between them. However, containers that are deployed as part of the same task are always deployed together, which removes the ability to scale different types of workloads up and down independently.

In the example of the application with a web tier and an API tier, it may be the case that powering the application requires only two web tier containers but 10 API tier containers. If local container networking is used between these two container types, then eight unnecessary web tier containers would end up running alongside the 10 API containers, instead of the two tiers scaling independently.

A better approach would be to deploy the two containers as two different services, each with its own load balancer. Clients communicate with the two web containers via the web service’s load balancer, and the web service distributes requests across the 10 backend API containers via the API service’s load balancer.

Run tasks that require internet access in a public subnet

If you have tasks that require internet access and a lot of bandwidth for communication with other services, it is best to run them in a public subnet. Give them public IP addresses so that each task can communicate with other services directly.

If you run these tasks in a private subnet, then all their outbound traffic has to go through a NAT gateway. AWS NAT gateways support up to 10 Gbps of burst bandwidth. If your bandwidth requirements go over this, your task networking starts to get throttled. To avoid this, you could distribute the tasks across multiple private subnets, each with its own NAT gateway, but it can be easier to just place the tasks into a public subnet, if possible.

Avoid using a public subnet or public IP addresses for private, internal tasks

If you are running a service that handles private, internal information, you should not put it into a public subnet or use a public IP address. For example, imagine that you have one task, which is an API gateway for authentication and access control. You have another background worker task that handles sensitive information.

The intended access pattern is that requests from the public go to the API gateway, which then proxies requests to the background task only if they come from an authenticated user. If the background task is in a public subnet and has a public IP address, then an attacker could bypass the API gateway entirely, communicating directly with the background task using its public IP address without being authenticated.

Conclusion

Fargate gives you a way to run containerized tasks directly without managing any EC2 instances, but you still have full control over how you want networking to work. You can set up containers to talk to each other over the local network interface for maximum speed and efficiency. For running workloads that require privacy and security, use a private subnet with public internet access locked down. Or, for simplicity with an internet workload, you can just use a public subnet and give your containers a public IP address.

To deploy one of these Fargate task networking approaches, check out some sample CloudFormation templates showing how to configure the VPC, subnets, and load balancers.

If you have questions or suggestions, please comment below.

Playboy Brands Boing Boing a “Clickbait” Site With No Fair Use Defense

Post Syndicated from Andy original https://torrentfreak.com/playboy-brands-boing-boing-a-clickbait-site-with-no-fair-use-defense-180126/

In late 2017, Boing Boing co-editor Xeni Jardin posted an article linking to an archive containing every Playboy centerfold image to date.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin noted.

While Boing Boing had nothing to do with the compilation, uploading, or storing of the Imgur-based archive, Playboy took exception to the popular blog linking to the album.

Noting that Jardin had referred to the archive uploader as a “wonderful person”, the adult publication responded with a lawsuit (pdf), claiming that Boing Boing had commercially exploited its copyrighted images.

Last week, with assistance from the Electronic Frontier Foundation, Boing Boing parent company Happy Mutants filed a motion to dismiss in which it defended its right to comment on and link to copyrighted content without that constituting infringement.

“This lawsuit is frankly mystifying. Playboy’s theory of liability seems to be that it is illegal to link to material posted by others on the web — an act performed daily by hundreds of millions of users of Facebook and Twitter, and by journalists like the ones in Playboy’s crosshairs here,” the company wrote.

EFF Senior Staff Attorney Daniel Nazer weighed in too, arguing that since Boing Boing’s reporting and commenting is protected by copyright’s fair use doctrine, the “deeply flawed” lawsuit should be dismissed.

Now, just a week later, Playboy has fired back. Opposing Happy Mutants’ request for the Court to dismiss the case, the company cites the now-famous Perfect 10 v. Amazon/Google case from 2007, which tried to prevent Google from facilitating access to infringing images.

Playboy highlights the court’s finding that Google could have been held contributorily liable if it had knowledge that Perfect 10 images were available using its search engine, could have taken simple measures to prevent further damage, but failed to do so.

Turning to Boing Boing’s conduct, Playboy says that the company knew it was linking to infringing content, could have taken steps to prevent that, but failed to do so. It then launches an attack on the site itself, offering disparaging comments concerning its activities and business model.

“This is an important case. At issue is whether clickbait sites like Happy Mutants’ Boing Boing weblog — a site designed to attract viewers and encourage them to click on links in order to generate advertising revenue — can knowingly find, promote, and profit from infringing content with impunity,” Playboy writes.

“Clickbait sites like Boing Boing are not known for creating original content. Rather, their business model is based on ‘collecting’ interesting content created by others. As such, they effectively profit off the work of others without actually creating anything original themselves.”

Playboy notes that while sites like Boing Boing are within their rights to leverage works created by others, courts in the US and overseas have ruled that knowingly linking to infringing content is unacceptable.

Even given these conditions, Playboy argues, Happy Mutants and the EFF now want the Court to dismiss the case so that sites are free to “not only encourage, facilitate, and induce infringement, but to profit from those harmful activities.”

Claiming that Boing Boing’s only reason for linking to the infringing album was to “monetize the web traffic that over fifty years of Playboy photographs would generate”, Playboy insists that the site and parent company Happy Mutants was properly charged with copyright infringement.

Playboy also dismisses Boing Boing’s argument that a link to infringing content cannot result in liability due to the link having both infringing and substantial non-infringing uses.

First citing the Betamax case, which found that Sony, as the manufacturer, could not be held liable for infringement because its video recorders had substantial non-infringing uses, Playboy counters with the Grokster decision, which held that the distributor of a product can be liable for infringement if there was an intent to encourage or support infringement.

“In this case, Happy Mutants’ offending link — which does nothing more than support infringing content — is good for nothing but promoting infringement and there is no legitimate public interest in its unlicensed availability,” Playboy notes.

In its motion to dismiss, Happy Mutants also argued that unless Playboy could identify users who “in fact downloaded — rather than simply viewing — the material in question,” the case should be dismissed. However, Playboy rejects the argument, claiming it is based on an erroneous interpretation of the law.

Citing the Grokster decision once more, the adult publisher notes that the Supreme Court found that someone infringes contributorily when they intentionally induce or encourage direct infringement.

“The argument that contributory infringement only lies where the defendant’s actions result in further infringement ignores the ‘or’ and collapses ‘inducing’ and ‘encouraging’ into one thing when they are two distinct things,” Playboy writes.

As for Boing Boing’s four classic fair use arguments, the publisher describes these as “extremely weak” and proceeds to hit them one by one.

In respect of the purpose and character of the use, Playboy discounts Boing Boing’s position that the aim of its post was to show “how our standards of hotness, and the art of commercial erotic photography, have changed over time.” The publisher argues that this is the exact same purpose of Playboy magazine, highlighting its publication Playboy: The Complete Centerfolds, 1953-2016.

Moving on to the second factor of fair use – the nature of the copyrighted work – Playboy notes that an entire album of artwork is involved, rather than just a single image.

On the third factor, concerning the amount and substantiality of the original work used, Playboy argues that in order to publish an opinion on how “standards of hotness” had developed over time, there was no need to link to all of the pictures in the archive.

“Had only representative images from each decade, or perhaps even each year, been taken, this would be a very different case — but Happy Mutants cannot dispute that it knew it was linking to an illegal library of ‘Every Playboy Playmate Centerfold Ever’ since that is what it titled its blog post,” Playboy notes.

Finally, when considering the effect of the use upon the potential market for or value of the copyrighted work, Playboy says its archive of images continues to be monetized and that Boing Boing’s use of infringing images jeopardizes that.

“Given that people are generally not going to pay for what is freely available, it is disingenuous of Happy Mutants to claim that promoting the free availability of infringing archives of Playboy’s work for viewing and downloading is not going to have an adverse effect on the value or market of that work,” the publisher adds.

While it appears the parties agree on very little, there is agreement on one key aspect of the case – its wider importance.

On the one hand, Playboy insists that a finding in its favor will ensure that people can’t commercially exploit infringing content with impunity. On the other, Boing Boing believes that the health of the entire Internet is at stake.

“The world can’t afford a judgment against us in this case — it would end the web as we know it, threatening everyone who publishes online, from us five weirdos in our basements to multimillion-dollar, globe-spanning publishing empires like Playboy,” the company concludes.

Playboy’s opposition to Happy Mutants’ motion to dismiss can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Grumpy Cat Wins $710,000 From Copyright Infringing Coffee Maker

Post Syndicated from Ernesto original https://torrentfreak.com/grumpy-cat-wins-710000-copyright-infringing-coffee-maker-180125/

There are dozens of celebrity cats on the Internet, but Grumpy Cat probably tops them all.

The cat’s owners have made millions thanks to their pet’s unique facial expression, which turned her into an overnight Internet star.

Part of this revenue comes from successful merchandise lines, including the Grumpy Cat “Grumppuccino” iced coffee beverage, sold by the California company Grenade Beverage.

The company licensed the copyright and trademarks to sell the iced coffee but is otherwise not affiliated with the cat and its owners. Initially, this partnership went well, but after the coffee maker started to sell other “Grumpy Cat” products, things turned bad.

The cat’s owners, incorporated as Grumpy Cat LLC, took the matter to court with demands for the coffee maker to stop infringing associated copyrights and trademarks.

“Without authorization, Defendants […] have extensively and repeatedly exploited the Grumpy Cat Copyrights and the Grumpy Cat Trademarks,” the complaint read.

Pirate coffee..


After two years, the case went before a jury this week where, Courthouse News reports, the cat itself also made an appearance.

The eight-person jury in Santa Ana, California sided with the cat’s owner and awarded the company $710,000 in copyright and trademark infringement damages, as well as a symbolic $1 for contract breach.

According to court documents, the majority of the damages have to be paid by Grenade Beverage, but the company’s owner Paul Sandford is also held personally liable for $60,000.

The verdict is good news for Grumpy Cat and its owners who, according to their attorney, are happy with the outcome.

“Grumpy Cat feels vindicated and feels the jury reached a just verdict,” Grumpy Cat’s lawyer David Jonelis said, describing it as “a complete victory.”

A copy of the verdict form is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Building Blocks of Amazon ECS

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/building-blocks-of-amazon-ecs/

So, what’s Amazon Elastic Container Service (ECS)? ECS is a managed service for running containers on AWS, designed to make it easy to run applications in the cloud without worrying about configuring the environment for your code to run in. Using ECS, you can easily deploy containers to host a simple website or run complex distributed microservices using thousands of containers.

Getting started with ECS isn’t too difficult. To fully understand how it works and how you can use it, it helps to understand the basic building blocks of ECS and how they fit together!

Let’s begin with an analogy

Imagine you’re in a virtual reality game with blocks and portals, in which your task is to build kingdoms.

In your spaceship, you pull up a holographic map of your upcoming destination: Nozama, a golden-orange planet. Looking at its various regions, you see that the nearest one is za-southwest-1 (SW Nozama). You set your destination, and use your jump drive to jump to the outer atmosphere of za-southwest-1.

As you approach SW Nozama, you see three portals, 1a, 1b, and 1c. Each portal lets you transport directly to an isolated zone (Availability Zone), where you can start construction on your new kingdom (cluster), Royaume.

With your supply of blocks, you take the portal to 1b, and erect the surrounding walls of your first territory (instance)*.

Before you get ahead of yourself, there are some rules to keep in mind. For your territory to be a part of Royaume, the land ordinance requires construction of a building (container), specifically a castle, from which your territory’s lord (agent)* rules.

You can then create architectural plans (task definitions) to build your developments (tasks), consisting of up to 10 buildings per plan. A development can be built within this territory or any other, or across multiple territories.

If you do decide to create more territories, you can either stay here in 1b or take a portal to another location in SW Nozama and start building there.

Amazon EC2 building blocks

We currently provide two launch types: EC2 and Fargate. With Fargate, the Amazon EC2 instances are abstracted away and managed for you. Instead of worrying about ECS container instances, you can just worry about tasks. In this post, the infrastructure components used by ECS that are handled by Fargate are marked with a *.

Instance*

EC2 instances are good ol’ virtual machines (VMs). And yes, don’t worry, you can connect to them (via SSH). Because customers have varying needs in memory, storage, and computing power, many different instance types are offered. Just want to run a small application or try a free trial? Try t2.micro. Want to run memory-optimized workloads? R3 and X1 instances are a couple of options. There are many more instance types as well, which cater to various use cases.

AMI*

Sorry if you wanted to immediately march forward, but before you create your instance, you need to choose an AMI. AMI stands for Amazon Machine Image. What does that mean? Basically, an AMI provides the information required to launch an instance: root volume, launch permissions, and volume-attachment specifications. You can find and choose a Linux or Windows AMI provided by AWS, the user community, or the AWS Marketplace (for example, the Amazon ECS-Optimized AMI), or you can create your own.

Region

AWS is divided into regions that are geographic areas around the world (for now it’s just Earth, but maybe someday…). These regions have semi-evocative names such as us-east-1 (N. Virginia), us-west-2 (Oregon), eu-central-1 (Frankfurt), ap-northeast-1 (Tokyo), etc.

Each region is designed to be completely isolated from the others, and consists of multiple, distinct data centers. This creates a “blast radius” for failure so that even if an entire region goes down, the others aren’t affected. Like many AWS services, to start using ECS, you first need to decide the region in which to operate. Typically, this is the region nearest to you or your users.

Availability Zone

AWS regions are subdivided into Availability Zones. A region has at minimum two zones, and up to a handful. Zones are physically isolated from each other, spanning one or more different data centers, but are connected through low-latency, fiber-optic networking, and share some common facilities. EC2 is designed so that the most common failures only affect a single zone to prevent region-wide outages. This means you can achieve high availability in a region by spanning your services across multiple zones and distributing across hosts.

Amazon ECS building blocks

Container

Well, without containers, ECS wouldn’t exist!

Are containers virtual machines?
Nope! Virtual machines virtualize the hardware (benefits!), while containers virtualize the operating system (even more benefits!). If you looked inside a container, you would see that it is made up of processes running on the host, tied together by kernel constructs like namespaces and cgroups. But you don’t need to bother with that level of detail, at least not in this post!

Why containers?
Containers give you the ability to build, ship, and run your code anywhere!

Before the cloud, you needed to self-host and therefore had to buy machines in addition to setting up and configuring the operating system (OS), and running your code. In the cloud, with virtualization, you can just skip to setting up the OS and running your code. Containers make the process even easier—you can just run your code.

Additionally, all of the dependencies travel in a package with the code, which is called an image. This allows containers to be deployed on any host machine. From the outside, it looks like a host is just holding a bunch of containers. They all look the same, in the sense that they are generic enough to be deployed on any host.

With ECS, you can easily run your containerized code and applications across a managed cluster of EC2 instances.

Are containers a fairly new technology?
The concept of containerization is not new. Its origins date back to 1979 with the creation of chroot. However, it wasn’t until the early 2000s that containers became a major technology. The most significant milestone to date was the release of Docker in 2013, which led to the popularization and widespread adoption of containers.

What does ECS use?
While other container technologies exist (LXC, rkt, etc.), because of its massive adoption and use by our customers, ECS was designed first to work natively with Docker containers.

Container instance*

Yep, you are back to instances. An instance is just slightly more complex in the ECS realm though. Here, a container instance is an EC2 instance that runs the agent, has a specifically defined IAM policy and role, and has been registered into your cluster.

And as you probably guessed, in these instances, you are running containers. 

AMI*

These container instances can use any AMI as long as it meets the following specification: a modern Linux distribution running the agent and the Docker daemon with any Docker runtime dependencies.

Want it more simplified? Well, AWS created the Amazon ECS-Optimized AMI for just that. Not only does that AMI come preconfigured with all of the previously mentioned specifications, it’s tested and includes the recommended ecs-init upstart process to run and monitor the agent.

Cluster

An ECS cluster is a grouping of (container) instances* (or tasks in Fargate) that lies within a single region but can span multiple Availability Zones; spanning zones is even a good idea for redundancy. When you launch an instance (or task in Fargate), unless you specify otherwise, it registers with the cluster named “default”. If “default” doesn’t exist, it is created. You can also scale and delete your clusters.
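For illustration, creating and inspecting a cluster from the AWS CLI might look like the following sketch; the cluster name is just a placeholder borrowed from the analogy above:

# Create a cluster (skipping this step and launching instances registers them into "default")
aws ecs create-cluster --cluster-name Royaume

# List the container instances that have registered into the cluster
aws ecs list-container-instances --cluster Royaume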

Agent*

The Amazon ECS container agent is a Go program that runs in its own container within each EC2 instance that you use with ECS. (It’s also available open source on GitHub!) The agent is the intermediary component that takes care of the communication between the scheduler and your instances. Want to register your instance into a cluster? (Why wouldn’t you? A cluster is both a logical boundary and a provider of a pool of resources!) Then you need to run the agent on it.

Task

When you want to start a container, it has to be part of a task. Therefore, you have to create a task first. Succinctly, tasks are a logical grouping of 1 to N containers that run together on the same instance, with N defined by you, up to 10. Let’s say you want to run a custom blog engine. You could put together a web server, an application server, and an in-memory cache, each in their own container. Together, they form a basic frontend unit.

Task definition

Ah, but you cannot create a task directly. You have to create a task definition that tells ECS that “task definition X is composed of this container (and maybe that other container and that other container too!).” It’s kind of like an architectural plan for a city. Some other details it can include are how the containers interact, container CPU and memory constraints, and task permissions using IAM roles.

Then you can tell ECS, “start one task using task definition X.” It might sound like unnecessary planning at first. As soon as you start to deal with multiple tasks, scaling, upgrades, and other “real life” scenarios, you’ll be glad that you have task definitions to keep track of things!
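Concretely, registering a plan for the blog engine example might look like this sketch, where blog-engine.json is a hypothetical file containing container definitions similar to the JSON example earlier in this document:

# Register a task definition from a local JSON file
aws ecs register-task-definition --cli-input-json file://blog-engine.json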

Scheduler*

So, the scheduler schedules… sorry, this should be more helpful, huh? The scheduler is part of the “hosted orchestration layer” provided by ECS. Wait a minute, what do I mean by “hosted orchestration”? Simply put, hosted means that it’s operated by ECS on your behalf, without you having to care about it. Your applications are deployed in containers running on your instances, but the managing of tasks is taken care of by ECS. One less thing to worry about!

Also, the scheduler is the component that decides what (which containers) gets to run where (on which instances), according to a number of constraints. Say that you have a custom blog engine to scale for high availability. You could create a service, which by default spreads tasks across all zones in the chosen region. And if you want each task to be on a different instance, you can use the distinctInstance task placement constraint. ECS not only makes sure that this happens, but also restarts a task if it fails.

Service

To ensure that you always have your task running without managing it yourself, you can create a service based on the task that you defined and ECS ensures that it stays running. A service is a special construct that says, “at any given time, I want to make sure that N tasks using task definition X1 are running.” If N=1, it just means “make sure that this task is running, and restart it if needed!” And with N>1, you’re basically scaling your application until you hit N, while also ensuring each task is running.
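As a rough sketch, creating such a service from the AWS CLI might look like this; the names are placeholders, and the distinctInstance constraint from the scheduler section is included to keep each task on its own instance:

# Keep three copies of the blog-engine task running, each on a different instance
aws ecs create-service \
  --cluster Royaume \
  --service-name blog-engine \
  --task-definition blog-engine \
  --desired-count 3 \
  --placement-constraints type=distinctInstance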

So, what now?

Hopefully you, at the very least, learned a tiny something. All comments are very welcome!

Want to discuss ECS with others? Join the amazon-ecs slack group, which members of the community created and manage.

Also, if you’re interested in learning more about the core concepts of ECS and its relation to EC2, here are some resources:

Pages
Amazon ECS landing page
AWS Fargate landing page
Amazon ECS Getting Started
Nathan Peck’s AWSome ECS

Docs
Amazon EC2
Amazon ECS

Blogs
AWS Compute Blog
AWS Blog

GitHub code
Amazon ECS container agent
Amazon ECS CLI

AWS videos
Learn Amazon ECS
AWS videos
AWS webinars

 

— tiffany
@tiffanyfayj

Recent EC2 Goodies – Launch Templates and Spread Placement

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/recent-ec2-goodies-launch-templates-and-spread-placement/

We launched some important new EC2 instance types and features at AWS re:Invent. I’ve already told you about the M5, H1, T2 Unlimited and Bare Metal instances, and about Spot features such as Hibernation and the New Pricing Model. Randall told you about the Amazon Time Sync Service. Today I would like to tell you about two of the features that we launched: Spread placement groups and Launch Templates. Both features are available in the EC2 Console and from the EC2 APIs, and can be used in all of the AWS Regions in the “aws” partition.

Launch Templates
You can use launch templates to store the instance, network, security, storage, and advanced parameters that you use to launch EC2 instances, and can also include any desired tags. Each template can include any desired subset of the full collection of parameters. You can, for example, define common configuration parameters such as tags or network configurations in a template, and allow the other parameters to be specified as part of the actual launch.

Templates give you the power to set up a consistent launch environment that spans instances launched in On-Demand and Spot form, as well as through EC2 Auto Scaling and as part of a Spot Fleet. You can use them to implement organization-wide standards and to enforce best practices, and you can give your IAM users the ability to launch instances via templates while withholding the ability to do so via the underlying APIs.

Templates are versioned and you can use any desired version when you launch an instance. You can create templates from scratch, base them on the previous version, or copy the parameters from a running instance.

Here’s how you create a launch template in the Console:

Here’s how to include network interfaces, storage volumes, tags, and security groups:

And here’s how to specify advanced and specialized parameters:

You don’t have to specify values for all of these parameters in your templates; enter the values that are common to multiple instances or launches and specify the rest at launch time.

When you click Create launch template, the template is created and can be used to launch On-Demand instances, create Auto Scaling Groups, and create Spot Fleets:

The Launch Instance button now gives you the option to launch from a template:

Simply choose the template and the version, and finalize all of the launch parameters:

You can also manage your templates and template versions from the Console:
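If you prefer the AWS Command Line Interface (CLI), a minimal sketch might look like the following; the template name, AMI ID, and tag values are placeholders:

# Create a launch template holding the common parameters
aws ec2 create-launch-template \
  --launch-template-name web-server \
  --launch-template-data '{"ImageId":"ami-12345678","InstanceType":"t2.micro","TagSpecifications":[{"ResourceType":"instance","Tags":[{"Key":"team","Value":"web"}]}]}'

# Launch an instance from the template, specifying nothing else
aws ec2 run-instances --launch-template LaunchTemplateName=web-server,Version=1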

To learn more about this feature, read Launching an Instance from a Launch Template.

Spread Placement Groups
Spread placement groups indicate that you do not want the instances in the group to share the same underlying hardware. Applications that rely on a small number of critical instances can launch them in a spread placement group to reduce the odds that one hardware failure will impact more than one instance. Here are a few things to keep in mind when you use spread placement groups:

  • Availability Zones – A single spread placement group can span multiple Availability Zones. You can have a maximum of seven running instances per Availability Zone per group.
  • Unique Hardware – Launch requests can fail if there is insufficient unique hardware available. The situation changes over time as overall usage changes and as we add additional hardware; you can retry failed requests at a later time.
  • Instance Types – You can launch a wide variety of M4, M5, C3, R3, R4, X1, X1e, D2, H1, I2, I3, HS1, F1, G2, G3, P2, and P3 instance types in spread placement groups.
  • Reserved Instances – Instances launched into a spread placement group can make use of reserved capacity. However, you cannot currently reserve capacity for a placement group and could receive an ICE (Insufficient Capacity Error) even if you have some RIs available.
  • Applicability – You cannot use spread placement groups in conjunction with Dedicated Instances or Dedicated Hosts.

You can create and use spread placement groups from the AWS Management Console, the AWS Command Line Interface (CLI), the AWS Tools for Windows PowerShell, and the AWS SDKs. The console has a new feature that will help you to learn how to use the command line:

You can specify an existing placement group or create a new one when you launch an EC2 instance:
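For example, a minimal CLI sketch might look like this, with the group name and instance parameters as placeholders:

# Create a spread placement group
aws ec2 create-placement-group --group-name critical-nodes --strategy spread

# Launch an instance into the group
aws ec2 run-instances \
  --image-id ami-12345678 \
  --instance-type m5.large \
  --placement GroupName=critical-nodes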

To learn more, read about Placement Groups.

Jeff;