Tag Archives: eu

Treasure Trove of AACS 2.0 UHD Blu-Ray Keys Leak Online

Post Syndicated from Ernesto original https://torrentfreak.com/treasure-trove-of-aacs-2-0-uhd-blu-ray-keys-leak-online-171211/

Nowadays, movie buffs and videophiles find it hard to imagine a good viewing experience without UHD content, but disc rippers and pirates have remained on the sidelines for a long time.

Protected with strong AACS 2.0 encryption, UHD Blu-ray discs have long been one of the last bastions movie pirates had yet to breach.

This year there have been some major developments on this front, as full copies of UHD discs started to leak online. While it remained unclear how these were ripped, it was a definite milestone.

Just a few months ago another breakthrough came when a Russian company released a Windows tool called DeUHD that could rip UHD Blu-ray discs. Again, the method for obtaining the keys was not revealed.

Now there’s another setback for AACS LA, the licensing outfit founded by Warner Bros, Disney, Microsoft, Intel, and others. On various platforms around the Internet, copies of 72 AACS 2.0 keys are being shared.

The first mention we can find was posted a few days ago in a ten-year-old thread on the Doom9 forums. Since then it has been replicated a few times, without much fanfare.

The keys

The keys in question are confirmed to work and allow people to rip UHD Blu-ray discs of movies with freely available software such as MakeMKV. They are also different from the DeUHD list, which suggests that more than one party knows how to obtain them.

The full list of leaked keys includes movies such as Deadpool, Hancock, Passengers, Star Trek: Into Darkness, and The Martian. Some movies have multiple keys, likely as a result of different disc releases.

The leaked keys are also relevant for another reason. Ten years ago, a hacker leaked the AACS cryptographic key “09 F9” online which prompted the MPAA and AACS LA to issue DMCA takedown requests to sites where it surfaced.

This escalated into a censorship debate when Digg started removing articles that referenced the leak, triggering a massive backlash.

Thus far the response to the AACS 2.0 leaks has been pretty tame, but it’s still early days. A user who posted the leaked keys on MyCe has already removed them due to possible copyright problems, so it’s clearly still a touchy subject.

The question that remains now is how the hacker managed to secure the keys, and if AACS 2.0 has been permanently breached.


16-Year-Old Boy Arrested for Running Pirate TV Service

Post Syndicated from Andy original https://torrentfreak.com/16-year-old-boy-arrested-for-running-pirate-tv-service-171211/

After more than a decade and a half in existence, public pirate sites, services, and apps remain a thorn in the side of entertainment industry groups who are determined to close them down.

That trend continued last week when French anti-piracy group ALPA teamed up with police in the Bordeaux region to raid and arrest the founder and administrator of piracy service ARTV.

According to the anti-piracy group, the ARTV.watch website first appeared during April 2017 but quickly grew to become a significant source of streaming TV piracy. Every month the site had around 150,000 visitors and in less than eight months amassed 800,000 registered users.

“Artv.watch was a public site offering live access to 176 free and paid French TV channels that are members of ALPA: Canal + Group, M6 Group, TF1 Group, France Télévision Group, Paramount, Disney, and FOX. Other thematic and sports channels were broadcast,” an ALPA statement reads.

This significant offering was reportedly lucrative for the site’s operator. While probably best taken with a grain of salt, ALPA estimates the site generated around 3,000 euros per month from advertising revenue. That’s a decent amount for anyone but even more so when one learns that ARTV’s former operator is just 16 years old.

“ARTV.WATCH it’s over. ARTV is now closed for legal reasons. Thank you for your understanding! The site was indeed illegal,” a notice on the site now reads.

“Thank you all for this experience that I have acquired in this project. And thanks to you who have believed in me.”

Closure formalities aside, ARTV’s founder also has a message for anyone else considering launching a similar platform.

“Notice to anyone wanting to do a site of the same kind, I strongly advise against it. On the criminal side, the punishment can go up to three years of imprisonment and a 300,000 euro fine. If [individual] complaints of channels (or productions) are filed against you, it will be more complicated to determine,” ARTV’s owner warns.

ALPA says that in addition to closing down the site, ARTV’s owner also deactivated the site’s Android app, which had been available for download on Google Play. The anti-piracy group adds that this action against IPTV and live streaming was a first in France.

For anyone who speaks French, the 16-year-old has published a video on YouTube talking about his predicament.


Weekend distractions, part deux: a bench, and stuff

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/12/weekend-distractions-part-deux-bench.html

Continuing the tradition of the previous post, here’s a perfectly good bench:

The legs are 8/4 hard maple, cut into 2.3″ (6 cm) strips and then glued together. The top is 4/4 domestic walnut, with an additional strip glued to the bottom to make it look thicker (because gosh darn, walnut is expensive).

Cut on a bandsaw, joined together with a biscuit joiner + glue, then sanded, that’s about it. Still applying finish (nitrocellulose lacquer from a rattle can), but this was the last moment when I could snap a photo (about to get dark) and it basically looks like the final product anyway. Pretty simple but turned out nice.

Several additional, smaller woodworking projects here.

Collective rights management: EC takes Bulgaria to court

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/12/10/2014-26-ip/

On 7 December 2017, the European Commission announced its decision to bring an action before the Court of Justice of the EU against Bulgaria, Spain, Luxembourg, and Romania for failing to notify full transposition into national law of the EU rules on collective management of copyright and related rights and on multi-territorial licensing of rights in musical works for online use, that is, transposition of the Collective Rights Management Directive (Directive 2014/26/EU), which was due by 10 April 2016. Infringement procedures were opened in May 2016, but the Commission has still not been notified that the measures necessary to transpose the Directive have been taken.

The Commission acts where there has been no notification, or where transposition is partial or incorrect. The press release goes on to say that:

The Commission calls on the Court to impose financial sanctions on these four states (Bulgaria: EUR 19,121.60 per day; Spain: EUR 123,928.64 per day; Luxembourg: EUR 12,920.00 per day; and Romania: EUR 42,377.60 per day).

On the procedure

Although the Commission is better known for proposing a financial sanction (only) where a Member State has failed to comply with a judgment of the Court, an innovation introduced by the Treaty of Lisbon (Article 260(3) TFEU, a new paragraph) provides the following: “When the Commission brings a case before the Court of Justice of the European Union pursuant to Article 258 on the grounds that the Member State concerned has failed to fulfil its obligation to notify measures transposing a directive adopted under a legislative procedure, it may, when it deems appropriate, specify the amount of the lump sum or penalty payment to be paid by the Member State concerned which it considers appropriate in the circumstances.

If the Court of Justice of the European Union finds that there is an infringement, it may impose a lump sum or penalty payment on the Member State concerned not exceeding the amount specified by the Commission. The payment obligation shall take effect on the date set by the Court in its judgment.”

As also noted in SEC(2010) 1371 final, this paragraph creates an entirely new instrument. From the moment it brings an action for failure to fulfil obligations under Article 258 TFEU, the Commission may propose that the Court of Justice of the European Union impose the payment of a lump sum or penalty payment in the same judgment that finds a Member State has failed to fulfil its obligation to notify measures transposing a directive adopted under the legislative procedure.

The Commission may use this new option “if it deems it appropriate”, which is evidently our case.

The Commission has already applied Article 260(3) TFEU to Bulgaria: in Case C-203/13, the Commission asked the Court not only to remedy the infringements in the legislation but also to impose on the Republic of Bulgaria, in accordance with Article 260(3) TFEU, a penalty payment of EUR 8,448 per day for each partially transposed directive in the field of energy. The Commission subsequently withdrew its action in light of the conduct of the Republic of Bulgaria, which took the measures necessary to fulfil its obligations only after the action had been brought.

On the substance

Transposition of the Collective Rights Management Directive (Directive 2014/26/EU) is still pending.

The Directive aims to improve the way all collective management organisations are run by establishing common standards of governance, transparency, and financial discipline. It also sets common standards for the multi-territorial licensing of rights in musical works for online use in the internal market. The Collective Rights Management Directive is an important part of European copyright legislation. All collective management organisations must improve their standards of governance and transparency.

In Romania, according to the Commission, the Directive is being applied incorrectly: EU law provides that authors may authorise or prohibit the communication of their works to the public, but in Romania authors have no choice other than to leave the management of their right of communication to the public of musical works to a collective management organisation. This amounts to depriving authors of their exclusive right of communication to the public which, in the Commission’s view, is not justified under EU law. The other three states, including Bulgaria, have not notified the Commission of transposition.

The risk of a sanction and the need for speed do not lower the requirements on the national legislative process for sound reasoning, balance, and weighing of the various interests. At issue are standards for the transparency of collective management organisations, which makes the national measure particularly important.

Press release


Hollywood and Netflix Ask Court to Seize Tickbox Streaming Devices

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-and-netflix-ask-court-to-seize-tickbox-streaming-devices-171209/

More and more people are starting to use Kodi-powered set-top boxes to stream video content to their TVs.

While Kodi itself is a neutral platform, sellers who ship devices with unauthorized add-ons give it a bad reputation.

According to the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies, Tickbox TV is one of these bad actors.

Earlier this year, ACE filed a lawsuit against the Georgia-based company, which sells set-top boxes that allow users to stream a variety of popular media. The Tickbox devices use the Kodi media player and come with instructions on how to add various add-ons.

According to ACE, these devices are nothing more than pirate tools, allowing buyers to stream copyright-infringing content. “TickBox promotes and distributes TickBox TV for infringing use, and that is exactly the result of its use,” they told the court this week.

After the complaint was filed in October, Tickbox made some cosmetic changes to its site, removing some of the allegedly inducing language. The streaming devices remain on sale, but perhaps not for long if the media giants have their way.

This week ACE submitted a request for a preliminary injunction to the court, hoping to stop Tickbox’s sales activities.

“TickBox is intentionally inducing infringement, pure and simple. Plaintiffs respectfully request that the Court enter a preliminary injunction that requires TickBox to halt its flagrantly illegal conduct immediately,” they write in their application.

The companies explain that, since Tickbox is causing irreparable harm, all existing devices should be impounded.

“[A]ll TickBox TV devices in the possession of TickBox and all of its officers, directors, agents, servants, and employees, and all persons in active concert or participation or in privity with any of them are to be impounded and shall be retained by Defendant until further order of the Court,” the proposed order reads.

In addition, Tickbox should push out a software update that removes all infringing add-ons from devices that have already been sold.

“TickBox shall, via software update, remove from all distributed TickBox TV devices all Kodi ‘Themes,’ ‘Builds,’ ‘Addons,’ or any other software that facilitates the infringing public performances of Plaintiffs’ Copyrighted Works.”

Among others, the list of allegedly infringing add-ons and themes includes Spinz, Lodi Black, Stream on Fire, Wookie, Aqua, CMM, Spanish Quasar, Paradox, Covenant, Elysium, UK Turk, Gurzil, Maverick, and Poseidon.

The filing shows that ACE is serious about its efforts to stop the sale of this type of streaming device. Tickbox has yet to reply to the original complaint or the injunction request.

While this is the first US lawsuit of its kind, the anti-piracy conglomerate has been rather active in recent weeks. The group has successfully pressured several addon developers to quit and has been involved in enforcement actions around the globe.

A copy of the proposed preliminary injunction is available here (pdf).


Sean Hodgins’ video-playing Christmas ornament

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/sean-hodgins-ornament/

Standard Christmas tree ornaments are just so boring, always hanging there doing nothing. Yawn! Lucky for us, Sean Hodgins has created an ornament that plays classic nineties Christmas adverts, because of nostalgia.

YouTube Christmas Ornament! – Raspberry Pi Project

This Christmas ornament will really take you back…

Ingredients

Sean first 3D printed a small CRT-shaped ornament resembling the family television set in The Simpsons. He then got to work on the rest of the components.

Pi Zero and electronic components — Sean Hodgins Raspberry Pi Christmas ornament

All images featured in this blog post are c/o Sean Hodgins. Thanks, Sean!

The ornament uses a Raspberry Pi Zero W, 2.2″ TFT LCD screen, Mono Amp, LiPo battery, and speaker, plus the usual peripherals. Sean purposely assembled it with jumper wires and tape, so that he can reuse the components for another project after the festive season.

Clip of PowerBoost 1000 LiPo charger — Sean Hodgins Raspberry Pi Christmas ornament

By adding header pins to a PowerBoost 1000 LiPo charger, Sean was able to connect a switch to control the Pi’s power usage. This method is handy if you want to seal your Pi in a casing that blocks access to the power leads. From there, jumper wires connect the audio amplifier, LCD screen, and PowerBoost to the Zero W.

Code

Then, with Raspbian installed to an SD card and SSH enabled on the Zero W, Sean got the screen working; the type of screen he used is driven over SPI by the FBTFT framebuffer driver. His next step was to set up the audio functionality with the help of an Adafruit tutorial.

Clip demoing Sean Hodgins Raspberry Pi Christmas ornament

For video playback, Sean installed mplayer before writing a program to extract video content from YouTube*. Once extracted, the video files are saved to the Raspberry Pi, allowing for seamless playback on the screen.
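Sean’s playback code isn’t shown in the post, but as a rough, hypothetical sketch of that step, here’s one way to shell out to mplayer and play a clip that’s already saved on the Pi. The framebuffer device and file path are assumptions for the example, not details from Sean’s build.

import java.io.IOException;

// A hypothetical sketch, not Sean's actual code: play a locally saved clip with mplayer.
public class OrnamentPlayer {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder player = new ProcessBuilder(
                "mplayer",
                "-vo", "fbdev2:/dev/fb1",        // render to the TFT framebuffer (assumed device)
                "-fs",                           // fill the small screen
                "/home/pi/videos/retro-ad.mp4"); // a previously saved clip (assumed path)
        player.inheritIO();                      // surface mplayer's console output
        player.start().waitFor();                // block until the clip finishes
    }
}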

Construct

When fully assembled, the entire build fit snugly within the 3D-printed television set. And as a final touch, Sean added the cut-out lens of a rectangular magnifying glass to give the display the look of a curved CRT screen.

Clip of completed Sean Hodgins Raspberry Pi Christmas ornament in a tree

Then finally, the ornament hangs perfectly on the Christmas tree, up and running and spreading nostalgic warmth.

For more information on the build, check out the Instructables tutorial. And to see all of Sean’s builds, subscribe to his YouTube channel.

Make

If you’re looking for similar projects, have a look at this tutorial by Cabe Atwell for building a Pi-powered ornament that receives and displays text messages.

Have you created Raspberry Pi tree ornaments? Maybe you’ve 3D printed some of your own? We’d love to see what you’re doing with a Raspberry Pi this festive season, so make sure to share your projects with us, either in the comments below or via our social media channels.

 

*At this point, I should note that we don’t support the extraction of video content from YouTube for your own use if you do not have the right permissions. However, since Sean’s device can play back any video, we think it would look great on your tree showing your own family videos from previous years. So, y’know, be good, be legal, and be festive.

The post Sean Hodgins’ video-playing Christmas ornament appeared first on Raspberry Pi.

Dutch Film Distributor Wins Right To Chase Pirates, Store Data For 5 Years

Post Syndicated from Andy original https://torrentfreak.com/dutch-film-distributor-wins-right-to-chase-pirates-store-data-for-5-years-171208/

For many years, Dutch Internet users were allowed to download copyrighted content without reprisals, provided it was for their own personal use.

In 2014, however, the European Court of Justice ruled that the country’s “piracy levy” to compensate rightsholders was unlawful. Almost immediately, the government announced a downloading ban.

In March 2016, anti-piracy outfit BREIN followed up by obtaining permission from the Dutch Data Protection Authority to track and store the personal data of alleged BitTorrent pirates. This year, movie distributor Dutch FilmWorks (DFW) made a similar application.

The company said that it would be pursuing alleged pirates to deter future infringement but many suspected that securing cash settlements was its main aim. That was confirmed in August.

“[The letter to alleged pirates] will propose a fee. If someone does not agree [to pay], the organization can start a lawsuit,” said DFW CEO Willem Pruijsserts.

“In Germany, this costs between €800 and €1,000, although we find this a bit excessive. But of course it has to be a deterrent, so it will be more than a tenner or two,” he added.

But despite the grand plans, nothing would be possible without first obtaining the necessary permission from the Data Protection Authority. This Wednesday, however, that arrived.

“DFW has given sufficient guarantees for the proper and careful processing of personal data. This means that DFW has been given a green light from the Data Protection Authority to collect personal data, such as IP addresses, from people downloading from illegal sources,” the Authority announced.

Noting that it received feedback from four entities during the six-week consultation process following the publication of its draft decision during the summer, the Data Protection Authority said that further investigations were duly carried out. All input was considered before handing down the final decision.

The Authority said it was satisfied that personal data would be handled correctly and that the information collected and stored would be encrypted and hashed to ensure integrity. Furthermore, data will not be retained for longer than is necessary.

“DFW has stated…that data from users with Dutch IP addresses who were involved in the exchange of a title owned by DFW, but in respect of which there is no intention to follow up on that within three months after receipt, will be destroyed,” the decision reads.

For any cases that are active and haven’t been discarded in the initial three-month period, DFW will be allowed to hold alleged pirates’ data for a maximum of five years, a period that matches the time a company has to file a claim under the Dutch Civil Code.

“When DFW does follow up on a file, DFW carries out further research into the identity of the users of the IP addresses. For this, it is necessary to contact the Internet service providers of the subscribers who used the IP addresses found in the BitTorrent network,” the Authority notes.

According to the decision, once DFW has a person’s details it can take any of several actions, starting with a simple warning or moving up to an amicable cash settlement. Failing that, it might choose to file a full-on court case in which the distributor seeks an injunction against the alleged pirate plus compensation and costs.

Only time will tell what strategy DFW will deploy against alleged pirates but since these schemes aren’t cheap to run, it’s likely that simple warning letters will be seriously outnumbered by demands for cash settlement.

While it seems unlikely that the Data Protection Authority will change its mind at this late stage, its decision remains open to appeal. Interested parties have just under six weeks to make their voices heard. If no appeal succeeds, copyright trolling will hit the Netherlands in the weeks and months to come.

The full decision can be found here (Dutch, pdf) via Tweakers


timeShift(GrafanaBuzz, 1w) Issue 25

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/12/08/timeshiftgrafanabuzz-1w-issue-25/

Welcome to TimeShift

This week, a few of us from Grafana Labs, along with 4,000 of our closest friends, headed down to chilly Austin, TX for KubeCon + CloudNativeCon North America 2017. We got to see a number of great talks and were thrilled to see Grafana make appearances in some of the presentations. We were also a sponsor of the conference and handed out a ton of swag (we overnighted some of our custom Grafana scarves, which came in handy for Thursday’s snow).

We also announced Grafana Labs has joined the Cloud Native Computing Foundation as a Silver member! We’re excited to share our expertise in time series data visualization and open source software with the CNCF community.


Latest Release

Grafana 4.6.2 is available and includes some bug fixes:

  • Prometheus: Fixes bug with new Prometheus alerts in Grafana. Make sure to download this version if you’re using Prometheus for alerting. More details in the issue. #9777
  • Color picker: Bug after using textbox input field to change/paste color string #9769
  • Cloudwatch: build using golang 1.9.2 #9667, thanks @mtanda
  • Heatmap: Fixed tooltip for “time series buckets” mode #9332
  • InfluxDB: Fixed query editor issue when using > or < operators in WHERE clause #9871

Download Grafana 4.6.2 Now


From the Blogosphere

Grafana Labs Joins the CNCF: Grafana Labs has officially joined the Cloud Native Computing Foundation (CNCF). We look forward to working with the CNCF community to democratize metrics and help unify traditionally disparate information.

Automating Web Performance Regression Alerts: Peter and his team needed a faster and easier way to find web performance regressions at the Wikimedia Foundation. Grafana 4’s alerting features were exactly what they needed. This post covers their journey on setting up alerts for both RUM and synthetic testing and shares the alerts they’ve set up on their dashboards.

How To Install Grafana on Ubuntu 17.10: As you probably guessed from the title, this article walks you through installing and configuring Grafana in the latest version of Ubuntu (or earlier releases). It also covers installing plugins using the Grafana CLI tool.

Prometheus: Starting the Server with Alertmanager, cAdvisor and Grafana: Learn how to monitor Docker from scratch using cAdvisor, Prometheus and Grafana in this detailed, step-by-step walkthrough.

Monitoring Java EE Servers with Prometheus and Payara: In this screencast, Adam uses firehose, a Java EE 7+ metrics gateway for Prometheus, to convert the JSON output into Prometheus statistics and visualize the data in Grafana.

Monitoring Spark Streaming with InfluxDB and Grafana: This article focuses on how to monitor Apache Spark Streaming applications with InfluxDB and Grafana at scale.


GrafanaCon EU, March 1-2, 2018

We are currently reaching out to everyone who submitted a talk to GrafanaCon and will soon publish the final schedule at grafanacon.org.

Join us March 1-2, 2018 in Amsterdam for 2 days of talks centered around Grafana and the surrounding monitoring ecosystem including Graphite, Prometheus, InfluxData, Elasticsearch, Kubernetes, and more.

Get Your Ticket Now


Grafana Plugins

Lots of plugin updates and a new OpenNMS Helm App plugin to announce! To install or update any plugin in an on-prem Grafana instance, use the grafana-cli tool, or install and update with one click on Hosted Grafana.
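For example, installing the new plugin and updating everything else from the command line looks roughly like this (the plugin ID is assumed from the plugin catalog; restart Grafana afterward):

grafana-cli plugins install opennms-helm-app
grafana-cli plugins update-all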

NEW PLUGIN

OpenNMS Helm App – The new OpenNMS Helm App plugin replaces the old OpenNMS data source. Helm allows users to create flexible dashboards using both fault management (FM) and performance management (PM) data from OpenNMS® Horizon™ and/or OpenNMS® Meridian™. The old data source is now deprecated.


Install Now

UPDATED PLUGIN

PNP Data Source – This data source plugin (that uses PNP4Nagios to access RRD files) received a small, but important update that fixes template query parsing.


Update

UPDATED PLUGIN

Vonage Status Panel – The latest version of the Status Panel comes with a number of small fixes and changes. Below are a few of the enhancements:

  • Threshold settings – removed Show Always option, and replaced it with 2 options:
    • Display Alias – Select when to show the metric alias.
    • Display Value – Select when to show the metric value.
  • Text format configuration (bold / italic) for warning / critical / disabled states.
  • Option to change the corner radius of the panel. Now you can change the panel’s shape to have rounded corners.

Update

UPDATED PLUGIN

Google Calendar Plugin – This plugin received a small update, so be sure to install version 1.0.4.


Update

UPDATED PLUGIN

Carpet Plot Panel – The Carpet Plot Panel received a fix for IE 11, and also added the ability to choose custom colors.


Update


Upcoming Events:

In between code pushes we like to speak at, sponsor and attend all kinds of conferences and meetups. We also like to make sure we mention other Grafana-related events happening all over the world. If you’re putting on just such an event, let us know and we’ll list it here.

Docker Meetup @ Tuenti | Madrid, Spain – Dec 12, 2017: Javier Provecho: Intro to Metrics with Swarm, Prometheus and Grafana

Learn how to gain real-time visibility into your microservices. We’ll cover how to deploy a Prometheus server with persistence and Grafana, how to enable metrics endpoints for various service types (docker daemon, traefik proxy, and postgres), and how to scrape, visualize, and set up alarms based on those metrics.

RSVP

Grafana Lyon Meetup n°2 | Lyon, France – Dec 14, 2017: This meetup will cover some of the latest innovations in Grafana, plus a discussion about automation. Also, free beer and chips, so of course you’re going!

RSVP

FOSDEM | Brussels, Belgium – Feb 3-4, 2018: FOSDEM is a free developer conference where thousands of developers of free and open source software gather to share ideas and technology. Carl Bergquist is managing the Cloud and Monitoring Devroom, and we’ve heard there were some great talks submitted. There is no need to register; all are welcome.


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

We were thrilled to see our dashboards bigger than life at KubeCon + CloudNativeCon this week. Thanks for snapping a photo and sharing!


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


How are we doing?

Hard to believe this is the 25th issue of timeShift! I have a blast writing these roundups, but let me know what you think. Submit a comment on this article below, or post something at our community forum. Found an article I haven’t included? Send it my way. Help us make timeShift better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Is blockchain a security topic? (Opensource.com)

Post Syndicated from jake original https://lwn.net/Articles/740929/rss

At Opensource.com, Mike Bursell looks at blockchain security from the angle of trust. Unlike cryptocurrencies, which are typically pseudonymous, other kinds of blockchains will require mapping users to real-life identities; that raises the trust issue.

What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of über-trusted entities. Combinations of these schemes may also work.

If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL [in real life] trust relationships that you have with the entities that are being represented in your system.

And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed.

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping: you rarely hear the competing side, and thus the case for it may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, the rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. The claim that it’s monopolistic power the FCC cares about here is a lie. As its Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (so it wasn’t a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC action when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to a slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this because it recognizes it’s in the customers’ interest.

Mobile providers especially have a great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate; they must ask the FCC for permission first. And the new heavy-handed FCC rules are hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism versus libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity comes from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left. The term itself, “net neutrality”, is just a slogan, varying from person to person and from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet”, or the vague things people mean by the slogan “net neutrality”.

How to Easily Apply Amazon Cloud Directory Schema Changes with In-Place Schema Upgrades

Post Syndicated from Mahendra Chheda original https://aws.amazon.com/blogs/security/how-to-easily-apply-amazon-cloud-directory-schema-changes-with-in-place-schema-upgrades/

Now, Amazon Cloud Directory makes it easier for you to apply schema changes across your directories with in-place schema upgrades. Your directory now remains available while Cloud Directory applies backward-compatible schema changes such as the addition of new fields. Without migrating data between directories or applying code changes to your applications, you can upgrade your schemas. You also can view the history of your schema changes in Cloud Directory by using version identifiers, which help you track and audit schema versions across directories. If you have multiple instances of a directory with the same schema, you can view the version history of schema changes to manage your directory fleet and ensure that all directories are running with the same schema version.

In this blog post, I demonstrate how to perform an in-place schema upgrade and use schema versions in Cloud Directory. I add additional attributes to an existing facet and add a new facet to a schema. I then publish the new schema and apply it to running directories, upgrading the schema in place. I also show how to view the version history of a directory schema, which helps me to ensure my directory fleet is running the same version of the schema and has the correct history of schema changes applied to it.

Note: I share Java code examples in this post. I assume that you are familiar with the AWS SDK and can use Java-based code to build a Cloud Directory code example. You can apply the concepts I cover in this post to other programming languages such as Python and Ruby.

Cloud Directory fundamentals

I will start by covering a few Cloud Directory fundamentals. If you are already familiar with the concepts behind Cloud Directory facets, schemas, and schema lifecycles, you can skip to the next section.

Facets: Groups of attributes. You use facets to define object types. For example, you can define a device schema by adding facets such as computers, phones, and tablets. A computer facet can track attributes such as serial number, make, and model. You can then use the facets to create computer objects, phone objects, and tablet objects in the directory to which the schema applies.

Schemas: Collections of facets. Schemas define which types of objects can be created in a directory (such as users, devices, and organizations) and enforce validation of data for each object class. All data within a directory must conform to the applied schema. As a result, the schema definition is essentially a blueprint to construct a directory with an applied schema.

Schema lifecycle: The four distinct states of a schema: Development, Published, Applied, and Deleted. Schemas in the Published and Applied states have version identifiers and cannot be changed. Schemas in the Applied state are used by directories for validation as applications insert or update data. You can change schemas in the Development state as many times as you need them to. In-place schema upgrades allow you to apply schema changes to an existing Applied schema in a production directory without the need to export and import the data populated in the directory.

How to add attributes to a computer inventory application schema and perform an in-place schema upgrade

To demonstrate how to set up schema versioning and perform an in-place schema upgrade, I will use an example of a computer inventory application that uses Cloud Directory to store relationship data. Let’s say that at my company, AnyCompany, we use this computer inventory application to track all computers we give to our employees for work use. I previously created a ComputerSchema and assigned its version identifier as 1. This schema contains one facet called ComputerInfo that includes attributes for SerialNumber, Make, and Model, as shown in the following schema details.

Schema: ComputerSchema
Version: 1

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer
Attribute: Make, type: String
Attribute: Model, type: String

AnyCompany has offices in Seattle, Portland, and San Francisco. I have deployed the computer inventory application for each of these three locations. As shown in the lower left part of the following diagram, ComputerSchema is in the Published state with a version of 1. The Published schema is applied to SeattleDirectory, PortlandDirectory, and SanFranciscoDirectory for AnyCompany’s three locations. Implementing separate directories for different geographic locations when you don’t have any queries that cross location boundaries is a good data partitioning strategy and gives your application better response times with lower latency.

Diagram of ComputerSchema in Published state and applied to three directories

Legend for the diagrams in this post

The following code example creates the schema in the Development state by using a JSON file, publishes the schema, and then creates directories for the Seattle, Portland, and San Francisco locations. For this example, I assume the schema has been defined in the JSON file. The createSchema API creates a schema Amazon Resource Name (ARN) with the name defined in the variable, SCHEMA_NAME. I can use the putSchemaFromJson API to add specific schema definitions from the JSON file.

// The utility method to get valid Cloud Directory schema JSON
String validJson = getJsonFile("ComputerSchema_version_1.json");

String SCHEMA_NAME = "ComputerSchema";

String developmentSchemaArn = client.createSchema(new CreateSchemaRequest()
        .withName(SCHEMA_NAME))
        .getSchemaArn();

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(validJson));

The following code example takes the schema that is currently in the Development state and publishes the schema, changing its state to Published.

String SCHEMA_VERSION = "1";
String publishedSchemaArn = client.publishSchema(
        new PublishSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withVersion(SCHEMA_VERSION))
        .getPublishedSchemaArn();

// Our Published schema ARN is as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1

The following code example creates a directory named SeattleDirectory and applies the published schema. The createDirectory API call creates a directory by using the published schema provided in the API parameters. Note that Cloud Directory stores a version of the schema in the directory in the Applied state. I will use similar code to create directories for PortlandDirectory and SanFranciscoDirectory.

String DIRECTORY_NAME = "SeattleDirectory"; 

CreateDirectoryResult directory = client.createDirectory(
        new CreateDirectoryRequest()
        .withName(DIRECTORY_NAME)
        .withSchemaArn(publishedSchemaArn));

String directoryArn = directory.getDirectoryArn();
String appliedSchemaArn = directory.getAppliedSchemaArn();

// This code section can be reused to create directories for Portland and San Francisco locations with the appropriate directory names

// Our directory ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX

// Our applied schema ARN is as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

Revising a schema

Now let’s say my company, AnyCompany, wants to add more information for computers and to track which employees have been assigned a computer for work use. I modify the schema to add two attributes to the ComputerInfo facet: Description and OSVersion (operating system version). I make Description optional because it is not important for me to track this attribute for the computer objects I create. I make OSVersion mandatory because it is critical for me to track it for all computer objects so that I can make changes such as applying security patches or making upgrades. Because I make OSVersion mandatory, I must provide a default value that Cloud Directory will apply to objects that were created before the schema revision, in order to handle backward compatibility. Note that you can replace the value in any object with a different value.

I also add a new facet to track computer assignment information, shown in the following updated schema as the ComputerAssignment facet. This facet tracks these additional attributes: Name (the name of the person to whom the computer is assigned), EMail (the email address of the assignee), Department, and department CostCenter. Note that Cloud Directory refers to the previously available version identifier as the Major Version. Because I can now add a minor version to a schema, I also denote the changed schema as Minor Version A.

Schema: ComputerSchema
Major Version: 1
Minor Version: A 

Facet: ComputerInfo
Attribute: SerialNumber, type: Integer 
Attribute: Make, type: String
Attribute: Model, type: String
Attribute: Description, type: String, required: NOT_REQUIRED
Attribute: OSVersion, type: String, required: REQUIRED_ALWAYS, default: "Windows 7"

Facet: ComputerAssignment
Attribute: Name, type: String
Attribute: EMail, type: String
Attribute: Department, type: String
Attribute: CostCenter, type: Integer

The following diagram shows the changes that were made when I added another facet to the schema and attributes to the existing facet. The highlighted area of the diagram (bottom left) shows that the schema changes were published.

Diagram showing that schema changes were published

The following code example revises the existing Development schema by adding the new attributes to the ComputerInfo facet and by adding the ComputerAssignment facet. I use a new JSON file for the schema revision, and for the purposes of this example, I am assuming the JSON file has the full schema including planned revisions.

// The utility method to get a valid CloudDirectory schema JSON
String schemaJson = getJsonFile("ComputerSchema_version_1_A.json");

// Put the schema document in the Development schema
PutSchemaFromJsonResult result = client.putSchemaFromJson(
        new PutSchemaFromJsonRequest()
        .withSchemaArn(developmentSchemaArn)
        .withDocument(schemaJson));

Upgrading the Published schema

The following code example performs an in-place schema upgrade of the Published schema with schema revisions (it adds new attributes to the existing facet and another facet to the schema). The upgradePublishedSchema API upgrades the Published schema with backward-compatible changes from the Development schema.

// From an earlier code example, I know the publishedSchemaArn has this value: "arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1"

// Upgrade publishedSchemaArn to minorVersion A. The Development schema must be backward compatible with 
// the existing publishedSchemaArn. 

String minorVersion = "A";

UpgradePublishedSchemaResult upgradePublishedSchemaResult = client.upgradePublishedSchema(new UpgradePublishedSchemaRequest()
        .withDevelopmentSchemaArn(developmentSchemaArn)
        .withPublishedSchemaArn(publishedSchemaArn)
        .withMinorVersion(minorVersion));

String upgradedPublishedSchemaArn = upgradePublishedSchemaResult.getUpgradedSchemaArn();

// The Published schema ARN after the upgrade shows a minor version as follows 
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:schema/published/ComputerSchema/1/A

Upgrading the Applied schema

The following diagram shows the in-place schema upgrade for the SeattleDirectory directory. I am performing the schema upgrade so that I can reflect the new schemas in all three directories. As a reminder, I added new attributes to the ComputerInfo facet and also added the ComputerAssignment facet. After the schema and directory upgrade, I can create objects for the ComputerInfo and ComputerAssignment facets in the SeattleDirectory. Any objects that were created with the old facet definition for ComputerInfo will now use the default values for any additional attributes defined in the new schema.
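As an illustration (this code is not part of the original walkthrough), here is roughly what creating such an object might look like once the directory runs Major Version 1, Minor Version A. The attribute values are invented for the example, and any other attributes marked as required in your schema would need values too.

// A hedged sketch, not from the original post: create a computer object that uses both
// facets after the in-place upgrade. directoryArn and upgradedAppliedSchemaArn come from
// the earlier code examples; the attribute values here are invented for the example.
CreateObjectResult newComputer = client.createObject(new CreateObjectRequest()
        .withDirectoryArn(directoryArn)
        .withSchemaFacets(
                new SchemaFacet().withSchemaArn(upgradedAppliedSchemaArn).withFacetName("ComputerInfo"),
                new SchemaFacet().withSchemaArn(upgradedAppliedSchemaArn).withFacetName("ComputerAssignment"))
        .withObjectAttributeList(
                new AttributeKeyAndValue()
                        .withKey(new AttributeKey().withSchemaArn(upgradedAppliedSchemaArn)
                                .withFacetName("ComputerInfo").withName("SerialNumber"))
                        .withValue(new TypedAttributeValue().withNumberValue("42001")),
                new AttributeKeyAndValue()
                        .withKey(new AttributeKey().withSchemaArn(upgradedAppliedSchemaArn)
                                .withFacetName("ComputerInfo").withName("OSVersion"))
                        .withValue(new TypedAttributeValue().withStringValue("Windows 10")),
                new AttributeKeyAndValue()
                        .withKey(new AttributeKey().withSchemaArn(upgradedAppliedSchemaArn)
                                .withFacetName("ComputerAssignment").withName("Name"))
                        .withValue(new TypedAttributeValue().withStringValue("Jane Doe"))));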

Diagram of the in-place schema upgrade for the SeattleDirectory directory

I use the following code example to perform an in-place upgrade of the SeattleDirectory to a Major Version of 1 and a Minor Version of A. Note that you should change a Major Version identifier in a schema to make backward-incompatible changes such as changing the data type of an existing attribute or dropping a mandatory attribute from your schema. Backward-incompatible changes require directory data migration from a previous version to the new version. You should change a Minor Version identifier in a schema to make backward-compatible upgrades such as adding additional attributes or adding facets, which in turn may contain one or more attributes. The upgradeAppliedSchema API lets me upgrade an existing directory with a different version of a schema.

// This upgrades ComputerSchema version 1 of the Applied schema in SeattleDirectory to Major Version 1 and Minor Version A
// The schema must be backward compatible or the API will fail with IncompatibleSchemaException

UpgradeAppliedSchemaResult upgradeAppliedSchemaResult = client.upgradeAppliedSchema(new UpgradeAppliedSchemaRequest()
        .withDirectoryArn(directoryArn)
        .withPublishedSchemaArn(upgradedPublishedSchemaArn));

String upgradedAppliedSchemaArn = upgradeAppliedSchemaResult.getUpgradedSchemaArn();

// The Applied schema ARN after the in-place schema upgrade will appear as follows
// arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1

// This code section can be reused to upgrade directories for the Portland and San Francisco locations with the appropriate directory ARN
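// A minimal sketch of that reuse (portlandDirectoryArn and sanFranciscoDirectoryArn are
// hypothetical variables holding those directories' ARNs; requires java.util.Arrays):
for (String nextDirectoryArn : Arrays.asList(portlandDirectoryArn, sanFranciscoDirectoryArn)) {
    client.upgradeAppliedSchema(new UpgradeAppliedSchemaRequest()
            .withDirectoryArn(nextDirectoryArn)
            .withPublishedSchemaArn(upgradedPublishedSchemaArn));
}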

Note: For backward compatibility, Cloud Directory does not return the Minor Version identifier in the Applied schema ARN, which lets an application work across older and newer versions of the directory.

The following diagram shows the changes that are made when I perform an in-place schema upgrade in the two remaining directories, PortlandDirectory and SanFranciscoDirectory. I make these calls sequentially, upgrading PortlandDirectory first and then upgrading SanFranciscoDirectory. I use the same code example that I used earlier to upgrade SeattleDirectory. Now, all my directories are running the most current version of the schema. Also, I made these schema changes without having to migrate data and while maintaining my application’s high availability.

Diagram showing the changes that are made with an in-place schema upgrade in the two remaining directories

Schema revision history

I can now view the schema revision history for any of AnyCompany’s directories by using the listAppliedSchemaArns API. Cloud Directory maintains the five most recent versions of applied schema changes. Similarly, to inspect the current Minor Version that was applied to my schema, I use the getAppliedSchemaVersion API. The listAppliedSchemaArns API returns the schema ARNs based on my schema filter as defined in withSchemaArn.

I use the following code example to query an Applied schema for its version history.

// This returns the five most recent Minor Versions associated with a Major Version
ListAppliedSchemaArnsResult listAppliedSchemaArnsResult = client.listAppliedSchemaArns(new ListAppliedSchemaArnsRequest()
        .withDirectoryArn(directoryArn)
        .withSchemaArn(upgradedAppliedSchemaArn));

// Note: The listAppliedSchemaArns API without the SchemaArn filter returns all the Major Versions in a directory
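// For example, a sketch of the unfiltered call, which lists every Major Version applied to the directory:
ListAppliedSchemaArnsResult allMajorVersionsResult = client.listAppliedSchemaArns(new ListAppliedSchemaArnsRequest()
        .withDirectoryArn(directoryArn));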

With the Major Version filter applied, the listAppliedSchemaArns API returns the two ARNs shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1
arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

The following code example queries an Applied schema for its current Minor Version by using the getAppliedSchemaVersion API.

// This returns the current Applied schema's Minor Version ARN 

GetAppliedSchemaVersionResult getAppliedSchemaVersionResult = client.getAppliedSchemaVersion(new GetAppliedSchemaVersionRequest()
        .withSchemaArn(upgradedAppliedSchemaArn));

The getAppliedSchemaVersion API returns the current Applied schema ARN with a Minor Version, as shown in the following output.

arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_DIRECTORY_GUID_XX/schema/ComputerSchema/1/A

If you have a lot of directories, schema revision API calls can help you audit your directory fleet and verify that all directories are running the same version of a schema, which helps you maintain the integrity of directories across your fleet. A minimal sketch of such an audit follows.
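The following sketch uses the AWS SDK for Python (Boto3) to compare the applied schema versions across a set of directories. The directory ARNs are placeholders, pagination is omitted for brevity, and the version string is simply parsed out of each Applied schema ARN.

import boto3

client = boto3.client('clouddirectory')

# Placeholder ARNs for the AnyCompany directories
directory_arns = [
    'arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_SEATTLE_GUID_XX',
    'arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_PORTLAND_GUID_XX',
    'arn:aws:clouddirectory:us-west-2:XXXXXXXXXXXX:directory/XX_SANFRANCISCO_GUID_XX',
]

versions = set()
for directory_arn in directory_arns:
    # Without a SchemaArn filter, this lists the Major Version ARNs applied to the directory
    for schema_arn in client.list_applied_schema_arns(DirectoryArn=directory_arn)['SchemaArns']:
        # Resolve the ARN that carries the current Minor Version, e.g. .../ComputerSchema/1/A
        applied_arn = client.get_applied_schema_version(SchemaArn=schema_arn)['AppliedSchemaArn']
        versions.add(applied_arn.split('/schema/')[-1])

if len(versions) == 1:
    print('All directories run schema version:', versions.pop())
else:
    print('Schema versions differ across the fleet:', sorted(versions))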

Summary

You can use in-place schema upgrades to make changes to your directory schema as you evolve your data set to match the needs of your application. An in-place schema upgrade allows you to maintain high availability for your directory and applications while the upgrade takes place. For more information about in-place schema upgrades, see the in-place schema upgrade documentation.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this post, start a new thread in the Directory Service forum or contact AWS Support.

– Mahendra

 

Apple CEO is Optimistic VPN Apps Will Return to China App Store

Post Syndicated from Andy original https://torrentfreak.com/apple-ceo-is-optimistic-vpn-apps-will-return-to-china-app-store-171206/

As part of an emerging crackdown on tools and systems with the ability to bypass China’s ‘Great Firewall’, during the summer Chinese government pressure began to affect Apple.

During the final days of July, Apple was forced to remove many of the most-used VPN applications from its Chinese App Store. In a short email from the company, VPN providers and software developers were told that VPN applications are considered illegal in China.

“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.

While the position on the ground doesn’t appear to have changed in the interim, Apple Chief Executive Tim Cook today expressed optimism that the VPN apps would eventually be restored to their former positions on China’s version of the App Store.

“My hope over time is that some of the things, the couple of things that’s been pulled, come back,” Cook said. “I have great hope on that and great optimism on that.”

According to Reuters, Cook said that he always tries to find ways to work together to settle differences and if he gets criticized for that “so be it.”

Speaking at the Fortune Forum in the Chinese city of Guangzhou, Cook said that he believes strongly in freedoms. But back home in the US, Apple has been strongly criticized for not doing enough to uphold freedom of speech and communication in China.

Back in October, two US senators wrote to Cook asking why the company had removed the VPN apps from its store in China.

“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet,” senators Ted Cruz and Patrick Leahy wrote.

“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’.”

They were comments Senator Leahy underlined again yesterday.

“American tech companies have become leading champions of free expression. But that commitment should not end at our borders,” Leahy told CNBC.

“Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights.”

Whether the optimism expressed by Cook today is based on discussions with the Chinese government is unknown. However, it seems unlikely that authorities would be willing to significantly compromise on their dedication to maintaining the Great Firewall, which not only controls access to locally controversial content but also seeks to boost the success of Chinese companies.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Google & Facebook Excluded From Aussie Safe Harbor Copyright Amendments

Post Syndicated from Andy original https://torrentfreak.com/google-facebook-excluded-from-aussie-safe-harbor-copyright-amendments-171205/

Due to a supposed drafting error in Australia’s implementation of the Australia – US Free Trade Agreement (AUSFTA), copyright safe harbor provisions currently only apply to commercial Internet service providers.

This means that while local ISPs such as Telstra receive protection from copyright infringement complaints, services such as Google, Facebook and YouTube face legal uncertainty.

Proposed amendments to the Copyright Act earlier this year would’ve seen enhanced safe harbor protections for such platforms but they were withdrawn at the eleventh hour so that the government could consider “further feedback” from interested parties.

Shortly after, the government embarked on a detailed consultation with entertainment industry groups. They accuse platforms like YouTube of exploiting safe harbor provisions in the US and Europe, which force copyright holders into expensive battles to have infringing content taken down. They do not want that in Australia and, at least for now, they appear to have achieved their aims.

According to a report from AFR (paywall), the Australian government is set to introduce new legislation Wednesday which will expand safe harbors for some organizations but will exclude companies such as Google, Facebook, and similar platforms.

Communications Minister Mitch Fifield confirmed the exclusions while noting that additional safeguards will be available to institutions, libraries, and organizations in the disability, archive and culture sectors.

“The measures in the bill will ensure these sectors are protected from legal liability where they can demonstrate that they have taken reasonable steps to deal with copyright infringement by users of their online platforms,” Senator Fifield told AFR.

“Extending the safe harbor scheme in this way will provide greater certainty to institutions in these sectors and enhance their ability to provide more innovative and creative services for all Australians.”

According to the Senator, the government will continue its work with stakeholders to further reform safe harbor provisions, before applying them to other service providers.

The news that Google, Facebook, and similar platforms are to be denied access to the new safe harbor rules will be seen as a victory for rightsholders. They’re desperately trying to tighten up legislation in other regions where such safeguards are already in place, arguing that platforms utilizing user-generated content for profit should obtain appropriate licensing first.

This so-called ‘Value Gap’ (1,2,3) and associated proactive filtering proposals are among the hottest copyright topics right now, generating intense debate across Europe and the United States.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

ISPs and Movie Industry Prepare Canadian Pirate Site Blocking Deal

Post Syndicated from Ernesto original https://torrentfreak.com/isps-and-movie-industry-prepare-canadian-pirate-site-blocking-deal-171205/

ISP blocking has become a prime measure for the entertainment industry to target pirate sites on the Internet.

In recent years sites have been blocked throughout Europe, in Asia, and even Down Under.

In most countries, these blockades are ordered by local courts, which compel Internet providers to restrict access to certain websites. In Canada, however, there’s a plan in the works to allow for website blockades without judicial oversight.

A coalition of movie industry companies and ISPs, including Bell, Rogers, and Cineplex, are discussing a proposal to implement such measures. The Canadian blocklist would be maintained by a new non-profit organization called the “Internet Piracy Review Agency” (IPRA) and enforced through the CRTC, Canadaland reports.

The plan doesn’t come as a total surprise as Bell alluded to a nationwide blocking mechanism during a recent Government hearing. What becomes clear from the new plans, however, is that the telco is not alone.

The new proposal is being discussed by various stakeholders including ISPs and local movie companies. As in other countries, major American movie companies are also in the loop, but they will not be listed as official applicants when the plan is submitted to the CRTC.

Canadian law professor Michael Geist is very critical of the plans. Although the proposal would only cover sites that “blatantly, overwhelmingly or structurally” engage in or facilitate copyright infringement, this can be a blurry line.

“Recent history suggests that the list will quickly grow to cover tougher judgment calls. For example, Bell has targeted TVAddons, a site that contains considerable non-infringing content,” Geist notes.

“It can be expected that many other sites disliked by rights holders or broadcasters would find their way onto the block list,” he adds.

While the full list of applicants is not ready yet, it is expected that the coalition will file its proposal to the CRTC before the end of the month.

Thus far, the Government appears to be reluctant in its response. In comments to Canadaland, spokesperson Karl Sasseville stressed that Canada remains committed to an open Internet.

“Our government supports an open internet where Canadians have the ability to access the content of their choice in accordance to Canadian laws,” Sasseville says. “While other parts of the world are focused on building walls, we’re focused on opening doors‎.”

As we’ve seen in the past, “net neutrality” and website blocking are not mutually exclusive. Courts around the world, also in Canada, have ordered content to be blocked, open Internet or not. However, bypassing the judicial system may prove to be a problem.

Professor Geist is happy with the Government’s comments and notes that the legal basis for the proposal is thin.

He stresses that the ISPs involved in these plans should seriously consider whether they want to continue down this path, which isn’t necessarily in the best interest of their customers.

“The government rightly seems dismissive of the proposal in the Canadaland report but as leading Internet providers, Bell and Rogers should be ashamed for leading the charge on such a dangerous, anti-speech and anti-consumer proposal,” Geist concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

AWS Contributes to Milestone 1.0 Release and Adds Model Serving Capability for Apache MXNet

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/aws-contributes-to-milestone-1-0-release-and-adds-model-serving-capability-for-apache-mxnet/

Post by Dr. Matt Wood

Today AWS announced contributions to the milestone 1.0 release of the Apache MXNet deep learning engine, including the introduction of a new model-serving capability for MXNet. The new capabilities in MXNet provide the following benefits to users:

1) MXNet is easier to use: The model server for MXNet is a new capability introduced by AWS, and it packages, runs, and serves deep learning models in seconds with just a few lines of code, making them accessible over the internet via an API endpoint and thus easy to integrate into applications. The 1.0 release also includes an advanced indexing capability that enables users to perform matrix operations in a more intuitive manner.

  • Model Serving enables setup of an API endpoint for prediction: It saves developers time and effort by condensing the task of setting up an API endpoint for running and integrating prediction functionality into an application to just a few lines of code. It bridges the gap between Python-based deep learning frameworks and production systems through a Docker container-based deployment model.
  • Advanced indexing for array operations in MXNet: It is now more intuitive for developers to leverage the powerful array operations in MXNet by applying their existing knowledge of NumPy/SciPy arrays. For example, both an MXNet NDArray and a NumPy ndarray can be used as an index, e.g. a[mx.nd.array([1, 2], dtype='int32')]; see the short sketch after this list.
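As a quick illustration of the advanced indexing mentioned above, the following sketch selects rows of an NDArray with an integer index array; the array contents are invented for the example.

import mxnet as mx

a = mx.nd.array([[10, 20], [30, 40], [50, 60]])
index = mx.nd.array([1, 2], dtype='int32')

# Advanced indexing: select rows 1 and 2 with an NDArray index,
# mirroring the NumPy idiom a[np.array([1, 2])]
rows = a[index]
print(rows.asnumpy())  # [[30. 40.]
                       #  [50. 60.]]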

2) MXNet is faster: The 1.0 release includes cutting-edge features that optimize the performance of training and inference. Gradient compression enables users to train models up to five times faster by reducing the communication bandwidth between compute nodes, without loss in convergence rate or accuracy. For speech recognition acoustic modeling, such as that behind the Alexa voice, this feature can reduce network bandwidth by up to three orders of magnitude during training. With support for the NVIDIA Collective Communications Library (NCCL), users can train a model 20% faster on multi-GPU systems.

  • Optimize network bandwidth with gradient compression: In distributed training, each machine must communicate frequently with the others to update the weight vectors and thereby collectively build a single model, leading to high network traffic. The gradient compression algorithm enables users to train models up to five times faster by compressing the model changes communicated by each instance.
  • Optimize training performance by taking advantage of NCCL: NCCL implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs, with communication routines tuned to achieve high bandwidth over the interconnect between GPUs. MXNet supports NCCL to train models about 20% faster on multi-GPU systems. A configuration sketch covering both options follows this list.
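As a rough configuration sketch, the following shows where these two options plug in through MXNet's KVStore API. The kvstore types, the 2-bit compression type, and the threshold value are illustrative assumptions rather than recommended settings.

import mxnet as mx

# Distributed training: enable 2-bit gradient compression on the key-value store
# (assumes a cluster launched with MXNet's distributed tooling; threshold is illustrative)
kv = mx.kv.create('dist_sync')
kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})

# Single machine with multiple GPUs: use the NCCL-backed store instead
# (requires an MXNet build with NCCL support)
# kv = mx.kv.create('nccl')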

3) MXNet provides easy interoperability: MXNet now includes a tool for converting neural network code written with the Caffe framework to MXNet code, making it easier for users to take advantage of MXNet’s scalability and performance.

  • Migrate Caffe models to MXNet: It is now possible to easily migrate Caffe code to MXNet, using the new source code translation tool.

MXNet has helped developers and researchers make progress with everything from language translation to autonomous vehicles and behavioral biometric security. We are excited to see the broad base of users that are building production artificial intelligence applications powered by neural network models developed and trained with MXNet. For example, the autonomous driving company TuSimple recently piloted a self-driving truck on a 200-mile journey from Yuma, Arizona to San Diego, California using MXNet. This release also includes a full-featured and performance-optimized version of the Gluon programming interface. Its ease of use, combined with the extensive set of tutorials, has led to significant adoption among developers new to deep learning. The flexibility of the interface has also driven interest within the research community, especially in the natural language processing domain.

Getting started with MXNet
Getting started with MXNet is simple. To learn more about the Gluon interface and deep learning, you can reference this comprehensive set of tutorials, which covers everything from an introduction to deep learning to how to implement cutting-edge neural network models. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.

To get started with the Model Server for Apache MXNet, install the library with the following command:

$ pip install mxnet-model-server

The Model Server library has a Model Zoo with 10 pre-trained deep learning models, including the SqueezeNet 1.1 object classification model. You can start serving the SqueezeNet model with just the following command:

$ mxnet-model-server \
  --models squeezenet=https://s3.amazonaws.com/model-server/models/squeezenet_v1.1/squeezenet_v1.1.model \
  --service dms/model_service/mxnet_vision_service.py
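Once the server is running, predictions are served over HTTP. The following sketch queries the local SqueezeNet endpoint; the host, port, and input0 field name are assumptions based on the project's examples, and kitten.jpg is a placeholder image file.

import requests

# Post an image to the locally running SqueezeNet endpoint (endpoint path and
# 'input0' field name are assumptions based on the project's examples)
with open('kitten.jpg', 'rb') as image:
    response = requests.post('http://127.0.0.1:8080/squeezenet/predict',
                             files={'input0': image})

print(response.json())  # top predicted classes and probabilities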

Learn more about the Model Server and view the source code, reference examples, and tutorials here: https://github.com/awslabs/mxnet-model-server/

-Dr. Matt Wood

Cr3dOv3r – Credential Reuse Attack Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/12/cr3dov3r-credential-reuse-attack-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Cr3dOv3r – Credential Reuse Attack Tool

Cr3dOv3r is a fairly simple Python-based set of functions that carries out the preliminary work of a credential reuse attack tool.

You just give the tool your target email address, and it does two fairly straightforward (but useful) jobs:

  • It searches public leaks for the email address and, if it finds any, returns all available details about the leak (using the hacked-emails site API).
  • You then give it an old or leaked password for that email, and it checks these credentials against 16 websites (e.g. Facebook, Twitter, Google…) and notifies you of any successful logins.

Read the rest of Cr3dOv3r – Credential Reuse Attack Tool now! Only available at Darknet.

New Piracy Scaremongering Video Depicts ‘Dangerous’ Raspberry Pi

Post Syndicated from Andy original https://torrentfreak.com/new-piracy-scaremongering-video-depicts-dangerous-raspberry-pi-171202/

Unless you’ve been living under a rock for the past few years, you’ll be aware that online streaming of video is a massive deal right now.

In addition to the successes of Netflix and Amazon Prime, for example, unauthorized sources are also getting a piece of the digital action.

Of course, entertainment industry groups hate this and are quite understandably trying to do something about it. Few people have a really good argument as to why they shouldn’t, but recent tactics by some video-affiliated groups are really starting to wear thin.

From the mouth of Hollywood itself, the trending worldwide anti-piracy message is that piracy is dangerous. Torrent sites carry viruses that will kill your computer, streaming sites carry malware that will steal your identity, and ISDs (that’s ‘Illegal Streaming Devices’, apparently) can burn down your home, kill you, and corrupt your children.

If anyone is still taking notice of these overblown doomsday messages, here’s another one. Brought to you by the Hollywood-funded Digital Citizens Alliance, the new video rams home the message – the exact same message in fact – that set-top boxes providing the latest content for free are a threat to, well, just about everything.

While the message is probably getting a little old now, it’s worth noting the big reveal at ten seconds into the video, where the evil pirate box is introduced to the viewer.

As reproduced in the left-hand image below, it is a blatantly obvious recreation of the totally content-neutral Raspberry Pi, the affordable small computer from the UK. Granted, people sometimes use it for Kodi (the image on the right shows a Kodi-themed Raspberry Pi case, created by official Kodi team partner FLIRC), but the overwhelming majority of its uses have nothing to do with the media center, or indeed piracy.

Disreputable and dangerous device? Of course not

So alongside all the scary messages, the video succeeds in demonizing a perfectly innocent and safe device of which more than 15 million have been sold, many of them directly to schools. Since the device is so globally recognizable, it’s a not inconsiderable error.

It’s a topic that the Kodi team itself vented over earlier this week, noting how the British tabloid media presented the recent wave of “Kodi Boxes Can Kill You” click-bait articles alongside pictures of the Raspberry Pi.

“Instead of showing one of the many thousands of generic black boxes sold without the legally required CE/UL marks, the media mainly chose to depict a legitimate Raspberry Pi clothed in a very familiar Kodi case. The Pis originate from Cambridge, UK, and have been rigorously certified,” the team complain.

“We’re also super-huge fans of the Raspberry Pi Foundation, and the proceeds of Pi board sales fund the awesome work they do to promote STEM (Science, Technology, Engineering and Mathematics) education in schools. The Kodi FLIRC case has also been a hit with our Raspberry Pi users and sales contribute towards the cost of events like Kodi DevCon.”

“It’s insulting, and potentially harmful, to see two successful (and safe) products being wrongly presented for the sake of a headline,” they conclude.

Indeed, it seems that both press and the entertainment industry groups that feed them have been playing fast and loose recently, with the Raspberry Pi getting a particularly raw deal.

Still, if it scares away some pirates, that’s the main thing….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Mariya Gabriel in Sofia: #AVMSD

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/12/02/avmsd-16/

Mariya Gabriel, the European Commissioner for Digital Economy and Society, took part in a round table on the topic “Challenges Facing the Media Sector in Europe”, organized by the Representation of the European Commission in Bulgaria and the Council for Electronic Media. Two topics were presented:

  • the current issues around the Audiovisual Media Services Directive, and
  • the announced public consultation on fake news and disinformation.

I

The first topic: the progress of the revision of the Audiovisual Media Services Directive.

The initial proposal and the developments on 2016/0151(COD) can be followed here.

According to the Council's 18-month strategic programme (for the Estonian, Bulgarian, and Austrian presidencies), the three presidencies will complete work on key initiatives related to the Digital Single Market, including

facilitating connectivity and making progress toward a competitive and fair Digital Single Market by promoting cross-border e-commerce (online sale of goods, supply of digital content, copyright reform, audiovisual media services, parcel delivery) and by moving toward a smart economy (free movement of data, review of the telecommunications regulatory framework, company-law initiatives), and strengthening trust and security in digital services (the new data protection package)

Work on the media directive (and, for that matter, on the copyright directive) continues during the Bulgarian presidency; the message from Estonia's ambassador to the EU, shared in an embedded tweet, strikes the same note.


From Ms. Gabriel's remarks it became clear that the fifth trilogue took place this week, with her participating in person. Work continues (“finalization is expected”) during the Bulgarian presidency. Three thematic areas were identified as still open:

(1) broadening the directive's scope, bringing streaming into the covered services, and adding new addressees: social media and video-sharing platforms;

(2) the quota for works created by European producers: both the size of the quota (proposals: 20 percent from the EC, 30 percent from the EP) and the definitions are under debate;

(3) commercial communications: the rules (limits on duration per hour and per day), placement (proposal: a double limit for daytime and nighttime), and interruptions (proposal: a 20-minute rule) are all still under discussion.

On the open questions:

  • The directive expands its scope with every revision. Still, when platform liability is discussed, it should remain within the context of the E-Commerce Directive, because there is a risk of tacitly rearranging or repealing its principles. In addition, the case law of the ECtHR (the Delfi judgment on liability for forum comments) illustrates the problems that arise when liability is understood inconsistently in EU and international law. For similar reasons the Copyright Directive under discussion is important, in particular Article 13, which (I share EDRI's position) should be deleted, while no new rights should be created (Article 11 of the draft).
  • The quota for European works already exists in practice at levels above what the directive requires. The problem in this area is the relationship between the European and national quotas, because local industries insist precisely on a national quota, which would be a measure with aims quite different from those of the European quota (the single market).
  • The rules on commercial communications are continuously being liberalized (and, in Bulgaria, deregulated). At some point this process should stop, so that television does not turn into a series of commercial communications interrupted by actual content. We hear providers' assurances that users are their greatest asset and that broadcasters would not program against the interests of their viewers, but legal safeguards against oceans of commercial communications should remain all the same.

The key balances for regulation remain:

  • industries / audiences: as usual, the industries' pressure is powerful and organized, while audiences are poorly organized and weakly represented in the dialogue; one hopes the EP will defend citizens' interests;
  • flexibility / avoiding fragmentation: an accurate observation; it is no accident that during the previous revision the EC pointed out that flexibility and freedom are fine, but not freedom in the definitions, because differing definitions in national measures undermine effective application;
  • regulation / self-regulation: no one is against self-regulation, but the EC itself stated some time ago that there are areas, such as intellectual property and competition law, that by their nature cannot be left to self-regulation alone.

II

On the second topic, fake news, Ms. Gabriel explained that the EC is working along the following lines: definition, then a review of good practices, then measures; legislation is not being considered at this stage. More on this part of the meeting can be found in the media.

Expected in 2018:

  • the creation of a high-level working group, with a report within three months;
  • a Eurobarometer survey on issues related to fake news;
  • an EC Communication toward the middle of 2018.

Very recently (November 9, 2017), the Association of European Journalists held an international conference, with follow-up training, on disinformation and fake news. I had the opportunity to take part (see the last part of the conference recording):

  • I believe the work should start with defining fake news, but then continue with categorizing it.
  • The response to the different categories of fake news should differ. The law already provides sanctions for some categories of falsehood, for example defamation or misleading advertising. I am not sure the law will not intervene with new measures in the most serious cases, where disinformation campaigns can threaten national security.
  • There is no consensus on who identifies fake news or, in other words, who owns the truth. Italy proposes a new type of competition regulator (by analogy with the regulation of misleading advertising); the Czech Republic, a unit within the Interior Ministry (for risks to national security). The EC says there is no need for a Ministry of Truth, but is a private company any better? We have seen how Facebook distinguishes moral from immoral; is it about to take on judging true from false as well? Not everyone agrees. But that is exactly what is happening, and it is no accident that counter-notification against the over-removal of content is, as we learn, now being discussed.

 

Filed under: EU Law, Media Law Tagged: давму, fake

Glenn’s Take on re:Invent Part 2

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-part-2/

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We’ve got a lot of exciting announcements this week. I’m going to check in to the Architecture blog with my take on what’s interesting about some of the announcements from a cloud architectural perspective. My first post can be found here.

The Media and Entertainment industry has been a rapid adopter of AWS due to the scale, reliability, and low costs of our services. This has enabled customers to create new, online, digital experiences for their viewers ranging from broadcast to streaming to Over-the-Top (OTT) services that can be a combination of live, scheduled, or ad-hoc viewing, while supporting devices ranging from high-def TVs to mobile devices. Creating an end-to-end video service requires many different components often sourced from different vendors with different licensing models, which creates a complex architecture and a complex environment to support operationally.

AWS Media Services
Based on customer feedback, we have developed AWS Media Services to help simplify distribution of video content. AWS Media Services comprises five individual services that can either be used together to provide an end-to-end service or individually to work within existing deployments: AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, AWS Elemental MediaStore and AWS Elemental MediaTailor. These services can help you with everything from storing content safely and durably to setting up a live-streaming event in minutes without having to be concerned about the underlying infrastructure and scalability of the stream itself.

In my role, I participate in many AWS and industry events and often work with the production and event teams that put these shows together. With all the logistical tasks they have to deal with, the biggest question is often: “Will the live stream work?” Compounding this fear is the reality that, as users, we are also quick to jump on social media and make noise when a live stream drops while we are following along remotely. Worse, I see event organizers actively choosing not to live stream content because of the risk of failure and exposure, leading them to take the safe option and not stream at all.

With AWS Media Services addressing many of the issues around putting together a high-quality media service, live streaming, and providing access to a library of content through a variety of mechanisms, I can’t wait to see more event teams use live streaming without the concern and worry I’ve seen in the past. I am excited for what this also means for non-media companies, as video becomes an increasingly common way of sharing information and adding a more personalized touch to internally- and externally-facing content.

AWS Media Services will allow you to focus more on the content and not worry about the platform. Awesome!

Amazon Neptune
As a civilization, we have been developing new ways to record and store information and model the relationships between sets of information for thousands of years. Government census data, tax records, births, deaths, and marriages were all recorded on media ranging from knotted cords in the Inca civilization and clay tablets in ancient Babylon to written texts in Western Europe during the late Middle Ages.

One of the first challenges of computing was figuring out how to store and work with vast amounts of information in a programmatic way, especially as the volume of information was increasing at a faster rate than ever before. We have seen different generations of how to organize this information in some form of database, ranging from flat files to the Information Management System (IMS) used in the 1960s for the Apollo space program, to the rise of the relational database management system (RDBMS) in the 1970s. These innovations drove a lot of subsequent innovations in information management and application development as we were able to move from thousands of records to millions and billions.

Today, as architects and developers, we have a vast variety of database technologies to select from, which have different characteristics that are optimized for different use cases:

  • Relational databases are well understood after decades of use in the majority of companies who required a database to store information. Amazon Relational Database Service (Amazon RDS) supports many popular relational database engines such as MySQL, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. We have even brought the traditional RDBMS into the cloud world through Amazon Aurora, which provides MySQL and PostgreSQL support with the performance and reliability of commercial-grade databases at 1/10th the cost.
  • Non-relational databases (NoSQL) provided a simpler method of storing and retrieving information that was often faster and more scalable than traditional RDBMS technology. The concept of non-relational databases has existed since the 1960s but really took off in the early 2000s with the rise of web-based applications that required performance and scalability that relational databases struggled with at the time. AWS published this Dynamo whitepaper in 2007, with DynamoDB launching as a service in 2012. DynamoDB has quickly become one of the critical design elements for many of our customers who are building highly-scalable applications on AWS. We continue to innovate with DynamoDB, and this week launched global tables and on-demand backup at re:Invent 2017. DynamoDB excels in a variety of use cases, such as tracking session information for popular websites, shopping cart information on e-commerce sites, and keeping track of gamers’ high scores in mobile gaming applications.
  • Graph databases focus on the relationship between data items in the store. With a graph database, we work with nodes, edges, and properties to represent data, relationships, and information. Graph databases are designed to make it easy and fast to traverse and retrieve complex hierarchical data models. Graph databases share some concepts from the NoSQL family of databases such as key-value pairs (properties) and the use of a non-SQL query language such as Gremlin. Graph databases are commonly used for social networking, recommendation engines, fraud detection, and knowledge graphs. We released Amazon Neptune to help simplify the provisioning and management of graph databases, as we believe graph databases are going to enable the next generation of smart applications; a short Gremlin sketch follows this list.
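To make the node/edge/property model concrete, here is a small sketch using the TinkerPop gremlinpython client. The Neptune endpoint is a placeholder, and the labels, properties, and edge are invented for illustration.

from gremlin_python.structure.graph import Graph
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Connect to a (placeholder) Neptune Gremlin endpoint
connection = DriverRemoteConnection('wss://your-neptune-endpoint:8182/gremlin', 'g')
g = Graph().traversal().withRemote(connection)

# Nodes (vertices) carry properties; edges model the relationships between them
alice = g.addV('person').property('name', 'Alice').next()
movie = g.addV('movie').property('title', 'The Martian').next()
g.V(alice.id).addE('watched').to(__.V(movie.id)).iterate()

# Traverse the relationship: which movies has Alice watched?
print(g.V(alice.id).out('watched').values('title').toList())

connection.close()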

A common use case I am hearing every week as I talk to customers is how to incorporate chatbots within their organizations. Amazon Lex and Amazon Polly have made it easy for customers to experiment and build chatbots for a wide range of scenarios, but one of the missing pieces of the puzzle was how to model decision trees and knowledge graphs so the chatbot could guide the conversation in an intelligent manner.

Graph databases are ideal for this particular use case, and having Amazon Neptune simplifies the deployment of a graph database while providing high performance, scalability, availability, and durability as a managed service. Security of your graph database is critical. To help ensure this, you can store your data encrypted by running Amazon Neptune within your Amazon Virtual Private Cloud (Amazon VPC) and using encryption at rest integrated with AWS Key Management Service (AWS KMS). Neptune also supports AWS Identity and Access Management (AWS IAM) to help further protect and restrict access.

Our customers now have the choice of many different database technologies to ensure that they can optimize each application and service for their specific needs. Just as DynamoDB has unlocked and enabled many new workloads that weren’t possible in relational databases, I can’t wait to see what new innovations and capabilities are enabled from graph databases as they become easier to use through Amazon Neptune.

Look for more on DynamoDB and Amazon S3 from me on Monday.

 

Glenn at Tour de Mont Blanc