Tag Archives: Play

Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These provisions are renewed every three years after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously included game preservation exemptions to keep these games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year, several gaming fans, including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this, which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that their members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. It’s clear that the wishes of game preservation advocates, such as MADE, are hard to unite with the interests of the game companies, so one side will clearly be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).


Major US Sports Leagues Report Top Piracy Nations to Government

Post Syndicated from Ernesto original https://torrentfreak.com/major-us-sports-leagues-report-top-piracy-nations-to-government-180216/

While pirated Hollywood blockbusters often score the big headlines, there are several other industries that have been battling with piracy over the years. This includes sports organizations.

Many of the major US leagues, including the NBA, NFL, NHL, MLB and the Tennis Association, have joined forces in the Sports Coalition to try to curb the availability of pirated streams and videos.

A few days ago the Sports Coalition put the piracy problem on the agenda of the United States Trade Representative (USTR).

“Sports organizations, including Sports Coalition members, are heavily affected by live sports telecast piracy, including the unauthorized live retransmission of sports telecasts over the Internet,” the Sports Coalition wrote.

“The Internet piracy of live sports telecasts is not only a persistent problem, but also a global one, often involving bad actors in more than one nation.”

The USTR asked the public for comments on which countries play a central role in copyright infringement issues. In its response, the Sports Coalition stresses that piracy is a global issue but singles out several nations as particularly problematic.

The coalition recommends that the USTR should put the Netherlands and Switzerland on the “Priority Watch List” of its 2018 Special 301 Report, followed by Russia, Saudi Arabia, Seychelles and Sweden, which get a regular “Watch List” recommendation.

The main problem with these countries is that hosting providers and content distribution networks don’t do enough to curb piracy.

In the Netherlands, sawlive.tv, strikezone.me, wizl.net, AltusHost, Host Palace, Quasi Networks and SNEL pirated or provided services contributing to sports piracy, the coalition writes. In Switzerland, mlbstream.me, robinwidget.org, strikeout.mobi, BlackHOST, Private Layer and Solar Communications are doing the same.

According to the major sports leagues, the US Government should encourage these countries to step up their anti-piracy game. This is not only important for US copyright holders, but also for licensees in other countries.

“Clearly, there is common ground – both in terms of shared economic interests and legal obligations to protect and enforce intellectual property and related rights – for the United States and the nations with which it engages in international trade to work cooperatively to stop Internet piracy of sports programming.”

Whether any of these countries will make it into the USTR’s final list has yet to be seen. For Switzerland it wouldn’t be the first time but for the Netherlands it would be new, although it has been considered before.

A document we received through a FOIA request earlier this year revealed that the US Embassy reached out to the Dutch Government in the past, to discuss similar complaints from the Sports Coalition.

The same document also revealed that local anti-piracy group BREIN consistently urged the entertainment industries it represents not to advocate placing the Netherlands on the 301 Watch List but to solve the problems behind the scenes instead.


Embedding a Tweet Can be Copyright Infringement, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/embedding-a-tweet-can-be-copyright-infringement-court-rules-180216/

Nowadays it’s fairly common for blogs and news sites to embed content posted by third parties, ranging from YouTube videos to tweets.

Although these publications don’t host the content themselves, they can be held liable for copyright infringement, a New York federal court has ruled.

The case in question was filed by Justin Goldman whose photo of Tom Brady went viral after he posted it on Snapchat. After being reposted on Reddit, it also made its way onto Twitter from where various news organizations picked it up.

Several of these news sites reported on the photo by embedding tweets from others. However, since Goldman never gave permission to display his photo, he went on to sue the likes of Breitbart, Time, Vox and Yahoo, for copyright infringement.

In their defense, the news organizations argued that they did nothing wrong as no content was hosted on their servers. They referred to the so-called “server test” that was applied in several related cases in the past, which determined that liability rests on the party that hosts the infringing content.

In an order that was just issued, US District Court Judge Katherine Forrest disagrees. She rejects the “server test” argument and rules that the news organizations are liable.

“[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result,” Judge Forrest writes.

Judge Forrest argues that the server test was established in the ‘Perfect 10 v. Amazon’ case, which dealt with the ‘distribution’ of content. This case is about ‘displaying’ an infringing work instead, an area where the jurisprudence is not as clear.

“The Court agrees with plaintiff. The plain language of the Copyright Act, the legislative history undergirding its enactment, and subsequent Supreme Court jurisprudence provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have “displayed” a work within the meaning of the Copyright Act.”

As a result, summary judgment was granted in favor of Goldman.

Rightsholders, including Getty Images which supported Goldman, are happy with the result. However, not everyone is pleased. The Electronic Frontier Foundation (EFF) says that if the current verdict stands it will put millions of regular Internet users at risk.

“Rejecting years of settled precedent, a federal court in New York has ruled that you could infringe copyright simply by embedding a tweet in a web page,” EFF comments.

“Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.”

Given what’s at stake, it’s likely that the news organizations will appeal this week’s order.

Interestingly, earlier this week a California district court dismissed Playboy’s copyright infringement complaint against Boing Boing, which embedded a YouTube video that contained infringing content.

A copy of Judge Forrest’s opinion can be found here (pdf).


This IoT Pet Monitor barks back

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/iot-pet-monitor/

Jennifer Fox, founder of FoxBot Industries, uses a Raspberry Pi pet monitor to check the sound levels of her home while she is out, allowing her to keep track of when her dog Marley gets noisy or agitated, and to interact with the gorgeous furball accordingly.

Bark Back Project Demo

A quick overview and demo of the Bark Back, a project to monitor and interact with your pet. Check out the full tutorial here: https://learn.sparkfun.com/tutorials/bark-back-interactive-pet-monitor

Marley, bark!

Using a Raspberry Pi 3, speakers, SparkFun’s MEMS microphone breakout board, and an analogue-to-digital converter (ADC), the IoT Pet Monitor is fairly easy to recreate, all thanks to Jennifer’s full tutorial on the FoxBot website.

Building the pet monitor

In a nutshell, once the Raspberry Pi and the appropriate bits and pieces are set up, you’ll need to sign up at CloudMQTT — it’s free if you select the Cute Cat account. CloudMQTT will create an invisible bridge between your home and wherever you are that isn’t home, so that you can check in on your pet monitor.

Screenshot CloudMQTT account set-up — IoT Pet Monitor Bark Back Raspberry Pi

Image c/o FoxBot Industries

Within the project code, you’ll be able to calculate the peak-to-peak amplitude of sound the microphone picks up. Then you can decide how noisy is too noisy when it comes to the occasional whine and bark of your beloved pup.
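The check itself is easy to sketch. Here is the rough idea, written in JavaScript purely for illustration; the function names and threshold value below are made up rather than taken from the tutorial:

// Illustration only: find the difference between the loudest and quietest
// sample in a window of ADC readings, then compare it to a threshold you pick.
const NOISE_THRESHOLD = 400; // tune this to your dog and your room

function peakToPeak(samples) {
  return Math.max(...samples) - Math.min(...samples);
}

function handleWindow(samples, playRandomSound) {
  if (peakToPeak(samples) > NOISE_THRESHOLD) {
    playRandomSound(); // e.g. pick something from the preset song list
  }
}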

MEMS microphone breakout board — IoT Pet Monitor Bark Back Raspberry Pi

The MEMS microphone breakout board collects sound data and relays it back to the Raspberry Pi via the ADC.
Image c/o FoxBot Industries

Next you can import sounds to a preset song list that will be played back when the volume rises above your predefined threshold. As Jennifer states in the tutorial, the sounds can easily be recorded via apps such as Garageband, or even on your mobile phone.

Using the pet monitor

Whenever the Bark Back IoT Pet Monitor is triggered to play back audio, this information is fed to the CloudMQTT service, allowing you to see if anything is going on back home.

A sitting dog with a doll in its mouth — IoT Pet Monitor Bark Back Raspberry Pi

*incoherent coos of affection from Alex*
Image c/o FoxBot Industries

And as Jennifer recommends, an update of the project could include a camera or sensors to feed back more information about your home environment.

If you’ve created something similar, be sure to let us know in the comments. And if you haven’t, but you’re now planning to build your own IoT pet monitor, be sure to let us know in the comments. And if you don’t have a pet but just want to say hi…that’s right, be sure to let us know in the comments.


Court Orders Spanish ISPs to Block Pirate Sites For Hollywood

Post Syndicated from Andy original https://torrentfreak.com/court-orders-spanish-isps-to-block-pirate-sites-for-hollywood-180216/

Determined to reduce levels of piracy globally, Hollywood has become one of the main proponents of site-blocking on the planet. To date there have been multiple lawsuits in far-flung jurisdictions, with Europe one of the primary targets.

Following complaints from Disney, 20th Century Fox, Paramount, Sony, Universal and Warner, Spain has become one of the latest targets. According to the studios a pair of sites – HDFull.tv and Repelis.tv – infringe their copyrights on a grand scale and need to be slowed down by preventing users from accessing them.

HDFull is a platform that provides movies and TV shows in both Spanish and English. Almost 60% of its traffic comes from Spain and, after a huge surge in visitors last July, it’s now the 337th most popular site in the country according to Alexa. Visitors from Mexico, Argentina, the United States and Chile make up the rest of its audience.

Repelis.tv is a similar streaming portal specializing in movies, mainly in Spanish. A third of the site’s visitors hail from Mexico, with the remainder coming from Argentina, Colombia, Spain and Chile. In common with HDFull, Repelis has been building its visitor numbers quickly since 2017.

The studios demanding more blocks

With a ruling in hand from the European Court of Justice which determined that sites can be blocked on copyright infringement grounds, the studios asked the courts to issue an injunction against several local ISPs including Telefónica, Vodafone, Orange and Xfera. In an order handed down this week, Barcelona Commercial Court No. 6 sided with the studios and ordered the ISPs to begin blocking the sites.

“They damage the legitimate rights of those who own the films and series, which these pages illegally display and with which they profit illegally through the advertising revenues they generate,” a statement from the Spanish Federation of Cinematographic Distributors (FEDECINE) reads.

FEDECINE General director Estela Artacho said that changes in local law have helped to provide the studios with a new way to protect audiovisual content released in Spain.

“Thanks to the latest reform of the Civil Procedure Law, we have in this jurisdiction a new way to exercise different possibilities to protect our commercial film offering,” Artacho said.

“Those of us who are part of this industry work to make culture accessible and offer the best cinematographic experience in the best possible conditions, guaranteeing the continuity of the sector.”

The development was also welcomed by Stan McCoy, president of the Motion Picture Association’s EMEA division, which represents the plaintiffs in the case.

“We have just taken a welcome step which we consider crucial to face the problem of piracy in Spain,” McCoy said.

“These actions are necessary to maintain the sustainability of the creative community both in Spain and throughout Europe. We want to ensure that consumers enjoy the entertainment offer in a safe and secure environment.”

After gaining experience from blockades and subsequent circumvention in other regions, the studios seem better prepared to tackle fallout in Spain. In addition to blocking primary domains, the ruling handed down by the court this week also obliges ISPs to block any other domain, subdomain or IP address whose purpose is to facilitate access to the blocked platforms.

News of Spain’s ‘pirate’ blocks comes on the heels of fresh developments in Germany, where this week a court ordered ISP Vodafone to block KinoX, one of the country’s most popular streaming portals.


HackSpace magazine 4: the wearables issue

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-4-wearables/

Big things are afoot in the world of HackSpace magazine! This month we’re running our first special issue, with wearables projects throughout the magazine. Moreover, we’re giving away our first subscription gift free to all 12-month print subscribers. Lastly, and most importantly, we’ve made the cover EXTRA SHINY!

HackSpace magazine issue 4 cover

Prepare your eyeballs — it’s HackSpace magazine issue 4!

Wearables

In this issue, we’re taking an in-depth look at wearable tech. Not Fitbits or Apple Watches — we’re talking stuff you can make yourself, from projects that take a couple of hours to put together, to the huge, inspiring builds that are bringing technology to the runway. If you like wearing clothes and you like using your brain to make things better, then you’ll love this feature.

We’re continuing our obsession with Nixie tubes, with the brilliant Time-To-Go-Clock – Trump edition. This ingenious bit of kit uses obsolete Russian electronics to count down the time until the end of the 45th president’s term in office. However, you can also program it to tell the time left to any predictable event, such as the deadline for your tax return or essay submission, or the date England gets knocked out of the World Cup.

HackSpace magazine page 08
HackSpace magazine page 70
HackSpace magazine issue 4 page 98

We’re also talking to Dr Lucy Rogers — NASA alumna, Robot Wars judge, and fellow of the Institution of Mechanical Engineers — about the difference between making as a hobby and as a job, and about why we need the Guild of Makers. Plus, issue 4 has a teeny boat, the most beautiful Raspberry Pi cases you’ve ever seen, and it explores the results of what happens when you put a bunch of hardware hackers together in a French chateau — sacré bleu!

Tutorials

As always, we’ve got more how-tos than you can shake a soldering iron at. Fittingly for the current climate here in the UK, there’s a hot water monitor, which shows you how long you have before your morning shower turns cold, and an Internet of Tea project to summon a cuppa from your kettle via the web. Perhaps not so fittingly, there’s also an ESP8266 project for monitoring a solar power station online. Readers in the southern hemisphere, we’ll leave that one for you — we haven’t seen the sun here for months!

And there’s more!

We’re super happy to say that all our 12-month print subscribers have been sent an Adafruit Circuit Playground Express with this new issue:

Adafruit Circuit Playground Express HackSpace

This gadget was developed primarily with wearables in mind and comes with all sorts of in-built functionality, so subscribers can get cracking with their latest wearable project today! If you’re not a 12-month print subscriber, you’ll miss out, so subscribe here to get your magazine and your device, and let us know what you’ll make.


Court Dismisses Playboy’s Copyright Claims Against Boing Boing

Post Syndicated from Ernesto original https://torrentfreak.com/court-dismisses-playboys-copyright-claims-against-boing-boing-180215/

Early 2016, Boing Boing co-editor Xeni Jardin published an article in which she linked to an archive of every Playboy centerfold image till then.

“Kind of amazing to see how our standards of hotness, and the art of commercial erotic photography, have changed over time,” Jardin commented.

While the linked material undoubtedly appealed to many readers, Playboy itself took offense to the fact that infringing copies of their work were being shared in public. While Boing Boing didn’t upload or store the images in question, the publisher filed a lawsuit late last year.

The blog’s parent company Happy Mutants was accused of various counts of copyright infringement, with Playboy claiming that it exploited their playmates’ images for commercial purposes.

Boing Boing saw things differently. With help from the Electronic Frontier Foundation (EFF) it filed a motion to dismiss, arguing that hyperlinking is not copyright infringement. If Playboy had its way, millions of other Internet users could be sued for linking too.

“This case merely has to survive a motion to dismiss to launch a thousand more expensive lawsuits, chilling a broad variety of lawful expression and reporting that merely adopts the common practice of linking to the material that is the subject of the report,” they wrote.

The article in question

Yesterday US District Court Judge Fernando Olguin ruled on the matter. In a brief order, he concluded that an oral argument is not needed and that based on the arguments from both sides, the case should be dismissed with leave.

This effectively means that Playboy’s complaint has been thrown out. However, the company is offered a lifeline and is allowed to submit a new one if they can properly back up their copyright infringement allegations.

“The court will grant defendant’s Motion and dismiss plaintiff’s First Amended Complaint with leave to amend. In preparing the Second Amended Complaint, plaintiff shall carefully evaluate the contentions set forth in defendant’s Motion.

“For example, the court is skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement,” Judge Olguin adds.

According to the order, it is not sufficient to argue that Boing Boing merely ‘provided the means’ to carry out copyright infringing activity. There also has to be a personal action that ‘assists’ the infringing activity.

Playboy has until the end of the month to submit a new complaint and if it chooses not to do so, the case will be thrown out.

The order is clearly a win for Boing Boing, which vehemently opposed Playboy’s claims. While the order is clear, it must come as a surprise to the magazine publisher, which won a similar ‘hyperlinking’ lawsuit in the European Court of Justice last year.

The EFF, which defends Boing Boing, is happy with the order and hopes that Playboy will leave it at this.

“From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website,” EFF comments.

“Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.”

A copy of US District Court Judge Fernando Olguin’s order is available here (pdf).


Tickbox Must Remove Pirate Streaming Addons From Sold Devices

Post Syndicated from Ernesto original https://torrentfreak.com/tickbox-remove-pirate-streaming-addons-180214/

Online streaming piracy is on the rise and many people now use dedicated media players to watch content through their regular TVs.

This is a thorn in the side of various movie companies, who have launched a broad range of initiatives to curb this trend.

One of these initiatives is the Alliance for Creativity and Entertainment (ACE), an anti-piracy partnership between Hollywood studios, Netflix, Amazon, and more than two dozen other companies.

Last year, ACE filed a lawsuit against the Georgia-based company Tickbox TV, which sells Kodi-powered set-top boxes that stream a variety of popular media.

ACE sees these devices as nothing more than pirate tools so the coalition asked the court for an injunction to prevent Tickbox from facilitating copyright infringement, demanding that it removes all pirate add-ons from previously sold devices.

Last month, a California federal court issued an initial injunction, ordering Tickbox to keep pirate addons out of its box and halt all piracy-inducing advertisements going forward. In addition, the court directed both parties to come up with a proper solution for devices that were already sold.

The movie companies wanted Tickbox to remove infringing addons from previously sold devices, but the device seller refused this initially, equating it to hacking.

This week, both parties were able to reach an ‘agreement’ on the issue. They drafted an updated preliminary injunction which replaces the previous order and will be in effect for the remainder of the lawsuit.

The new injunction prevents Tickbox from linking to any “build,” “theme,” “app,” or “addon” that can be indirectly used to transmit copyright-infringing material. Web browsers such as Internet Explorer, Google Chrome, Safari, and Firefox are specifically excluded.

In addition, Tickbox must also release a new software updater that will remove any infringing software from previously sold devices.

“TickBox shall issue an update to the TickBox launcher software to be automatically downloaded and installed onto any previously distributed TickBox TV device and to be launched when such device connects to the internet,” the injunction reads.

“Upon being launched, the update will delete the Subject [infringing] Software downloaded onto the device prior to the update, or otherwise cause the TickBox TV device to be unable to access any Subject Software downloaded onto or accessed via that device prior to the update.”

All tiles that link to copyright-infringing software from the box’s home screen also have to be stripped. Going forward, only tiles to the Google Play Store or to Kodi within the Google Play Store are allowed.

In addition, the agreement also allows ACE to report newly discovered infringing apps or addons to Tickbox, which the company will then have to remove within 24 hours, weekends excluded.

“This ruling sets an important precedent and reduces the threat from piracy devices to the legal market for creative content and a vibrant creative economy that supports millions of workers around the world,” ACE spokesperson Zoe Thorogood says, commenting on the news.

The new injunction is good news for the movie companies, but many Tickbox customers will not appreciate the forced changes. That said, the legal battle is far from over. The main question, whether Tickbox contributed to the alleged copyright infringements, has yet to be answered.

Ultimately, this case is likely to result in a landmark decision, determining what sellers of streaming boxes can and cannot do in the United States.

A copy of the new Tickbox injunction is available here (pdf).


Weekly roundup: Lost time

Post Syndicated from Eevee original https://eev.ee/dev/2018/02/13/weekly-roundup-lost-time/

I ran out of brain pills near the end of January due to some regulatory kerfuffle, and spent something like a week and a half basically in a daze. I have incredibly a lot of stuff to do right now, too, so not great timing… but, well, I guess no time would be especially good. Oh well. I got a forced vacation and played some Avernum.

Anyway, in the last three weeks, the longest span I’ve ever gone without writing one of these:

  • anise: I added a ✨ completely new menu feature ✨ that looks super cool and amazing and will vastly improve the game.

  • blog: I wrote SUPER game night 3, featuring a bunch of games from GAMES MADE QUICK??? 2.0! It’s only a third of them though, oh my god, there were just so many.

    I also backfilled some release posts, including one for Strawberry Jam 2 — more on that momentarily.

  • ???: Figured out a little roadmap and started on an ???.

  • idchoppers: Went down a whole rabbit hole trying to port some academic C++ to Rust, ultimately so I could intersect arbitrary shapes, all so I could try out this ridiculous idea to infer the progression through a Doom map. This was kind of painful, and is basically the only useful thing I did while unmedicated. I might write about it.

  • misc: I threw together a little PICO-8 prime sieve inspired by this video. It’s surprisingly satisfying.

    (Hmm, does this deserve a release post? Where should its permanent home be? Argh.)

  • art: I started to draw my Avernum party but only finished one of them. I did finish a comic celebrating the return of my brain pills.

  • neon vn: I contributed some UI and bugfixing to a visual novel that’ll be released on Floraverse tomorrow.

  • alice vn: For Strawberry Jam 2, glip and I are making a ludicrously ambitious horny visual novel in Ren’Py. Turns out Ren’Py is impressively powerful, and I’ve been having a blast messing with it. But also our idea requires me to write about sixty zillion words by the end of the month. I guess we’ll see how that goes.

    I have a (NSFW) progress thread going on my smut alt, but honestly, most of the progress for the next week will be “did more writing”.

I’m behind again! Sorry. I still owe a blog post for last month, and a small project for last month, and now blog posts for this month, and Anise game is kinda in limbo, and I don’t know how any of this will happen with this huge jam game taking priority over basically everything else. I’ll see if I can squeeze other stuff in here and there. I intended to draw more regularly this month, too, but wow I don’t think I can even spare an hour a day.

The jam game is forcing me to do a lot of writing that I’d usually dance around and avoid, though, so I think I’ll come out the other side of it much better and faster and more confident.

Welp. Back to writing!

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have gone away entirely. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. My system then routes it to be stored where I need on Amazon S3 by Amazon Kinesis Data Firehose. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console.
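The handler itself can stay small. As an illustration, a Kinesis-triggered Lambda handler in Node.js looks roughly like the sketch below; the record structure is the standard Kinesis event format, but the field names and the cleansing step are placeholders rather than our real schema:

'use strict';

// Sketch of a preprocessing Lambda for Kinesis records (illustrative only).
exports.handler = (event, context, callback) => {
  const cleaned = event.Records.map(record => {
    // Kinesis delivers each payload base64-encoded
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf8'));

    // Example cleansing/aggregation: keep only the fields we care about
    return {
      eventType: payload.eventType,
      userId: payload.userId,
      amount: Number(payload.amount) || 0,
      receivedAt: new Date().toISOString()
    };
  });

  // In the real pipeline the cleaned records are routed onwards for storage
  callback(null, `Processed ${cleaned.length} records`);
};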

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received, but compressed, encrypted, and optimized for reading.

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service, or a matter of a simple API call, and crucially doesn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).
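The "editing a stream" step, for example, boils down to a single UpdateShardCount call. Here it is sketched with the AWS SDK for JavaScript, using an example stream name and target count:

// Reshard a Kinesis stream with one API call (example values).
const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis({ region: 'eu-west-1' });

kinesis.updateShardCount({
  StreamName: 'ingest-stream',    // example stream name
  TargetShardCount: 8,            // e.g. double the current capacity
  ScalingType: 'UNIFORM_SCALING'
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Resharding to', data.TargetShardCount, 'shards');
});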

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.


The Fisher Piano: make music in the air

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/air-piano/

Piano keys are so limiting! Why not swap them out for LEDs and the wealth of instruments in Pygame to build air keys, as demonstrated by Instructables maker 2fishy?

Raspberry Pi LED Light Schroeder Piano – Twinkle Little Star

Keys? Where we’re going you don’t need keys!

This project, created by either Yolanda or Ken Fisher (or both!), uses an array of LEDs and photoresistors to form a MIDI sequencer. Twelve LEDs replace piano keys, and another three change octaves and access the menu.

Each LED is paired with a photoresistor, which detects the emitted light to form a closed circuit. Interrupting the light beam — in this case with a finger — breaks the circuit, telling the Python program to perform an action.

2fishy LED light piano raspberry pi

We’re all hoping this is just the scaled-down prototype of a full-sized LED grand piano

Using Pygame, the 2fishy team can access 75 different instruments and 128 notes per instrument, making their wooden piano more than just a one-hit wonder.

Piano building

The duo made the piano’s body out of plywood, hardboard, and dowels, and equipped it with a Raspberry Pi 2, a speaker, and the aforementioned LEDs and photoresistors.

2fishy LED light piano raspberry pi

A Raspberry Pi 2 and speaker sit within the wooden body, with LEDs and photoresistors in place of the keys.

A complete how-to for the build, including some rather fancy and informative schematics, is available at Instructables, where 2fishy received a bronze medal for their project. Congratulations!

Learn more

If you’d like to learn more about using Pygame, check out The MagPi’s Make Games with Python Essentials Guide, available both in print and as a free PDF download.

And for more music-based projects using a variety of tech, be sure to browse our free resources.

Lastly, if you’d like to see more piano-themed Raspberry Pi projects, take a look at our Big Minecraft Piano, these brilliant piano stairs, this laser-guided piano teacher, and our video below about the splendid Street Fighter duelling pianos we witnessed at Maker Faire.

Pianette: Piano Street Fighter at Maker Faire NYC 2016

Two pianos wired up as PlayStation 2 controllers allow users to battle…musically! We caught up with makers Eric Redon and Cyril Chapellier of foobarflies.


Integration With Zapier

Post Syndicated from Bozho original https://techblog.bozho.net/integration-with-zapier/

Integration is boring. And also inevitable. But I won’t be writing about enterprise integration patterns. Instead, I’ll explain how to create an app for integration with Zapier.

What is Zapier? It is a service that allows you to connect two (or more) otherwise unconnected services via their APIs (or protocols). You can do stuff like “Create a Trello task from an Evernote note”, “publish new RSS items to Facebook”, “append new emails to a spreadsheet”, “post approaching calendar meeting to Slack”, “Save big email attachments to Dropbox”, “tweet all instagrams above a certain likes threshold”, and so on. In fact, it looks to cover mostly the same use cases as another famous service that I really like – IFTTT (if this then that), with my favourite use case “Get a notification when the international space station passes over your house”. And all of those interactions can be configured via a UI.

Now that’s good for end users but what does it have to do with software development and integration? Zapier (unlike IFTTT, unfortunately), allows custom 3rd party services to be included. So if you have a service of your own, you can create an “app” and allow users to integrate your service with all the other 3rd party services. IFTTT offers a way to invoke web endpoints (including RESTful services), but it doesn’t allow setting headers, so that makes it quite limited for actual APIs.

In this post I’ll briefly explain how to write a custom Zapier app and then will discuss where services like Zapier stand from an architecture perspective.

The thing that I needed it for – to be able to integrate LogSentinel with any of the third parties available through Zapier, i.e. to store audit logs for events that happen in all those 3rd party systems. So how do I do that? There’s a tutorial that makes it look simple. And it is, with a few catches.

First, there are two tutorials – one in GitHub and one on Zapier’s website. And they differ slightly, which becomes tricky in some cases.

I initially followed the GitHub tutorial and had my build fail. It claimed the zapier platform dependency is missing. After I compared it with the example apps, I found out there’s a caret in front of the zapier platform dependency. Removing it just yielded another error – that my node version should be exactly 6.10.2. Why?

The Zapier CLI requires you have exactly version 6.10.2 installed. You’ll see errors and will be unable to proceed otherwise.

It appears that they are using AWS Lambda which is stuck on Node 6.10.2 (actually – it’s 6.10.3 when you check). The current major release is 8, so minus points for choosing … javascript for a command-line tool and for building sandboxed apps. Maybe other decisions had their downsides as well, I won’t be speculating. Maybe it’s just my dislike for dynamic languages.

So, after you make sure you have the correct old version of node, you call zapier init and make sure there are no carets, npm install and then zapier test. So far so good, you have a dummy app. Now how do you make a RESTful call to your service?

Zapier splits the programmable entities in two – “triggers” and “creates”. A trigger is the event that triggers the whole app, and a “create” is what happens as a result. In my case, my app doesn’t publish any triggers, it only accepts input, so I won’t be mentioning triggers (though they seem easy). You configure all of the elements in index.js (e.g. this one):

const log = require('./creates/log');
....
creates: {
    [log.key]: log,
}

The log.js file itself is the interesting bit – there you specify all the parameters that should be passed to your API call, as well as making the API call itself:

const log = (z, bundle) => {
  const responsePromise = z.request({
    method: 'POST',
    url: `https://api.logsentinel.com/api/log/${bundle.inputData.actorId}/${bundle.inputData.action}`,
    body: bundle.inputData.details,
    headers: {
      'Accept': 'application/json'
    }
  });
  return responsePromise
    .then(response => JSON.parse(response.content));
};

module.exports = {
  key: 'log-entry',
  noun: 'Log entry',

  display: {
    label: 'Log',
    description: 'Log an audit trail entry'
  },

  operation: {
    inputFields: [
      {key: 'actorId', label:'ActorID', required: true},
      {key: 'action', label:'Action', required: true},
      {key: 'details', label:'Details', required: false}
    ],
    perform: log
  }
};

You can pass the input parameters to your API call, and it’s as simple as that. The user can then specify which parameters from the source (“trigger”) should be mapped to each of your parameters. In an example zap, I used an email trigger and passed the sender as actorId, the subject as “action” and the body of the email as details.

There’s one more thing – authentication. Authentication can be done in many ways. Some services offer OAuth, others – HTTP Basic or other custom forms of authentication. There is a section in the documentation about all the options. In my case it was (almost) an HTTP Basic auth. My initial thought was to just supply the credentials as parameters (which you just hardcode rather than map to trigger parameters). That may work, but it’s not the canonical way. You should configure “authentication”, as it triggers a friendly UI for the user.

You include authentication.js (which has the fields your authentication requires) and then pre-process requests by adding a header (in index.js):

const authentication = require('./authentication');

const includeAuthHeaders = (request, z, bundle) => {
  if (bundle.authData.organizationId) {
    request.headers = request.headers || {};
    request.headers['Application-Id'] = bundle.authData.applicationId;
    const basicHash = Buffer(`${bundle.authData.organizationId}:${bundle.authData.apiSecret}`).toString('base64');
    request.headers['Authorization'] = `Basic ${basicHash}`;
  }
  return request;
};

const App = {
  // This is just shorthand to reference the installed dependencies you have. Zapier will
  // need to know these before we can upload
  version: require('./package.json').version,
  platformVersion: require('zapier-platform-core').version,
  authentication: authentication,

  // beforeRequest & afterResponse are optional hooks into the provided HTTP client
  beforeRequest: [
    includeAuthHeaders
  ]
...
}
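For reference, the authentication.js itself can be very small. A sketch is below; the field keys match what includeAuthHeaders reads above, while the test request URL is just a placeholder and not necessarily the real endpoint:

// authentication.js: minimal sketch (field keys mirror includeAuthHeaders;
// the test URL is a placeholder).
const testAuth = (z, bundle) => {
  // Any cheap endpoint that rejects bad credentials will do as a test
  return z.request({ url: 'https://api.logsentinel.com/api/log' })
    .then(response => {
      if (response.status === 401) {
        throw new Error('The credentials you supplied were not valid');
      }
      return response;
    });
};

module.exports = {
  type: 'custom',
  fields: [
    { key: 'organizationId', label: 'Organization ID', required: true },
    { key: 'applicationId', label: 'Application ID', required: true },
    { key: 'apiSecret', label: 'API Secret', required: true, type: 'password' }
  ],
  test: testAuth
};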

And then you zapier push your app and you can test it. It doesn’t automatically go live, as you have to invite people to try it and use it first, but in many cases that’s sufficient (i.e. using Zapier when doing integration with a particular client).

Can Zapier be used for any integration problem? Unlikely – it’s pretty limited and simple, but that’s also a strength. You can, in half a day, make your service integrate with thousands of others for the most typical use cases. And note that although it’s meant for integrating public services rather than for enterprise integration (where you make multiple internal systems talk to each other), as an increasing number of systems rely on 3rd party services, it could find a home in an enterprise system, replacing some functions of an ESB.

Effectively, such services (Zapier, IFTTT) are “Simple ESB-as-a-service”. You go to a UI, fill a bunch of fields, and you get systems talking to each other without touching the systems themselves. I’m not a big fan of ESBs, mostly because they become harder to support with time. But minimalist, external ones might be applicable in certain situations. And while such services are primarily aimed at end users, they could be a useful bit in an enterprise architecture that relies on 3rd party services.

Whether it could process the required load, and whether an organization is willing to let its data flow through a 3rd party provider (which may store the intermediate parameters), are questions that should be answered on a case-by-case basis. I wouldn’t recommend it as a general solution, but it’s certainly an option to consider.


The Early Days of Mass Internet Piracy Were Awesome Yet Awful

Post Syndicated from Andy original https://torrentfreak.com/the-early-days-of-mass-internet-piracy-were-awesome-yet-awful-180211/

While Napster certainly put the digital cats among the pigeons in 1999, the organized chaos of mass Internet file-sharing couldn’t be truly appreciated until the advent of decentralized P2P networks a year or so later.

In the blink of an eye, everyone with a “shared folder” client became both a consumer and publisher, sucking in files from strangers and sharing them with like-minded individuals all around the planet. While today’s piracy narrative is all about theft and danger, in the early 2000s the sharing community felt more like distant friends who hadn’t met, quietly trading cards together.

Satisfying to millions, those who really engaged found shared folder sharing a real adrenaline buzz, as English comedian Seann Walsh noted on Conan this week.

“Click. 20th Century Fox comes up. No pixels. No shaky cam. No silhouettes of heads at the bottom of the screen, people coming in five minutes late. None of that,” Walsh said, recalling his experience of downloading X-Men 2 (X2) from LimeWire.

“We thought: ‘We’ve done it!!’ This was incredible! We were going to have to go to the cinema. We weren’t going to have to wait for the film to come out on video. We weren’t going to have to WALK to blockbuster!”

But while the nostalgia has an air of magic about it, Walsh’s take on the piracy experience is bittersweet. While obtaining X2 without having to trudge to a video store was a revelation, there were plenty of drawbacks too.

Downloading the pirate copy took a week, which pre-BitTorrent wasn’t a completely bad result but still a considerable commitment. There were also serious problems with quality control.

“20th Century fades, X Men 2 comes up. We’ve done it! We’re not taking it for granted – we’re actually hugging. Yes! Yes! We’ve done it! This is the future! We look at the screen, Wolverine turns round…,” …..and Walsh launches into a broadside of pseudo-German babble, mimicking the unexpectedly-dubbed superhero.

After a week of downloading and getting a quality picture on launch, that is a punch in the gut, to say the least. No less than a pirate deserves, some will argue, but a fat lip nonetheless, and one many a pirate has suffered over the years. Nevertheless, as Walsh notes, it’s a pain that kids in 2018 simply cannot comprehend.

“Children today are living the childhood I dreamed of. If they want to hear a song – touch – they stream it. They’ve got it now. Bang. Instantly. They don’t know the pain of LimeWire.

“Start downloading a song, go to school, come back. HOPE that it’d finished! That download bar messing with you. Four minutes left…..nine HOURS and 28 minutes left? Thirty seconds left…..52 hours and 38 minutes left? JUST TELL ME THE TRUTH!!!!!” Walsh pleaded.

While this might sound comical now, this was the reality of people downloading from clients such as LimeWire and Kazaa. While X2 in German would’ve been torture for a non-German speaker, the misery of watching an English language copy of 28 Days Later somehow crammed into a 30Mb file is right up there too.

Mislabeled music with microscopic bitrates? That was pretty much standard.

But against the odds, these frankly second-rate experiences still managed to capture the hearts and minds of the digitally minded. People were prepared to put up with nonsense and regular disappointment in order to consume content in a way fit for the 21st century. Yet somehow the combined might of the entertainment industries couldn’t come up with anything substantially better for a number of years.

Of course, broadband availability and penetration played their part but, looking back, something could have been done. Not only did the Internet’s popularity come as no surprise, but people’s expectations were also dramatically lower than they are today. In any event, beating the pirates should have been child’s play. After all, it was just regular people sharing files in a Windows folder.

Any fool could do it – and millions did. Surprisingly, they have proven unstoppable.


Voksi Releases Detailed Denuvo-Cracking Video Tutorial

Post Syndicated from Andy original https://torrentfreak.com/voksi-releases-detailed-denuvo-cracking-video-tutorial-180210/

Earlier this week, version 4.9 of the Denuvo anti-tamper system, which had protected Assassin’s Creed Origins for the past several months, was defeated by Italian cracking group CPY.

While Denuvo would probably paint four months of protection as a success, the company would certainly have preferred for things to have gone on a bit longer, not least following publisher Ubisoft’s decision to use VMProtect technology on top.

But while CPY do their thing in Italy, there’s another rival whittling away at whatever the giants at Denuvo (and new owner Irdeto) can come up with. The cracker – known only as Voksi – hails from Bulgaria, and this week he took the unusual step of releasing a 90-minute video (embedded below) in which he details how to defeat Denuvo’s V4 anti-tamper technology.

The video is not for the faint-hearted so those with an aversion to issues of a highly technical nature might feel the urge to look away. However, it may surprise readers to learn that not so long ago, Voksi knew absolutely nothing about coding.

“You will find this very funny and unbelievable,” Voksi says, recalling the events of 2012.

“There was one game called Sanctum and on one free [play] weekend [on Steam], I and my best friend played through it and saw how great the cooperative action was. When the free weekend was over, we wanted to keep playing, but we didn’t have any money to buy the game.

“So, I started to look for alternative ways, LAN emulators, anything! Then I decided I need to crack it. That’s how I got into reverse engineering. I started watching some shitty YouTube videos with bad quality and doing some tutorials. Then I found out about Steam exploits and that’s how I got into making Steamworks fixes, allowing cracked multiplayer between players.”

Voksi says his entire cracking career began with this one indie game and his desire to play it with his best friend. Prior to that, he had absolutely no experience at all. He says he’s taken no university courses or any course at all for that matter. Everything he knows has come from material he’s found online. But the intrigue doesn’t stop there.

“I don’t even know how to code properly in high-level language like C#, C++, etc. But I understand assembly [language] perfectly fine,” he explains.

For those who code, that’s generally a little bit back to front, with low-level languages usually posing the most difficulties. But Voksi says that with assembly, everything “just clicked.”

Of course, it’s been six years since the 21-year-old was first motivated to crack a game due to lack of funds. In the more than half decade since, have his motivations changed at all? Is it the thrill of solving the puzzle or are there other factors at play?

“I just developed an urge to provide paid stuff for free for people who can’t afford it and specifically, co-op and multiplayer cracks. Of course, I’m not saying don’t support the developers if you have the money and like the game. You should do that,” he says.

“The challenge of cracking also motivates me, especially with an abomination like Denuvo. It is pure cancer for the gaming industry, it doesn’t help and it only causes issues for the paying customers.”

Those who follow Voksi online will know that as well as being known in his own right, he’s part of the REVOLT group, a collective that shares Voksi’s core interests and goals.

“REVOLT started as a group with one and only goal – to provide multiplayer support for cracked games. No other group was doing it until that day. It was founded by several members, from which I’m currently the only one active, still releasing cracks.

“Our great achievements are in first place, of course, cracking Denuvo V4, making us one of the four groups/people who were able to break the protection. In second place are our online fixes for several AAA games, allowing you to play on legit servers with legit players. In third place, our ordinary Steamworks fixes allowing you to play multiplayer between cracked users.”

In communities like /r/crackwatch on Reddit and those less accessible, Voksi and others doing similar work are often held up as Internet heroes, cracking games in order to give the masses access to something that might’ve been otherwise inaccessible. But how does this fame sit with him?

“Well, I don’t see myself as a hero, just another ordinary person doing what he loves. I love seeing people happy because of my work, that’s also a big motivation, but nothing more than that,” he says.

Finally, what’s up next for Voksi and what are his hopes for the rest of the year?

“In an ideal world, Denuvo would die. As for me, I don’t know, time will tell,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Give Your WordPress Blog a Voice With Our New Amazon Polly Plugin

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/give-your-wordpress-blog-a-voice-with-our-new-amazon-polly-plugin/

I first told you about Polly in late 2016 in my post Amazon Polly – Text to Speech in 47 Voices and 24 Languages. After that AWS re:Invent launch, we added support for Korean, five new voices, and made Polly available in all Regions in the aws partition. We also added whispering, speech marks, a timbre effect, and dynamic range compression.

New WordPress Plugin
Today we are launching a WordPress plugin that uses Polly to create high-quality audio versions of your blog posts. You can access the audio from within the post or in podcast form using a feature that we call Amazon Pollycast! Both options make your content more accessible and can help you to reach a wider audience. This plugin was a joint effort between the AWS team and our friends at AWS Advanced Technology Partner WP Engine.

As you will see, the plugin is easy to install and configure. You can use it with installations of WordPress that you run on your own infrastructure or on AWS. Either way, you have access to all of Polly’s voices along with a wide variety of configuration options. The generated audio (an MP3 file for each post) can be stored alongside your WordPress content, or in Amazon Simple Storage Service (S3), with optional support for content distribution via Amazon CloudFront.
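To give a flavor of the S3 option, here’s a minimal boto3 sketch of storing a single generated file and referencing it through CloudFront. The bucket name and distribution domain are placeholders, and the plugin does all of this for you once the Cloud Storage settings described below are enabled:

import boto3

s3 = boto3.client("s3")

# Upload one generated MP3 to the bucket the plugin is configured to use.
s3.upload_file(
    "post.mp3",
    "my-pollycast-bucket",                      # placeholder bucket name
    "audio/post.mp3",
    ExtraArgs={"ContentType": "audio/mpeg"},
)

# If a CloudFront distribution fronts the bucket, the post references the
# audio through the distribution's domain instead of the S3 URL.
audio_url = "https://d1234example.cloudfront.net/audio/post.mp3"  # placeholder domain
print(audio_url)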

Installing the Plugin
I don’t have an existing WordPress-powered blog, so I begin by launching a Lightsail instance using the WordPress 4.8.1 blueprint:

Then I follow these directions to access my login credentials:

Credentials in hand, I log in to the WordPress Dashboard:

The plugin makes calls to AWS, and needs to have credentials in order to do so. I hop over to the IAM Console and create a new policy. The policy allows the plugin to access a carefully selected set of S3 and Polly functions (find the full policy in the README):
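If you prefer to script this step instead of clicking through the console, a boto3 sketch along these lines creates a similar policy. The action list, policy name, and bucket ARN below are illustrative assumptions on my part; the authoritative policy is the one in the README:

import json
import boto3

iam = boto3.client("iam")

# Illustrative policy document; the authoritative action list is in the README.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["polly:SynthesizeSpeech", "polly:DescribeVoices"],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-pollycast-bucket",      # placeholder bucket
                "arn:aws:s3:::my-pollycast-bucket/*",
            ],
        },
    ],
}

response = iam.create_policy(
    PolicyName="wp-polly-policy",                        # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])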

Then I create an IAM user (wp-polly-user). I enter the name and indicate that it will be used for Programmatic Access:

Then I attach the policy that I just created, and click on Review:

I review my settings (not shown) and then click on Create User. Then I copy the two values (Access Key ID and Secret Access Key) into a secure location. Possession of these keys allows the bearer to make calls to AWS so I take care not to leave them lying around.

Now I am ready to install the plugin! I go back to the WordPress Dashboard and click on Add New in the Plugins menu:

Then I click on Upload Plugin and locate the ZIP file that I downloaded from the WordPress Plugins site. After I find it I click on Install Now to proceed:

WordPress uploads and installs the plugin. Now I click on Activate Plugin to move ahead:

With the plugin installed, I click on Settings to set it up:

I enter my keys and click on Save Changes:

The General settings let me control the sample rate, voice, player position, the default setting for new posts, and the autoplay option. I can leave all of the settings as-is to get started:

The Cloud Storage settings let me store audio in S3 and to use CloudFront to distribute the audio:

The Amazon Pollycast settings give me control over the iTunes parameters that are included in the generated RSS feed:

Finally, the Bulk Update button lets me regenerate all of the audio files after I change any of the other settings:

With the plugin installed and configured, I can create a new post. As you can see, the plugin can be enabled and customized for each post:

I can see how much it will cost to convert to audio with a click:

When I click on Publish, the plugin breaks the text into multiple blocks on sentence boundaries, calls the Polly SynthesizeSpeech API for each block, and accumulates the resulting audio in a single MP3 file. The published blog post references the file using the <audio> tag. Here’s the post:

I can’t seem to use an <audio> tag in this post, but you can download and play the MP3 file yourself if you’d like.
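For a rough idea of what the plugin does at publish time, here is a minimal Python (boto3) sketch of the same approach. The naive sentence splitting, helper name, and single output file are simplifying assumptions; the actual plugin is PHP and also handles Polly’s per-request limits, storage, and metadata:

import re
import boto3

polly = boto3.client("polly")

def post_to_mp3(post_text, out_path="post.mp3", voice_id="Joanna"):
    # Split the post into sentence-sized blocks (naive version).
    blocks = re.split(r"(?<=[.!?])\s+", post_text.strip())
    with open(out_path, "wb") as out:
        for block in blocks:
            if not block:
                continue
            response = polly.synthesize_speech(
                Text=block,
                OutputFormat="mp3",
                VoiceId=voice_id,
            )
            # Append each MP3 fragment to the combined file.
            out.write(response["AudioStream"].read())
    return out_path

# Example: post_to_mp3("Hello from my blog. This paragraph becomes audio.")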

The Pollycast feature generates an RSS file with links to an MP3 file for each post:

Pricing
The plugin will make calls to Amazon Polly each time the post is saved or updated. Pricing is based on the number of characters in the speech requests, as described on the Polly Pricing page. Also, the AWS Free Tier lets you process up to 5 million characters per month at no charge, for a period of one year that starts when you make your first call to Polly.
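As a back-of-the-envelope illustration (assuming the standard rate of $4.00 per million characters that the pricing page listed at the time; check the page for current figures), the math works out like this:

PRICE_PER_MILLION_CHARS = 4.00          # assumed standard-voice rate; see the pricing page
FREE_TIER_CHARS_PER_MONTH = 5_000_000   # first 12 months

def monthly_cost(posts, avg_chars_per_post):
    chars = posts * avg_chars_per_post
    billable = max(0, chars - FREE_TIER_CHARS_PER_MONTH)
    return billable / 1_000_000 * PRICE_PER_MILLION_CHARS

print(monthly_cost(100, 6_000))    # 0.0  - well inside the free tier
print(monthly_cost(2_000, 6_000))  # 28.0 - 12M characters, 7M of them billable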

Going Further
The plugin is available on GitHub in source code form and we are looking forward to your pull requests! Here are a couple of ideas to get you started:

Voice Per Author – Allow selection of a distinct Polly voice for each author.

Quoted Text – For blogs that make frequent use of embedded quotes, use a distinct voice for the quotes.

Translation – Use Amazon Translate to translate the texts into another language, and then use Polly to generate audio in that language (a quick sketch of this idea follows the list).

Other Blogging Engines – Build a similar plugin for your favorite blogging engine.

SSML Support – Figure out an interesting way to use Polly’s SSML tags to add additional character to the audio.
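As a starting point for the Translation idea above, a sketch might pair the two services like this. The language codes and voice are just examples, and Translate’s per-request size limit means longer posts would need to be chunked first:

import boto3

translate = boto3.client("translate")
polly = boto3.client("polly")

def post_to_spanish_audio(text, out_path="post-es.mp3"):
    # Translate the English post to Spanish...
    translated = translate.translate_text(
        Text=text,
        SourceLanguageCode="en",
        TargetLanguageCode="es",
    )["TranslatedText"]
    # ...then synthesize it with one of Polly's Spanish voices.
    audio = polly.synthesize_speech(
        Text=translated,
        OutputFormat="mp3",
        VoiceId="Conchita",
    )
    with open(out_path, "wb") as out:
        out.write(audio["AudioStream"].read())
    return out_path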

Let me know what you come up with!

Jeff;

 

Astro Pi celebrates anniversary of ISS Columbus module

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/astro-pi-celebrates-anniversary/

Right now, 400km above the Earth aboard the International Space Station, are two very special Raspberry Pi computers. They were launched into space on 6 December 2015 and are, most assuredly, the farthest-travelled Raspberry Pi computers in existence. Each year they run experiments that school students create in the European Astro Pi Challenge.

Raspberry Astro Pi units on the International Space Station

Left: Astro Pi Vis (Ed); right: Astro Pi IR (Izzy). Image credit: ESA.

The European Columbus module

Today marks the tenth anniversary of the launch of the European Columbus module. The Columbus module is the European Space Agency’s largest single contribution to the ISS, and it supports research in many scientific disciplines, from astrobiology and solar science to metallurgy and psychology. More than 225 experiments have been carried out inside it during the past decade. It’s also home to our Astro Pi computers.

Here’s a video from 7 February 2008, when Space Shuttle Atlantis went skywards carrying the Columbus module in its cargo bay.

STS-122 Launch NASA TV Coverage, 7 February 2008

Today, coincidentally, is also the deadline for the European Astro Pi Challenge: Mission Space Lab. Participating teams have until midnight tonight to submit their experiments.

Anniversary celebrations

At 16:30 GMT today there will be a live event on NASA TV for the Columbus module anniversary with NASA flight engineers Joe Acaba and Mark Vande Hei.

Our Astro Pi computers will be joining in the celebrations by displaying a digital birthday candle that the crew can blow out. It works by detecting an increase in humidity when someone blows on it. The video below demonstrates the concept.

AstroPi candle


Do try this at home

The exact Astro Pi code that will run on the ISS today is available for you to download and run on your own Raspberry Pi and Sense HAT. You’ll notice that the program includes code to make it stop automatically when the date changes to 8 February. This is just to save time for the ground control team.

If you have a Raspberry Pi and a Sense HAT, you can use the terminal commands below to download and run the code yourself:

wget http://rpf.io/colbday -O birthday.py
chmod +x birthday.py
./birthday.py

When you see a blank blue screen with the brightness increasing, the Sense HAT is measuring the baseline humidity. It does this every 15 minutes so it can recalibrate to take account of natural changes in background humidity. A humidity increase of 2% is needed to blow out the candle, so if the background humidity changes by more than 2% in 15 minutes, it’s possible to get a false positive. Press Ctrl + C to quit.
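If you just want the gist without downloading the full script, here is a simplified sketch of the same idea. It mirrors the thresholds described above but leaves out the candle graphics, the brightness ramp, and the other polish that the real birthday.py includes:

from time import sleep, time
from sense_hat import SenseHat

sense = SenseHat()

RECALIBRATE_SECS = 15 * 60   # re-measure the baseline every 15 minutes
BLOW_THRESHOLD = 2.0         # a 2% rise in humidity counts as a blow

baseline = sense.get_humidity()
last_calibration = time()
sense.clear((0, 0, 200))     # plain blue stand-in for the candle display

while True:
    if time() - last_calibration > RECALIBRATE_SECS:
        baseline = sense.get_humidity()   # track natural drift in background humidity
        last_calibration = time()
    if sense.get_humidity() - baseline >= BLOW_THRESHOLD:
        sense.clear()                     # candle blown out
        break
    sleep(0.5)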

Please tweet pictures of your candles to @astro_pi – we might share yours! And if we’re lucky, we might catch a glimpse of the candle on the ISS during the NASA TV event at 16:30 GMT today.

The post Astro Pi celebrates anniversary of ISS Columbus module appeared first on Raspberry Pi.

Jailed Streaming Site Operator Hit With Fresh $3m Damages Lawsuit

Post Syndicated from Andy original https://torrentfreak.com/jailed-streaming-site-operator-hit-with-fresh-3m-damages-lawsuit-180207/

After being founded more than half a decade ago, Swefilmer grew to become Sweden’s most popular movie and TV show streaming site. It was only a question of time before authorities stepped in to bring the show to an end.

In 2015, a Swedish operator of the site in his early twenties was raided by local police. A second man, Turkish and in his late twenties, was later arrested in Germany.

The pair, who hadn’t met in person, appeared before the Varberg District Court in January 2017, accused of making more than $1.5m from their activities between November 2013 and June 2015.

The prosecutor described Swefilmer as “organized crime”, painting the then 26-year-old as the main brains behind the site and the 23-year-old as playing a much smaller role. The former was said to have led a luxury lifestyle after benefiting from $1.5m in advertising revenue.

The sentences eventually handed down matched the defendants’ alleged level of participation. While the younger man received probation and community service, the Turk was sentenced to serve three years in prison and ordered to forfeit $1.59m.

Very quickly it became clear there would be an appeal, with plaintiffs represented by anti-piracy outfit RightsAlliance complaining that their 10m krona ($1.25m) claim for damages over the unlawful distribution of local movie Johan Falk: Kodnamn: Lisa had been ruled out by the Court.

With the appeal hearing now just a couple of weeks away, Swedish outlet Breakit is reporting that media giant Bonnier Broadcasting has launched an action of its own against the now 27-year-old former operator of Swefilmer.

According to the publication, Bonnier’s pay-TV company C More, which distributes for Fox, MGM, Paramount, Universal, Sony and Warner, is set to demand around 24m krona ($3.01m) via anti-piracy outfit RightsAlliance.

“This is about organized crime and grossly criminal individuals who earned huge sums on our and others’ content. We want to take every opportunity to take advantage of our rights,” says Johan Gustafsson, Head of Corporate Communications at Bonnier Broadcasting.

C More reportedly filed its lawsuit at the Stockholm District Court on January 30, 2018. At its core are four local movies said to have been uploaded and made available via Swefilmer.

“C More would probably never even have granted a license to [the operator] to make or allow others to make the films available to the public in a similar way as [the operator] did, but if that had happened, the fee would not be less than 5,000,000 krona ($628,350) per film or a total of 20,000,000 krona ($2,513,400),” C More’s claim reads.

Speaking with Breakit, lawyer Ansgar Firsching said he couldn’t say much about C More’s claims against his client.

“I am very surprised that two weeks before the main hearing [C More] comes in with this requirement. If you open another front, we have two trials that are partly about the same thing,” he said.

Firsching said he couldn’t elaborate at this stage but expects his client to deny the claim for damages. C More sees things differently.

“Many people live under the illusion that sites like Swefilmer are driven by idealistic teens in their parents’ basements, which is completely wrong. This is about organized crime where our content is used to generate millions and millions in revenue,” the company notes.

The appeal in the main case is set to go ahead February 20th.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Google Won’t Take Down ‘Pirate’ VLC With Five Million Downloads

Post Syndicated from Andy original https://torrentfreak.com/google-wont-take-down-pirate-vlc-with-five-million-downloads-180206/

VLC is the media player of choice for Internet users around the globe. Downloaded for desktop at least 2,493,000,000 times since February 2005, VLC is an absolute giant. And those figures don’t even include GNU/Linux, iOS, Android, Chrome OS or Windows Phone downloads either.

Aside from its incredible functionality, VLC (operated by the VideoLAN non-profit) has won the hearts of Internet users for other key reasons, not least its commitment to being free and open source software. While it’s true to say that VLC doesn’t cost a penny, the term ‘free’ actually relates to the General Public License (GPL) under which it’s distributed.

The GPL aims to guarantee that software under it remains ‘free’ for all current and future users. To benefit from these protections, the GPL requires people who modify and redistribute software to afford others the same freedoms by informing them of the requirement to make source code available.

Since VLC is extremely popular and just about as ‘free’ as software can get, people get extremely defensive when they perceive that a third-party is benefiting from the software without adhering to the terms of the generous GPL license. That was the case beginning a few hours ago when veteran Reddit user MartinVanBallin pointed out a piece of software on the Google Play Store.

“They took VLC, put in ads, didn’t attribute VLC or follow the open source license, and they’re using Media Player Classic’s icon,” MartinVanBallin wrote.

The software is called 321 Media Player and has an impressive 4.5 score from more than 101,000 reviews. Despite not mentioning VLC or the GPL, it is based completely on VLC, as the image below (and other proof) shows.

VLC Media Player vs. 321 Media Player

TorrentFreak spoke with VideoLAN President Jean-Baptiste Kempf who confirmed that the clone is in breach of the GPL.

“The Android version of VLC is under the license GPLv3, which requires everything inside the application to be open source and sharing the source,” Kempf says.

“This clone seems to use a closed-source advertisement component (are there any that are open source?), which is a clear violation of our copyleft. Moreover, they don’t seem to share the source at all, which is also a violation.”

Perhaps the most amazing thing is the popularity of the software. According to stats provided by Google, 321 Media Player has amassed between five and ten million downloads. That’s not an insignificant amount when one considers that unlike VLC, 321 Media Player contains revenue-generating ads.

Using GPL-licensed software for commercial purposes is allowed providing the license terms are strictly adhered to. Kempf informs TF that VideoLAN doesn’t mind if this happens but in this case, the GPL is not being respected.

“A fork application which changes some things is an interesting thing, because they maybe have something to give back to our community. The application here, is just a parasite, and I think they are useless and dangerous,” Kempf says.

All that being said, turning VLC itself into adware is something the VideoLAN team is opposed to. In fact, according to questions answered by Kempf last September, the team turned down “several tens of millions of euros” to turn their media player into an ad-supported platform.

“Integrating crap, adware and spyware with VLC is not OK,” Kempf informs TF.

TorrentFreak contacted the developer of 321 Media Player for comment but at the time of publication, we were yet to receive a response. We also asked for a copy of the source code for 321 Media Player as the GPL requires, but that wasn’t forthcoming either.

In the meantime, it appears that a small army of Reddit users are trying to get something done about the ‘rogue’ app by reporting it as an “inappropriate copycat” to Google. Whether this will have any effect remains to be seen but according to Kempf, tackling these clone versions has proven extremely difficult in the past.

“We reported this application already more than three times and Google refuses to take it down,” he says.

“Our experience is that it is very difficult to take these kinds of apps down, even if they embed spyware or malware. Maybe it is because it makes money for Google.”

Finally, Kempf also points to the obviously named “Indian VLC Player” on Google Play. Another VLC clone with up to 500,000 downloads, this one appears to breach both copyright and trademark law.

“We remove applications that violate our policies, such as apps that are illegal,” a Google spokesperson informs TorrentFreak.

“We don’t comment on individual applications; you can check out our policies for more information.”

Update: The app has now been removed from Google Play

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons