Tag Archives: rover

Hunting for life on Mars assisted by high-altitude balloons

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/eclipse-high-altitude-balloons/

Will bacteria-laden high-altitude balloons help us find life on Mars? Today’s eclipse should bring us closer to an answer.


image c/o NASA / Ames Research Center / Tristan Caro

The Eclipse Ballooning Project

Having learned of the Eclipse Ballooning Project set to take place today across the USA, a team at NASA couldn’t miss the opportunity to harness the high-flying project for their own experiments.


The Eclipse Ballooning Project invited students across the USA to aid in the launch of 50+ high-altitude balloons during today’s eclipse. Each balloon is equipped with its own Raspberry Pi and camera for data collection and live video-streaming.

High-altitude ballooning, or HAB as it’s often referred to, has become a popular activity within the Raspberry Pi community. The lightweight nature of the device allows for high ascent, and its Camera Module enables instant visual content collection.
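A rough sketch of how a Raspberry Pi payload like this might capture stills during a flight, using the picamera library. The resolution, 30-second interval, and output path below are illustrative assumptions, not details of the eclipse payloads.

```python
# Minimal timelapse sketch for a Pi camera payload.
# Resolution, interval, and output directory are assumptions for illustration.
import time
from datetime import datetime

from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1920, 1080)
time.sleep(2)  # let the sensor settle before the first capture

try:
    while True:
        # timestamped filename, e.g. /home/pi/flight/20170821T171500.jpg
        filename = datetime.utcnow().strftime("/home/pi/flight/%Y%m%dT%H%M%S.jpg")
        camera.capture(filename)
        time.sleep(30)
finally:
    camera.close()
```

Live video-streaming would typically be handled separately, for example by piping the camera's H.264 output to a streaming server, but a loop like the one above covers the basic data-collection side.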

Life on Mars

image c/o Montana State University

The Eclipse Ballooning Project team, headed by Angela Des Jardins of Montana State University, was contacted by Jim Green, Director of Planetary Science at NASA, who hoped to piggyback on the project to run tests on bacteria in the Mars-like conditions the balloons would encounter in near space.

Into the stratosphere

At around -35 degrees Fahrenheit, with thinner air and harsher ultraviolet radiation, the conditions in the upper part of the Earth’s stratosphere are comparable to those on the surface of Mars. And during the eclipse, the moon will block some UV rays, making the environment in our stratosphere even more similar to the Martian one, ideal for NASA’s experiment.

So that the students taking part in the Eclipse Ballooning Project could help the scientists out, NASA sent them some small metal tags.


These tags contain samples of a kind of bacterium known as Paenibacillus xerothermodurans. Upon their return to the ground, the bacteria will be tested to see whether and how the high-altitude conditions affected them.

Life on Mars

Paenibacillus xerothermodurans is one of the most resilient bacterial species we know. The team at NASA wants to discover how the bacteria react to their flight in order to learn more about whether life on Mars could possibly exist. If the low temperature, UV rays, and air conditions cause the bacteria to mutate or indeed die, we can be pretty sure that the existence of living organisms on the surface of Mars is very unlikely.


What happens to the bacteria on the spacecraft and rovers we send to space? This experiment should provide some answers.

The eclipse

If you’re in the US, you might have a chance to witness the full solar eclipse today. And if you’re planning to watch, please make sure to take all precautionary measures. In a nutshell, don’t look directly at the sun. Not today, not ever.

If you’re in the UK, you can observe a partial eclipse, if the clouds decide to vanish. And again, take note of safety measures so you don’t damage your eyes.


You can also watch a live-stream of the eclipse via the NASA website.

If you’ve created an eclipse-viewing Raspberry Pi project, make sure to share it with us. And while we’re talking about eclipses and balloons, check here for our coverage of the 2015 balloon launches coinciding with the UK’s partial eclipse.

The post Hunting for life on Mars assisted by high-altitude balloons appeared first on Raspberry Pi.

Cloudflare Kicking ‘Daily Stormer’ is Bad News For Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-kicking-daily-stormer-is-bad-news-for-pirate-sites-170817/

“I woke up this morning in a bad mood and decided to kick them off the Internet.”

Those are the words of Cloudflare CEO Matthew Prince, who decided to terminate the account of controversial Neo-Nazi site Daily Stormer.

Bam. Gone. At least for a while.

Although many people are happy to see the site go offline, the decision is not without consequence. It goes directly against what many saw as the core values of the company.

For years on end, Cloudflare has been asked to remove terrorist propaganda, pirate sites, and other possibly unacceptable content. Each time, Cloudflare replied that it doesn’t take action without a court order. No exceptions.

“Even if it were able to, Cloudflare does not monitor, evaluate, judge or store content appearing on a third party website,” the company wrote just a few weeks ago, in its whitepaper on intermediary liability.

“We’re the plumbers of the internet. We make the pipes work but it’s not right for us to inspect what is or isn’t going through the pipes,” Cloudflare CEO Matthew Prince himself said not too long ago.

“If companies like ours or ISPs start censoring there would be an uproar. It would lead us down a path of internet censors and controls akin to a country like China,” he added.

The same arguments were repeated in different contexts, over and over.

This strong position was also one of the reasons why Cloudflare was dragged into various copyright infringement court cases. In these cases, the company repeatedly stressed that removing a site from Cloudflare’s service would not make infringing content disappear.

Pirate sites would just require a simple DNS reconfiguration to continue their operation, after all.

“[T]here are no measures of any kind that CloudFlare could take to prevent this alleged infringement, because the termination of CloudFlare’s CDN services would have no impact on the existence and ability of these allegedly infringing websites to continue to operate,” it said.

That comment looks rather misplaced now that the CEO of the same company has decided to “kick” a website “off the Internet” after an emotional, but deliberate, decision.

Taking a page from Cloudflare’s (old) playbook we’re not going to make any judgments here. Just search Twitter or any social media site and you’ll see plenty of opinions, both for and against the company’s actions.

We do have a prediction though. During the months and years to come, Cloudflare is likely to be dragged into many more copyright lawsuits, and when they are, their counterparts are going to bring up Cloudflare’s voluntary decision to kick a website off the Internet.

Unless Cloudflare suddenly decides to pull all pirate sites from its service tomorrow, of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] An alternative device-tree source language

Post Syndicated from corbet original https://lwn.net/Articles/730217/rss

Device trees have become, in a relatively short time, the preferred way to inform the kernel of the available hardware on systems where that hardware is not discoverable — most ARM systems, among others. In short, a device tree is a textual description of a system’s hardware that is compiled to a simple binary format and passed to the kernel by the bootloader. The source format for device trees has been established for a long time — longer than Linux has been using it. Perhaps it’s time for a change, but a proposal for a new device-tree source format has generated a fair amount of controversy in the small corner of the community that concerns itself with such things.

Foxtel Targets 128 Torrent & Streaming Domains For Blocking Down Under

Post Syndicated from Andy original https://torrentfreak.com/foxtel-targets-128-torrent-streaming-domains-for-blocking-down-under-170808/

In 2015, Australia passed controversial legislation which allows ‘pirate’ sites located on servers overseas to be blocked at the ISP level.

“These offshore sites are not operated by noble spirits fighting for the freedom of the internet, they are run by criminals who profit from stealing other people’s creative endeavors,” commented then Foxtel chief executive Richard Freudenstein.

Before, during and after its introduction, Foxtel has positioned itself as a keen supporter of the resulting Section 115a of the Copyright Act. And in December 2016, with the law firmly in place, it celebrated success after obtaining a blocking injunction against The Pirate Bay, Torrentz, TorrentHound and isoHunt.

In May, Foxtel filed a new application, demanding that almost 50 local ISPs block what was believed to be a significant number of ‘pirate’ sites not covered by last year’s order.

Today the broadcasting giant was back in Federal Court, Sydney, to have this second application heard under Section 115a. It was revealed that the application contains 128 domains, each linked to movie and TV piracy.

According to ComputerWorld, the key sites targeted are as follows: YesMovies, Vumoo, LosMovies, CartoonHD, Putlocker, Watch Series 1, Watch Series 2, Project Free TV 1, Project Free TV 2, Watch Episodes, Watch Episode Series, Watch TV Series, The Dare Telly, Putlocker9.is, Putlocker9.to, Torlock and 1337x.

The Foxtel application targets both torrent and streaming sites but given the sample above, it seems that the latter is currently receiving the most attention. Streaming sites are appearing at a rapid rate and can even be automated to some extent, so this battle could become extremely drawn out.

Indeed, Justice Burley, who presided over the case this morning, described the website-blocking process (which necessarily includes targeting mirrors, proxies and replacement domains) as akin to “whack-a-mole”.

“Foxtel sees utility in orders of this nature,” counsel for Foxtel commented in response. “It’s important to block these sites.”

In presenting its application, Foxtel conducted live demonstrations of Yes Movies, Watch Series, 1337x, and Putlocker. It focused on the Australian prison drama series Wentworth, which has been running on Foxtel since 2013, but also featured tests of Game of Thrones.

Justice Burley told the court that since he’s a fan of the series, a spoiler-free piracy presentation would be appreciated. If the hearing had taken place a few days earlier, spoilers may have been possible. Last week, the latest episode of the show leaked onto the Internet from an Indian source before its official release.

Justice Burley’s decision will be handed down at a later date, but it’s unlikely there will be any serious problems with Foxtel’s application. After objecting to many aspects of blocking applications in the past, Australia’s ISPs no longer appear during these hearings. They are now paid AU$50 per domain blocked by companies such as Foxtel and play little more than a technical role in the process.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Lawyer Says He Was Deceived Into BitTorrent Copyright Trolling Scheme

Post Syndicated from Andy original https://torrentfreak.com/lawyer-says-he-was-deceived-into-bittorrent-copyright-trolling-scheme-170807/

For more than a decade, companies around the world have been trying to turn piracy into profit. For many this has meant the development of “copyright trolling” schemes, in which alleged pirates are monitored online and then pressured into cash settlements.

The shadowy nature of this global business means that its true scale will never be known but due to the controversial activities of some of the larger players, it’s occasionally possible to take a peek inside their operations. One such opportunity has just raised its head.

According to a lawsuit filed in California, James Davis is an attorney licensed in Oregon and California. Until two years ago, he was largely focused on immigration law. However, during March 2015, Davis says he was approached by an old classmate with an opportunity to get involved in a new line of business.

That classmate was Oregon lawyer Carl Crowell, who over the past several years has been deeply involved in copyright-trolling cases, including a deluge of Dallas Buyers Club and London Has Fallen litigation. He envisioned a place for Davis in the business.

Davis seemed to find the proposals attractive and became seriously involved in the operation, filing 58 cases on behalf of the companies involved. In common with similar cases, the lawsuits were brought in the name of the entities behind each copyrighted work, such as Dallas Buyers Club, LLC and LHF Productions, Inc.

In time, however, things started to go wrong. Davis claims that he discovered that Crowell, in connection with and on behalf of the other named defendants, “misrepresented the true nature of the Copyright Litigation Campaign, including the ownership of the works at issue and the role of the various third-parties involved in the litigation.”

Davis says that Crowell and the other defendants (which include the infamous Germany-based troll outfit Guardaley) made false representations to secure his participation, while holding back other information that might have made him think twice about becoming involved.

“Crowell and other Defendants withheld numerous material facts that were known to Crowell and the knowledge of which would have cast doubt on the value and ethical propriety of the Copyright Litigation Campaign for Mr. Davis,” the lawsuit reads.

Davis goes on to allege serious misconduct, including that representations regarding ownership of various entities were false and used to deceive him into participating in the scheme.

As time went on, Davis said he had increasing doubts about the operation. Then, in August 2016 as a result of a case underway in California, he began asking questions which resulted in him uncovering additional facts. These undermined both the representations of the people he was working for and his own belief in the “value and ethical propriety of the Copyright Litigation Campaign,” the lawsuit claims.

Davis said this spurred him on to “aggressively seek further information” from Crowell and other people involved in the scheme, including details of its structure and underlying support. He says all he received were “limited responses, excuses, and delays.”

The case was later dismissed by mutual agreement of the parties involved but of course, Davis’ concerns about the underlying case didn’t come to the forefront until the filing of his suit against Crowell and the others.

Davis says that following a meeting in Santa Monica with several of the main players behind the litigation campaign, he decided its legal and factual basis were unsound. He later told Crowell and Guardaley that he was withdrawing from their project.

As a result of the misrepresentations made to him, Davis is now suing the defendants on a number of counts.

“Defendants’ business practices are unfair, unlawful, and fraudulent. Davis has suffered monetary damage as a direct result of the unfair, unlawful, and fraudulent business practices set forth herein,” the lawsuit reads.

Requesting a trial by jury, Davis is seeking actual damages, statutory damages, punitive or treble damages “in the amount of no less than $300,000.”

While a payment of that not insignificant amount would clearly satisfy Davis, the prospect of a trial in which the Guardaley operation is laid bare would be preferable when the interests of its thousands of previous targets are considered.

Only time will tell how things will pan out but like the vast majority of troll cases, this one too seems destined to be settled in private, to ensure the settlement machine keeps going.

Note: The case was originally filed in June, only to be voluntarily dismissed. It has now been refiled in state court.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Piracy Brings a New Young Audience to Def Leppard, Guitarist Says

Post Syndicated from Andy original https://torrentfreak.com/piracy-brings-a-new-young-audience-to-def-leppard-guitarist-says-170803/

For decades the debate over piracy has raged, with bands and their recording industry paymasters on one side and large swathes of the public on the other. Throughout, however, there have been those prepared to recognize that things aren’t necessarily black and white.

Over the years, many people have argued that access to free music has helped them broaden their musical horizons, dabbling in new genres and discovering new bands. This, they argue, would have been a prohibitively expensive proposition if purchases were forced on a trial and error basis.

Of course, many labels and bands believe that piracy amounts to theft, but some are prepared to put their heads above the parapet with an opinion that doesn’t necessarily toe the party line.

Formed in 1977 in Sheffield, England, rock band Def Leppard have sold more than 100 million records worldwide and have two RIAA diamond-certified albums to their name. But unlike Metallica, who have sold a total of 116 million records and were famous for destroying Napster, Def Leppard take an entirely friendlier attitude to piracy.

In an interview with Ultimate Classic Rock, Def Leppard guitarist Vivian Campbell has been describing why he believes piracy has its upsides, particularly for enduring bands that are still trying to broaden their horizons.

“The way the band works is quite extraordinary. In recent years, we’ve been really fortunate that we’ve seen this new surge in our popularity. For the most part, that’s fueled by younger people coming to the shows,” Campbell said.

“We’ve been seeing it for the last 10, 12 or 15 years, you’d notice younger kids in the audience, but especially in the last couple of years, it’s grown exponentially. I really do believe that this is the upside of music piracy.”

Def Leppard celebrate their 40th anniversary this year, and the fact that they’re still releasing music and attracting a new audience is a real achievement for a band whose original fans only had access to vinyl and cassette tapes. But Campbell says the band isn’t negatively affected by new technology, nor people using it to obtain their content for free.

“You know, people bemoan the fact that you can’t sell records anymore, but for a band like Def Leppard at least, there is a silver lining in the fact that our music is reaching a whole new audience, and that audience is excited to hear it, and they’re coming to the shows. It’s been fantastic,” he said.

While packing out events is every band’s dream, Campbell believes that the enthusiasm these fresh fans bring to the shows is actually helping the band to improve.

“There’s a whole new energy around Leppard, in fact. I think we’re playing better than we ever have. Which you’d like to think anyway. They always say that musicians, unlike athletes, you’re supposed to get better.

“I’m not sure that anyone other than the band really notices, but I notice it and I know that the other guys do too. When I play ‘Rock of Ages’ for the 3,000,000th time, it’s not the song that excites me, it’s the energy from the audience. That’s what really lifts our performance. When you’ve got a more youthful audience coming to your shows, it only goes in one direction,” he concludes.

The thought of hundreds or even thousands of enthusiastic young pirates energizing an aging Def Leppard to the band’s delight is a real novelty. However, with so many channels for music consumption available today, are these new followers necessarily pirates?

One only has to visit Def Leppard’s official YouTube channel to see that despite being born in the late fifties and early sixties, the band are still regularly posting new content to keep fans up to date. So, given the consumption habits of young people these days, YouTube seems a more likely driver of new fans than torrents, for example.

That being said, Def Leppard are still humming along nicely on The Pirate Bay. The site lists a couple of hundred torrents, some uploaded more recently, some many years ago, including full albums, videos, and even entire discographies.

Arrr, we be Def Leppaaaaaard

Interestingly, Campbell hasn’t changed his public opinion on piracy for more than a decade. Back in 2007 he was saying similar things, and in 2011 he admitted that there were plenty of “kids out there” with the entire Def Leppard collection on their iPods.

“I am pretty sure they didn’t all pay for it. But, maybe those same kids will buy a ticket and come to a concert,” he said.

“We do not expect to sell a lot of records, we are just thankful to have people listening to our music. That is more important than having people pay for it. It will monetize itself later down the line.”

With sites like YouTube perhaps driving more traffic to bands like Def Leppard than pure piracy these days (and even diverting people away from piracy itself), it’s interesting to note that there’s still controversy around people getting paid for music.

With torrent sites slowly dropping off the record labels’ hitlists, one is much more likely to hear them criticizing YouTube itself for not giving the industry a fair deal.

Still, bands like Def Leppard seem happy, so it’s not all bad news.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TVAddons Returns, But in Ugly War With Canadian Telcos Over Kodi Addons

Post Syndicated from Andy original https://torrentfreak.com/tvaddons-returns-ugly-war-canadian-telcos-kodi-addons-170801/

After Dish Network filed a lawsuit against TVAddons in Texas, several high-profile Kodi addons took the decision to shut down. Soon after, TVAddons itself went offline.

In the weeks that followed, several TVAddons-related domains were signed over (1,2) to a Canadian law firm, a mysterious situation that didn’t dovetail well with the US-based legal action.

TorrentFreak can now reveal that the shutdown of TVAddons had nothing to do with the US action and everything to do with a separate lawsuit filed in Canada.

The complaint against TVAddons

Two months ago on June 2, a collection of Canadian telecoms giants including Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, filed a complaint in Federal Court against Montreal resident, Adam Lackman, the man behind TVAddons.

The 18-page complaint details the plaintiffs’ case against Lackman, claiming that he communicated copyrighted TV shows including Game of Thrones, Prison Break, The Big Bang Theory, America’s Got Talent, Keeping Up With The Kardashians and dozens more, to the public in breach of copyright.

The key claim is that Lackman achieved this by developing, hosting, distributing or promoting Kodi add-ons.

Adam Lackman, the man behind TVAddons (@adam.lackman on Instagram)

A total of 18 major add-ons are detailed in the complaint including 1Channel, Exodus, Phoenix, Stream All The Sources, SportsDevil, cCloudTV and Alluc, to name a few. Also under the spotlight is the ‘FreeTelly’ custom Kodi build distributed by TVAddons alongside its Kodi configuration tool, Indigo.

“[The defendant] has made the [TV shows] available to the public by telecommunication in a way that allows members of the public to have access to them from a place and at a time individually chosen by them…consequently infringing the Plaintiffs’ copyright…in contravention of sections 2.4(1.1), 3(1)(f) and 27(1) of the Copyright Act,” the complaint reads.

The complaint alleges that Lackman “induced and/or authorized users” of the FreeTelly and Indigo tools to carry out infringement by his handling and promotion of infringing add-ons, including through TVAddons.ag and Offshoregit.com, in contravention of sections 3(1)(f) and 27(1) of the Copyright Act.

“Approximately 40 million unique users located around the world are actively using Infringing Addons hosted by TVAddons every month, and approximately 900,000 Canadian households use Infringing Add-ons to access television content. The amount of users of Infringing add-ons hosted TVAddons is constantly increasing,” the complaint adds.

To limit the harm allegedly caused by TVAddons, the complaint asked for interim, interlocutory, and permanent injunctions restraining Lackman and associates from developing, promoting or distributing any of the allegedly infringing add-ons or software. On top, the plaintiffs requested punitive and exemplary damages, plus costs.

The interim injunction and Anton Piller Order

Following the filing of the complaint, on June 9 the Federal Court handed down a time-limited interim injunction against Lackman which restrained him from various activities in respect of TVAddons. The process took place ex parte, meaning in secret, without Lackman being able to mount a defense.

The Court also authorized a bailiff and computer forensics experts to take control of Internet domains including TVAddons.ag and Offshoregit.com plus social media and hosting provider accounts for a period of 14 days. These were transferred to Daniel Drapeau at DrapeauLex, an independent court-appointed supervising counsel.

The order also contained an Anton Piller order, a civil search warrant that grants plaintiffs no-notice permission to enter a defendant’s premises in order to secure and copy evidence to support their case, before it can be destroyed or tampered with.

The order covered not only data related to the TVAddons platform, such as operating and financial details, revenues, and banking information, but everything in Lackman’s possession.

The Court ordered the telecoms companies to inform Lackman that the case against him is a civil proceeding and that he could deny entry to his property if he wished. However, that option would put him in breach of the order and would place him at risk of being fined or even imprisoned. Catch 22 springs to mind.

The Court did, however, put limits on the number of people that could be present during the execution of the Anton Piller order (ostensibly to avoid intimidation) and ordered the plaintiffs to deposit CAD$50,000 with the Court, in case the order was improperly executed. That decision would later prove an important one.

The search and interrogation of TVAddons’ operator

On June 12, the order was executed and Lackman’s premises were searched for more than 16 hours. For nine hours he was interrogated and effectively denied his right to remain silent since non-cooperation with an Anton Piller order amounts to contempt of court. The Court’s stated aim of not intimidating Lackman failed.

The TVAddons operator informs TorrentFreak that he heard a disturbance in the hallway outside and spotted several men hiding on the other side of the door. Fearing for his life, Lackman called the police and when they arrived he opened the door. At this point, the police were told by those in attendance to leave, despite Lackman’s protests.

Once inside, Lackman was told he had an hour to find a lawyer, but couldn’t use any electronic device to get one. Throughout the entire day, Lackman says he was reminded by the plaintiffs’ lawyer that he could be held in contempt of court and jailed, even though he was always cooperating.

“I had to sit there and not leave their sight. I was denied access to medication,” Lackman told TorrentFreak. “I had a doctor’s appointment I was forced to miss. I wasn’t even allowed to call and cancel.”

In papers later filed with the court by Lackman’s team, the Anton Piller order was described as a “bombe atomique” since TVAddons had never been served with so much as a copyright takedown notice in advance of this action.

The Anton Piller controversy

Anton Piller orders are only valid when passing a three-step test: when there is a strong prima facie case against the respondent, the damage – potential or actual – is serious for the applicant, and when there is a real possibility that evidence could be destroyed.

For Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, serious problems emerged on at least two of these points after the execution of the order.

For example, TVAddons carried more than 1,500 add-ons yet only 1% of those add-ons were considered to be infringing, a tiny number in the overall picture. Then there was the not insignificant problem with the exchange that took place during the hearing to obtain the order, during which Lackman was not present.

Clearly, the securing of existing evidence wasn’t the number one priority.

Plaintiffs: We want to destroy TVAddons

And the problems continued.

No right to remain silent, no right to consult a lawyer

The Anton Piller search should have been carried out between 8am and 8pm but actually carried on until midnight. As previously mentioned, Adam Lackman was effectively denied his right to remain silent and was forbidden from getting advice from his lawyer.

None of this sat well with the Honourable B. Richard Bell during a subsequent Federal Court hearing to consider the execution of the Anton Piller order.

“It is important to note that the Defendant was not permitted to refuse to answer questions under fear of contempt proceedings, and his counsel was not permitted to clarify the answers to questions. I conclude unhesitatingly that the Defendant was subjected to an examination for discovery without any of the protections normally afforded to litigants in such circumstances,” the Judge said.

“Here, I would add that the ‘questions’ were not really questions at all. They took the form of orders or directions. For example, the Defendant was told to ‘provide to the bailiff’ or ‘disclose to the Plaintiffs’ solicitors’.”

Evidence preservation? More like a fishing trip

But shockingly, the interrogation of Lackman went much, much further. TorrentFreak understands that the TVAddons operator was given a list of 30 names of people that might be operating sites or services similar to TVAddons. He was then ordered to provide all of the information he had on those individuals.

Of course, people tend to guard their online identities so it’s possible that the information provided by Lackman will be of limited use, but Judge Bell was not happy that the Anton Piller order was abused by the plaintiffs in this way.

“I conclude that those questions, posed by Plaintiffs’ counsel, were solely made in furtherance of their investigation and constituted a hunt for further evidence, as opposed to the preservation of then existing evidence,” he wrote in a June 29 order.

But he was only just getting started.

Plaintiffs unlawfully tried to destroy TVAddons before trial

The Judge went on to note that from their own mouths, the Anton Piller order was purposely designed by the plaintiffs to completely shut down TVAddons, despite the fact that only a tiny proportion of the add-ons available on the site were allegedly used to infringe copyright.

“I am of the view that [the order’s] true purpose was to destroy the livelihood of the Defendant, deny him the financial resources to finance a defense to the claim made against him, and to provide an opportunity for discovery of the Defendant in circumstances where none of the procedural safeguards of our civil justice system could be engaged,” Judge Bell wrote.

As noted, plaintiffs must also have a “strong prima facie case” to obtain an Anton Piller order but Judge Bell says he’s not convinced that one exists. Instead, he praised the “forthright manner” of Lackman, who successfully argued that Kodi addons find content in much the same way that a Google search does.

So why the big turn around?

Judge Bell said that while the prima facie case may have appeared strong before the judge who heard the matter ex parte (without Lackman being present to defend himself), the subsequent adversarial hearing undermined it, to the point that it no longer met the threshold.

As a result of these failings, Judge Bell declared the Anton Piller order unlawful. Things didn’t improve for the plaintiffs on the injunction front either.

The Judge said that he believes that Lackman has “an arguable case” that he is not violating the Copyright Act by merely providing addons and that TVAddons is his only source of income. So, if an injunction to close the site was granted, the litigation would effectively be over, since the plaintiffs already admitted that their aim was to neutralize the platform.

If the platform was neutralized, Lackman could no longer earn money from the site, which would harm his ability to mount a defense.

“In considering the balance of convenience, I also repeat that the plaintiffs admit that the vast majority of add-ons are non-infringing. Whether the remaining approximately 1% are infringing is very much up for debate. For these reasons, I find the balance of convenience favors the defendant, and no interlocutory injunction will be issued,” the Judge declared.

With the Anton Piller order declared unlawful and no interlocutory injunction (one effective until the final determination of the case) handed down, things were about to get worse for the telecoms companies.

They had paid CAD$50,000 to the court in security in case things went wrong with the Anton Piller order, so TVAddons was entitled to compensation from that amount. That would be helpful, since at this point TVAddons had already run up CAD$75,000 in legal expenses.

On top, the Judge told independent counsel to give everything seized during the Anton Piller search back to Lackman.

The order to return items previously seized

But things were far from over. Within days, the telecoms companies took the decision to the Court of Appeal, asking for a stay of execution (a delay in carrying out a court order) to retain possession of items seized, including physical property, domains, and social media accounts.

Mid-July the appeal was granted and certain confidentiality clauses affecting independent counsel (including Daniel Drapeau, who holds the TVAddons’ domains) were ordered to be continued. However, considering the problems with the execution of the Anton Piller order, Bell Canada, TVA, Videotron and Rogers et al, were ordered to submit an additional security bond of CAD$140,000, on top of the CAD$50,000 already deposited.

So the battle continues, and continue it will

Speaking with TorrentFreak, Adam Lackman says that he has no choice but to fight the telecoms companies since not doing so would result in a loss by default judgment. Interestingly, both he and one of the judges involved in the case thus far believe he has an arguable case.

Lackman says that his activities are protected under the Canadian Copyright Act, specifically subparagraph 2.4(1)(b) which states as follows:

A person whose only act in respect of the communication of a work or other subject-matter to the public consists of providing the means of telecommunication necessary for another person to so communicate the work or other subject-matter does not communicate that work or other subject-matter to the public;

Of course, finding out whether that’s indeed the case will be a costly endeavor.

“It all comes down to whether we will have the financial resources necessary to mount our defense and go to trial. We won’t have ad revenue coming in, since losing our domain names means that we’ll lose the majority of our traffic for quite some time into the future,” Lackman told TF in a statement.

“We’re hoping that others will be as concerned as us about big companies manipulating the law in order to shut down what they see as competition. We desperately need help in financially supporting our legal defense, we cannot do it alone.

“We’ve run up a legal bill of over $100,000 to date. We’re David, and they are four Goliaths with practically unlimited resources. If we lose, it will mean that new case law is made, case law that could mean increased censorship of the internet.”

In the hope of getting support, TVAddons has launched a fundraiser campaign and in the meantime, a new version of the site is back on a new domain, TVAddons.co.

Given TVAddons’ line of defense, the nature of both the platform and Kodi addons, and the fact that there has already been a serious abuse of process during evidence preservation, this is now one of the most interesting and potentially influential copyright cases underway anywhere today.

TVAddons is being represented by Éva Richard, Hilal Ayoubi and Karim Renno in Canada, plus Erin Russell and Jason Sweet in the United States.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

5…4…3…2…1…SPACESHIP BUNK BED!

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/spaceship-bunk-bed/

Many of us have created basic forts in our childhood bedrooms using pillows, sheets, and stuffed toys. Pete Dearing’s sons, meanwhile, get to play and sleep in an incredible spaceship bunk bed.

A spaceship bunk bed with functional lights, levers, buttons, and knobs.

I’m not jealous at all.

Not. At. All.


All the best beds have LEDs.

Building a spaceship bunk bed

Pete purchased plans for a spacecraft-shaped bunk bed online, and set out to build its MDF frame. Now, I don’t know about you, but for young me, having a bunk bed shaped like a spaceship would have been enough – tiny humans have such incredible imagination. But it wasn’t enough for Pete. He had witnessed his children’s obsession with elevator buttons, mobile phones, and the small control panel he’d made for them using switches and an old tool box. He knew he had to go big or go home.


While he was cutting out pieces for the bed frame, Pete asked the boys for some creative input, and then adjusted the bed’s plans to include a functional cockpit and extra storage (for moon boots, spacesuits, and flags for staking claims, no doubt).

Wiring a spaceship bunk bed

After realising he hadn’t made enough allowance for the space taken up by the cockpit’s dials, levers, and switches, Pete struggled a little to fit everything in place inside the bunk bed.


“Ground Control to Major Sleepy…”

But it all worked out, and the results were lights, buttons, and fun aplenty. Finally, as icing on the build’s proverbial cake, Pete added sound effects, powered by a Raspberry Pi, and headsets fitted with microphones.


“Red Leader standing by…”

The electronics of the build run on a 12V power supply. To ensure his boys’ safety, and so that they will actually be able to sleep, Pete integrated a timer for the bed’s ‘entertainment system’.
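Here is a minimal sketch of how the Pi side of a build like this could hang together, using gpiozero and pygame: a cockpit button plays a sound effect, but only during an allowed window, echoing Pete’s timer idea. The GPIO pin, sound file path, and hours below are assumptions, not details taken from Pete’s wiring.

```python
# Illustrative sketch: one cockpit button that plays a sound effect,
# gated by a simple "awake hours" window so the bed goes quiet at night.
# Pin number, sound file, and hours are assumptions, not from the real build.
from datetime import datetime
from signal import pause

import pygame
from gpiozero import Button

pygame.mixer.init()
launch_sound = pygame.mixer.Sound("/home/pi/sounds/launch.wav")  # assumed path

AWAKE_HOURS = range(7, 20)  # entertainment system active 07:00-19:59 (assumed)

def on_press():
    # Only respond while the 'entertainment system' is switched on
    if datetime.now().hour in AWAKE_HOURS:
        launch_sound.play()

button = Button(17)            # assumed GPIO pin for one cockpit button
button.when_pressed = on_press

pause()                        # wait for button presses indefinitely
```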

Find more information about the spaceship bunk bed and photos of the project here.

So where do I get mine?

If you want to apply to be adopted by Pete, you can head to www.alex-is-first-in-line.com/seriously_me_first. Alternatively, you could build your own fantastic Pi-powered bed, and add lights and sounds of your choosing. How about a Yellow Submarine bed with a dashboard of Beatles songs? Or an X-Wing bed with flight and weapon controls? Oh, oh, how about a bed shaped like one of the cars from Jurassic Park, or like a Top Gun jet?

Yup…I definitely need a new bed.

While I go take measurements and get the power tools out, why not share your own ideas with us in the comments? Have you pimped your kid’s room with a Raspberry Pi (maybe like this)? Or do you have plans to incorporate lights and noise into something wonderful you’re making for a friend or relation? We want to know.

And I want a spaceship bunk bed!

The post 5…4…3…2…1…SPACESHIP BUNK BED! appeared first on Raspberry Pi.

Russia Bans ‘Uncensored’ VPNs, Proxies and TOR

Post Syndicated from Ernesto original https://torrentfreak.com/russia-bans-unrestricted-vpns-proxies-and-tor-in-russia-170731/

Russia has swiftly become a world leader when it comes to website blocking. Tens of thousands of websites are blocked in the country on copyright infringement and a wide range of other grounds.

However, as is often the case, not all citizens willingly subject themselves to this type of restriction. On the contrary, many use proxies or anonymizing services such as VPNs and TOR to gain access.

In recent months, the Russian Government has worked on legislation to crack down on these circumvention tools as well, and local media report that President Vladimir Putin has now signed the proposed bill into law.

Under the new law, local telecoms watchdog Rozcomnadzor will keep a list of banned domains while identifying sites, services, and software that provide access to them. Rozcomnadzor will then try to contact the operators of the services, urging them to ban the blocked websites, or face the same fate.

The FSB and the Ministry of Internal Affairs will be tasked with monitoring offenses, which they will then refer to the telecoms watchdog.

In addition to targeting the circumvention sites, services, and their hosts, the bill targets search engines as well.

Search engines will be required to remove links to blocked resources from their results, as these would encourage people to access prohibited material. Search engines that fail to comply with the new requirements face a $12,400 penalty per breach.

Local search giant Yandex previously spoke out against the far-reaching requirements, describing them as unnecessary.

“We believe that the laying of responsibilities on search engines is superfluous,” a Yandex spokesperson said.

“Even if the reference to a [banned] resource does appear in search results, it does not mean that by clicking on it the user will get access, if it was already blocked by ISPs or in any other ways,” the company added.

The new legislation has not been without controversy. Earlier this month many Russians protested the plans, but this had little effect on the final vote. In the Duma, the bill was approved by 373 deputies; only two voted against the plans, and another two abstained.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Surge of Threatening Piracy Letters Concerns Finnish Authorities

Post Syndicated from Ernesto original https://torrentfreak.com/massive-surge-in-threatening-piracy-letters-concerns-finnish-authorities-170726/

Starting three years ago, copyright holders began sending out thousands of settlement letters to alleged pirates in Finland, a practice often described as copyright trolling.

In a country with a population of just over five million, copyright holders have cast their net wide. According to local reports, Internet providers handed over details of one hundred thousand customers last year alone.

This practice has not been without controversy. As the settlement letters were sent out, recipients – including some pensioners – started to complain. Many of the accused denied downloading any pirated material but felt threatened by the letters.

Thus far, complaints have been filed with the Market Court, the Finnish Communications Regulatory Authority, the Consumer Authority, and the Ministry of Education and Culture.

In May, the Ministry of Education set up a working group to create a set of ‘best practices’ for copyright enforcement. The working group includes, among others, Internet providers, and outfits that are involved in sending the influx of settlement letters.

Anna Vuopala, a government counselor at the Ministry of Education and Culture, told Kauppalehti that rightsholders should act within the boundaries of the law.

“We strive to create good practices [for copyright enforcement] and eliminate practices that are contrary to law,” says Vuopala, who’s leading the working group.

If the parties involved can’t reach an agreement on how to proceed, the Government will consider changing existing copyright law to defuse the situation. What these changes could be is unclear at this point.

Earlier this year the Finnish market court already dealt a blow to local copyright trolls. In a unanimous ruling, seven judges ruled that the privacy of alleged BitTorrent pirates outweighs the evidence provided by the rightsholders.

While it was clear that copyright infringement was taking place, the rightsholders failed to show that it was significant enough to hand over the requested personal details.

Although this decision supports the rights of those who are falsely accused, the Government believes that a set of good practices is still needed to prevent future excesses and controversy.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Kodi Security Risk Emerges After TVAddons Shutdown

Post Syndicated from Andy original https://torrentfreak.com/kodi-security-risk-emerges-after-tvaddons-shutdown-170723/

The popularity of the entirely legal Kodi media player, formerly known as XBMC, has soared in recent years.

Controversial third-party addons that provide access to infringing content have thrust Kodi into the mainstream and the product is now a household name.

Until recently, TVAddons.ag was the leading repository for these addons. During March, the platform had 40 million unique users connected to the site’s servers, together transferring an astounding petabyte of addons and updates.

Everything was going well until news broke last month that the people behind TVAddons were being sued in a federal court in Texas. Shortly after, the site went dark and hasn’t been back since.

This was initially a nuisance to the millions of Kodi devices that relied on TVAddons for their addons and updates. With the site gone, none were forthcoming. However, the scene recovered relatively quickly and for users who know what they’re doing, addons are now available from elsewhere.

That being said, something very unusual happened this week. Out of the blue, several key TVAddons domains were transferred to a Canadian law firm. TVAddons, who have effectively disappeared, made no comment. The lawyer involved, Daniel Drapeau, ignored requests for an explanation.

While that’s unusual enough, there’s a bigger issue at play here for millions of former TVAddons users who haven’t yet wiped their devices or upgraded them to work with other repositories.

Without going into huge technical detail, any user of an augmented Kodi device that relied on TVAddons domains (TVAddons.ag, Offshoregit.com) for updates can be reasonably confident that the domains their device is now accessing are not controlled by TVAddons anymore. That is not good news.

When a user installs a Kodi addon or obtains an update, the whole system is based on human trust. People are told about a trustworthy source (repository or ‘repo’) and they feel happy getting their addons and updates from it.

However, any person in control of a repo can make a Kodi addon available that can do pretty much anything. When that’s getting free movies, people tend to be happy, but when that’s making a botnet out of set-top boxes, enthusiasm tends to wane a bit.

If the penny hasn’t yet dropped, consider this.

TVAddons’ domains are now being run by a law firm which refuses to answer questions but has the power to do whatever it likes with them, within the law of course. Currently, the domains are lying dormant and aren’t doing anything nefarious, but if that position changes, millions of people will have absolutely no idea anything is wrong.

TorrentFreak spoke to Kodi Project Manager Nathan Betzen who agrees that the current security situation probably isn’t what former TVAddons users had in mind.

“These are unsandboxed Python addons. The person [in control of] the repo could do whatever they wanted. You guys wrote about the addon that created a DDoS event,” Betzen says.

“If some malware author wanted, he could easily install a watcher that reports back the user’s IP address and everything they were doing in Kodi. If the law firm is actually an anti-piracy group, that seems like the likeliest thing I can think of,” he adds.

While nothing can be ruled out, it seems more likely that the law firm in question has taken control of TVAddons’ domains in order to put them out of action, potentially as part of a settlement in the Dish Network lawsuit. However, since it refuses to answer any questions, everything is open to speculation.

Another possibility is that the domains are being held pending sale, which then raises questions over who the buyer might be and what their intentions are. The bottom line is we simply do not know and since nobody is talking, it might be prudent to consider the worst case scenario.

“If it’s just a holding group, then people [in control of the domain/repo] could do whatever they can think of. Want a few million incredibly inefficient bit mining boxes?” Betzen speculates.

While this scenario is certainly a possibility, one would at least like to think of it as unlikely. That being said, plenty of Internet security fails can be attributed to people simply hoping for the best when things go bad. That rarely works.

On the plus side, Betzen says that since Python code is usually pretty easy to read, any nefarious action could be spotted by vigilant members of the community fairly quickly. However, Martijn Kaijser from Team Kodi warns that it’s possible to ship precompiled Python code instead of the readable versions.

“You can’t even see what’s in the Python files and what they do,” he notes.
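As a rough illustration of Kaijser’s point, Python’s standard py_compile module can turn a readable script into a bytecode file that can be shipped without its source; the addon.py filename below is hypothetical, not a real TVAddons file.

```python
# Rough illustration: compiling a readable Python script to bytecode.
# "addon.py" is a hypothetical addon module name.
import py_compile

# Create a tiny example script, then compile it to bytecode.
with open("addon.py", "w") as f:
    f.write("print('hello from the addon')\n")

# Produces addon.pyc; the human-readable addon.py need not be distributed.
py_compile.compile("addon.py", cfile="addon.pyc")

# The resulting file starts with a bytecode header, not readable source:
with open("addon.pyc", "rb") as f:
    print(f.read(16))
```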

Finally, there’s a possibility that TVAddons may be considering some kind of comeback. Earlier this week a new domain – TVAddons.co – was freshly registered, just after the old domains shifted to the law firm. At this stage, however, nothing is known about the site’s plans.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Google Removes Torrent Sites From ‘Results Carousel’

Post Syndicated from Ernesto original https://torrentfreak.com/google-removes-torrent-sites-from-results-carousel-100722/

Two weeks ago we noticed a ‘handy’ feature where Google highlighted various torrent sites in its search results.

People who typed “best torrent sites” into the search box would see a reel of popular sites such as The Pirate Bay and RARBG in the results, featured with their official logos and all.

Google employees obviously didn’t curate the list themselves. The sites appear thanks to a Google feature called the “results carousel,” which is generated by an algorithm. Still, considering the constant criticism the search engine faces from rightsholders, it’s a sensitive topic.

The torrent site carousel

It appears that the search engine itself wasn’t very happy with the featured search results either. This week, the torrent sites were quietly banned from the search carousel feature. According to the company, it wasn’t working as intended.

“We have investigated this particular issue and determined that this results carousel wasn’t working in the intended manner, and we have now fixed the issue,” a Google spokesperson informed TorrentFreak.

Although Google carefully avoids the words copyright and piracy in its comments, it’s quite obvious what motivated this decision. The company doesn’t want to highlight any pirate sites, to avoid yet another copyright controversy.

That the intervention was triggered by “piracy” concerns is backed up by another change. While various “streaming sites” are still prominently listed in a search carousel, the pirate sites were carefully stripped from there as well.

A few days ago it still listed sites including Putlocker, Alluc, and Movie4k.to, but only legitimate streaming portals remain on the list today. That change definitely required some human intervention.

Only ‘legitimate’ streaming portals now

This isn’t the first time that Google’s “rich” search results have featured pirate sites. The same thing happened in the past when the search engine displayed pirate site ratings of movies, next to ratings from regular review sites such as IMDb and Rotten Tomatoes.

Apparently, Google’s search engine algorithms need some anti-piracy fine-tuning every now and then.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DJI Firmware Hacking Removes Drone Flight Restrictions

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/WrLMjVOTRig/

Drones have been taking over the world; everyone with a passing interest in making videos has one, and DJI firmware hacking gives you the ability to remove all restrictions (no-fly zones, height and distance limits), which under most jurisdictions is illegal (mostly EU regulators and the FAA in the US). It’s an interesting subject, and also a controversial…

Read the full post at darknet.org.uk

[$] User=0day considered harmful in systemd

Post Syndicated from jake original https://lwn.net/Articles/727490/rss

Validating user input is a long-established security best practice, but there can be differences of opinion about what should be done when that validation fails. A recently reported bug in systemd has fostered a discussion on that topic; along the way there has also been discussion about how much validation systemd should actually be doing and how much should be left up to the underlying distribution. The controversy all revolves around usernames that systemd does not accept, but that some distributions (and POSIX) find to be perfectly acceptable.
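To make the disagreement concrete, here is a small sketch contrasting a permissive, POSIX-flavoured username check with a stricter rule that rejects names starting with a digit, such as “0day”. Both regular expressions are rough approximations for illustration, not the exact rules used by systemd or any particular distribution.

```python
# Two approximate username validation policies, for illustration only.
import re

# Permissive, POSIX-flavoured check: portable filename characters,
# not starting with a hyphen. Accepts names like "0day".
PERMISSIVE = re.compile(r"^[A-Za-z0-9._][A-Za-z0-9._-]*$")

# Stricter check: must start with a letter or underscore, so "0day" fails.
STRICT = re.compile(r"^[A-Za-z_][A-Za-z0-9_-]*$")

for name in ("0day", "alice", "_daemon", "-bad"):
    print(name,
          "permissive:", bool(PERMISSIVE.match(name)),
          "strict:", bool(STRICT.match(name)))
```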

FACT Threatens Users of ‘Pirate’ Kodi Add-Ons

Post Syndicated from Ernesto original https://torrentfreak.com/fact-threatens-users-of-pirate-kodi-add-ons-170628/

In the UK there’s a war going on against streaming pirates. At least, that’s what the local anti-piracy body FACT would like the public to know.

The popular media streaming platform Kodi is at the center of the controversy. While Kodi is perfectly legal, many people use it in conjunction with third party add-ons that offer pirated content.

FACT hopes to curb this trend. The group has already taken action against sellers of Kodi devices pre-loaded with these add-ons and they’re keeping a keen eye on developers of illicit add-ons too.

However, according to FACT, the ‘crackdown’ doesn’t stop there. Users of pirate add-ons are also at risk, they claim.

“And then we’ll also be looking at, at some point, the end user. The reason for end users to come into this is that they are committing criminal offences,” FACT’s chief executive Kieron Sharp told the Independent.

While people who stream pirated content are generally hard to track, since they don’t broadcast their IP-address to the public, FACT says that customer data could be obtained directly from sellers of fully-loaded Kodi boxes.

“When we’re working with the police against a company that’s selling IPTV boxes or illicit streaming devices on a large scale, they have records of who they’ve sold them to,” Sharp noted.

While the current legal efforts are focused on the supply side, including these sellers, the end users may also be targeted in the future.

“We have a number of cases coming before the courts in terms of those people who have been providing, selling and distributing illicit streaming devices. It’s something for the very near future, when we’ll consider whether we go any further than that, in terms of customers.”

The comments above make it clear that FACT wants users of these pirate devices to feel vulnerable and exposed. But threatening talk is much easier than action.

It will be very hard to get someone convicted, simply because they bought a device that can access both legal and illegal content. A receipt doesn’t prove intent, and even if it did, it’s pretty much impossible to prove that a person streamed specific pirated content.

But let’s say FACT was able to prove that someone bought a fully-loaded Kodi box and streamed content without permission. How would that result in a conviction? Contrary to claims in the mainstream press, watching a pirated stream isn’t an offense covered by the new Digital Economy Act.

In theory, there could be other ways, but given the complexity of the situation, one would think that FACT would be better off spending its efforts elsewhere.

If FACT was indeed interested in going after individuals then they could easily target people who use torrents. These people broadcast their IP-addresses to the public, which makes them easy to identify. In addition, you can see what they are uploading, and they would also be liable under the Digital Economy Act.

However, after FACT’s decades-long association with the MPAA ended, its main partner in the demonization of Kodi-enabled devices is now the Premier League, who are far more concerned about piracy of live broadcasts (streaming) than content made available after the fact via torrents.

So, given the challenges of having a meaningful criminal prosecution of an end-user as suggested, that leaves us with the probability of FACT sowing fear, uncertainty, and doubt. In other words, scaring the public to deter them from buying or using a fully-loaded Kodi box.

This would also fit in with FACT’s recent claims that some pirate devices are a fire hazard. While it’s kind of FACT to be concerned about the well-being of pirates, as an anti-piracy organization their warnings also serve as a deterrent.

This strategy could pay off to a degree but there’s also some risk involved. Every day new “Kodi” related articles appear in the UK tabloid press, many of them with comments from FACT. Some of these may scare prospective users, but the same headlines also make these boxes known to a much wider public.

In fact, in what is quite a serious backfire, some recent pieces published by the popular Trinity Mirror group (which include FACT comments) actually provide a handy list of pirate add-ons that are still operational following recent crackdowns.

So is FACT just sowing fear, or educating a whole new audience?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Nintendo & BREIN Target Seller of ‘Pirate’ Retro Gaming System

Post Syndicated from Andy original https://torrentfreak.com/nintendo-brein-target-seller-of-pirate-retro-gaming-system-170610/

As millions of often younger gamers immerse themselves in the latest 3D romp-fests from the world’s leading games developers, huge numbers of people are reliving their youth through the wonders of emulation.

The majority of old gaming systems can be emulated on a decent PC these days, opening up the possibility of reanimating thousands of the greatest games to ever grace the planet. While that’s a great prospect, the good news doesn’t stop there. The games are all free – if you don’t mind pirating them.

While many people go the do-it-yourself route by downloading emulators and ROMs (the games) from the Internet, increasingly people are saving time by buying systems ready-made online. Some of these are hugely impressive, housed in full-size arcade machine cabinets and packing many thousands of games. They also have sizeable price tags to match, running in some cases to thousands of dollars. But there are other options.

The rise of affordable compact computers has opened up emulation and retro gaming to a whole new audience, and inevitably some people have taken to selling these devices online with the games pre-bundled on SD cards. These systems can be obtained relatively cheaply but despite the games being old, companies like Nintendo still take a dim view of their sale.

That’s also the case in the Netherlands, where Nintendo and other companies are taking action against people involved in the sale of what are effectively pirate gaming systems. In a recent case, Dutch anti-piracy outfit BREIN took action against the operator of the Retrospeler (Retro Player) site, an outlet selling a ready-made retro gaming system.

Retro Player site (translated from Dutch)

As seen from the image above, for a little under 110 euros the player can buy a games machine with classics like Super Mario, Street Fighter, and Final Fantasy pre-installed. Add a TV via an HDMI lead and a joypad or two, and yesteryear gaming becomes reality today. Unfortunately, the fun didn’t last long and it was soon “Game Over” for Retro Player.

Speaking with TorrentFreak, BREIN chief Tim Kuik says that the system sold by Retro Player was based on the popular Raspberry Pi single-board computer. Although small and relatively cheap, the Pi is easily capable of running retro games via software such as RetroPie, but it’s unclear which product was installed on the version sold by Retro Player.

What is clear is that the device came pre-installed with a lot of games. The now-defunct Retro Player site listed 6,500 titles for a wide range of classic gaming systems, including Gameboy, Super Nintendo, Nintendo 64, Megadrive and Playstation. Kuik didn’t provide precise numbers but said that the machine came packaged with “a couple of thousand” titles.

BREIN says in this particular case it was acting on behalf of Nintendo, among others. However, it doesn’t appear that the case will be going to court. Like many other cases handled by the anti-piracy group, BREIN says it has reached a settlement with the operator of the Retro Player site for an unspecified amount.

The debate and controversy surrounding retro gaming and emulation is one that has been running for years. The thriving community sees little wrong with reanimating games for long-dead systems and giving them new life among a new audience. On the other hand, copyright holders such as Nintendo view their titles as their property, to be exploited in a time, place and manner of their choosing.

While that friction will continue for a long time to come, there will be few if any legal problems for those choosing to pursue their emulation fantasies in the privacy of their own home. Retro gaming is here to stay and as long as computing power continues to increase, the experience is only likely to improve.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building High-Throughput Genomics Batch Workflows on AWS: Workflow Layer (Part 4 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the fourth in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackled the batch layer and built a scalable, elastic, and easily maintainable batch engine using AWS Batch. This solution took care of dynamically scaling your compute resources in response to the number of runnable jobs in your job queue, as well as managing job placement.

In Part 4, you build out the workflow layer of your solution using AWS Step Functions and AWS Lambda. You then run an end-to-end genomic analysis, specifically exome secondary analysis, many times over at a cost of less than $1 per exome.

Step Functions makes it easy to coordinate the components of your applications using visual workflows. Building applications from individual components that each perform a single function lets you scale and change your workflow quickly. You can use the graphical console to arrange and visualize the components of your application as a series of steps, which simplifies building and running multi-step applications. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

An added benefit of using Step Functions to define your workflows is that the state machines you create are immutable. While you can delete a state machine, you cannot alter it after it is created. For regulated workloads where auditing is important, you can be assured that state machines you used in production cannot be altered.

In this blog post, you create a Step Functions state machine, backed by Lambda functions, to orchestrate your batch workflow. For more information on how to create a basic state machine, please see this Step Functions tutorial.

All code related to this blog series can be found in the associated GitHub repository here.

Build a state machine building block

If you want to skip the following steps, you can use the AWS CloudFormation template we have provided to deploy your Step Functions state machine. You can use it in combination with the setup you did in Part 3 to quickly set up the environment in which to run your analysis.

The state machine is composed of smaller state machines that submit a job to AWS Batch, and then poll and check its execution.

The steps in this building block state machine are as follows:

  1. A job is submitted.
    Each analytical module/job has its own Lambda function for submission and calls the batchSubmitJob Lambda function that you built in the previous blog post. You will build these specialized Lambda functions in the following section.
  2. The state machine queries the AWS Batch API for the job status.
    This is also a Lambda function.
  3. The job status is checked to see if the job has completed.
    If the job status equals SUCCEEDED, proceed to log the final job status. If the job status equals FAILED, end the execution of the state machine. In all other cases, wait 30 seconds and go back to Step 2.

Here is the JSON representing this state machine.

{
  "Comment": "A simple example that submits a Job to AWS Batch",
  "StartAt": "SubmitJob",
  "States": {
    "SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>::function:batchSubmitJob",
      "Next": "GetJobStatus"
    },
    "GetJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "Next": "CheckJobStatus",
      "InputPath": "$",
      "ResultPath": "$.status"
    },
    "CheckJobStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.status",
          "StringEquals": "FAILED",
          "End": true
        },
        {
          "Variable": "$.status",
          "StringEquals": "SUCCEEDED",
          "Next": "GetFinalJobStatus"
        }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "GetJobStatus"
    },
    "GetFinalJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "End": true
    }
  }
}

Building the Lambda functions for the state machine

You need two basic Lambda functions for this state machine. The first one submits a job to AWS Batch and the second checks the status of the AWS Batch job that was submitted.

In AWS Step Functions, you specify an input as JSON that is read into your state machine. Each state receives the aggregate of the steps immediately preceding it, and you can specify which components a state passes on to its children. Because you are using Lambda functions to execute tasks, one of the easiest routes to take is to modify the input JSON, represented as a Python dictionary, within the Lambda function and return the entire dictionary back for the next state to consume.
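As a minimal sketch of this pass-through pattern (the bucket path shown is hypothetical), a task Lambda function reads fields from the event, adds what it produced, and returns the whole dictionary:

def lambda_handler(event, context):
    # Read a value passed in by a previous state
    sample_id = event['sampleId']

    # Do this step's work; here we just derive an illustrative output path
    event['derivedResultsPath'] = 's3://example-bucket/results/' + sample_id

    # Return the entire dictionary so the next state sees the accumulated context
    return event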

Building the batchSubmitIsaacJob Lambda function

For Step 1 above, you need a Lambda function for each of the steps in your analysis workflow. As you created a generic Lambda function in the previous post to submit a batch job (batchSubmitJob), you can use that function as the basis for the specialized functions you’ll include in this state machine. Here is such a Lambda function for the Isaac aligner.

from __future__ import print_function

import boto3
import json
import traceback

lambda_client = boto3.client('lambda')



def lambda_handler(event, context):
    try:
        # Generate output path
        bam_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'bam/'])

        depends_on = event['dependsOn'] if 'dependsOn' in event else []

        # Generate run command
        command = [
            '--bam_s3_folder_path', bam_s3_path,
            '--fastq1_s3_path', event['fastq1S3Path'],
            '--fastq2_s3_path', event['fastq2S3Path'],
            '--reference_s3_path', event['isaac']['referenceS3Path'],
            '--working_dir', event['workingDir']
        ]

        if 'cmdArgs' in event['isaac']:
            command.extend(['--cmd_args', event['isaac']['cmdArgs']])
        if 'memory' in event['isaac']:
            command.extend(['--memory', event['isaac']['memory']])

        # Submit Payload
        response = lambda_client.invoke(
            FunctionName='batchSubmitJob',
            InvocationType='RequestResponse',
            LogType='Tail',
            Payload=json.dumps(dict(
                dependsOn=depends_on,
                containerOverrides={
                    'command': command,
                },
                jobDefinition=event['isaac']['jobDefinition'],
                jobName='-'.join(['isaac', event['sampleId']]),
                jobQueue=event['isaac']['jobQueue']
            )))

        response_payload = response['Payload'].read()

        # Update event
        event['bamS3Path'] = bam_s3_path
        event['jobId'] = json.loads(response_payload)['jobId']
        
        return event
    except Exception as e:
        traceback.print_exc()
        raise e

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitIsaacJob and paste in the above code. Use the LambdaBatchExecutionRole that you created in the previous post. For more information, see Step 2.1: Create a Hello World Lambda Function.

This Lambda function reads in the inputs passed to the state machine it is part of, formats the data for the batchSubmitJob Lambda function, invokes that Lambda function, and then modifies the event dictionary to pass on to the subsequent states. You can repeat this for each of the other tools; the corresponding lambda_function.py scripts can be found under each tool's directory in the GitHub repo.

Building the batchGetJobStatus Lambda function

For Step 2 above, the process queries the AWS Batch DescribeJobs API action with jobId to identify the state that the job is in. You can put this into a Lambda function to integrate it with Step Functions.

In the Lambda console, create a new Python 2.7 function with the LambdaBatchExecutionRole IAM role. Name your function batchGetJobStatus and paste in the following code. This is similar to the batch-get-job-python27 Lambda blueprint.

from __future__ import print_function

import boto3
import json

print('Loading function')

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get jobId from the event
    job_id = event['jobId']

    try:
        response = batch_client.describe_jobs(
            jobs=[job_id]
        )
        job_status = response['jobs'][0]['status']
        return job_status
    except Exception as e:
        print(e)
        message = 'Error getting Batch Job status'
        print(message)
        raise Exception(message)

Structuring state machine input

You have structured the state machine input so that general file references are included at the top-level of the JSON object, and any job-specific items are contained within a nested JSON object. At a high level, this is what the input structure looks like:

{
        "general_field_1": "value1",
        "general_field_2": "value2",
        "general_field_3": "value3",
        "job1": {},
        "job2": {},
        "job3": {}
}

Building the full state machine

By chaining these state machine components together, you can quickly build flexible workflows that can process genomes in multiple ways. The larger state machine that defines the entire workflow uses four of the building blocks described above, together with the Lambda functions that you built in the previous section. Rename each building block's submission state to match the tool name.

We have provided a CloudFormation template to deploy your state machine and the associated IAM roles. In the CloudFormation console, select Create Stack, choose your template (deploy_state_machine.yaml), and enter in the ARNs for the Lambda functions you created.

Continue through the rest of the steps and deploy your stack. Be sure to check the box next to "I acknowledge that AWS CloudFormation might create IAM resources."
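If you prefer to deploy from the CLI instead of the console, a sketch such as the following works; the parameter keys and the capability flag shown here are assumptions, so check deploy_state_machine.yaml for the names it actually expects:

aws cloudformation create-stack \
  --stack-name genomics-state-machine \
  --template-body file://deploy_state_machine.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=SubmitIsaacLambdaArn,ParameterValue=<lambda-arn> \
               ParameterKey=GetJobStatusLambdaArn,ParameterValue=<lambda-arn>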

Once the CloudFormation stack is finished deploying, you should see the following image of your state machine.

In short, you first submit a job for Isaac, which is the aligner you are using for the analysis. Next, you use a Parallel state to split the output from "GetFinalIsaacJobStatus" and send it to both your variant calling step, Strelka, and your QC step, Samtools Stats. These then run in parallel, and you annotate the results from the Strelka step with snpEff.
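The following is a rough sketch of how that Parallel state can be wired up. In the actual workflow each branch contains the full submit/poll/check building block rather than a single task, and the state and function names here are assumptions:

"ParallelAnalysis": {
  "Type": "Parallel",
  "Next": "SubmitSnpeffJob",
  "Branches": [
    {
      "StartAt": "SubmitStrelkaJob",
      "States": {
        "SubmitStrelkaJob": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchSubmitStrelkaJob",
          "End": true
        }
      }
    },
    {
      "StartAt": "SubmitSamtoolsStatsJob",
      "States": {
        "SubmitSamtoolsStatsJob": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchSubmitSamtoolsStatsJob",
          "End": true
        }
      }
    }
  ]
}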

Putting it all together

Now that you have built all of the components for a genomics secondary analysis workflow, test the entire process.

We have provided sequences from an Illumina sequencer that cover a region of the genome known as the exome. Most of the positions in the genome that we have currently associated with disease or human traits reside in this region, which is 1–2% of the entire genome. The workflow that you have built works for analyzing both an exome and an entire genome.

Additionally, we have provided prebuilt reference genomes for Isaac, located at:

s3://aws-batch-genomics-resources/reference/

If you are interested, we have provided a script that sets up all of that data. To execute that script, run the following command on a large EC2 instance:

make reference REGISTRY=<your-ecr-registry>

Indexing and preparing this reference takes many hours on a large-memory EC2 instance, so be mindful of the costs involved; the prepared data is already available at the S3 path above.

Starting the execution

In a previous section, you established the structure of the JSON that is fed into your state machine. For convenience, we have pre-populated the input JSON for the state machine. You can also find this in the GitHub repo under workflow/test.input.json:

{
  "fastq1S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
  "fastq2S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
  "referenceS3Path": "s3://aws-batch-genomics-resources/reference/hg38.fa",
  "resultsS3Path": "s3://<bucket>/genomic-workflow/results",
  "sampleId": "NA12878_states_1",
  "workingDir": "/scratch",
  "isaac": {
    "jobDefinition": "isaac-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "referenceS3Path": "s3://aws-batch-genomics-resources/reference/isaac/"
  },
  "samtoolsStats": {
    "jobDefinition": "samtools_stats-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv"
  },
  "strelka": {
    "jobDefinition": "strelka-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "cmdArgs": " --exome "
  },
  "snpEff": {
    "jobDefinition": "snpeff-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv",
    "cmdArgs": " -t hg38 "
  }
}

You are now at the stage to run your full genomic analysis. Copy the above to a new text file, change paths and ARNs to the ones that you created previously, and save your JSON input as input.states.json.

In the CLI, execute the following command. You need the ARN of the state machine that you created in the previous post:

aws stepfunctions start-execution --state-machine-arn <your-state-machine-arn> --input file://input.states.json

Your analysis has now started. By using Spot Instances with AWS Batch, you can quickly scale out your workflows while concurrently optimizing for cost. While this is not guaranteed, most executions of the workflows presented here should cost under $1 for a full analysis.

Monitoring the execution

The output from the above CLI command gives you the ARN that describes the specific execution. Copy that and navigate to the Step Functions console. Select the state machine that you created previously and paste the ARN into the search bar.

The screen shows information about your specific execution. On the left, you see where your execution currently is in the workflow.

In the following screenshot, you can see that your workflow has successfully completed the alignment job and moved onto the subsequent steps, which are variant calling and generating quality information about your sample.

You can also navigate to the AWS Batch console and see that progress of all of your jobs reflected there as well.
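If you prefer the CLI, you can poll the same execution details directly, for example:

aws stepfunctions describe-execution --execution-arn <your-execution-arn>
aws stepfunctions get-execution-history --execution-arn <your-execution-arn> --max-results 20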

Finally, after your workflow has completed successfully, check out the S3 path to which you wrote all of your files. If you run aws s3 ls --recursive on the S3 results path, specified in the input to your state machine execution, you should see something similar to the following:

2017-05-02 13:46:32 6475144340 genomic-workflow/results/NA12878_run1/bam/sorted.bam
2017-05-02 13:46:34    7552576 genomic-workflow/results/NA12878_run1/bam/sorted.bam.bai
2017-05-02 13:46:32         45 genomic-workflow/results/NA12878_run1/bam/sorted.bam.md5
2017-05-02 13:53:20      68769 genomic-workflow/results/NA12878_run1/stats/bam_stats.dat
2017-05-02 14:05:12        100 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.tsv
2017-05-02 14:05:12        359 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.xml
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz.tbi
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz.tbi
2017-05-02 14:05:12   30783484 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz
2017-05-02 14:05:12    1566596 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz.tbi

Modifications to the workflow

You have now built and run your genomics workflow. While diving deep into modifications to this architecture is beyond the scope of these posts, we wanted to leave you with several suggestions of how you might modify this workflow to satisfy additional business requirements.

  • Job tracking with Amazon DynamoDB
    In many cases, such as if you are offering Genomics-as-a-Service, you might want to track the state of your jobs with DynamoDB to get fine-grained records of how your jobs are running. This way, you can easily identify the cost of individual jobs and workflows that you run (a minimal sketch follows this list).
  • Resuming from failure
    Both AWS Batch and Step Functions natively support job retries and can cover many of the standard cases where a job might be interrupted. There may be cases, however, where your workflow might fail in a way that is unpredictable. In this case, you can use custom error handling with AWS Step Functions to build out a workflow that is even more resilient. Also, you can build fail states into your state machine to fail at any point, such as if a batch job fails after a certain number of retries.
  • Invoking Step Functions from Amazon API Gateway
    You can use API Gateway to build an API that acts as a "front door" to Step Functions. You can create a POST method that contains the input JSON to feed into the state machine you built. For more information, see the Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway blog post.
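For the job tracking suggestion above, a minimal sketch of recording job state in DynamoDB from a Lambda function could look like the following; the table name and attribute names are assumptions, and the table would need jobId as its partition key:

from __future__ import print_function

import time
import boto3

dynamodb = boto3.resource('dynamodb')
# Hypothetical tracking table with 'jobId' as the partition key
table = dynamodb.Table('genomics-job-tracking')

def record_job_status(job_id, sample_id, status):
    # Write (or overwrite) one tracking record per AWS Batch job
    table.put_item(Item={
        'jobId': job_id,
        'sampleId': sample_id,
        'status': status,
        'updatedAt': int(time.time())
    })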

Conclusion

While the approach we have demonstrated in this series has been focused on genomics, it is important to note that this can be generalized to nearly any high-throughput batch workload. We hope that you have found the information useful and that it can serve as a jump-start to building your own batch workloads on AWS with native AWS services.

For more information about how AWS can enable your genomics workloads, be sure to check out the AWS Genomics page.

Please leave any questions and comments below.

Building High-Throughput Genomic Batch Workflows on AWS: Batch Layer (Part 3 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomic-batch-workflows-on-aws-batch-layer-part-3-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the third in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackle the batch layer and build a scalable, elastic, and easily maintainable batch engine using AWS Batch.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs that you submit. With AWS Batch, you do not need to install and manage your own batch computing software or server clusters, which allows you to focus on analyzing results, such as those of your genomic analysis.

Integrating applications into AWS Batch

If you are new to AWS Batch, we recommend reading Setting Up AWS Batch to ensure that you have the proper permissions and AWS environment.

After you have a working environment, you define several types of resources:

  • IAM roles that provide service permissions
  • A compute environment that launches and terminates compute resources for jobs
  • A custom Amazon Machine Image (AMI)
  • A job queue to submit the units of work and to schedule the appropriate resources within the compute environment to execute those jobs
  • Job definitions that define how to execute an application

After the resources are created, you’ll test the environment and create an AWS Lambda function to send generic jobs to the queue.

This genomics workflow covers the basic steps. For more information, see Getting Started with AWS Batch.

Creating the necessary IAM roles

AWS Batch simplifies batch processing by managing a number of underlying AWS services so that you can focus on your applications. As a result, you create IAM roles that give the service permissions to act on your behalf. In this section, deploy the AWS CloudFormation template included in the GitHub repository and extract the ARNs for later use.

To deploy the stack, go to the top level of the repo and run the following command:

aws cloudformation create-stack --template-body file://batch/setup/iam.template.yaml --stack-name iam --capabilities CAPABILITY_NAMED_IAM

You can capture the output from this stack in the Outputs tab in the CloudFormation console:
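Alternatively, you can pull the same outputs from the CLI, for example:

aws cloudformation describe-stacks --stack-name iam --query 'Stacks[0].Outputs'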

Creating the compute environment

In AWS Batch, you set up a managed compute environment. Managed compute environments automatically launch and terminate compute resources on your behalf based on the aggregate resources needed by your jobs, such as vCPU and memory, and simple boundaries that you define.

When defining your compute environment, specify the following:

  • Desired instance types in your environment
  • Min and max vCPUs in the environment
  • The Amazon Machine Image (AMI) to use
  • The percentage of the On-Demand price to bid on the Spot Market
  • The VPC subnets and security groups that can be used

AWS Batch then provisions an elastic and heterogeneous pool of Amazon EC2 instances based on the aggregate resource requirements of jobs sitting in the RUNNABLE state. If a mix of CPU- and memory-intensive jobs is ready to run, AWS Batch provisions the appropriate ratio and size of CPU- and memory-optimized instances within your environment. For this post, you will use the simplest configuration, in which instance types are set to "optimal", allowing AWS Batch to choose from the latest C, M, and R EC2 instance families.

While you could create this compute environment in the console, we provide the following CLI commands. Replace the subnet IDs and key name with your own private subnets and key, and the image-id with the image you will build in the next section.

ACCOUNTID=<your account id>
SERVICEROLE=<from output in CloudFormation template>
IAMFLEETROLE=<from output in CloudFormation template>
JOBROLEARN=<from output in CloudFormation template>
SUBNETS=<comma delimited list of subnets>
SECGROUPS=<your security groups>
SPOTPER=50 # percentage of on demand
IMAGEID=<ami-id corresponding to the one you created>
INSTANCEROLE=<from output in CloudFormation template>
REGISTRY=${ACCOUNTID}.dkr.ecr.us-east-1.amazonaws.com
KEYNAME=<your key name>
MAXCPU=1024 # max vCPUs in compute environment
ENV=myenv

# Creates the compute environment
aws batch create-compute-environment --compute-environment-name genomicsEnv-$ENV --type MANAGED --state ENABLED --service-role ${SERVICEROLE} --compute-resources type=SPOT,minvCpus=0,maxvCpus=$MAXCPU,desiredvCpus=0,instanceTypes=optimal,imageId=$IMAGEID,subnets=$SUBNETS,securityGroupIds=$SECGROUPS,ec2KeyPair=$KEYNAME,instanceRole=$INSTANCEROLE,bidPercentage=$SPOTPER,spotIamFleetRole=$IAMFLEETROLE
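To confirm that the compute environment was created and has reached the VALID state, you can run something like:

aws batch describe-compute-environments --compute-environments genomicsEnv-$ENV --query 'computeEnvironments[0].status'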

Creating the custom AMI for AWS Batch

While you can use default Amazon ECS-optimized AMIs with AWS Batch, you can also provide your own image in managed compute environments. We will use this feature to provision additional scratch EBS storage on each of the instances that AWS Batch launches and also to encrypt both the Docker and scratch EBS volumes.

AWS Batch has the same requirements for your AMI as Amazon ECS. To build the custom image, modify the default Amazon ECS-Optimized Amazon Linux AMI in the following ways:

  • Attach a 1 TB scratch volume to /dev/sdb
  • Encrypt the Docker and new scratch volumes
  • Mount the scratch volume to /docker_scratch by modifying /etc/fstab

The first two tasks can be addressed when you create the custom AMI in the console. Spin up a small t2.micro instance, and proceed through the standard EC2 instance launch.

After your instance has launched, record the IP address and then SSH into the instance. Copy and paste the following code:

# Update packages, then partition and format the attached scratch volume
# (the volume attached at /dev/sdb appears as /dev/xvdb on the instance)
sudo yum -y update
sudo parted /dev/xvdb mklabel gpt
sudo parted /dev/xvdb mkpart primary 0% 100%
sudo mkfs -t ext4 /dev/xvdb1
# Create the mount point and persist the mount across reboots via /etc/fstab
sudo mkdir /docker_scratch
echo -e '/dev/xvdb1\t/docker_scratch\text4\tdefaults\t0\t0' | sudo tee -a /etc/fstab
sudo mount -a

This auto-mounts your scratch volume to /docker_scratch, which is your scratch directory for batch processing. Next, create your new AMI and record the image ID.
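If you want to confirm that the volume mounted as expected before you create the AMI, a quick check such as the following is enough:

lsblk
df -h /docker_scratch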

Creating the job queues

AWS Batch job queues are used to coordinate the submission of batch jobs. Your jobs are submitted to job queues, which can be mapped to one or more compute environments. Job queues have priority relative to each other. You can also specify the order in which they consume resources from your compute environments.

In this solution, use two job queues. The first is for high priority jobs, such as alignment or variant calling. Set this with a high priority (1000) and map it to the previously created compute environment. Next, set up a second job queue for low priority jobs, such as quality statistics generation. To create these job queues, enter the following CLI commands:

aws batch create-job-queue --job-queue-name highPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1000 --state ENABLED
aws batch create-job-queue --job-queue-name lowPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1 --state ENABLED
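As a quick sanity check, you can confirm that both queues exist and are enabled with, for example:

aws batch describe-job-queues --job-queues highPriority-${ENV} lowPriority-${ENV} --query 'jobQueues[].[jobQueueName,state,status]'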

Creating the job definitions

To run the Isaac aligner container image locally, supply the Amazon S3 locations for the FASTQ input sequences, the reference genome to align to, and the output BAM file. For more information, see tools/isaac/README.md.

The Docker container itself also requires information on a suitable mountable volume so that it can read and write temporary files without running out of space.

Note: In the following example, the FASTQ files as well as the reference files to run are in a publicly available bucket.

FASTQ1=s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz
FASTQ2=s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz
REF=s3://aws-batch-genomics-resources/reference/isaac/
BAM=s3://mybucket/genomic-workflow/test_results/bam/

mkdir ~/scratch

docker run --rm -ti -v ${HOME}/scratch:/scratch $REPO_URI --bam_s3_folder_path $BAM \
--fastq1_s3_path $FASTQ1 \
--fastq2_s3_path $FASTQ2 \
--reference_s3_path $REF \
--working_dir /scratch 

Containers run locally can typically use as much CPU and memory as the host machine allows. In AWS Batch, the CPU and memory requirements are hard limits and are allocated to the container image at runtime.

Isaac is a fairly resource-intensive algorithm, as it creates an uncompressed index of the reference genome in memory to match the query DNA sequences. The large memory space is shared across multiple CPU threads, and Isaac can scale almost linearly with the number of CPU threads given to it as a parameter.

To fit these characteristics, choose an optimal instance size to maximize the number of CPU threads based on a given large memory footprint, and deploy a Docker container that uses all of the instance resources. In this case, we chose a host instance with 80+ GB of memory and 32+ vCPUs. The following code is example JSON that you can pass to the AWS CLI to create a job definition for Isaac.

aws batch register-job-definition --job-definition-name isaac-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/isaac",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":80000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

You can copy and paste the following code for the other three job definitions:

aws batch register-job-definition --job-definition-name strelka-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/strelka",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":32000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name snpeff-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/snpeff",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name samtoolsStats-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/samtools_stats",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

The value for "image" comes from the previous post on creating a Docker image and publishing to ECR. The value for jobRoleArn you can find from the output of the CloudFormation template that you deployed earlier. In addition to providing the number of CPU cores and memory required by Isaac, you also give it a storage volume for scratch and staging. The volume comes from the previously defined custom AMI.

Testing the environment

After you have created the Isaac job definition, you can submit the job using the AWS Batch submitJob API action. While the base mappings for Docker run are taken care of in the job definition that you just built, the specific job parameters should be specified in the container overrides section of the API call. Here’s what this would look like in the CLI, using the same parameters as in the bash commands shown earlier:

aws batch submit-job --job-name testisaac --job-queue highPriority-${ENV} --job-definition isaac-${ENV}:1 --container-overrides '{
"command": [
            "--bam_s3_folder_path", "s3://mybucket/genomic-workflow/test_batch/bam/",
            "--fastq1_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
            "--fastq2_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
            "--reference_s3_path", "s3://aws-batch-genomics-resources/reference/isaac/",
            "--working_dir", "/scratch",
            "--cmd_args", " --exome "]
}'

When you execute a submitJob call, jobId is returned. You can then track the progress of your job using the describeJobs API action:

aws batch describe-jobs --jobs <jobId returned from submitJob>

You can also track the progress of all of your jobs in the AWS Batch console dashboard.

To see exactly where a RUNNING job is, use the link in the AWS Batch console, which directs you to the appropriate log stream in CloudWatch Logs.
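If you prefer the CLI, the job's log stream name is available from describe-jobs once the job reaches the RUNNING state; for example:

LOGSTREAM=$(aws batch describe-jobs --jobs <jobId> --query 'jobs[0].container.logStreamName' --output text)
aws logs get-log-events --log-group-name /aws/batch/job --log-stream-name $LOGSTREAM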

Completing the batch environment setup

To finish, create a Lambda function to submit a generic AWS Batch job.

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitJob. Copy and paste the following code. This is similar to the batch-submit-job-python27 Lambda blueprint. Use the LambdaBatchExecutionRole that you created earlier. For more information about creating functions, see Step 2.1: Create a Hello World Lambda Function.

from __future__ import print_function

import json
import boto3

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get parameters for the SubmitJob call
    # http://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html
    job_name = event['jobName']
    job_queue = event['jobQueue']
    job_definition = event['jobDefinition']
    
    # containerOverrides, dependsOn, and parameters are optional
    container_overrides = event['containerOverrides'] if event.get('containerOverrides') else {}
    parameters = event['parameters'] if event.get('parameters') else {}
    depends_on = event['dependsOn'] if event.get('dependsOn') else []
    
    try:
        response = batch_client.submit_job(
            dependsOn=depends_on,
            containerOverrides=container_overrides,
            jobDefinition=job_definition,
            jobName=job_name,
            jobQueue=job_queue,
            parameters=parameters
        )
        
        # Log response from AWS Batch
        print("Response: " + json.dumps(response, indent=2))
        
        # Return the jobId
        event['jobId'] = response['jobId']
        return event
    
    except Exception as e:
        print(e)
        message = 'Error submitting Batch Job'
        print(message)
        raise Exception(message)
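To sanity-check the function before wiring it into Step Functions, you can invoke it directly with a small test payload; the job queue and job definition names below follow the ones created earlier in this post:

aws lambda invoke \
  --function-name batchSubmitJob \
  --payload '{"jobName": "test-isaac", "jobQueue": "highPriority-myenv", "jobDefinition": "isaac-myenv:1", "containerOverrides": {}}' \
  output.json
cat output.json

# Note: on AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out to pass a raw JSON payload.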

Conclusion

In part 3 of this series, you successfully set up your data processing, or batch, environment in AWS Batch. We also provided a Python script in the corresponding GitHub repo that takes care of all of the above CLI arguments for you, as well as building out the job definitions for all of the jobs in the workflow: Isaac, Strelka, SAMtools, and snpEff. You can check the script’s README for additional documentation.

In Part 4, you’ll cover the workflow layer using AWS Step Functions and AWS Lambda.

Please leave any questions and comments below.

Pornhub Piracy Stopped Me Producing Porn, Jenna Haze Says

Post Syndicated from Andy original https://torrentfreak.com/pornhub-piracy-stopped-me-producing-porn-jenna-haze-says-170531/

Last week, adult ‘tube’ site Pornhub celebrated its 10th anniversary, and what a decade it was.

Six months after its May 2007 launch, the site was getting a million visitors every day. Six months after that, traffic had exploded five-fold. Such was the site’s success, by November 2008 Pornhub entered the ranks of the top 100 most-visited sites on the Internet.

As a YouTube-like platform, Pornhub traditionally relied on users to upload content to the site. Uploaders have to declare that they have the rights to do so but it’s clear that amid large quantities of fully licensed material, content exists on Pornhub that is infringing copyright.

Like YouTube, however, the site says it takes its legal responsibilities seriously by removing content whenever a valid DMCA notice is received. Furthermore, it also has a Content Partner Program which allows content owners to monetize their material on the platform.

But despite these overtures, Pornhub has remained a divisive operation. While some partners happily generate revenue from the platform and use it to drive valuable traffic to their own sites, others view it as a parasite living off their hard work. Today those critics were joined by one of the biggest stars the adult industry has ever known.

After ten years as an adult performer, starring in more than 600 movies (including one that marked her as the first adult performer to appear on Blu-ray format), in 2012 Jenna Haze decided on a change of pace. No longer interested in performing, she headed to the other side of the camera as a producer and director.

“Directing is where my heart is now. It’s allowed me to explore a creative side that is different from what performing has offered me,” she said in a statement.

“I am very satisfied with what I was able to accomplish in 10 years of performing, and now I’m enjoying the challenges of being on the other side of the camera and running my studio.”

But while Haze enjoyed success with 15 movies, it wasn’t to last. The former performer eventually backed away from both directing and producing adult content. This morning she laid the blame for that on Pornhub and similar sites.

It all began with a tweet from Conan O’Brien, who belatedly wished Pornhub a happy 10th anniversary.

In response to O’Brien apparently coming to the party late, a Twitter user informed him how he’d been missing out on Jenna Haze. That drew a response from Haze herself, who accused Pornhub of pirating her content.

“Please don’t support sites like porn hub,” she wrote. “They are a tube site that pirates content that other adult companies produce. It’s like Napster!”

In a follow-up, Haze went on to accuse Pornhub of theft and blamed the site for her exit from the business.

“Well they steal my content from my company, as do many other tube sites. It’s why I don’t produce or direct anymore,” Haze wrote.

“Maybe not all of their content is stolen, but I have definitely seen my content up there, as well as other people’s content.”

Of course, just like record companies can do with YouTube, there’s always the option for Haze to file a DMCA notice with Pornhub to have offending content taken down. However, it’s a route she claims to have taken already, but without much success.

“They take the videos down and put [them] back up. I’m not saying they don’t do legitimate business as well,” she said.

While Pornhub has its critics, the site does indeed do masses of legitimate business. The platform is owned by Mindgeek, whose websites receive a combined 115 million visitors per day, fueled in part by content supplied by Brazzers and Digital Playground, which Mindgeek owns. That being said, Mindgeek’s position in the market has always been controversial.

Three years ago, it became evident that Mindgeek had become so powerful in the adult industry that performers (some of whom felt their content was being exploited by the company) indicated they were scared to criticize it.

Adult actress and outspoken piracy critic Tasha Reign, who also had her videos uploaded to Pornhub without her permission, revealed she was in a particularly tight spot.

“It’s like we’re stuck between a rock and a hard place in a way, because if I want to shoot content then I kinda have to shoot for [Mindgeek] because that’s the company that books me because they own…almost…everything,” Reign said.

In 2017, Mindgeek’s dominance is clearly less of a problem for Haze, who is now concentrating on other things. But for those who remain in the industry, Mindgeek is a force to be reckoned with, so criticism will probably remain somewhat muted.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.