
Court Suspends Ban on Roku Sales in Mexico

Post Syndicated from Ernesto original https://torrentfreak.com/court-suspends-ban-on-roku-sales-in-mexico-170623/

Last week, news broke that the Superior Court of Justice of the City of Mexico had issued a ban on Roku sales.

The order prohibited stores such as Amazon, Liverpool, El Palacio de Hierro, and Sears from importing and selling the devices. In addition, several banks were told to stop processing payments from accounts linked to pirated services on Roku.

While Roku itself does not offer any pirated content, there is a market for third-party pirate channels outside the Roku Channel Store, which turn the boxes into pirate tools. Cablevision filed a complaint about this unauthorized use, which eventually resulted in the ban.

The news generated headlines all over the world and was opposed immediately by several of the parties involved. Yesterday, a federal judge decided to suspend the import and sales ban, at least temporarily.

As a result, local vendors can resume their sales of the popular media player.

“Roku is pleased with today’s court decision, which paves the way for sales of Roku devices to resume in Mexico,” Roku’s General Counsel Steve Kay informed TorrentFreak after he heard the news.

Roku

TorrentFreak has not been able to get a copy of the suspension order, but it’s likely that the court wants to review the case in more detail before a final decision is made.

While streaming player piracy is seen as one of the greatest threats the entertainment industry faces today, the Roku ban went quite far. In a way, it would be similar to banning the Chrome browser because certain add-ons and sites allow users to stream pirated movies.

Roku, meanwhile, says it will continue to work with rightsholders and other stakeholders to prevent piracy on its platform, to the best of its ability.

“Piracy is a problem the industry at large is facing,” Kay tells TorrentFreak.

“We prohibit copyright infringement of any kind on the Roku platform. We actively work to prevent third-parties from using our platform to distribute copyright infringing content. Moreover, we have been actively working with other industry stakeholders on a wide range of anti-piracy initiatives.”


Sci-Hub Ordered to Pay $15 Million in Piracy Damages

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-ordered-to-pay-15-million-in-piracy-damages-170623/

Two years ago, academic publisher Elsevier filed a complaint against Sci-Hub and several related “pirate” sites.

It accused the websites of making academic papers widely available to the public, without permission.

While Sci-Hub is nothing like the average pirate site, it is just as illegal according to Elsevier’s legal team, who obtained a preliminary injunction from a New York District Court last fall.

The injunction ordered Sci-Hub’s founder Alexandra Elbakyan to quit offering access to any Elsevier content. However, this didn’t happen.

Instead of taking Sci-Hub down, the lawsuit achieved the opposite. Sci-Hub grew bigger and bigger, to the point where its users were downloading hundreds of thousands of papers per day.

Although Elbakyan sent a letter to the court earlier, she opted not to engage in the US lawsuit any further. The same is true for her fellow defendants, who are associated with Libgen. As a result, Elsevier asked the court for a default judgment and a permanent injunction, which were issued this week.

Following a hearing on Wednesday, the Court awarded Elsevier $15,000,000 in damages, the maximum statutory amount for the 100 copyrighted works that were listed in the complaint. In addition, the injunction, through which Sci-Hub and LibGen lost several domain names, was made permanent.

Sci-Hub founder Alexandra Elbakyan says that even if she wanted to pay the millions of dollars in damages, she doesn’t have the money to do so.

“The money project received and spent in about six years of its operation do not add up to 15 million,” Elbakyan tells TorrentFreak.

“More interesting, Elsevier says: the Sci-Hub activity ’causes irreparable injury to Elsevier, its customers and the public’ and US court agreed. That feels like a perfect crime. If you want to cause an irreparable injury to American public, what do you have to do? Now we know the answer: establish a website where they can read research articles for free,” she adds.

Elbakyan previously confirmed to us that, lawsuit or not, the site is not going anywhere.

“The Sci-Hub will continue as usual. In case of problems with the domain names, users can rely on TOR scihub22266oqcxt.onion,” Elbakyan added.

Sci-Hub is regularly referred to as the “Pirate Bay for science,” and based on the site’s resilience and its response to legal threats, it can certainly live up to this claim.

The Association of American Publishers (AAP) is happy with the outcome of the case.

“As the final judgment shows, the Court has not mistaken illegal activity for a public good,” AAP President and CEO Maria A. Pallante says.

“On the contrary, it has recognized the defendants’ operation for the flagrant and sweeping infringement that it really is and affirmed the critical role of copyright law in furthering scientific research and the public interest.”

Matt McKay, a spokesperson for the International Association of Scientific, Technical and Medical Publishers (STM) in Oxford went even further, telling Nature that the site doesn’t offer any value to the scientific community.

“Sci-Hub does not add any value to the scholarly community. It neither fosters scientific advancement nor does it value researchers’ achievements. It is simply a place for someone to go to download stolen content and then leave.”

Hundreds of thousands of academics who regularly use the site to download papers might contest this, though.

With no real prospect of recouping the damages and an ever-resilient Elbakyan, Elsevier’s legal battle could just be a win on paper. Sci-Hub and Libgen are not going anywhere, it seems, and the lawsuit has made them more popular than ever before.


NSA Insider Security Post-Snowden

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_insider_sec.html

According to a recently declassified report obtained under FOIA, the NSA’s attempts to protect itself against insider attacks aren’t going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department’s inspector general completed in 2016.

[…]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of “privileged” users, who have greater power to access the N.S.A.’s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative — called “Secure the Net” by the N.S.A. — had some successes, it “did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data.”

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 “Secure the Net” initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA’s computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD’s inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), “NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users.” The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, “NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach.”

There seem to be two possible explanations for the fact that the NSA couldn’t track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA’s crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

US Embassy Threatens to Close Domain Registry Over ‘Pirate Bay’ Domain

Post Syndicated from Andy original https://torrentfreak.com/us-embassy-threatens-to-close-domain-registry-over-pirate-bay-domain-170620/

Domains have become an integral part of the piracy wars and no one knows this better than The Pirate Bay.

The site has burned through numerous domains over the years, with copyright holders and authorities successfully pressurizing registries to destabilize the site.

The latest news on this front comes from the Central American country of Costa Rica, where the local domain registry is having problems with the United States government.

The drama is detailed in a letter to ICANN penned by Dr. Pedro León Azofeifa, President of the Costa Rican Academy of Science, which operates NIC Costa Rica, the registry in charge of local .CR domain names.

Azofeifa’s letter is addressed to ICANN board member Thomas Schneider and pulls no punches. It claims that for the past two years the United States Embassy in Costa Rica has been pressuring NIC Costa Rica to take action against a particular domain.

“Since 2015, the United Estates Embassy in Costa Rica, who represents the interests of the United States Department of Commerce, has frequently contacted our organization regarding the domain name thepiratebay.cr,” the letter to ICANN reads.

“These interactions with the United States Embassy have escalated with time and include great pressure since 2016 that is exemplified by several phone calls, emails, and meetings urging our ccTLD to take down the domain, even though this would go against our domain name policies.”

The letter states that following pressure from the US, the Costa Rican Ministry of Commerce carried out an investigation which concluded that not taking down the domain was in line with best practices that only require suspensions following a local court order. That didn’t satisfy the United States, though. Far from it.

“The representative of the United States Embassy, Mr. Kevin Ludeke, Economic Specialist, who claims to represent the interests of the US Department of Commerce, has mentioned threats to close our registry, with repeated harassment regarding our practices and operation policies,” the letter to ICANN reads.

Ludeke is indeed listed on the US Embassy site for Costa Rica. He’s also referenced in a 2008 diplomatic cable leaked previously by Wikileaks. Contacted via email, Ludeke did not immediately respond to TorrentFreak’s request for comment.

Extract from the letter to ICANN

Surprisingly, Azofeifa says the US representative then got personal, making negative comments towards his Executive Director, “based on no clear evidence or statistical data to support his claims, as a way to pressure our organization to take down the domain name without following our current policies.”

Citing the Tunis Agenda for the Information Society of 2005, Azofeifa asserts that “policy authority for Internet-related public policy issues is the sovereign right of the States,” which in Costa Rica’s case means that there must be “a final judgment from the Courts of Justice of the Republic of Costa Rica” before the registry will suspend a domain.

But it seems legal action was not the preferred route of the US Embassy. Demanding that NIC Costa Rica take unilateral action, Mr. Ludeke continued with “pressure and harassment to take down the domain name without its proper process and local court order.”

Azofeifa’s letter to ICANN, which is cc’d to Stafford Fitzgerald Haney, United States Ambassador to Costa Rica and various people in the Costa Rican Ministry of Commerce, concludes with a request for suggestions on how to deal with the matter.

While the response should prove very interesting, none of the parties involved appear to have noticed that ThePirateBay.cr isn’t officially connected to The Pirate Bay.

The domain and associated site appeared in the wake of the December 2014 shutdown of The Pirate Bay, claiming to be the real deal and even going as far as making fake accounts in the names of famous ‘pirate’ groups including ettv and YIFY.

Today it acts as an unofficial and unaffiliated reverse proxy to The Pirate Bay while presenting the site’s content as its own. It’s also affiliated with a fake KickassTorrents site, Kickass.cd, which to this day claims that it’s a reincarnation of the defunct torrent giant.

But perhaps the most glaring issue in this worrying case is the apparent willingness of the United States to call out Costa Rica for not doing anything about a .CR domain run by third parties, when the real Pirate Bay’s .org domain is under United States’ jurisdiction.

Registered by the Public Interest Registry in Reston, Virginia, ThePirateBay.org is the famous site’s main domain. TorrentFreak asked PIR if anyone from the US government had ever requested action against the domain but at the time of publication, we had received no response.


The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would ordinarily have been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits and bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.


Disney Asks Google to Remove Its Own (Invisible) Takedown Notices

Post Syndicated from Ernesto original https://torrentfreak.com/disney-asks-google-to-remove-its-own-invisible-takedown-notices-170618/

Pretty much every major copyright holder regularly reports infringing links to Google, hoping to decrease the visibility of pirated files.

Over the past several years, the search engine has had to remove more than two billion links and most of these requests have been neatly archived in the Lumen database.

Walt Disney Company is no stranger to these takedown efforts. The company has sent over 20 million takedown requests to the search engine, covering a wide variety of content. All of these notices are listed in Google’s transparency report, and copies are available at Lumen.

While this is nothing new, we recently noticed that Disney doesn’t stop at reporting direct links to traditional “pirate” sites. In fact, the company recently targeted one of its own takedown notices in the Lumen database, which was sent on behalf of its subsidiary Lucasfilm.

In the notice below, the media giant wants Google to remove a link to a copy of its own takedown notice, claiming that it infringes the copyright of the blockbuster “Star Wars: The Force Awakens.”

Disney vs. Disney?

It appears this is not the first time that a company has engaged in this type of meta-censorship.

However, it’s all the more relevant this week after a German court decided that Google can be ordered to stop linking to its own takedown notices. While that suggests that Disney was right to ask for its own link to be removed, the reality is a bit more complex.

When it was still known as ChillingEffects, the Lumen Database instructed Google not to index any takedown notices. And indeed, searching for copies of takedown notices yields no results. This means that Disney asked Google to remove a search result that doesn’t exist.
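(For context, the usual mechanism for such an opt-out is the robots exclusion standard or a per-page noindex directive. An illustrative example, not Lumen’s actual configuration:

    # robots.txt: ask crawlers to skip the notice archive
    User-agent: *
    Disallow: /notices/

or, in a page’s HTML, <meta name="robots" content="noindex">.)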

Perhaps things are different in a galaxy far, far away, but Disney’s takedown notice is not only self-censorship but also entirely pointless.

Disney might be better off focusing on content that Google has actually indexed, instead of going after imaginary threats. Or, put in the words of Gold Five: “Stay on target,” Disney.


Mira, tiny robot of joyful delight

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/mira-robot-alonso-martinez/

The staff of Pi Towers are currently melting into puddles while making ‘Aaaawwwwwww’ noises as Mira, the adorable little Pi-controlled robot made by Pixar 3D artist Alonso Martinez, steals their hearts.

Mira the robot playing peek-a-boo

If you want to get updates on Mira’s progress, sign up for the mailing list! http://eepurl.com/bteigD Mira is a desk companion that makes your life better one smile at a time. This project explores human robot interactivity and emotional intelligence. Currently Mira uses face tracking to interact with the users and loves playing the game “peek-a-boo”.

Introducing Mira

Honestly, I can’t type words – I am but a puddle! If I could type at all, I would only produce a stream of affectionate fragments. Imagine walking into a room full of kittens. What you would sound like is what I’d type.

No! I can do this. I’m a professional. I write for a living! I can…

SHE BLINKS OHMYAAAARGH!!!

Mira Alonso Martinez Raspberry Pi

Weebl & Bob meets South Park’s Ike Broflovski in an adorable 3D-printed bundle of ‘Aaawwwww’

Introducing Mira (I promise I can do this)

Right. I’ve had a nap and a drink. I’ve composed myself. I am up for this challenge. As long as I don’t look directly at her, I’ll be fine!

Here I go.

As one of the many über-talented 3D artists at Pixar, Alonso Martinez knows a thing or two about bringing adorable-looking characters to life on screen. However, his work left him wondering:

In movies you see really amazing things happening but you actually can’t interact with them – what would it be like if you could interact with characters?

So with the help of his friends Aaron Nathan and Vijay Sundaram, Alonso set out to bring the concept of animation to the physical world by building a “character” that reacts to her environment. His experiments with robotics started with Gertie, a ball-like robot reminiscent of his time spent animating bouncing balls when he was learning his trade. From there, he moved on to Mira.

Mira Alonso Martinez

Many, many of the views of this Tested YouTube video have come from me. So many.

Mira swivels to follow a person’s face, plays games such as peekaboo, shows surprise when you finger-shoot her, and giggles when you give her a kiss.

Mira’s inner workings

To get Mira to turn her head in three dimensions, Alonso took inspiration from the Microsoft Sidewinder Pro joystick he had as a kid. He purchased one on eBay, took it apart to understand how it works, and replicated its mechanism for Mira’s Raspberry Pi-powered innards.

Mira Alonso Martinez

Alonso used the smallest components he could find so that they would fit inside Mira’s tiny body.

Mira’s axis of 3D-printed parts moves via tiny Power HD DSM44 servos, while a camera and OpenCV handle face-tracking, and a single NeoPixel provides a range of colours to indicate her emotions. As for the blinking eyes? Two OLED screens boasting acrylic domes fit within the few millimeters between all the other moving parts.
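The post doesn’t include source code, but to make the face-tracking idea concrete, here is a minimal sketch in Python: OpenCV’s bundled Haar cascade finds a face, and a pan servo nudges the head towards it. The camera index, GPIO pin, and gpiozero servo driver are all assumptions, and the NeoPixel and OLED eye handling is omitted; Alonso’s real software is certainly more sophisticated.

    import cv2
    from gpiozero import AngularServo

    # OpenCV's bundled Haar cascade for frontal faces.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    pan = AngularServo(17, min_angle=-90, max_angle=90)  # pan servo on GPIO 17 (assumed)
    cam = cv2.VideoCapture(0)  # first attached camera

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]
            # Horizontal offset of the face from the frame centre, -0.5..0.5.
            offset = (x + w / 2) / frame.shape[1] - 0.5
            # Nudge the head towards the face, clamped to the servo's range.
            pan.angle = max(-90, min(90, pan.angle - offset * 20))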

More on Mira, including her history and how she works, can be found in this wonderful video released by Tested this week.

Pixar Artist’s 3D-Printed Animated Robots!

We’re gushing with grins and delight at the sight of these adorable animated robots created by artist Alonso Martinez. Sean chats with Alonso to learn how he designed and engineered his family of robots, using processes like 3D printing, mold-making, and silicone casting. They’re amazing!

You can also sign up for Alonso’s newsletter here to stay up-to-date about this little robot. Hopefully one of these newsletters will explain how to buy or build your own Mira, as I for one am desperate to see her adorable little face on my desk every day for the rest of my life.


“Top ISPs” Are Discussing Fines & Browsing Hijacking For Pirates

Post Syndicated from Andy original https://torrentfreak.com/top-isps-are-discussing-fines-browsing-hijacking-for-pirates-170614/

For the past several years, anti-piracy outfit Rightscorp has been moderately successful in forcing smaller fringe ISPs in the United States to collaborate in a low-tier copyright trolling operation.

The way it works is relatively simple. Rightscorp monitors BitTorrent networks, captures the IP addresses of alleged infringers, and sends DMCA notices to their ISPs. Rightscorp expects ISPs to forward these to their customers along with an attached cash settlement demand.
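To make the monitoring step concrete, here is a rough sketch of its first stage: a single HTTP tracker announce, per the original BitTorrent spec (BEP 3), which returns the swarm’s compact peer list. The tracker URL and info-hash below are placeholders, and production monitoring systems are considerably more involved.

    import socket
    import struct
    import urllib.parse
    import urllib.request

    TRACKER = "http://tracker.example.org:6969/announce"  # hypothetical tracker
    INFO_HASH = bytes.fromhex("aa" * 20)                  # placeholder 20-byte info-hash

    params = {
        "info_hash": INFO_HASH,
        "peer_id": b"-XX0001-000000000000",  # arbitrary 20-byte peer ID
        "port": 6881, "uploaded": 0, "downloaded": 0, "left": 0, "compact": 1,
    }
    resp = urllib.request.urlopen(
        TRACKER + "?" + urllib.parse.urlencode(params)).read()

    # Pull the compact peer list out of the bencoded reply, b"5:peers<len>:<data>",
    # assuming the tracker honoured compact=1. Each entry is six bytes: a
    # four-byte IPv4 address followed by a big-endian two-byte port.
    i = resp.index(b"5:peers") + len(b"5:peers")
    j = resp.index(b":", i)
    peers = resp[j + 1:j + 1 + int(resp[i:j])]
    for k in range(0, len(peers), 6):
        ip = socket.inet_ntoa(peers[k:k + 4])
        (port,) = struct.unpack("!H", peers[k + 4:k + 6])
        print(ip, port)

The harvested IP addresses are then matched to ISPs, and the settlement demands travel with the DMCA notices described above.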

These demands are usually for small amounts ($20 or $30) but most of the larger ISPs don’t forward them to their customers. This deprives Rightscorp (and clients such as BMG) of the opportunity to generate revenue, a situation that the anti-piracy outfit is desperate to remedy.

One of the problems is that when people who receive Rightscorp ‘fines’ refuse to pay them, the company does nothing, leading to a lack of respect for the company. With this in mind, Rightscorp has been trying to get ISPs involved in forcing people to pay up.

In 2014, Rightscorp said that its goal was to have ISPs place a redirect page in front of ‘pirate’ subscribers until they pay a cash fine.

“[What] we really want to do is move away from termination and move to what’s called a hard redirect, like, when you go into a hotel and you have to put your room number in order to get past the browser and get on to browsing the web,” the company said.

In the three years since that statement, the company has raised the issue again but nothing concrete has come to fruition. However, there are now signs of fresh movement which could be significant, if Rightscorp is to be believed.

“An ISP Good Corporate Citizenship Program is what we feel will drive revenue associated with our primary revenue model. This program is an attempt to garner the attention and ultimately inspire a behavior shift in any ISP that elects to embrace our suggestions to be DMCA-compliant,” the company told shareholders yesterday.

“In this program, we ask for the ISPs to forward our notices referencing the infringement and the settlement offer. We ask that ISPs take action against repeat infringers through suspensions or a redirect screen. A redirect screen will guide the infringer to our payment screen while limiting all but essential internet access.”
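To make the mechanics concrete, a “hard redirect” is essentially a captive portal: every web request is answered with an HTTP redirect to a designated page. A minimal sketch of the idea in Python follows; a real ISP would implement this at the network layer with DNS or a transparent proxy, and the URL is a placeholder.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    PAYMENT_PAGE = "https://settlement.example.net/pay"  # placeholder URL

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Answer every request with a redirect to the settlement page.
            self.send_response(302)
            self.send_header("Location", PAYMENT_PAGE)
            self.end_headers()

    HTTPServer(("", 8080), RedirectHandler).serve_forever()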

At first glance, this sounds like a straightforward replay of Rightscorp’s wishlist of three years ago, but it’s worth noting that the legal landscape has shifted fairly significantly since then.

Perhaps the most important development is the BMG v Cox Communications case, in which the ISP was sued for not doing enough to tackle repeat infringers. In that case (for which Rightscorp provided the evidence), Cox was held liable for third-party infringement and ordered to pay damages of $25 million alongside $8 million in legal fees.

All along, the suggestion has been that if Cox had taken action against infringing subscribers (primarily by passing on Rightscorp ‘fines’ and/or disconnecting repeat infringers) the ISP wouldn’t have ended up in court. Instead, it chose to sweat it out to a highly unfavorable decision.

The BMG decision is a potentially powerful ruling for Rightscorp, particularly when it comes to seeking ‘cooperation’ from other ISPs who might not want a similar legal battle on their hands. But are other ISPs interested in getting involved?

According to Rightscorp, preliminary negotiations are already underway with some big players.

“We are now beginning to have some initial and very thorough discussions with a handful of the top ISPs to create and implement such a program that others can follow. We have every reason to believe that the litigations referred to above are directly responsible for the beginning of a change in thinking of ISPs,” the company says.

Rightscorp didn’t identify these “top ISPs” but by implication, these could include companies such as Comcast, AT&T, Time Warner Cable, CenturyLink, Charter, Verizon, and/or even Cox Communications.

With cooperation from these companies, Rightscorp predicts that a “cultural shift” could be brought about which would significantly increase the number of subscribers paying cash demands. It’s also clear that while it may be seeking cooperation from ISPs, a gun is being held under the table too, in case any feel hesitant about putting up a redirect screen.

“This is the preferred approach that we advocate for any willing ISP as an alternative to becoming a defendant in a litigation and facing potential liability and significantly larger statutory damages,” Rightscorp says.

A recent development suggests the company may not be bluffing. Back in April, the RIAA sued ISP Grande Communications for failing to disconnect persistent pirates. Yet again, Rightscorp is deeply involved in the case, having provided the infringement data to the labels for a considerable sum.

Whether the “top ISPs” in the United States will cave in to the pressure and implied threats remains to be seen, but there’s no doubting the rising confidence at Rightscorp.

“We have demonstrated the tenacity to support two major litigation efforts initiated by two of our clients, which we feel will set a precedent for the entire anti-piracy industry led by Rightscorp. If you can predict the law, you can set the competition,” the company concludes.

Meanwhile, Rightscorp appears to continue its use of disingenuous tactics to extract money from alleged file-sharers.

In the wake of several similar reports, this week a Reddit user reported that Rightscorp asked him to pay a single $20 fine for pirating a song. After he paid up, the company allegedly called him back the next day and demanded payment for a further 200 notices.


Healthcare Industry Cybersecurity Report

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/healthcare_indu.html

New US government report: “Report on Improving Cybersecurity in the Health Care Industry.” It’s pretty scathing, but nothing in it will surprise regular readers of this blog.

It’s worth reading the executive summary, and then skimming the recommendations. Recommendations are in six areas.

The Task Force identified six high-level imperatives by which to organize its recommendations and action items. The imperatives are:

  1. Define and streamline leadership, governance, and expectations for health care industry cybersecurity.
  2. Increase the security and resilience of medical devices and health IT.
  3. Develop the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities.
  4. Increase health care industry readiness through improved cybersecurity awareness and education.
  5. Identify mechanisms to protect research and development efforts and intellectual property from attacks or exposure.
  6. Improve information sharing of industry threats, weaknesses, and mitigations.

News article.

Slashdot thread.

NSA Document Outlining Russian Attempts to Hack Voter Rolls

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/nsa_document_ou.html

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational — and there’s no evidence that they had any actual effect — they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia’s military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company’s network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA’s analysis ends. We don’t know whether those 122 targeted attacks were successful, or what their effects were if so. We don’t know whether other election software companies besides VR Systems were targeted, or what the GRU’s overall plan was — if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect — anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC — one of the states that VR Systems supports — but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don’t know what happened next, if anything. VR Systems isn’t commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn’t much of a smoking gun, it’s yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI’s affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner’s NSA account or her personal account, but in either case, it’s incredibly sloppy tradecraft.

With President Trump’s election, the issue of Russian interference in last year’s campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It’s interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It’s easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there’s a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don’t know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won’t be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don’t have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and — by extension — our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what’s historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

Online Platforms Should Collaborate to Ban Piracy and Terrorism, Report Suggests

Post Syndicated from Andy original https://torrentfreak.com/online-platforms-collaborate-ban-piracy-terrorism-report-suggests-170608/

With deep ties to the content industries, the Digital Citizens Alliance periodically produces reports on Internet piracy. It has published reports on cyberlockers and tried to blame Cloudflare for the spread of malware, for example.

One of the key themes pursued by DCA is that Internet piracy is inextricably linked to a whole bunch of other online evils and that tackling the former could deliver a much-needed body blow to the latter.

Its new report, titled ‘Trouble in Our Digital Midst’, takes this notion and runs with it, bundling piracy with everything from fake news to hacking, to malware and brand protection, to the sextortion of “young girls and boys” via their computer cameras.

The premise of the report is that cybercrime as a whole is undermining America’s trust in the Internet, noting that 64% of US citizens say that their trust in digital platforms has dropped in the last year. Given the topics under the spotlight, it doesn’t take long to see where this is going – Internet platforms like Google, Facebook and YouTube must tackle the problem.

“When asked, ‘In your opinion, are digital platforms doing enough to keep the Internet safe and trustworthy, or do they need to do more?’ a staggering 75 percent responded that they need to do more to keep the Internet safe,” the report notes.

It’s abundantly clear that the report is mostly about piracy but a lot of effort has been expended to ensure that people support its general call for the Internet to be cleaned up. By drawing attention to things that even most pirates might find offensive, it’s easy to find more people in agreement.

“Nearly three-quarters of respondents see the pairing of brand name advertising with offensive online content – like ISIS/terrorism recruiting videos – as a threat to the continued trust and integrity of the Internet,” the report notes.

Of course, this is an incredibly sensitive topic. When big brand ads turned up next to terrorist recruiting videos on YouTube, there was an almighty stink, and rightly so. However, at every turn, the DCA report manages to weave the issue of piracy into the equation, noting that the problem includes the “$200 million in advertising that shows up on illegal content theft websites often unbeknownst to the brands.”

The overriding theme is that platforms like Google, Facebook, and YouTube should be able to tackle all of these problems in the same way. Filtering out a terrorist video is the same as removing a pirate movie. And making sure that ads for big brands don’t appear alongside terrorist videos will be just as easy as starving pirates of revenue, the suggestion goes.

But if terrorism doesn’t grind your gears, what about fake news?

“64 percent of Americans say that the Fake News issue has made them less likely to trust the Internet as a source of information,” the report notes.

At this juncture, Facebook gets a gentle pat on the back for dealing with fake news and employing 3,000 people to monitor for violent videos being posted to the network. This shows that the company “takes seriously” the potential harm bad actors pose to Internet safety. But in keeping with the theme running throughout the report, it’s clear DCA are carefully easing in the thin end of the wedge.

“We are at only the beginning of thinking through other kinds of illicit and illegal activity happening on digital platforms right now that we must gain or re-gain control over,” DCA writes.

Quite. In the very next sentence, the group goes on to warn about the sale of drugs and stolen credit cards, adding that the sale of illicit streaming devices (modified Kodi boxes etc) is actually an “insidious yet effective delivery mechanism to infect computers with malware such as Remote Access Trojans.”

Both Amazon and Facebook receive praise in the report for their recent banning (1,2) of augmented Kodi devices but their actions are actually framed as the companies protecting their own reputations, rather than the interests of the media groups that have been putting them under pressure.

“And though this issue underscores the challenges faced by digital platforms – not all of which act with the same level of responsibility – it also highlights the fact digital platforms can and will step up when their own brands are at stake,” the report reads.

But pirate content and Remote Access Trojans through Kodi boxes are only the beginning. Pirate sites are playing a huge part as well, DCA claims, with one in three “content theft websites” exposing people to identity theft, ransomware, and sextortion via “the computer cameras of young girls and boys.”

Worse still, if that were possible, the lack of policing by online platforms means that people are able to “showcase live sexual assaults, murders, and other illegal conduct.”

DCA says that with all this in mind, Americans are looking for online digital platforms to help them. The group claims that citizens need proactive protection from these ills and want companies like Facebook to take similar steps to those taken when warning consumers about fake news and violent content.

So what can be done to stop this tsunami of illegality? According to DCA, platforms like Google, Facebook, YouTube, and Twitter need to up their game and tackle the problem together.

“While digital platforms collaborate on policy and technical issues, there is no evidence that they are sharing information about the bad actors themselves. That enables criminals and bad actors to move seamlessly from platform to platform,” DCA writes.

“There are numerous examples of industry working together to identify and share information about exploitive behavior. For example, casinos share information about card sharks and cheats, and for decades the retail industry has shared information about fraudulent credit cards. A similar model would enable digital platforms and law enforcement to more quickly identify and combat those seeking to leverage the platforms to harm consumers.”

How this kind of collaboration could take place in the real world is open to interpretation but the DCA has a few suggestions of its own. Again, it doesn’t shy away from pulling people on side with something extremely offensive (in this case child pornography) in order to push what is clearly an underlying anti-piracy agenda.

“With a little help from engineers, digital platforms could create fingerprints of unlawful conduct that is shared across platforms to proactively block such conduct, as is done in a limited capacity with child pornography,” DCA explains.

“If these and other newly developed measures were adopted, digital platforms would have the information to enable them to make decisions whether to de-list or demote websites offering illicit goods and services, and the ability to stop the spread of illegal behavior that victimizes its users.”
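To make the fingerprinting idea concrete, here is a minimal sketch of a perceptual “average hash” in Python: the kind of technique that lets platforms compare content without sharing the files themselves. Real systems such as PhotoDNA are far more robust; this is only an illustration of the concept, not anything the report specifies, and it requires Pillow.

    from PIL import Image

    def average_hash(path, size=8):
        # Shrink to size x size, grey-scale, then set one bit per pixel
        # depending on whether it is brighter than the mean.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return int("".join("1" if p > mean else "0" for p in pixels), 2)

    def distance(h1, h2):
        # Hamming distance; a small value suggests the same underlying image.
        return bin(h1 ^ h2).count("1")

Because the hash survives re-encoding and resizing, two platforms could compare hashes of uploads against a shared blocklist without ever exchanging the files.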

The careful framing of the DCA report means that there’s something for everyone. If you don’t agree with them on tackling piracy, then their malware, fake news, or child exploitation angles might do the trick. It’s quite a clever strategy but one that the likes of Google, Facebook, and YouTube will recognize immediately.

And they need to – because apparently, it’s their job to sort all of this out. Good luck with that.

The full report can be found here (pdf)


EtherApe – Graphical Network Monitor

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/DxSK15EgI5k/

EtherApe is a graphical network monitor for Unix modelled after etherman. Featuring link layer, IP and TCP modes, it displays network activity graphically: hosts and links change in size with traffic, and protocols are colour-coded. It supports Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP and WLAN devices, plus several encapsulation formats. It can…
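For anyone wanting to try it, EtherApe typically needs root privileges to capture packets. On common builds it is launched with, for example, sudo etherape -i eth0 -m ip for a live IP-mode view of interface eth0; the interface name and exact flags vary between versions and systems, so check man etherape.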

Read the full post at darknet.org.uk

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething


A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch and build a payload with an onboard Raspberry Pi and Camera Module, providing them with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi


I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude


You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz, to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?


And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skills.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.


We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube


  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button, and a couple of LEDs. When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. Check out our Twitter feed (Raspberry_Pi) for examples! Next step is to fit it inside a better camera body. #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi


The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

How NAGRA Fights Kodi and IPTV Piracy

Post Syndicated from Andy original https://torrentfreak.com/how-nagra-fights-kodi-and-iptv-piracy-170603/

Nagravision or NAGRA is one of the best known companies operating in the digital cable and satellite television content security space. Due to successes spanning several decades, the company has often proven unpopular with pirates.

In particular, Nagravision encryption systems have regularly been a hot topic for discussion on cable and satellite hacking forums, frustrating those looking to receive pay TV services without paying the high prices associated with them. However, the rise of the Internet is now presenting new challenges.

NAGRA still protects traditional cable and satellite pay TV services in 2017; Virgin Media in the UK is a long-standing customer, for example. But the rise of Internet streaming means that pirate content can now be delivered to the home with ease, completely bypassing the entire pay TV provider infrastructure. And, by extension, NAGRA’s encryption.

This means that NAGRA has been required to spread its wings.

As reported in April, NAGRA is establishing a lab to monitor and detect unauthorized consumption of content via set-top boxes, websites and other streaming platforms. That covers the now omnipresent Kodi phenomenon, alongside premium illicit IPTV services. TorrentFreak caught up with the company this week to find out more.

“NAGRA has an automated monitoring platform that scans all live channels and VOD assets available on Kodi,” NAGRA’s Ivan Schnider informs TF.

“The service we offer to our customers automatically finds illegal distribution of their content on Kodi and removes infringing streams.”

In the first instance, NAGRA sends standard takedown notices to hosting services to terminate illicit streams. The company says that while some companies are very cooperative, others are less so. When meeting resistance, NAGRA switches to more coercive methods, described here by Christopher Schouten, NAGRA’s Senior Director of Product Marketing.

“Takedowns are generally sent to streaming platforms and hosting servers. When those don’t work, Advanced Takedowns allow us to use both technical and legal means to get results,” Schouten says.

“Numerous stories in recent days show how, for instance, popular Kodi plug-ins have been removed by their authors because of the mere threat of legal actions like this.”

At the center of operations is NAGRA’s Piracy Intelligence Portal, which offers customers a real-time view of worldwide online piracy trends, information on the infrastructure behind illegal services, as well as statistics and status of takedown requests.

“We measure takedown compliance very carefully using our Piracy Intelligence Portal, so we can usually predict the results we will get. We work on a daily basis to improve relationships and interfaces with those who are less compliant,” Schouten says.

The Piracy Intelligence Portal

While persuasion is probably the best solution, some hosts inevitably refuse to cooperate. However, NAGRA also offers the NexGuard system, which is able to determine the original source of the content.

“Using forensic watermarking to trace the source of the leak, we will be able to completely shut down the ‘leak’ at the source, independently and within minutes of detection,” Schouten says.

Whatever route is taken, NAGRA says that the aim is to take down streams as quickly as possible, something which hopefully undermines confidence in pirate services and encourages users to re-enter the legal market. Interestingly, the company also says it uses “technical means” to degrade pirate services to the point that consumers lose faith in them.

But while augmented Kodi setups and illicit IPTV are certainly considered a major threat in 2017, they are not the only problem faced by content companies.

While the Apple platform is quite tight, the open nature of Android means that a rising number of apps can be sideloaded from the web. These allow pirate content to be consumed quickly and conveniently within a glossy interface.

Apps like Showbox, MovieHD and Terrarium TV have the movie and TV show sector wrapped up, while the popular Mobdro achieves the same with live TV, including premium sports. Schnider says NAGRA can handle apps like these and other emerging threats in a variety of ways.

“In addition to Kodi-related anti-piracy activities, NAGRA offers a service that automatically finds illegal distribution of content on Android applications, fully loaded STBs, M3U playlist and other platforms that provide plug-and-play solutions for the big TV screen; this service also includes the removal of infringing streams,” he explains.

M3U playlist piracy doesn’t get a lot of press. An M3U file is a text file that specifies locations where content (such as streams) can be found online.

In its basic ‘free’ form, it’s simply a case of finding an M3U file on an indexing site or blog and loading it into VLC. It’s not as flashy as any of the above apps, and unless one knows where to get the free M3Us quickly, many channels may already be offline. Premium M3U files are widely available, however, and tend to be pretty reliable.
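
For illustration, a minimal playlist is just a header line followed by entries, each an optional #EXTINF metadata line and a stream URL (the URLs below are placeholders, not real services):

#EXTM3U
#EXTINF:-1,Example Channel One
http://example.com/streams/channel1.m3u8
#EXTINF:-1,Example Channel Two
http://example.com/streams/channel2.m3u8

Opening a file like this in VLC simply queues each URL for playback, which is why the format needs no special app at all.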

But while attacking sources of infringing content is clearly a big part of NAGRA’s mission, the company also deploys softer strategies for dealing with pirates.

“Beyond disrupting pirate streams, raising awareness amongst users that these services are illegal and helping service providers deliver competing legitimate services, are also key areas in the fight against premium IPTV piracy where NAGRA can help,” Schnider says.

“Converting users of such services to legitimate paying subscribers represents a significant opportunity for content owners and distributors.”

For this to succeed, Schouten says there needs to be an understanding of the different motivators that lead an individual to commit piracy.

“Is it price? Is it availability? Is it functionality?” he asks.

Interestingly, he also reveals that lots of people are spending large sums of money on IPTV services they believe are legal but are not. Rather than the high prices putting them off, they actually add to their air of legitimacy.

“These consumers can relatively easily be converted into paying subscribers if they can be convinced that pay-TV services offer superior quality, reliability, and convenience because let’s face it, most IPTV services are still a little dodgy to use,” he says.

“Education is also important; done through working with service providers to inform consumers through social media platforms of the risks linked to the use of illegitimate streaming devices / IPTV devices, e.g. purchasing boxes that may no longer work after a short period of time.”

And so the battle over content continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building High-Throughput Genomics Batch Workflows on AWS: Workflow Layer (Part 4 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the fourth in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackled the batch layer and built a scalable, elastic, and easily maintainable batch engine using AWS Batch. This solution took care of dynamically scaling your compute resources in response to the number of runnable jobs in your job queue length as well as managed job placement.

In Part 4, you build out the workflow layer of your solution using AWS Step Functions and AWS Lambda. You then run an end-to-end genomic analysis, specifically exome secondary analysis, many times over, at a cost of less than $1 per exome.

Step Functions makes it easy to coordinate the components of your applications using visual workflows. Building applications from individual components that each perform a single function lets you scale and change your workflow quickly. You can use the graphical console to arrange and visualize the components of your application as a series of steps, which simplifies building and running multi-step applications. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

An added benefit of using Step Functions to define your workflows is that the state machines you create are immutable. While you can delete a state machine, you cannot alter it after it is created. For regulated workloads where auditing is important, you can be assured that state machines you used in production cannot be altered.

In this blog post, you will create a Lambda state machine to orchestrate your batch workflow. For more information on how to create a basic state machine, please see this Step Functions tutorial.

All code related to this blog series can be found in the associated GitHub repository here.

Build a state machine building block

If you would like to skip the following steps, we have provided an AWS CloudFormation template that can deploy your Step Functions state machine. You can use this in combination with the setup you did in Part 3 to quickly set up the environment in which to run your analysis.

The state machine is composed of smaller state machines that submit a job to AWS Batch, and then poll and check its execution.

The steps in this building block state machine are as follows:

  1. A job is submitted.
    Each analytical module/job has its own Lambda function for submission and calls the batchSubmitJob Lambda function that you built in the previous blog post. You will build these specialized Lambda functions in the following section.
  2. The state machine queries the AWS Batch API for the job status.
    This is also a Lambda function.
  3. The job status is checked to see if the job has completed.
    If the job status equals SUCCEEDED, proceed to log the final job status. If the job status equals FAILED, end the execution of the state machine. In all other cases, wait 30 seconds and go back to Step 2.

Here is the JSON representing this state machine.

{
  "Comment": "A simple example that submits a Job to AWS Batch",
  "StartAt": "SubmitJob",
  "States": {
    "SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>::function:batchSubmitJob",
      "Next": "GetJobStatus"
    },
    "GetJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "Next": "CheckJobStatus",
      "InputPath": "$",
      "ResultPath": "$.status"
    },
    "CheckJobStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.status",
          "StringEquals": "FAILED",
          "End": true
        },
        {
          "Variable": "$.status",
          "StringEquals": "SUCCEEDED",
          "Next": "GetFinalJobStatus"
        }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "GetJobStatus"
    },
    "GetFinalJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "End": true
    }
  }
}

Building the Lambda functions for the state machine

You need two basic Lambda functions for this state machine. The first one submits a job to AWS Batch and the second checks the status of the AWS Batch job that was submitted.

In AWS Step Functions, you specify an input as JSON that is read into your state machine. Each state receives the aggregate of the steps immediately preceding it, and you can specify which components a state passes on to its children. Because you are using Lambda functions to execute tasks, one of the easiest routes to take is to modify the input JSON, represented as a Python dictionary, within the Lambda function and return the entire dictionary back for the next state to consume.

Building the batchSubmitIsaacJob Lambda function

For Step 1 above, you need a Lambda function for each of the steps in your analysis workflow. As you created a generic Lambda function in the previous post to submit a batch job (batchSubmitJob), you can use that function as the basis for the specialized functions you’ll include in this state machine. Here is such a Lambda function for the Isaac aligner.

from __future__ import print_function

import boto3
import json
import traceback

lambda_client = boto3.client('lambda')



def lambda_handler(event, context):
    try:
        # Generate output path
        bam_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'bam/'])

        depends_on = event['dependsOn'] if 'dependsOn' in event else []

        # Generate run command
        command = [
            '--bam_s3_folder_path', bam_s3_path,
            '--fastq1_s3_path', event['fastq1S3Path'],
            '--fastq2_s3_path', event['fastq2S3Path'],
            '--reference_s3_path', event['isaac']['referenceS3Path'],
            '--working_dir', event['workingDir']
        ]

        if 'cmdArgs' in event['isaac']:
            command.extend(['--cmd_args', event['isaac']['cmdArgs']])
        if 'memory' in event['isaac']:
            command.extend(['--memory', event['isaac']['memory']])

        # Submit Payload
        response = lambda_client.invoke(
            FunctionName='batchSubmitJob',
            InvocationType='RequestResponse',
            LogType='Tail',
            Payload=json.dumps(dict(
                dependsOn=depends_on,
                containerOverrides={
                    'command': command,
                },
                jobDefinition=event['isaac']['jobDefinition'],
                jobName='-'.join(['isaac', event['sampleId']]),
                jobQueue=event['isaac']['jobQueue']
            )))

        response_payload = response['Payload'].read()

        # Update event
        event['bamS3Path'] = bam_s3_path
        event['jobId'] = json.loads(response_payload)['jobId']
        
        return event
    except Exception as e:
        traceback.print_exc()
        raise e

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitIsaacJob and paste in the above code. Use the LambdaBatchExecutionRole that you created in the previous post. For more information, see Step 2.1: Create a Hello World Lambda Function.

This Lambda function reads in the inputs passed to the state machine it is part of, formats the data for the batchSubmitJob Lambda function, invokes that Lambda function, and then modifies the event dictionary to pass onto the subsequent states. You can repeat this for each of the other tools, which can be found in the tools/<tool>/lambda/lambda_function.py scripts in the GitHub repo.
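
As a sketch of that repetition, here is what a submission function for Strelka might look like. The command-line flags and the vcfS3Path output field are illustrative assumptions; the authoritative versions live in the per-tool Lambda scripts in the GitHub repo:

from __future__ import print_function

import boto3
import json

lambda_client = boto3.client('lambda')


def lambda_handler(event, context):
    # Generate the output path for variant calls
    vcf_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'vcf/'])

    # Illustrative flags; the real ones are defined per tool in the repo
    command = [
        '--bam_s3_path', event['bamS3Path'],
        '--vcf_s3_folder_path', vcf_s3_path,
        '--working_dir', event['workingDir']
    ]
    if 'cmdArgs' in event['strelka']:
        command.extend(['--cmd_args', event['strelka']['cmdArgs']])

    # Delegate the actual SubmitJob call to the generic batchSubmitJob function
    response = lambda_client.invoke(
        FunctionName='batchSubmitJob',
        InvocationType='RequestResponse',
        Payload=json.dumps(dict(
            dependsOn=event.get('dependsOn', []),
            containerOverrides={'command': command},
            jobDefinition=event['strelka']['jobDefinition'],
            jobName='-'.join(['strelka', event['sampleId']]),
            jobQueue=event['strelka']['jobQueue']
        )))

    # Pass the enriched event on to the next state
    event['vcfS3Path'] = vcf_s3_path
    event['jobId'] = json.loads(response['Payload'].read())['jobId']
    return event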

Building the batchGetJobStatus Lambda function

For Step 2 above, the process queries the AWS Batch DescribeJobs API action with jobId to identify the state that the job is in. You can put this into a Lambda function to integrate it with Step Functions.

In the Lambda console, create a new Python 2.7 function with the LambdaBatchExecutionRole IAM role. Name your function batchGetJobStatus and paste in the following code. This is similar to the batch-get-job-python27 Lambda blueprint.

from __future__ import print_function

import boto3
import json

print('Loading function')

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get jobId from the event
    job_id = event['jobId']

    try:
        response = batch_client.describe_jobs(
            jobs=[job_id]
        )
        job_status = response['jobs'][0]['status']
        return job_status
    except Exception as e:
        print(e)
        message = 'Error getting Batch Job status'
        print(message)
        raise Exception(message)

Structuring state machine input

You have structured the state machine input so that general file references are included at the top-level of the JSON object, and any job-specific items are contained within a nested JSON object. At a high level, this is what the input structure looks like:

{
        "general_field_1": "value1",
        "general_field_2": "value2",
        "general_field_3": "value3",
        "job1": {},
        "job2": {},
        "job3": {}
}

Building the full state machine

By chaining these state machine components together, you can quickly build flexible workflows that can process genomes in multiple ways. The development of the larger state machine that defines the entire workflow uses four of the above building blocks. You use the Lambda functions that you built in the previous section. Rename each building block submission to match the tool name.

We have provided a CloudFormation template to deploy your state machine and the associated IAM roles. In the CloudFormation console, select Create Stack, choose your template (deploy_state_machine.yaml), and enter in the ARNs for the Lambda functions you created.

Continue through the rest of the steps and deploy your stack. Be sure to check the box next to "I acknowledge that AWS CloudFormation might create IAM resources."

Once the CloudFormation stack is finished deploying, you should see the following image of your state machine.

In short, you first submit a job for Isaac, which is the aligner you are using for the analysis. Next, you use a Parallel state to split the output from "GetFinalIsaacJobStatus" and send it to both your variant calling step, Strelka, and your QC step, Samtools Stats. These branches run in parallel, and you then annotate the results from your Strelka step with snpEff. A trimmed sketch of that Parallel state follows.
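
The state and function names below are assumptions for the sketch; in the deployed template each branch contains a full submit-and-poll building block rather than a single task:

"RunStrelkaAndSamtoolsStats": {
  "Type": "Parallel",
  "Branches": [
    {
      "StartAt": "SubmitStrelkaJob",
      "States": {
        "SubmitStrelkaJob": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchSubmitStrelkaJob",
          "End": true
        }
      }
    },
    {
      "StartAt": "SubmitSamtoolsStatsJob",
      "States": {
        "SubmitSamtoolsStatsJob": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchSubmitSamtoolsStatsJob",
          "End": true
        }
      }
    }
  ],
  "Next": "SubmitSnpeffJob"
}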

Putting it all together

Now that you have built all of the components for a genomics secondary analysis workflow, test the entire process.

We have provided sequences from an Illumina sequencer that cover a region of the genome known as the exome. Most of the positions in the genome that we have currently associated with disease or human traits reside in this region, which is 1–2% of the entire genome. The workflow that you have built works for both analyzing an exome, as well as an entire genome.

Additionally, we have provided prebuilt reference genomes for Isaac, located at:

s3://aws-batch-genomics-resources/reference/

If you are interested, we have provided a script that sets up all of that data. To execute that script, run the following command on a large EC2 instance:

make reference REGISTRY=<your-ecr-registry>

Indexing and preparing this reference takes many hours on a large-memory EC2 instance. Be careful about the costs involved and note that the data is available through the prebuilt reference genomes.

Starting the execution

In a previous section, you established the structure of the JSON that is fed into your state machine. For ease, we have pre-populated the input JSON for the state machine. You can also find this in the GitHub repo under workflow/test.input.json:

{
  "fastq1S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
  "fastq2S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
  "referenceS3Path": "s3://aws-batch-genomics-resources/reference/hg38.fa",
  "resultsS3Path": "s3://<bucket>/genomic-workflow/results",
  "sampleId": "NA12878_states_1",
  "workingDir": "/scratch",
  "isaac": {
    "jobDefinition": "isaac-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "referenceS3Path": "s3://aws-batch-genomics-resources/reference/isaac/"
  },
  "samtoolsStats": {
    "jobDefinition": "samtools_stats-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv"
  },
  "strelka": {
    "jobDefinition": "strelka-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "cmdArgs": " --exome "
  },
  "snpEff": {
    "jobDefinition": "snpeff-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv",
    "cmdArgs": " -t hg38 "
  }
}

You are now at the stage to run your full genomic analysis. Copy the above to a new text file, change paths and ARNs to the ones that you created previously, and save your JSON input as input.states.json.

In the CLI, execute the following command. You need the ARN of the state machine that you created in the previous post:

aws stepfunctions start-execution --state-machine-arn <your-state-machine-arn> --input file://input.states.json
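
If you prefer to start executions from Python rather than the CLI, a minimal boto3 sketch looks like this (substitute your state machine ARN as above):

import boto3

sfn_client = boto3.client('stepfunctions')

# Read the same input document prepared for the CLI invocation
with open('input.states.json') as f:
    state_machine_input = f.read()

response = sfn_client.start_execution(
    stateMachineArn='<your-state-machine-arn>',
    input=state_machine_input
)

# The execution ARN identifies this specific run in the Step Functions console
print(response['executionArn'])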

Your analysis has now started. By using Spot Instances with AWS Batch, you can quickly scale out your workflows while concurrently optimizing for cost. While this is not guaranteed, most executions of the workflows presented here should cost under $1 for a full analysis.

Monitoring the execution

The output from the above CLI command gives you the ARN that describes the specific execution. Copy that and navigate to the Step Functions console. Select the state machine that you created previously and paste the ARN into the search bar.

The screen shows information about your specific execution. On the left, you see where your execution currently is in the workflow.

In the following screenshot, you can see that your workflow has successfully completed the alignment job and moved onto the subsequent steps, which are variant calling and generating quality information about your sample.

You can also navigate to the AWS Batch console and see that progress of all of your jobs reflected there as well.

Finally, after your workflow has completed successfully, check out the S3 path to which you wrote all of your files. If you run an aws s3 ls --recursive command on the S3 results path specified in the input to your state machine execution, you should see something similar to the following:

2017-05-02 13:46:32 6475144340 genomic-workflow/results/NA12878_run1/bam/sorted.bam
2017-05-02 13:46:34    7552576 genomic-workflow/results/NA12878_run1/bam/sorted.bam.bai
2017-05-02 13:46:32         45 genomic-workflow/results/NA12878_run1/bam/sorted.bam.md5
2017-05-02 13:53:20      68769 genomic-workflow/results/NA12878_run1/stats/bam_stats.dat
2017-05-02 14:05:12        100 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.tsv
2017-05-02 14:05:12        359 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.xml
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz.tbi
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz.tbi
2017-05-02 14:05:12   30783484 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz
2017-05-02 14:05:12    1566596 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz.tbi

Modifications to the workflow

You have now built and run your genomics workflow. While diving deep into modifications to this architecture is beyond the scope of these posts, we wanted to leave you with several suggestions of how you might modify this workflow to satisfy additional business requirements.

  • Job tracking with Amazon DynamoDB
    In many cases, such as if you are offering Genomics-as-a-Service, you might want to track the state of your jobs with DynamoDB to get fine-grained records of how your jobs are running. This way, you can easily identify the cost of individual jobs and workflows that you run (a minimal sketch follows this list).
  • Resuming from failure
    Both AWS Batch and Step Functions natively support job retries and can cover many of the standard cases where a job might be interrupted. There may be cases, however, where your workflow might fail in a way that is unpredictable. In this case, you can use custom error handling with AWS Step Functions to build out a workflow that is even more resilient. Also, you can build in fail states into your state machine to fail at any point, such as if a batch job fails after a certain number of retries.
  • Invoking Step Functions from Amazon API Gateway
    You can use API Gateway to build an API that acts as a "front door" to Step Functions. You can create a POST method that contains the input JSON to feed into the state machine you built. For more information, see the Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway blog post.
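
As a minimal sketch of the DynamoDB idea above (the table name and attributes are assumptions for illustration, not part of the reference architecture), a submission step could record each Batch job as it is created:

import datetime

import boto3

# Hypothetical table with jobId as its partition key
table = boto3.resource('dynamodb').Table('genomics-workflow-jobs')


def record_job(event):
    # Store enough context to attribute costs per sample and per tool later
    table.put_item(Item={
        'jobId': event['jobId'],
        'sampleId': event['sampleId'],
        'submittedAt': datetime.datetime.utcnow().isoformat()
    })
    return event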

Conclusion

While the approach we have demonstrated in this series has been focused on genomics, it is important to note that this can be generalized to nearly any high-throughput batch workload. We hope that you have found the information useful and that it can serve as a jump-start to building your own batch workloads on AWS with native AWS services.

For more information about how AWS can enable your genomics workloads, be sure to check out the AWS Genomics page.

Other posts in this four-part series:

Please leave any questions and comments below.

Building High-Throughput Genomic Batch Workflows on AWS: Batch Layer (Part 3 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomic-batch-workflows-on-aws-batch-layer-part-3-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the third in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackle the batch layer and build a scalable, elastic, and easily maintainable batch engine using AWS Batch.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs that you submit. With AWS Batch, you do not need to install and manage your own batch computing software or server clusters, which allows you to focus on analyzing results, such as those of your genomic analysis.

Integrating applications into AWS Batch

If you are new to AWS Batch, we recommend reading Setting Up AWS Batch to ensure that you have the proper permissions and AWS environment.

After you have a working environment, you define several types of resources:

  • IAM roles that provide service permissions
  • A compute environment that launches and terminates compute resources for jobs
  • A custom Amazon Machine Image (AMI)
  • A job queue to submit the units of work and to schedule the appropriate resources within the compute environment to execute those jobs
  • Job definitions that define how to execute an application

After the resources are created, you’ll test the environment and create an AWS Lambda function to send generic jobs to the queue.

This genomics workflow covers the basic steps. For more information, see Getting Started with AWS Batch.

Creating the necessary IAM roles

AWS Batch simplifies batch processing by managing a number of underlying AWS services so that you can focus on your applications. As a result, you create IAM roles that give the service permissions to act on your behalf. In this section, deploy the AWS CloudFormation template included in the GitHub repository and extract the ARNs for later use.

To deploy the stack, go to the top level in the repo with the following command:

aws cloudformation create-stack --template-body file://batch/setup/iam.template.yaml --stack-name iam --capabilities CAPABILITY_NAMED_IAM

You can capture the output from this stack in the Outputs tab in the CloudFormation console:

Creating the compute environment

In AWS Batch, you will set up a managed compute environment. Managed compute environments automatically launch and terminate compute resources on your behalf based on the aggregate resources needed by your jobs, such as vCPU and memory, and simple boundaries that you define.

When defining your compute environment, specify the following:

  • Desired instance types in your environment
  • Min and max vCPUs in the environment
  • The Amazon Machine Image (AMI) to use
  • The percentage of the On-Demand price to bid on the Spot market
  • The VPC subnets that can be used

AWS Batch then provisions an elastic and heterogeneous pool of Amazon EC2 instances based on the aggregate resource requirements of jobs sitting in the RUNNABLE state. If a mix of CPU- and memory-intensive jobs is ready to run, AWS Batch provisions the appropriate ratio and size of CPU- and memory-optimized instances within your environment. For this post, you will use the simplest configuration, in which instance types are set to "optimal", allowing AWS Batch to choose from the latest C, M, and R EC2 instance families.

While you could create this compute environment in the console, we provide the following CLI commands. Replace the subnet IDs and key name with your own private subnets and key, and the image-id with the image you will build in the next section.

ACCOUNTID=<your account id>
SERVICEROLE=<from output in CloudFormation template>
IAMFLEETROLE=<from output in CloudFormation template>
JOBROLEARN=<from output in CloudFormation template>
SUBNETS=<comma delimited list of subnets>
SECGROUPS=<your security groups>
SPOTPER=50 # percentage of on demand
IMAGEID=<ami-id corresponding to the one you created>
INSTANCEROLE=<from output in CloudFormation template>
REGISTRY=${ACCOUNTID}.dkr.ecr.us-east-1.amazonaws.com
KEYNAME=<your key name>
MAXCPU=1024 # max vCPUs in compute environment
ENV=myenv

# Creates the compute environment
aws batch create-compute-environment --compute-environment-name genomicsEnv-$ENV --type MANAGED --state ENABLED --service-role ${SERVICEROLE} --compute-resources type=SPOT,minvCpus=0,maxvCpus=$MAXCPU,desiredvCpus=0,instanceTypes=optimal,imageId=$IMAGEID,subnets=$SUBNETS,securityGroupIds=$SECGROUPS,ec2KeyPair=$KEYNAME,instanceRole=$INSTANCEROLE,bidPercentage=$SPOTPER,spotIamFleetRole=$IAMFLEETROLE

Creating the custom AMI for AWS Batch

While you can use default Amazon ECS-optimized AMIs with AWS Batch, you can also provide your own image in managed compute environments. We will use this feature to provision additional scratch EBS storage on each of the instances that AWS Batch launches and also to encrypt both the Docker and scratch EBS volumes.

AWS Batch has the same requirements for your AMI as Amazon ECS. To build the custom image, modify the default Amazon ECS-Optimized Amazon Linux AMI in the following ways:

  • Attach a 1 TB scratch volume to /dev/sdb
  • Encrypt the Docker and new scratch volumes
  • Mount the scratch volume to /docker_scratch by modifying /etc/fstab

The first two tasks can be addressed when you create the custom AMI in the console. Spin up a small t2.micro instance, and proceed through the standard EC2 instance launch.

After your instance has launched, record the IP address and then SSH into the instance. Copy and paste the following code:

# Update packages, then partition and format the attached scratch volume
sudo yum -y update
sudo parted /dev/xvdb mklabel gpt
sudo parted /dev/xvdb mkpart primary 0% 100%
sudo mkfs -t ext4 /dev/xvdb1
# Create the mount point and persist the mount across reboots via /etc/fstab
sudo mkdir /docker_scratch
echo -e '/dev/xvdb1\t/docker_scratch\text4\tdefaults\t0\t0' | sudo tee -a /etc/fstab
sudo mount -a

This auto-mounts your scratch volume to /docker_scratch, which is your scratch directory for batch processing. Next, create your new AMI and record the image ID.

Creating the job queues

AWS Batch job queues are used to coordinate the submission of batch jobs. Your jobs are submitted to job queues, which can be mapped to one or more compute environments. Job queues have priority relative to each other. You can also specify the order in which they consume resources from your compute environments.

In this solution, use two job queues. The first is for high priority jobs, such as alignment or variant calling. Set this with a high priority (1000) and map back to the previously created compute environment. Next, set a second job queue for low priority jobs, such as quality statistics generation. To create these job queues, enter the following CLI commands:

aws batch create-job-queue --job-queue-name highPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1000 --state ENABLED
aws batch create-job-queue --job-queue-name lowPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1 --state ENABLED
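
As a quick sanity check (optional), you can confirm that both queues were created and are in a VALID state:

aws batch describe-job-queues --job-queues highPriority-${ENV} lowPriority-${ENV}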

Creating the job definitions

To run the Isaac aligner container image locally, supply the Amazon S3 locations for the FASTQ input sequences, the reference genome to align to, and the output BAM file. For more information, see tools/isaac/README.md.

The Docker container itself also requires some information on a suitable mountable volume so that it can read and write temporary files without running out of space.

Note: In the following example, the FASTQ files as well as the reference files to run are in a publicly available bucket.

FASTQ1=s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz
FASTQ2=s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz
REF=s3://aws-batch-genomics-resources/reference/isaac/
BAM=s3://mybucket/genomic-workflow/test_results/bam/

mkdir ~/scratch

docker run --rm -ti -v ${HOME}/scratch:/scratch $REPO_URI --bam_s3_folder_path $BAM \
--fastq1_s3_path $FASTQ1 \
--fastq2_s3_path $FASTQ2 \
--reference_s3_path $REF \
--working_dir /scratch 

Containers running locally can typically use whatever CPU and memory headroom the host machine has available. In AWS Batch, the CPU and memory requirements are hard limits and are allocated to the container image at runtime.

Isaac is a fairly resource-intensive algorithm, as it creates an uncompressed index of the reference genome in memory to match the query DNA sequences. The large memory space is shared across multiple CPU threads, and Isaac can scale almost linearly with the number of CPU threads given to it as a parameter.

To fit these characteristics, choose an optimal instance size to maximize the number of CPU threads based on a given large memory footprint, and deploy a Docker container that uses all of the instance resources. In this case, we chose a host instance with 80+ GB of memory and 32+ vCPUs. The following code is example JSON that you can pass to the AWS CLI to create a job definition for Isaac.

aws batch register-job-definition --job-definition-name isaac-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/isaac",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":80000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

You can copy and paste the following code for the other three job definitions:

aws batch register-job-definition --job-definition-name strelka-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/strelka",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":32000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name snpeff-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/snpeff",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name samtoolsStats-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/samtools_stats",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

The value for "image" comes from the previous post on creating a Docker image and publishing to ECR. The value for jobRoleArn you can find from the output of the CloudFormation template that you deployed earlier. In addition to providing the number of CPU cores and memory required by Isaac, you also give it a storage volume for scratch and staging. The volume comes from the previously defined custom AMI.

Testing the environment

After you have created the Isaac job definition, you can submit the job using the AWS Batch submitJob API action. While the base mappings for Docker run are taken care of in the job definition that you just built, the specific job parameters should be specified in the container overrides section of the API call. Here’s what this would look like in the CLI, using the same parameters as in the bash commands shown earlier:

aws batch submit-job --job-name testisaac --job-queue highPriority-${ENV} --job-definition isaac-${ENV}:1 --container-overrides '{
"command": [
			"--bam_s3_folder_path", "s3://mybucket/genomic-workflow/test_batch/bam/",
            "--fastq1_s3_path", "s3://aws-batch-genomics-resources/fastq/ SRR1919605_1.fastq.gz",
            "--fastq2_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
            "--reference_s3_path", "s3://aws-batch-genomics-resources/reference/isaac/",
            "--working_dir", "/scratch",
			"—cmd_args", " --exome ",]
}'

When you execute a submitJob call, jobId is returned. You can then track the progress of your job using the describeJobs API action:

aws batch describe-jobs --jobs <jobId returned from submitJob>
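
If you only want the job's current state rather than the full JSON response, the CLI's --query option can extract it:

aws batch describe-jobs --jobs <jobId returned from submitJob> --query 'jobs[0].status'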

You can also track the progress of all of your jobs in the AWS Batch console dashboard.

To see exactly where a RUNNING job is at, use the link in the AWS Batch console to direct you to the appropriate location in CloudWatch logs.

Completing the batch environment setup

To finish, create a Lambda function to submit a generic AWS Batch job.

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitJob. Copy and paste the following code. This is similar to the batch-submit-job-python27 Lambda blueprint. Use the LambdaBatchExecutionRole that you created earlier. For more information about creating functions, see Step 2.1: Create a Hello World Lambda Function.

from __future__ import print_function

import json
import boto3

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get parameters for the SubmitJob call
    # http://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html
    job_name = event['jobName']
    job_queue = event['jobQueue']
    job_definition = event['jobDefinition']
    
    # containerOverrides, dependsOn, and parameters are optional
    container_overrides = event['containerOverrides'] if event.get('containerOverrides') else {}
    parameters = event['parameters'] if event.get('parameters') else {}
    depends_on = event['dependsOn'] if event.get('dependsOn') else []
    
    try:
        response = batch_client.submit_job(
            dependsOn=depends_on,
            containerOverrides=container_overrides,
            jobDefinition=job_definition,
            jobName=job_name,
            jobQueue=job_queue,
            parameters=parameters
        )
        
        # Log response from AWS Batch
        print("Response: " + json.dumps(response, indent=2))
        
        # Return the jobId
        event['jobId'] = response['jobId']
        return event
    
    except Exception as e:
        print(e)
        message = 'Error submitting Batch job'
        print(message)
        raise Exception(message)
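
To smoke-test the function from the Lambda console, you can configure a test event shaped like the SubmitJob parameters; for example, something along these lines (names reference resources created earlier in this post, so adjust them to your environment):

{
  "jobName": "test-submit",
  "jobQueue": "lowPriority-myenv",
  "jobDefinition": "samtoolsStats-myenv:1"
}

A successful invocation returns the same event with a jobId added, and the job appears in the AWS Batch console shortly afterwards.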

Conclusion

In part 3 of this series, you successfully set up your data processing, or batch, environment in AWS Batch. We also provided a Python script in the corresponding GitHub repo that takes care of all of the above CLI arguments for you, as well as building out the job definitions for all of the jobs in the workflow: Isaac, Strelka, SAMtools, and snpEff. You can check the script’s README for additional documentation.

In Part 4, you’ll cover the workflow layer using AWS Step Functions and AWS Lambda.

Please leave any questions and comments below.

Torrent Sites See Traffic Boost After ExtraTorrent Shutdown

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-sites-see-traffic-boost-after-extratorrent-shutdown-170528/

When ExtraTorrent shut down last week, millions of people were left without their favorite spot to snatch torrents.

This meant that after the demise of KickassTorrents and Torrentz last summer, another major exodus commenced.

The search for alternative torrent sites is nicely illustrated by Google Trends. Immediately after ExtraTorrent shut down, worldwide searches for “torrent sites” shot through the roof, as seen below.

“Torrent sites” searches (30 days)

As is often the case, most users spread across sites that are already well-known to the file-sharing public.

TorrentFreak spoke to several people connected to top torrent sites who all confirmed that they had witnessed a significant visitor boost over the past week and a half. As the largest torrent site around, many see The Pirate Bay as the prime alternative.

And indeed, a TPB staffer confirms that they have seen a big wave of new visitors coming in, to the extent that it was causing “gateway errors,” making the site temporarily unreachable.

Thus far the new visitors remain rather passive though. The Pirate Bay hasn’t seen a large uptick in registrations and participation in the forum remains normal as well.

“Registrations haven’t suddenly increased or anything like that, and visitor numbers to the forum are about the same as usual,” TPB staff member Spud17 informs TorrentFreak.

Another popular torrent site, which prefers not to be named, reported a surge in traffic too. For a few days in a row, this site handled 100,000 extra unique visitors. A serious number, but the operator estimates that he only received about ten percent of ET’s total traffic.

More than 40% of these new visitors come from India, where ExtraTorrent was relatively popular. The site operator further notes that about two-thirds use an ad blocker, which makes the new traffic pretty much useless for those who are looking to make money.

That brings us to the last category of site owners, the opportunist copycats, who are actively trying to pull estranged ExtraTorrent visitors on board.

Earlier this week we wrote about the attempts of ExtraTorrent.cd, which falsely claims to have a copy of the ET database, to lure users. In reality, however, it’s nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

And then there are the copycats over at ExtraTorrent.ag. These are the same people who successfully hijacked the EZTV and YIFY/YTS brands earlier. With ExtraTorrent.ag they now hope to expand their portfolio.

Over the past few days, we received several emails from other ExtraTorrent “copies”, all trying to get a piece of the action. Not unexpected, but pretty bold, particularly considering the fact that ExtraTorrent operator SaM specifically warned people not to fall for these fakes and clones.

With millions of people moving to new sites, it’s safe to say that the torrent ‘community’ is in turmoil once again, trying to find a new status quo. But this probably won’t last for very long.

While some of the die-hard ExtraTorrent fans will continue to mourn the loss of their home, history has told us that, in general, the torrent community is quick to adapt. Until the next site goes down…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Managing a Remote Workforce

Post Syndicated from Natalie C original https://www.backblaze.com/blog/managing-a-remote-workforce/

working in an airport
While Backblaze has customers all around the globe, the company itself is a pretty small enterprise with just over 50 employees. Some of those employees are remote: 75% of Backblaze employees work from the main Backblaze office (San Mateo), 15% are datacenter employees, and 10% work remotely full-time.

Many companies that pioneered flexible work arrangements are now pulling back and asking their employees to report to an office. Why? Part of it is due to not knowing how to manage these types of employees, and part is the belief that having an employee in the office will improve work performance.

At Backblaze, we think that managing our diverse workforce is certainly a challenge… but, as the saying goes, the juice is worth the squeeze.

Communication is Key

When Backblaze first started, everyone worked out of the same room. Being 5’ away from someone tends to make communication easy (sometimes too easy). The first datacenter was just a few miles away, so if we needed to do something in it, we’d just hop in a car and drive over – calling co-workers from our cell-phones if we needed some help or guidance. Now, things have changed slightly and we use a lot of different tools to talk amongst ourselves.

It started with emails, then morphed into Gchat, then to Google Hangouts, and now we have a whole suite of communication tools. We use Hangouts and Slack to chat internally, Meet for video conferencing to bridge remote employees, and good old-fashioned telephones when the need arises. Tools like Trello, Redbooth, and Jira help with project management as well, making sure that everyone stays on the same page.

For HR-related needs, we use a variety of tools/perks to simplify employees’ lives, whether they are at the office or at home enjoying time with their families. These tools include a Human Resource Information System (“HRIS”) called Namely, Expensify (expenses), Eshares (stock), Fond (perks), and Heal.

The most popular tool we use is Slack. Each department, location, product, and support group has its own channel. We also have social channels where all the GIFs and news links live. Slack also has the added benefit of allowing us to limit what information is discussed where. For example, contract employees do not have access to channels that go beyond their scope and focus areas.

Solve for Culture, not Offsite v Onsite

One of the keys to managing a remote workforce is realizing that you’re solving for overall culture. It’s not about whether any group of employees is in office X or Y. The real question is: are we creating an environment where we remove the friction from people performing their roles? There are follow-up questions like “do we have the right roles defined?” and “do we have people in roles where they will succeed?”. But looking at managing our workforce from that point of view makes it easier to identify what tools and resources we need to be successful.

There’s no right way to manage remote employees. Every work environment is different and the culture, available technology, and financial capability affects how employees can interact. Backblaze went through a ton of iterations before we found the right tools for what we were trying to accomplish, and we’re constantly evolving and experimenting. But we have found some consistent patterns…

    • Nothing Beats Human Interaction

Even with all of the communication tools at our disposal, getting together in person is still the best way to get through projects and make sure everyone is on the same page. While having group meetings via Slack and Meet is great for planning, inevitably something will fall through the cracks or get lost in cyberspace due to poor connections. We combat this by having all of our remote employees come to the main office once every two months. When we hired our first remote engineers this was a once-a-month visit, but as we got more accustomed to working together over the web, we scaled it back.

These visits allow our engineers to be in the office, be part of meetings that they’d otherwise miss, and meet any new employees we’ve hired. We think it’s important for people to know who they’re working with, and we love that everyone at Backblaze knows (or at least recognizes) each other. We also plan our company outings around these visits, and this brings about a great company culture since we get a chance to be out of the office together and interact socially – which is a lot more fun than interacting professionally.

    • Don’t Fear HR

When you have a small workforce, duties can sometimes be divided amongst a variety of people – even if those duties don’t pertain to their ‘day job’. Having a full-time HR person allowed folks to jettison some of their duties, and allowed them to get back to their primary job functions. It also allowed HR to handle delicate matters, many of which were amongst the most dreaded for folks who were covering some of the responsibilities.

What we’ve found in creating the full-time HR role for our remote workforce was that we finally had an expert on all HR-related things. This meant that we had someone who knew the laws of the land inside and out and could figure out how the different healthcare systems worked in the states where our employees reside (no small feat).

But Why Bother?

There is a principal question that we haven’t yet addressed: Why do we even have remote employees? This gets back to the idea of looking at the culture and environment first. At Backblaze, we look to hire the right person. There are costs to having remote employees, but if they are the right person for the role (when accounting for those “costs”), then that’s the right thing to do. Backblaze is performance driven, not based on attendance and how long you stay at the office. I believe you need a balance of office and remote work to allow employees to be most productive. But every company and setting is different, so experiments need to take place to figure out the perfect blend for your team atmosphere.

The post Managing a Remote Workforce appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Was The Disney Movie ‘Hacking Ransom’ a Giant Hoax?

Post Syndicated from Andy original https://torrentfreak.com/was-the-disney-movie-hacking-ransom-a-giant-hoax-170524/

Last Monday, during a town hall meeting in New York, Disney CEO Bob Iger informed a group of ABC employees that hackers had stolen one of the company’s movies.

The hackers allegedly said they’d keep the leak private if Disney paid them a ransom. In response, Disney indicated that it had no intention of paying. Setting dangerous precedents in this area is unwise, the company no doubt figured.

After Hollywood Reporter broke the news, Deadline followed up with a report which further named the movie as ‘Pirates of the Caribbean: Dead Men Tell No Tales’, a fitting movie to parallel an emerging real-life swashbuckling plot, no doubt.

What the Deadline article didn’t do was offer any proof that Pirates 5 was the movie in question. Out of the blue, however, it did mention that a purported earlier leak of The Last Jedi had been revealed by “online chatter” to be a fake. Disney refused to comment.

Armed with this information, TF decided to have a dig around. Was Pirates 5 being discussed within release groups as being available, perhaps? Initially, our inquiries drew a complete blank but then out of the blue we found ourselves in conversation with the person claiming to be the Disney ‘hacker’.

“I can provide the original emails sent to Disney as well as some other unknown details,” he told us via encrypted mail.

We immediately asked several questions. Was the movie ‘Pirates 5’? How did he obtain the movie? How much did he try to extort from Disney? ‘EMH,’ as we’ll call him, quickly replied.

“It’s The Last Jedi. Bob Iger never made public the title of the film, Deadline was just going off and naming the next film on their release slate,” we were told. “We demanded 2BTC per month until September.”

TF was then given copies of correspondence that EMH had been having with numerous parties about the alleged leak. They included discussions with various release groups, a cyber-security expert, and Disney.

As seen in the screenshot, the email was purportedly sent to Disney on May 1. The Hollywood Reporter article, published two weeks later, noted the following:

“The Disney chief said the hackers demanded that a huge sum be paid in Bitcoin. They said they would release five minutes of the film at first, and then in 20-minute chunks until their financial demands are met,” HWR wrote.

While the email to Disney looked real enough, the proof of any leaked pudding is in the eating. We asked EMH how he had demonstrated to Disney that he actually has the movie in his possession. Had screenshots or clips been sent to the company? We were initially told they had not (plot twists were revealed instead) so this immediately raised suspicions.

Nevertheless, EMH then went on to suggest that release groups had shown interest in the copy and he proved that by forwarding his emails with them to TF.

“Make sure they know there is still work to be done on the CGI characters. There are little dots on their faces that are visible. And the colour grading on some scenes looks a little off,” EMH told one group, who said they understood.

“They all understand its not a completed workprint.. that is why they are sought after by buyers.. exclusive stuff nobody else has or can get,” they wrote back.

“That why they pay big $$$ for it.. a completed WP could b worth $25,000,” the group’s unedited response reads.

But despite all the emails and discussion, we were still struggling to see how EMH had shown to anyone that he really had The Last Jedi. We then learned, however, that screenshots had been sent to blogger Sam Braidley, a Cyber Security MSc and Computer Science BSc Graduate.

Since the information sent to us by EMH confirmed discussion had taken place with Braidley concerning the workprint, we contacted him directly to find out what he knew about the supposed Pirates 5 and/or The Last Jedi leak. He was very forthcoming.

“A user going by the username of ‘Darkness’ commented on my blog about having a leaked copy of The Last Jedi from a contact he knew from within Lucas Films. Of course, this garnered a lot of interest, although most were cynical of its authenticity,” Braidley explained.

The claim that ‘Darkness’ had obtained the copy from a contact within Lucasfilm was certainly of interest, since up to now the press narrative had been that Disney or one of its affiliates had been ‘hacked.’

After confirming that ‘Darkness’ used the same email as our “EMH,” we asked EMH again. Where had the movie been obtained from?

“Wasn’t hacked. Was given to me by a friend who works at a post production company owned by [Lucasfilm],” EMH said. After further prompting he reiterated: “As I told you, we obtained it from an employee.”

If they weren’t ringing loudly enough already, alarm bells were now well and truly clanging. Who would reveal where they’d obtained a super-hot leaked movie from when the ‘friend’ is only one step removed from the person attempting the extortion? Who would take such a massive risk?

Braidley wasn’t buying it either.

“I had my doubts following the recent [Orange is the New Black] leak from ‘The Dark Overlord,’ it seemed like someone trying to live off the back of its press success,” he said.

Braidley told TF that Darkness/EMH seemed keen for him to validate the release, as a member of a well-known release group didn’t believe that it was real, something TF confirmed with the member. A screenshot was duly sent over to Braidley for his seal of approval.

“The quality was very low and the scene couldn’t really show that it was in fact Star Wars, let alone The Last Jedi,” Braidley recalls, noting that other screenshots were considered not to be from the movie in question either.

Nevertheless, Darkness/EMH later told Braidley that another big release group had only declined to release the movie due to the possibility of security watermarks being present in the workprint.

Since no groups had heard of a credible Pirates 5 leak, the claims that release groups were in discussion over the leaking of The Last Jedi intrigued us. So, through trusted sources and direct discussion with members, we tried to learn more.

While all of the groups admitted being involved in, or at least aware of, the discussions, none appeared to believe that a movie had been obtained from Disney, was being held for ransom, or would ever be leaked.

“Bullshit!” one told us. “Fake news,” said another.

With not even well-known release groups believing that leaks of The Last Jedi or Pirates 5 were anywhere on the horizon, we came full circle to the original statement by Disney chief Bob Iger claiming that a movie had been stolen.

What we do know for sure is that everything reported initially by Hollywood Reporter about a ransom demand matches up with statements made by Darkness/EMH to TorrentFreak, Braidley, and several release groups. We also know from copies of emails obtained by TF that the discussions with the release groups took place well before HWR broke the story.

With Disney not commenting on the record to either HWR or Deadline (publications known to be Hollywood-friendly), it seemed unlikely that TF would succeed where they had failed.

So, without compromising any of our sources, we gave a basic outline of our findings to a previously receptive Disney contact, in an effort to tie Darkness/EMH to the email address that he told us Disney already knew. Predictably, perhaps, we received no response.

At this point, one has to wonder: if no credible evidence of a leak has been made available and the threats to leak the movie haven’t been followed through on, what was the point of the whole affair?

Money appears to have been the motive, but it seems likely that none will be changing hands. But would someone really bluff the leaking of a movie to a company like Disney in order to get a ‘ransom’ payment or scam a release group out of a few dollars? Perhaps.

Braidley informs TF that Darkness/EMH recently claimed that he’d had the copy of The Last Jedi since March but never had any intention of leaking it. He did, however, need money for a personal matter involving a relative.

With this in mind, we asked Darkness/EMH why he’d failed to carry through with his threats to leak the movie, bit by bit, as his email to Disney claimed. He said there was never any intention of leaking the movie “until we are sure it wont be traced back” but added that “if the right group comes forward and meets our strict standards then the leak could come as soon as 2-3 weeks.”

With that now seeming increasingly unlikely (but hey, you never know), this might be the final chapter in what turns out to be the famous hacking of Disney that never was. Or, just maybe, undisclosed aces remain up sleeves.

“Just got another comment on my blog from [Darkness],” Braidley told TF this week. “He now claims that the Emoji movie has been leaked and is being held to ransom.”

Simultaneously, he was telling TF the same thing. ‘Hacking’ announcement from Sony coming soon? Stay tuned…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.