
Why that "file-copy" forensics of DNC hack is wrong

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/why-that-file-copy-forensics-of-dnc.html

People keep asking me about this story about how forensics “experts” have found proof the DNC hack was an inside job, because files were copied at 22-megabytes-per-second, faster than is reasonable for Internet connections.

This story is bogus.
Yes, the forensics is correct that at some point, files were copied at 22 MBps (megabytes per second). But there’s no evidence this was the point of Internet transfer out of the DNC.
One such copy might have been from one computer to another within the DNC. Indeed, as someone experienced doing this sort of hack, I can say it’s almost certain that at some point such a copy happened. The computers you are able to hack into are rarely the computers that have the data you want. Instead, you have to copy the data from other computers to the hacked computer, and then exfiltrate the data out of the hacked computer.
Another copy might have been from one computer to another within the hacker’s own network, after the data was stolen. As a hacker, I can tell you that I frequently do this. Indeed, as this story points out, the timestamps of the files show that the 22-MBps copy happened months after the hack was detected.
If the 22-MBps copy was the one exfiltrating data, it might not have been from inside the DNC building, but from some cloud service, as this tweet points out. Hackers usually have “staging” servers in the cloud that can talk to other cloud servers at easily 10 times 22 MBps, even around the world. I have staging servers that will do this, and indeed, have copied files at this data rate. If the DNC had that data or backups in the cloud, this would explain it.
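To see why 22 MBps is unremarkable for anything but a home uplink, here’s a minimal back-of-the-envelope sketch in Python. The link speeds are rough, commonly cited figures of my own choosing, not numbers from the forensic report:

# Is a 22-megabyte/second copy rate remarkable?
observed_mbit = 22 * 8  # 22 MB/s expressed in megabits/second = 176 Mbit/s

# Rough, commonly cited link speeds (assumptions, not forensic data).
links = {
    "typical residential upload": 25,     # Mbit/s, often far less
    "USB 2.0 flash drive": 480,           # Mbit/s, theoretical maximum
    "gigabit LAN": 1000,                  # Mbit/s
    "cloud-to-cloud (10 GbE)": 10000,     # Mbit/s
}

for name, mbit in links.items():
    verdict = "feasible" if mbit >= observed_mbit else "implausible"
    print(f"{name:30s} {mbit:6d} Mbit/s -> 22 MB/s copy {verdict}")

Only the residential uplink falls short, which is the point: the rate rules out almost nothing except a copy over a typical home Internet connection.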
My point is that while the forensic data-point is good, there’s just a zillion ways of explaining it. It’s silly to insist on only the one explanation that fits your pet theory.
As a side note, you can tell this already from the way the story is told. For example, rather than explain the evidence and let it stand on its own, the stories hype the credentials of those who believe the story, using the “appeal to authority” fallacy.

72-Year-Old Man Accused of ‘Pirating’ Over a Thousand Torrents

Post Syndicated from Ernesto original https://torrentfreak.com/72-year-old-man-accused-of-pirating-over-a-thousand-torrents-170810/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called ‘copyright trolling’ efforts are a common occurrence in the United States too, where hundreds of thousands of people have been targeted in recent years.

While a significant number of defendants are indeed guilty, there are also many who are wrongfully accused. Third parties may have connected to their Wi-Fi, for example, which isn’t a rarity.

In Hawaii, a recent target of a copyright trolling expedition claims to be innocent, and he’s taken his case to the local press. The 72-year-old John J. Harding doesn’t fit the typical profile of a prolific pirate, but that’s exactly what a movie company has accused him of being.

In June, Harding received a letter from local attorney Kerry Culpepper, who works for the rightsholders of movies such as ‘Mechanic: Resurrection’ and ‘Once Upon a Time in Venice.’

The letter accused the 72-year-old of downloading a movie and also listed over 1,000 other downloads that were tied to his IP-address. Harding was understandably shocked by the threat and says he never downloads anything.

“I’ve never illegally downloaded anything … or even legally! I use my computer for email, games, news and that’s about it,” Harding told HawaiiNewsNow.

“I know definitely that I’m not guilty and my wife is not guilty. So what’s going on? Did somebody hack us? Is somebody out there actively hacking us? How they do that and go about doing that, I have no idea,” Harding added.

As is common in these cases, the copyright holder asked the Hawaii Federal Court for a subpoena, which ordered the associated Internet provider to hand over the personal details of the alleged infringers. The attorney then went on to send out settlement requests to the exposed users.

Harding received a letter offering an easy $3,900 settlement, which would increase to $4,900 if he failed to respond before August 7th. However, the elderly man wasn’t keen on taking the deal, describing the pay-up-or-else demand as “absolutely absurd.”

The attorney reiterated to the local newspaper that these are not idle threats. People risk $150,000 per illegal download, he stressed. That said, mistakes happen and people who feel that they are wrongfully accused should contact his office.

Culpepper explained it further with an analogy while adding a new dimension to the ‘you wouldn’t steal a car’ meme in the process.

“This is similar to a car stolen. If your car was stolen and your car hit someone or did some damage, initially the victim would look to see who was the owner of the car. You would probably tell them, someone stole my car. That time, that person would try to find the person who stole your car,” he said.

The attorney says that they are not trying to bankrupt people. Their goal is to deter piracy. There are cases where they’ve accepted lower settlements or even a mere apology, he notes.

How the 72-year-old will respond is unknown, but judging from his tone, he may be looking for an apology himself. Going to the press was probably a smart move, as rightsholders generally don’t like the PR that comes with this kind of story.

These cases are by no means unique though. While browsing through the court dockets of Culpepper’s recent cases we quickly stumbled upon a similar denial. This one comes from a Honolulu woman who’s accused of pirating ‘Mechanic: Resurrection.’

“I have never downloaded the movie they are referencing and when I do download movies I use legal services such as Amazon, and Apple TV,” she wrote to the court, urging it to keep her personal information private.

“I do have frequent guests at our house often using the Internet. In the future I will request that nobody uses any file sharing on our Internet connection,” the letter added.

Unfortunately for her, the letter includes her full name and address, which means that she has effectively exposed herself. This likely means that she will soon receive a settlement request in the mail, just like Harding did, if she hasn’t already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Backblaze Cloud Backup 5.0: The Rapid Access Release

Post Syndicated from Yev original https://www.backblaze.com/blog/cloud-backup-5-0-rapid-access/

Announcing Backblaze Cloud Backup 5.0: the Rapid Access Release. We’ve been at the backup game for a long time now, and we continue to focus on providing the best unlimited backup service on the planet. A lot of the features in this release have come from listening to our customers about how they want to use their data. “Rapid Access” quickly became the theme because, well, we’re all acquiring more and more data and want to access it in a myriad of ways.

This release brings a lot of new functionality to Backblaze Computer Backup: faster backups, accelerated file browsing, image preview, individual file download (without creating a “restore”), and file sharing. To top it all off, we’ve refreshed the user interface on our client app. We hope you like it!

Speeding Things Up

New code + new hardware + elbow grease = things are going to move much faster.

Faster Backups

We’ve doubled the number of threads available for backup on both Mac and PC. This gives our service the ability to intelligently detect the right settings for you (based on your computer, capacity, and bandwidth). As always, you can manually set the number of threads — keep in mind that if you have a slow internet connection, adding threads might have the opposite effect and slow you down. On its default settings, our client app will now automatically evaluate what’s best given your environment. We’ve internally tested our service backing up at over 100 Mbps, which means if you have a fast-enough internet connection, you could back up 50 GB in just one hour.
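As a rough sanity check on that last claim, here’s a back-of-the-envelope sketch assuming the full rate is sustained (real-world overhead will reduce this):

# Convert sustained backup throughput to data backed up per hour.
mbps = 100                        # megabits per second
mb_per_sec = mbps / 8             # 12.5 megabytes per second
gb_per_hour = mb_per_sec * 3600 / 1000
print(f"{mbps} Mbit/s sustained is about {gb_per_hour:.0f} GB per hour")  # ~45 GB

So a connection sustaining a bit over 100 Mbps does land in the neighborhood of 50 GB per hour.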

Faster Browsing

We’ve introduced a number of enhancements that increase file browsing speed by 3x. Hidden files are no longer displayed by default, but you can still show them with one click on the restore page. This gives the restore interface a cleaner look, and helps you navigate backup history if you need to roll back time.

Faster Restore Preparation

We take pride in providing a variety of ways for consumers to get their data back. When something has happened to your computer, getting your files back quickly is critical. Both web download restores and Restore by Mail will now be much faster. In some cases up to 10x faster!

Preview — Access — Share

Our system has received a number of enhancements — all intended to give you more access to your data.

Image Preview

If you have a lot of photos, this one’s for you. When you go to the restore page you’ll now be able to click on each individual file that we have backed up, and if it’s an image you’ll see a preview of that file. We hope this helps people figure out which pictures they want to download (this especially helps people with a lot of photos named something along the lines of: 2017-04-20-9783-41241.jpg). Now you can just click on the picture to preview it.

Access

Once you’ve clicked on a file (30MB and smaller), you’ll be able to individually download that file directly in your browser. You’ll no longer need to wait for a single-file restore to be built and zipped up; you’ll be able to download it quickly and easily. This was a highly requested feature and we’re stoked to get it implemented.

Share

We’re leveraging Backblaze B2 Cloud Storage and giving folks the ability to publicly share their files. In order to use this feature, you’ll need to enable Backblaze B2 on your account (if you haven’t already, there’s a simple wizard that will pop up the first time you try to share a file). Files can be shared anywhere in the world via URL. All B2 accounts have 10GB/month of storage and 1GB/day of downloads (equivalent to sharing an iPhone photo 1,000 times per month) for free. You can increase those limits in your B2 Settings. Keep in mind that any file you share will be accessible to anybody with the link. Learn more about File Sharing.

For now, we’ve limited the Preview/Access/Share functionality to files 30MB and smaller, but larger files will be supported in the coming weeks!

Other Goodies

In addition to adding two-factor verification (2FV) via TOTP (time-based one-time passwords), we’ve also been hard at work on the client. In version 5.0 we’ve touched up the user interface to make it a bit more lively, and we’ve also made the client IPv6 compatible.

Backblaze 5.0 Available: August 10, 2017

We will slowly be auto-updating all users in the coming weeks. To update now, download the latest version from www.backblaze.com, where it is already the default download.

We hope you enjoy Backblaze Cloud Backup v5.0!

The post Backblaze Cloud Backup 5.0: The Rapid Access Release appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Getting Your Data into the Cloud is Just the Beginning

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/cost-data-of-transfer-cloud-storage/

Total Cloud Storage Cost

Organizations should consider not just the cost of getting their data into the cloud, but also long-term costs for storage and retrieval when deciding which cloud storage solution meets their needs.

As cloud storage has become ubiquitous, organizations large and small are joining in. For larger organizations the lure of reducing capital expenses and their associated operational costs is enticing. For smaller organizations, cloud storage often replaces an unmanageable closet full of external hard drives, thumb drives, SD cards, and other devices. With terabytes or even petabytes of data, the common challenge facing organizations, large and small, is how to get their data up to the cloud.

Transferring Data to the Cloud

The obvious solution for getting your data to the cloud is to upload your data from your internal network through the internet to the cloud storage vendor you’ve selected. Cloud storage vendors don’t charge you for uploading your data to their cloud, but you, of course, have to pay your network provider and that’s where things start to get interesting. Here are a few things to consider.

  • The initial upload: Unless you are just starting out, you will have a large amount of data you want to upload to the cloud. This could be data you wish to archive now or data you archived previously, for example data stored on LTO tapes or on external hard drives.
  • Pipe size: This is the amount of upload bandwidth of your network connection. This is measured in Mbps (megabits per second). Remember, your data is stored in MB (megabytes), so an upload connection of 80 Mbps will transfer no more than 10 MB of data per second and most likely a lot less.
  • Cost and caps: In some places, organizations pay a flat monthly rate for a specified level of service (speed) for internet access. In other locations, internet access is metered, or pay as you go. In either case, there can be internet service caps that limit or completely stop data transfer once you reach your contracted threshold.

One or more of these challenges has the potential to make the initial upload of your data expensive and potentially impossible. You could wait until cloud storage companies start buying up internet providers and make data upload cheap (or free with Amazon Prime!), but there is another option.

Data Transfer Devices

Given the potential challenges of using your network for the initial upload of your data to the cloud, a handful of cloud storage companies have introduced data transfer or data ingest services. Backblaze has the B2 Fireball, Amazon has Snowball (and other similar devices), and Google recently introduced their Transfer Appliance.

KLRU-TV Austin PBS uploaded their Austin City Limits musical anthology series to Backblaze using a B2 Fireball.

These services work as follows:

  • The provider sends you a portable (or somewhat portable) storage device.
  • You connect the device to your network and load some amount of data on the device over your internal network connection.
  • You return the device, loaded with your data, to the provider, who uploads your data to your cloud storage account from inside their own data center.

Data Transfer Devices Save Time

Assuming your Internet connection is a flat rate service that has no caps or limits and your organizational operations can withstand the traffic, you still may want to opt to use a data transfer service to move your data to the cloud. Why? Time. For example, if your initial data upload is 100 TB here’s how long it would take using different network upload connection speeds:

Network Speed Upload Time
10 Mbps 3 years
100 Mbps 124 days
500 Mbps 25 days
1 Gbps 12 days
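As a rough check on these figures, here’s a short Python sketch. The 75% sustained-utilization factor is our assumption; it approximately reproduces the table, and real links rarely sustain their full rated speed:

# Estimate the time to upload 100 TB at various link speeds.
data_bits = 100e12 * 8       # 100 TB expressed in bits
utilization = 0.75           # assumed sustained fraction of the rated speed

for mbps in (10, 100, 500, 1000):
    days = data_bits / (mbps * 1e6 * utilization) / 86400
    print(f"{mbps:5d} Mbps -> about {days:,.0f} days")

At 10 Mbps this works out to roughly 1,235 days, or nearly 3.4 years.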

This assumes you are using most of your upload connection to upload your data, which is probably not realistic if you want to stay in business. You could potentially rent a better connection or upgrade your connection permanently, both of which add to the cost of running your business.

Speaking of cost, there is of course a charge for the data transfer service that can be summarized as follows:

  • Backblaze B2 Fireball — Up to 40 TB of data per trip for $550.00 for 30 days in use at your site.
  • Amazon Snowball — up to 50 TB of data per trip for $200.00 for 10 days use at your site, plus $15/day each day in use at your site thereafter.
  • Google Transfer Appliance — up to 100 TB of data per trip for $300.00 for 10 days use at your site, plus $10/day each day in use at your site thereafter.

These prices do not include shipping, which can range from $100 to $900 depending on shipping method, location, etc.

Both Amazon and Google have transfer devices that are larger and cost more. For comparison purposes below we’ll use the three device versions listed above.

The Real Cost of Uploading Your Data

If we stopped our review at the previous paragraph and we were prepared to load up our transfer device in 10 days or less, the clear winner would be Google. But, this leaves out two very important components of any cloud storage project; the cost of storing your data and the cost of downloading your data.

Let’s look at two examples:

Example 1 — Archive 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
Service Transfer Cost Cloud Storage Total
Backblaze B2 $1,650 (3 trips) $6,000 $7,650
Google Cloud $300 (1 trip) $24,000 $24,300
Amazon S3 $400 (2 trips) $25,200 $25,600

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $16,650 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.
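For the curious, the storage column above can be reconstructed from per-gigabyte monthly rates. B2’s published rate was $0.005/GB/month at the time of writing; the Google and Amazon figures below are effective rates implied by the table, since their actual pricing is tiered, so treat this as a sketch rather than a price list:

# Reconstruct Example 1's one-year storage costs from $/GB/month rates.
rates = {"Backblaze B2": 0.005, "Google Cloud": 0.020, "Amazon S3": 0.021}
gigabytes, months = 100_000, 12   # 100 TB stored for one year

for service, rate in rates.items():
    print(f"{service:12s} ${rate * gigabytes * months:,.0f}")

Adding each service’s transfer cost to its storage figure gives the totals shown.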

Example 2 — Store and use 100 TB of data:

  • Use the data transfer service to move 100 TB of data to the cloud storage service.
  • Accomplish the transfer within 10 days.
  • Store that 100 TB of data for 1 year.
  • Add 5 TB a month (on average) to the total stored.
  • Delete 2 TB a month (on average) from the total stored.
  • Download 10 TB a month (on average) from the total stored.
Service Transfer Cost Cloud Storage Total
Backblaze B2 $1,650 (3 trips) $9,570 $11,220
Google Cloud $300 (1 trip) $39,684 $39,984
Amazon S3 $400 (2 trips) $36,114 $36,514

Results:

  • Using the B2 Fireball to store data in Backblaze B2 saves you $28,764 over a one-year period versus the Google solution.
  • The payback period for using a Backblaze B2 Fireball versus a Google Transfer Appliance is less than 1 month.

Notes:

  • All prices listed are based on list prices from the vendor websites as of the date of this blog post.
  • We are accomplishing the transfer of your data to the device within the 10 day “free” period specified by Amazon and Google.
  • We are comparing cloud storage services that have similar performance. For example, once the data is uploaded, it is readily available for download. The data is also available for access via a Web GUI, CLI, API, and/or various applications integrated with the cloud storage service. Multiple versions of files can be kept as desired. Files can be deleted any time.

To be fair, Backblaze needs three trips to move 100 TB while the Google Transfer Appliance needs only one. This adds some cost to prepare, monitor, and ship three B2 Fireballs versus one Transfer Appliance. Even with that added cost, the Backblaze B2 solution is still significantly less expensive over the one-year period and beyond.

Have a Data Transfer Device Owner

Before you run out and order a transfer device, make sure the transfer process is someone’s job once the device arrives at your organization. Filling a transfer device should only take a few days, but if it is forgotten, you’ll find you’ve had the device for 2 or 3 weeks. While that’s not much of a problem with a B2 Fireball, it could start to get expensive otherwise.

Just the Beginning

As with most “new” technologies and services, you can expect other companies to jump in and provide various data ingest services. The cost will get cheaper or even free as cloud storage companies race to capture and lock up the data you have kept locally all these years. When you are evaluating cloud storage solutions, it’s best to look past the data ingest loss-leader price, and spend a few minutes to calculate the long-term cost of storing and using your data.

The post Getting Your Data into the Cloud is Just the Beginning appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

TVAddons Returns, But in Ugly War With Canadian Telcos Over Kodi Addons

Post Syndicated from Andy original https://torrentfreak.com/tvaddons-returns-ugly-war-canadian-telcos-kodi-addons-170801/

After Dish Network filed a lawsuit against TVAddons in Texas, several high-profile Kodi addons took the decision to shut down. Soon after, TVAddons itself went offline.

In the weeks that followed, several TVAddons-related domains were signed over (1,2) to a Canadian law firm, a mysterious situation that didn’t dovetail well with the US-based legal action.

TorrentFreak can now reveal that the shutdown of TVAddons had nothing to do with the US action and everything to do with a separate lawsuit filed in Canada.

The complaint against TVAddons

Two months ago, on June 2, a collection of Canadian telecoms giants including Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media filed a complaint in Federal Court against Montreal resident Adam Lackman, the man behind TVAddons.

The 18-page complaint details the plaintiffs’ case against Lackman, claiming that he communicated copyrighted TV shows including Game of Thrones, Prison Break, The Big Bang Theory, America’s Got Talent, Keeping Up With The Kardashians and dozens more, to the public in breach of copyright.

The key claim is that Lackman achieved this by developing, hosting, distributing or promoting Kodi add-ons.

Adam Lackman, the man behind TVAddons (@adam.lackman on Instagram)

A total of 18 major add-ons are detailed in the complaint including 1Channel, Exodus, Phoenix, Stream All The Sources, SportsDevil, cCloudTV and Alluc, to name a few. Also under the spotlight is the ‘FreeTelly’ custom Kodi build distributed by TVAddons alongside its Kodi configuration tool, Indigo.

“[The defendant] has made the [TV shows] available to the public by telecommunication in a way that allows members of the public to have access to them from a place and at a time individually chosen by them…consequently infringing the Plaintiffs’ copyright…in contravention of sections 2.4(1.1), 3(1)(f) and 27(1) of the Copyright Act,” the complaint reads.

The complaint alleges that Lackman “induced and/or authorized users” of the FreeTelly and Indigo tools to carry out infringement by his handling and promotion of infringing add-ons, including through TVAddons.ag and Offshoregit.com, in contravention of sections 3(1)(f) and 27(1) of the Copyright Act.

“Approximately 40 million unique users located around the world are actively using Infringing Addons hosted by TVAddons every month, and approximately 900,000 Canadian households use Infringing Add-ons to access television content. The amount of users of Infringing add-ons hosted TVAddons is constantly increasing,” the complaint adds.

To limit the harm allegedly caused by TVAddons, the complaint asked for interim, interlocutory, and permanent injunctions restraining Lackman and associates from developing, promoting or distributing any of the allegedly infringing add-ons or software. On top, the plaintiffs requested punitive and exemplary damages, plus costs.

The interim injunction and Anton Piller Order

Following the filing of the complaint, on June 9 the Federal Court handed down a time-limited interim injunction against Lackman which restrained him from various activities in respect of TVAddons. The process took place ex parte, meaning in secret, without Lackman being able to mount a defense.

The Court also authorized a bailiff and computer forensics experts to take control of Internet domains including TVAddons.ag and Offshoregit.com plus social media and hosting provider accounts for a period of 14 days. These were transferred to Daniel Drapeau at DrapeauLex, an independent court-appointed supervising counsel.

The order also contained an Anton Piller order, a civil search warrant that grants plaintiffs no-notice permission to enter a defendant’s premises in order to secure and copy evidence to support their case, before it can be destroyed or tampered with.

The order covered not only data related to the TVAddons platform, such as operating and financial details, revenues, and banking information, but everything in Lackman’s possession.

The Court ordered the telecoms companies to inform Lackman that the case against him is a civil proceeding and that he could deny entry to his property if he wished. However, that option would put him in breach of the order and would place him at risk of being fined or even imprisoned. Catch 22 springs to mind.

The Court did, however, put limits on the number of people that could be present during the execution of the Anton Piller order (ostensibly to avoid intimidation) and ordered the plaintiffs to deposit CAD$50,000 with the Court, in case the order was improperly executed. That decision would later prove an important one.

The search and interrogation of TVAddons’ operator

On June 12, the order was executed and Lackman’s premises were searched for more than 16 hours. For nine hours he was interrogated and effectively denied his right to remain silent since non-cooperation with an Anton Piller order amounts to contempt of court. The Court’s stated aim of not intimidating Lackman failed.

The TVAddons operator informs TorrentFreak that he heard a disturbance in the hallway outside and spotted several men hiding on the other side of the door. Fearing for his life, Lackman called the police and when they arrived he opened the door. At this point, the police were told by those in attendance to leave, despite Lackman’s protests.

Once inside, Lackman was told he had an hour to find a lawyer, but couldn’t use any electronic device to get one. Throughout the entire day, Lackman says he was reminded by the plaintiffs’ lawyer that he could be held in contempt of court and jailed, even though he was always cooperating.

“I had to sit there and not leave their sight. I was denied access to medication,” Lackman told TorrentFreak. “I had a doctor’s appointment I was forced to miss. I wasn’t even allowed to call and cancel.”

In papers later filed with the court by Lackman’s team, the Anton Piller order was described as a “bombe atomique” since TVAddons had never been served with so much as a copyright takedown notice in advance of this action.

The Anton Piller controversy

Anton Piller orders are only valid when passing a three-step test: when there is a strong prima facie case against the respondent, the damage – potential or actual – is serious for the applicant, and when there is a real possibility that evidence could be destroyed.

For Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, serious problems emerged on at least two of these points after the execution of the order.

For example, TVAddons carried more than 1,500 add-ons yet only 1% of those add-ons were considered to be infringing, a tiny number in the overall picture. Then there was the not insignificant problem with the exchange that took place during the hearing to obtain the order, during which Lackman was not present.

Clearly, the securing of existing evidence wasn’t the number one priority.

Plaintiffs: We want to destroy TVAddons

And the problems continued.

No right to remain silent, no right to consult a lawyer

The Anton Piller search should have been carried out between 8am and 8pm but actually carried on until midnight. As previously mentioned, Adam Lackman was effectively denied his right to remain silent and was forbidden from getting advice from his lawyer.

None of this sat well with the Honourable B. Richard Bell during a subsequent Federal Court hearing to consider the execution of the Anton Piller order.

“It is important to note that the Defendant was not permitted to refuse to answer questions under fear of contempt proceedings, and his counsel was not permitted to clarify the answers to questions. I conclude unhesitatingly that the Defendant was subjected to an examination for discovery without any of the protections normally afforded to litigants in such circumstances,” the Judge said.

“Here, I would add that the ‘questions’ were not really questions at all. They took the form of orders or directions. For example, the Defendant was told to ‘provide to the bailiff’ or ‘disclose to the Plaintiffs’ solicitors’.”

Evidence preservation? More like a fishing trip

But shockingly, the interrogation of Lackman went much, much further. TorrentFreak understands that the TVAddons operator was given a list of 30 names of people that might be operating sites or services similar to TVAddons. He was then ordered to provide all of the information he had on those individuals.

Of course, people tend to guard their online identities so it’s possible that the information provided by Lackman will be of limited use, but Judge Bell was not happy that the Anton Piller order was abused by the plaintiffs in this way.

“I conclude that those questions, posed by Plaintiffs’ counsel, were solely made in furtherance of their investigation and constituted a hunt for further evidence, as opposed to the preservation of then existing evidence,” he wrote in a June 29 order.

But he was only just getting started.

Plaintiffs unlawfully tried to destroy TVAddons before trial

The Judge went on to note that from their own mouths, the Anton Piller order was purposely designed by the plaintiffs to completely shut down TVAddons, despite the fact that only a tiny proportion of the add-ons available on the site were allegedly used to infringe copyright.

“I am of the view that [the order’s] true purpose was to destroy the livelihood of the Defendant, deny him the financial resources to finance a defense to the claim made against him, and to provide an opportunity for discovery of the Defendant in circumstances where none of the procedural safeguards of our civil justice system could be engaged,” Judge Bell wrote.

As noted, plaintiffs must also have a “strong prima facie case” to obtain an Anton Piller order, but Judge Bell says he’s not convinced that one exists. Instead, he praised the “forthright manner” of Lackman, who successfully compared the way Kodi addons find content to the way Google search does.

So why the big turnaround?

Judge Bell said that while the prima facie case may have appeared strong before the judge who heard the matter ex parte (without Lackman being present to defend himself), the subsequent adversarial hearing undermined it, to the point that it no longer met the threshold.

As a result of these failings, Judge Bell declared the Anton Piller order unlawful. Things didn’t improve for the plaintiffs on the injunction front either.

The Judge said that he believes that Lackman has “an arguable case” that he is not violating the Copyright Act by merely providing addons and that TVAddons is his only source of income. So, if an injunction to close the site was granted, the litigation would effectively be over, since the plaintiffs already admitted that their aim was to neutralize the platform.

If the platform was neutralized, Lackman could no longer earn money from the site, which would harm his ability to mount a defense.

“In considering the balance of convenience, I also repeat that the plaintiffs admit that the vast majority of add-ons are non-infringing. Whether the remaining approximately 1% are infringing is very much up for debate. For these reasons, I find the balance of convenience favors the defendant, and no interlocutory injunction will be issued,” the Judge declared.

With the Anton Piller order declared unlawful and no interlocutory injunction (one effective until the final determination of the case) handed down, things were about to get worse for the telecoms companies.

They had paid CAD$50,000 to the court in security in case things went wrong with the Anton Piller order, so TVAddons was entitled to compensation from that amount. That would be helpful, since at this point TVAddons had already run up CAD$75,000 in legal expenses.

On top, the Judge told independent counsel to give everything seized during the Anton Piller search back to Lackman.

The order to return items previously seized

But things were far from over. Within days, the telecoms companies took the decision to the Court of Appeal, asking for a stay of execution (a delay in carrying out a court order) to retain possession of items seized, including physical property, domains, and social media accounts.

In mid-July the appeal was granted, and certain confidentiality clauses affecting independent counsel (including Daniel Drapeau, who holds the TVAddons domains) were ordered to be continued. However, considering the problems with the execution of the Anton Piller order, Bell Canada, TVA, Videotron and Rogers et al were ordered to submit an additional security bond of CAD$140,000, on top of the CAD$50,000 already deposited.

So the battle continues, and continue it will

Speaking with TorrentFreak, Adam Lackman says that he has no choice but to fight the telecoms companies, since not doing so would result in a loss by default judgment. Interestingly, both he and one of the judges involved in the case thus far believe he has an arguable case.

Lackman says that his activities are protected under the Canadian Copyright Act, specifically subparagraph 2.4(1)(b) which states as follows:

A person whose only act in respect of the communication of a work or other subject-matter to the public consists of providing the means of telecommunication necessary for another person to so communicate the work or other subject-matter does not communicate that work or other subject-matter to the public;

Of course, finding out whether that’s indeed the case will be a costly endeavor.

“It all comes down to whether we will have the financial resources necessary to mount our defense and go to trial. We won’t have ad revenue coming in, since losing our domain names means that we’ll lose the majority of our traffic for quite some time into the future,” Lackman told TF in a statement.

“We’re hoping that others will be as concerned as us about big companies manipulating the law in order to shut down what they see as competition. We desperately need help in financially supporting our legal defense, we cannot do it alone.

“We’ve run up a legal bill of over $100,000 to date. We’re David, and they are four Goliaths with practically unlimited resources. If we lose, it will mean that new case law is made, case law that could mean increased censorship of the internet.”

In the hope of getting support, TVAddons has launched a fundraiser campaign and in the meantime, a new version of the site is back on a new domain, TVAddons.co.

Given TVAddons’ line of defense, the nature of both the platform and Kodi addons, and the fact that there has already been a serious abuse of process during evidence preservation, this is now one of the most interesting and potentially influential copyright cases underway anywhere today.

TVAddons is being represented by Éva Richard, Hilal Ayoubi and Karim Renno in Canada, plus Erin Russell and Jason Sweet in the United States.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Run Common Data Science Packages on Anaconda and Oozie with Amazon EMR

Post Syndicated from John Ohle original https://aws.amazon.com/blogs/big-data/run-common-data-science-packages-on-anaconda-and-oozie-with-amazon-emr/

In the world of data science, users must often sacrifice cluster set-up time to allow for complex usability scenarios. Amazon EMR allows data scientists to spin up complex cluster configurations easily, and to be up and running with complex queries in a matter of minutes.

Data scientists often use scheduling applications such as Oozie to run jobs overnight. However, Oozie can be difficult to configure when you are trying to use popular Python packages (such as “pandas,” “numpy,” and “statsmodels”), which are not included by default.

One such popular platform that contains these types of packages (and more) is Anaconda. This post focuses on setting up an Anaconda platform on EMR with the intent of using its packages with Oozie, and describes how to run jobs with this popular open-source scheduler.

Walkthrough

For this post, you walk through the following tasks:

  • Create an EMR cluster.
  • Download Anaconda on your master node.
  • Configure Oozie.
  • Test the steps.

Create an EMR cluster

Spin up an Amazon EMR cluster using the console or the AWS CLI. Use the latest release, and include Apache Hadoop, Apache Spark, Apache Hive, and Oozie.

To create a three-node cluster in the us-east-1 region, issue an AWS CLI command such as the following. The command must be entered as a single line; it is shown here split across multiple lines for readability only.

aws emr create-cluster \ 
--release-label emr-5.7.0 \ 
 --name '<YOUR-CLUSTER-NAME>' \
 --applications Name=Hadoop Name=Oozie Name=Spark Name=Hive \ 
 --ec2-attributes '{"KeyName":"<YOUR-KEY-PAIR>","SubnetId":"<YOUR-SUBNET-ID>","EmrManagedSlaveSecurityGroup":"<YOUR-EMR-SLAVE-SECURITY-GROUP>","EmrManagedMasterSecurityGroup":"<YOUR-EMR-MASTER-SECURITY-GROUP>"}' \ 
 --use-default-roles \ 
 --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"<YOUR-INSTANCE-TYPE>","Name":"Master - 1"},{"InstanceCount":<YOUR-CORE-INSTANCE-COUNT>,"InstanceGroupType":"CORE","InstanceType":"<YOUR-INSTANCE-TYPE>","Name":"Core - 2"}]'

One-line version for reference:

aws emr create-cluster --release-label emr-5.7.0 --name '<YOUR-CLUSTER-NAME>' --applications Name=Hadoop Name=Oozie Name=Spark Name=Hive --ec2-attributes '{"KeyName":"<YOUR-KEY-PAIR>","SubnetId":"<YOUR-SUBNET-ID>","EmrManagedSlaveSecurityGroup":"<YOUR-EMR-SLAVE-SECURITY-GROUP>","EmrManagedMasterSecurityGroup":"<YOUR-EMR-MASTER-SECURITY-GROUP>"}' --use-default-roles --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"<YOUR-INSTANCE-TYPE>","Name":"Master - 1"},{"InstanceCount":<YOUR-CORE-INSTANCE-COUNT>,"InstanceGroupType":"CORE","InstanceType":"<YOUR-INSTANCE-TYPE>","Name":"Core - 2"}]'

Download Anaconda

SSH into your EMR master node instance and download the official Anaconda installer:

wget https://repo.continuum.io/archive/Anaconda2-4.4.0-Linux-x86_64.sh

At the time of publication, Anaconda 4.4 is the most current version available. For the download link location for the latest Python 2.7 version (Python 3.6 may encounter issues), see https://www.continuum.io/downloads.  Open the context (right-click) menu for the Python 2.7 download link, choose Copy Link Location, and use this value in the previous wget command.

This post used the Anaconda 4.4 installation. If you have a later version, it is reflected in the downloaded filename: “Anaconda2-<version number>-Linux-x86_64.sh”.

Make the downloaded script executable and run it, following the on-screen installer prompts:

chmod u+x Anaconda2-4.4.0-Linux-x86_64.sh
./Anaconda2-4.4.0-Linux-x86_64.sh

For an installation directory, select somewhere with enough space on your cluster, such as “/mnt/anaconda/”.

The process should take approximately 1–2 minutes to install. When prompted if you “wish the installer to prepend the Anaconda2 install location”, select the default option of [no].

After you are done, export the PATH to include this new Anaconda installation:

export PATH=/mnt/anaconda/bin:$PATH

Zip up the Anaconda installation:

cd /mnt/anaconda/
zip -r anaconda.zip .

The zip process may take 4–5 minutes to complete.

(Optional) Upload this anaconda.zip file to your S3 bucket for easier inclusion into future EMR clusters. This removes the need to repeat the previous steps for future EMR clusters.
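Here is a minimal sketch of that optional upload using boto3; the bucket name is a placeholder for one you own, and the one-line AWS CLI equivalent is shown in the comment:

# Optional: copy the archive to S3 for reuse by future clusters.
# CLI equivalent: aws s3 cp /mnt/anaconda/anaconda.zip s3://<YOUR-BUCKET>/anaconda.zip
import boto3

s3 = boto3.client("s3")
s3.upload_file("/mnt/anaconda/anaconda.zip", "<YOUR-BUCKET>", "anaconda.zip")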

Configure Oozie

Next, you configure Oozie to use Pyspark and the Anaconda platform.

Get the location of your Oozie sharelib folder. Issue the following command and take note of the “sharelibDirNew” value:

oozie admin -sharelibupdate

For this post, this value is “hdfs://ip-192-168-4-200.us-east-1.compute.internal:8020/user/oozie/share/lib/lib_20170616133136”.

Put the required Pyspark files into Oozie’s sharelib location. The following files are required for Oozie to be able to run Pyspark commands:

  • pyspark.zip
  • py4j-0.10.4-src.zip

These are located on the EMR master instance in the location “/usr/lib/spark/python/lib/”, and must be put into the Oozie sharelib spark directory. This location is the value of the sharelibDirNew parameter value (shown above) with “/spark/” appended, that is, “hdfs://ip-192-168-4-200.us-east-1.compute.internal:8020/user/oozie/share/lib/lib_20170616133136/spark/”.

To do this, issue the following commands:

hdfs dfs -put /usr/lib/spark/python/lib/py4j-0.10.4-src.zip hdfs://ip-192-168-4-200.us-east-1.compute.internal:8020/user/oozie/share/lib/lib_20170616133136/spark/
hdfs dfs -put /usr/lib/spark/python/lib/pyspark.zip hdfs://ip-192-168-4-200.us-east-1.compute.internal:8020/user/oozie/share/lib/lib_20170616133136/spark/

After you’re done, Oozie can use Pyspark in its processes.

Pass the anaconda.zip file into HDFS as follows:

hdfs dfs -put /mnt/anaconda/anaconda.zip /tmp/myLocation/anaconda.zip

(Optional) Verify that it was transferred successfully with the following command:

hdfs dfs -ls /tmp/myLocation/

On your master node, execute the following command:

export PYSPARK_PYTHON=/mnt/anaconda/bin/python

Set the PYSPARK_PYTHON environment variable on the executor nodes. Put the following configurations in your “spark-opts” values in your Oozie workflow.xml file:

--conf spark.executorEnv.PYSPARK_PYTHON=./anaconda_remote/bin/python
--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./anaconda_remote/bin/python

This is referenced from the Oozie job in the following line in your workflow.xml file, also included as part of your “spark-opts”:

--archives hdfs:///tmp/myLocation/anaconda.zip#anaconda_remote

Your Oozie workflow.xml file should now look something like the following:

<workflow-app name="spark-wf" xmlns="uri:oozie:workflow:0.5">
<start to="start_spark" />
<action name="start_spark">
    <spark xmlns="uri:oozie:spark-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <prepare>
            <delete path="/tmp/test/spark_oozie_test_out3"/>
        </prepare>
        <master>yarn-cluster</master>
        <mode>cluster</mode>
        <name>SparkJob</name>
        <class>clear</class>
        <jar>hdfs:///user/oozie/apps/myPysparkProgram.py</jar>
        <spark-opts>--queue default
            --conf spark.ui.view.acls=*
            --executor-memory 2G --num-executors 2 --executor-cores 2 --driver-memory 3g
            --conf spark.executorEnv.PYSPARK_PYTHON=./anaconda_remote/bin/python
            --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=./anaconda_remote/bin/python
            --archives hdfs:///tmp/myLocation/anaconda.zip#anaconda_remote
        </spark-opts>
    </spark>
    <ok to="end"/>
    <error to="kill"/>
</action>
        <kill name="kill">
                <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
        </kill>
        <end name="end"/>
</workflow-app>

Test steps

To test this out, you can use the following job.properties and myPysparkProgram.py file, along with the following steps:

job.properties

masterNode ip-xxx-xxx-xxx-xxx.us-east-1.compute.internal
nameNode hdfs://${masterNode}:8020
jobTracker ${masterNode}:8032
master yarn
mode cluster
queueName default
oozie.libpath ${nameNode}/user/oozie/share/lib
oozie.use.system.libpath true
oozie.wf.application.path ${nameNode}/user/oozie/apps/

Note: You can get your master node IP address (denoted as “ip-xxx-xxx-xxx-xxx” here) from the value for the sharelibDirNew parameter noted earlier.

myPysparkProgram.py

from pyspark import SparkContext, SparkConf
import numpy
import sys

conf = SparkConf().setAppName('myPysparkProgram')
sc = SparkContext(conf=conf)

# Read the input file from HDFS.
rdd = sc.textFile("/user/hadoop/input.txt")

# Use numpy from the Anaconda platform to compute a value.
x = numpy.sum([3, 4, 5])  # total = 12

# Append the sum to every line, then write the results back to HDFS.
rdd = rdd.map(lambda line: line + " " + str(x))
rdd.saveAsTextFile("/user/hadoop/output")

Put the “myPysparkProgram.py” into the location mentioned between the “<jar>xxxxx</jar>” tags in your workflow.xml. In this example, the location is “hdfs:///user/oozie/apps/”. Use the following command to move the “myPysparkProgram.py” file to the correct location:

hdfs dfs -put myPysparkProgram.py /user/oozie/apps/

Put the above workflow.xml file into the “/user/oozie/apps/” location in hdfs:

hdfs dfs -put workflow.xml /user/oozie/apps/

Note: The job.properties file is run locally from the EMR master node.

Create a sample input.txt file with some data in it. For example:

input.txt

This is a sentence.
So is this. 
This is also a sentence.

Put this file into hdfs:

hdfs dfs -put input.txt /user/hadoop/

Execute the job in Oozie with the following command. This creates an Oozie job ID.

oozie job -config job.properties -run

You can check the Oozie job state with the command:

oozie job -info <Oozie job ID>

When the job is successfully finished, the results are located at:
/user/hadoop/output/part-00000
/user/hadoop/output/part-00001

Run the following commands to view the output:
hdfs dfs -cat /user/hadoop/output/part-00000
hdfs dfs -cat /user/hadoop/output/part-00001

The output will be:

This is a sentence. 12
So is this. 12
This is also a sentence. 12

Summary

The myPysparkProgram.py script has successfully imported the numpy library from the Anaconda platform and used it to produce some output. If you tried to run this job using the standard system Python, you’d encounter an import error such as “ImportError: No module named numpy”.

Now, when your Python job runs in Oozie, any packages imported by your Pyspark script are loaded into your job directly from the Anaconda platform. Simple!

If you have questions or suggestions, please leave a comment below.


Additional Reading

Learn how to use Apache Oozie workflows to automate Apache Spark jobs on Amazon EMR.

About the Author

John Ohle is an AWS BigData Cloud Support Engineer II for the BigData team in Dublin. He works to provide advice and solutions to our customers on their Big Data projects and workflows on AWS. In his spare time, he likes to play music, learn, develop tools and write documentation to further help others – both colleagues and customers alike.

Top Ten Ways to Protect Yourself Against Phishing Attacks

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/top-ten-ways-protect-phishing-attacks/

It’s hard to miss the increasing frequency of phishing attacks in the news. Earlier this year, a major phishing attack targeted Google Docs users, and attempted to compromise at least one million Google Docs accounts. Experts say the “phish” was convincing and sophisticated, and even people who thought they would never be fooled by a phishing attack were caught in its net.

What is phishing?

Phishing attacks use seemingly trustworthy but malicious emails and websites to obtain your personal account or banking information. The attacks are cunning and highly effective because they often appear to come from an organization or business you actually use. The scam comes into play by tricking you into visiting a website you believe belongs to the trustworthy organization, but in fact is under the control of the phisher attempting to extract your private information.

Phishing attacks are once again in the news due to a handful of high profile ransomware incidents. Ransomware invades a user’s computer, encrypts their data files, and demands payment to decrypt the files. Ransomware most often makes its way onto a user’s computer through a phishing exploit, which gives the ransomware access to the user’s computer.

The best strategy against phishing is to scrutinize every email and message you receive and never get caught. Easier said than done—even smart people sometimes fall victim to a phishing attack. To minimize the damage in the event of a phishing attack, backing up your data is the ultimate defense and should be part of your anti-phishing and overall anti-malware strategy.

How do you recognize a phishing attack?

A phishing attacker may send an email seemingly from a reputable credit card company or financial institution that requests account information, often suggesting that there is a problem with your account. When users respond with the requested information, attackers can use it to gain access to the accounts.

The image below is a mockup of how a phishing attempt might appear. In this example, courtesy of Wikipedia, the bank is fictional, but in a real attempt the sender would use an actual bank, perhaps even the bank where the targeted victim does business. The sender is attempting to trick the recipient into revealing confidential information by getting the victim to visit the phisher’s website. Note the misspelling of the words “received” and “discrepancy” as recieved and discrepency. Misspellings sometimes are indications of a phishing attack. Also note that although the URL of the bank’s webpage appears to be legitimate, the hyperlink would actually take you to the phisher’s webpage, which would be altogether different from the URL displayed in the message.

By Andrew Levine – en:Image:PhishingTrustedBank.png, Public Domain, https://commons.wikimedia.org/w/index.php?curid=549747

Top ten ways to protect yourself against phishing attacks

  1. Always think twice when presented with a link in any kind of email or message before you click on it. Ask yourself whether the sender would ask you to do what it is requesting. Most banks and reputable service providers won’t ask you to reveal your account information or password via email. If in doubt, don’t use the link in the message and instead open a new webpage and go directly to the known website of the organization. Sign in to the site in the normal manner to verify that the request is legitimate.
  2. A good precaution is to always hover over a link before clicking on it and observe the status line in your browser to verify that the link in the text and the destination link are in fact the same.
  3. Phishers are clever and getting better all the time, and you might be fooled by a simple ruse to make you think the link is one you recognize. Links can have hard-to-detect misspellings that would result in visiting a site very different from what you expected.
  4. Be wary even of emails and messages from people you know. It’s very easy to spoof an email so it appears to come from someone you know, or to create a URL that appears to be legitimate, but isn’t.

For example, let’s say that you work for roughmedia.com and you get an email from Chuck in accounting ([email protected]) that has an attachment for you, perhaps a company form you need to fill out. You likely wouldn’t notice in the sender address that the phisher has replaced the “m” in media with an “r” and an “n” that look very much like an “m.” You think it’s good old Chuck in accounting and it’s actually someone “phishing” for you to open the attachment and infect your computer. This type of attack is known as “spear phishing” because it’s targeted at a specific individual and is using social engineering—specifically familiarity with the sender—as part of the scheme to fool you into trusting the attachment. This technique is by far the most successful on the internet today. (This example is based on Gimlet Media’s Reply All Podcast Episode, “What Kind of Idiot Gets Phished?“)
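To make the “rn”-for-“m” trick concrete, here’s a tiny Python sketch. The addresses are reconstructed from the article’s hypothetical scenario (the mailbox name is a guess), not real ones:

# The spoof swaps the "m" in "media" for the visually similar pair "rn".
real = "chuck@roughmedia.com"     # hypothetical legitimate address
spoof = "chuck@roughrnedia.com"   # hypothetical phisher's address

print(real == spoof)              # False: software is never fooled
print(len(real), len(spoof))      # 20 21: the spoof is one character longer

Your eyes may blur “rn” into “m,” but an exact string comparison never will, which is why revealing the raw address and reading it character by character is a habit worth building.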

  5. Use anti-malware software, but don’t rely on it to catch all attacks. Phishers change their approach often to keep ahead of the software attack detectors.
  6. If you are asked to enter any valuable information, only do so if you’re on a secure connection. Look for the “https” prefix before the site URL, indicating the site is employing SSL (Secure Socket Layer). If there is no “s” after “http,” it’s best not to enter any confidential information.
By Fabio Lanari – Internet1.jpg by Rock1997 modified., GFDL, https://commons.wikimedia.org/w/index.php?curid=20995390
  7. Avoid logging in to online banks and similar services via public Wi-Fi networks. Criminals can compromise open networks with man-in-the-middle attacks that capture your information or spoof website addresses over the connection and redirect you to a fake page they control.
  8. Email, instant messaging, and gaming social channels are all possible vehicles to deliver phishing attacks, so be vigilant!
  9. Lay the foundation for a good defense by choosing reputable tech vendors and service providers that respect your privacy and take steps to protect your data. At Backblaze, we have full-time security teams constantly looking for ways to improve our security.
  10. When it is available, always take advantage of multi-factor verification to protect your accounts. The standard categories used for authentication are 1) something you know (e.g. your username and password), 2) something you are (e.g. your fingerprint or retina pattern), and 3) something you have (e.g. an authenticator app on your smartphone). An account that allows only a single factor for authentication is more susceptible to hacking than one that supports multiple factors. Backblaze supports multi-factor authentication to protect customer accounts.

Be a good internet citizen, and help reduce phishing and other malware attacks by notifying the organization being impersonated in the phishing attempt, or by forwarding suspicious messages to the Federal Trade Commission at [email protected]. Some email clients and services, such as Microsoft Outlook and Google Gmail, give you the ability to easily report suspicious emails. Phishing emails misrepresenting Apple can be reported to [email protected].

Backing up your data is an important part of a strong defense against phishing and other malware

The best way to avoid becoming a victim is to be vigilant against suspicious messages and emails, but also to assume that no matter what you do, it is very possible that your system will be compromised. Even the most sophisticated and tech-savvy of us can be ensnared if we are tired, in a rush, or just unfamiliar with the latest methods hackers are using. Remember that hackers are working full-time on ways to fool us, so it’s very difficult to keep ahead of them.

The best defense is to make sure that any data that could be compromised by hackers—basically all of the data that is reachable via your computer—is not your only copy. You do that by maintaining an active and reliable backup strategy.

Files that are backed up to cloud storage, such as with Backblaze, are not vulnerable to attacks on your local computer in the way that local files, attached drives, network drives, or sync services like Dropbox that have local directories on your computer are.

In the event that your computer is compromised and your files are lost or encrypted, you can recover your files if you have a cloud backup that is beyond the reach of attacks on your computer.

The post Top Ten Ways to Protect Yourself Against Phishing Attacks appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Defending anti-netneutrality arguments

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/defending-anti-netneutrality-arguments.html

Last week, activists proclaimed a “NetNeutrality Day”, trying to convince the FCC to regulate NetNeutrality. As a libertarian, I tweeted many reasons why NetNeutrality is stupid. NetNeutrality is exactly the sort of government regulation Libertarians hate most. Somebody tweeted the following challenge, which I thought I’d address here.

The links point to two separate cases.

  • the Comcast BitTorrent throttling case
  • a lawsuit against Time Warner for poor service
The tone of the tweet suggests that my anti-NetNeutrality stance cannot be defended in light of these cases. But of course this is wrong. The short answers are:

  • the Comcast BitTorrent throttling benefits customers
  • poor service has nothing to do with NetNeutrality

The long answers are below.

The Comcast BitTorrent Throttling

The presumption is that any sort of packet-filtering is automatically evil, and against the customer’s interests. That’s not true.
Take GoGoInflight’s internet service for airplanes. They block access to video sites like NetFlix. That’s because they often have as little as 1 Mbps for the entire plane, which is enough to support many people checking email and browsing Facebook, but a single person trying to watch video would overload the internet connection for everyone. Therefore, their Internet service won’t work unless they filter video sites.
GoGoInflight breaks a lot of other NetNeutrality rules, such as providing free access to Amazon.com or promotional deals where users of a particular phone get free Internet access that everyone else pays for. And all of this is allowed by the FCC, which lets GoGoInflight break NetNeutrality rules because doing so is clearly in the customer interest.
Comcast’s throttling of BitTorrent is likewise clearly in the customer interest. Until the FCC stopped the practice, Comcast throttled BitTorrent but allowed unlimited downloads. Afterwards, it imposed a 300-gigabyte/month bandwidth cap instead.
Internet access is a series of tradeoffs. BitTorrent causes congestion during prime time (6pm to 10pm). Comcast has to solve it somehow — not solving it wasn’t an option. Their options were:
  • Charge all customers more, so that the 99% not using BitTorrent subsidizes the 1% who do.
  • Impose a bandwidth cap, preventing heavy BitTorrent usage.
  • Throttle BitTorrent packets during prime-time hours when the network is congested.
Option 3 is clearly the best. BitTorrent downloads take hours, days, and sometimes weeks. BitTorrent users don’t mind throttling during prime-time congested hours. That’s preferable to the other option, bandwidth caps.
I’m a BitTorrent user, and a heavy downloader (I scan the Internet on a regular basis from cloud machines, then download the results to home, which can often be 100 gigabytes in size for a single scan). I want prime-time BitTorrent throttling rather than bandwidth caps. The EFF/FCC action that prevented BitTorrent throttling forced me to move to Comcast Business Class, which doesn’t have bandwidth caps, charging me $100 more a month. It’s why I don’t contribute to the EFF — if they had not agitated for this, taking such choices away from customers, I’d have $1200 more per year to donate to worthy causes.
Ask any user of BitTorrent which they prefer: 300gig monthly bandwidth cap or BitTorrent throttling during prime-time congested hours (6pm to 10pm). The FCC’s action did not help Comcast’s customers, it hurt them. Packet-filtering would’ve been a good thing, not a bad thing.
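
To make the tradeoff concrete, here’s a rough Python sketch of the kind of policy option 3 implies: bulk traffic is rate-limited only during the congested window. The traffic classes, rates, and hours are illustrative assumptions, not Comcast’s actual configuration.

from datetime import datetime

PRIME_TIME = range(18, 22)  # 6pm to 10pm, the congested window (assumed)

def allowed_rate_mbps(traffic_class, now=None):
    """Return the rate cap for a traffic class; the numbers are illustrative."""
    now = now or datetime.now()
    if traffic_class == "bittorrent" and now.hour in PRIME_TIME:
        return 1.0   # throttle bulk transfers only while the network is congested
    return 25.0      # otherwise, full advertised speed

print(allowed_rate_mbps("bittorrent", datetime(2017, 7, 20, 19)))  # prime time: 1.0
print(allowed_rate_mbps("bittorrent", datetime(2017, 7, 20, 3)))   # off-peak: 25.0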

The Time-Warner Case
First of all, no matter how you define the case, it has nothing to do with NetNeutrality. NetNeutrality is about filtering packets, giving some priority over others. This case is about providing slow service for everyone.
Secondly, it’s not true. Time Warner provided the same access speeds as everyone else. Just because they promise 10mbps download speeds doesn’t mean you get 10mbps to NetFlix. That’s not how the Internet works — that’s not how any of this works.
To prove this, look at NetFlix’s connection speed graphs. They show Time Warner Cable is average for the industry. It had the same congestion problems most ISPs had in 2014, and it has the same inability to provide more than 3mbps during prime-time (6pm-10pm) that all ISPs have today.

The YouTube video quality diagnostic pages show Time Warner Cable to be similar to other providers around the country. They also show the prime-time bump between 6pm and 10pm.
Congestion is an essential part of the Internet design. When an ISP like Time Warner promises you 10mbps bandwidth, that’s only “best effort”. There’s no way they can promise 10mbps stream to everybody on the Internet, especially not to a site like NetFlix that gets overloaded during prime-time.
Indeed, it’s the defining feature of the Internet compared to the old “telecommunications” network. The old phone system guaranteed you a steady 64-kbps stream between any two points in the phone network, but it cost a lot of money. Today’s Internet provides multi-megabit streams for free video calls (Skype, Facetime) around the world, but with the occasional dropped packets because of congestion.
Whatever lawsuit money-hungry lawyers come up with isn’t about how an ISP like Time Warner works. It’s only about how they describe the technology. They work no differently than every other ISP.
Conclusion

The short answer to the above questions is this: Comcast’s BitTorrent throttling benefits customers, and the Time Warner issue has nothing to do with NetNeutrality at all.

The tweet demonstrates what NetNeutrality really means. It has nothing to do with the facts of any case, especially given how often people point to ISP ills that have nothing actually to do with NetNeutrality. Instead, what NetNeutrality is really about is socialism. People are convinced corporations are evil and want the government to run the Internet. The Comcast/BitTorrent case is a prime example of why this is a bad idea: the government’s definition of what customers want is often far different from what customers actually want.

New – Next-Generation GPU-Powered EC2 Instances (G3)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-next-generation-gpu-powered-ec2-instances-g3/

I first wrote about the benefits of GPU-powered computing in 2013 when we launched the G2 instance type. Since that launch, AWS customers have used the G2 instances to deliver high performance graphics to mobile devices, TV sets, and desktops.

Today we are taking a step forward and launching the G3 instance type. Powered by NVIDIA Tesla M60 GPUs, these instances are available in three sizes (all VPC-only and EBS-only):

Model        GPUs  GPU Memory  vCPUs  Main Memory  EBS Bandwidth
g3.4xlarge   1     8 GiB       16     122 GiB      3.5 Gbps
g3.8xlarge   2     16 GiB      32     244 GiB      7 Gbps
g3.16xlarge  4     32 GiB      64     488 GiB      14 Gbps

Each GPU has 8 GiB of memory, 2048 parallel processing cores, and a hardware encoder capable of supporting up to 10 H.265 (HEVC) 1080p30 streams and up to 18 H.264 1080p30 streams, making these instances a great fit for 3D rendering & visualization, virtual reality, video encoding, remote graphics workstation (NVIDIA GRID), and other server-side graphics workloads that need a massive amount of parallel processing power. The GPUs support OpenGL 4.5, DirectX 12.0, CUDA 8.0, and OpenCL 1.2. When you launch a G3 instance you have access to an NVIDIA GRID Virtual Workstation License and can make use of the NVIDIA GRID driver without purchasing a license on your own.

The instances use Intel Xeon E5-2686 v4 (Broadwell) processors running at 2.7 GHz. On the networking side, Enhanced Networking (via the Elastic Network Adapter) provides up to 20 Gbps of aggregate network bandwidth within a Placement Group, along with up to 14 Gbps of EBS bandwidth.
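
If you want to try one from code, here is a minimal boto3 sketch that launches a g3.4xlarge; the AMI ID and key pair name are placeholders you would replace with your own.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-XXXXXXXX",     # placeholder: an AMI with the NVIDIA driver installed
    InstanceType="g3.4xlarge",  # 1 GPU, 16 vCPUs, 122 GiB RAM (see the table above)
    KeyName="my-key-pair",      # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])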

Our customers have told us that they are looking forward to visualizing large 3D seismic models, configuring cars in 3D, and providing students with the ability to run high-end 2D and 3D applications. For example, Calgary Scientific can take applications that are powered by the Unreal Engine and make them accessible on mobile devices and from within web pages, with collaborative viewing support. Visit their Demo Gallery to see PureWeb Reality in action.

You can launch these instances today in the US East (Ohio), US East (Northern Virginia), US West (Oregon), US West (Northern California), AWS GovCloud (US), and EU (Ireland) Regions as On-Demand, Reserved Instances, Spot Instances, and Dedicated Hosts, with more Regions coming soon.

Jeff;

Deploying Java Microservices on Amazon EC2 Container Service

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/

This post and accompanying code graciously contributed by:

Huy Huynh
Sr. Solutions Architect
Magnus Bjorkman
Solutions Architect

Java is a popular language used by many enterprises today. To simplify and accelerate Java application development, many companies are moving from a monolithic to microservices architecture. For some, it has become a strategic imperative. Containerization technology, such as Docker, lets enterprises build scalable, robust microservice architectures without major code rewrites.

In this post, I cover how to containerize a monolithic Java application to run on Docker. Then, I show how to deploy it on AWS using Amazon EC2 Container Service (Amazon ECS), a high-performance container management service. Finally, I show how to break the monolith into multiple services, all running in containers on Amazon ECS.

Application Architecture

For this example, I use the Spring Pet Clinic, a monolithic Java application for managing a veterinary practice. It is a simple REST API, which allows the client to manage and view Owners, Pets, Vets, and Visits.

It is a simple three-tier architecture:

  • Client
    You simulate this by using curl commands.
  • Web/app server
    This is the Java and Spring-based application that you run using the embedded Tomcat. As part of this post, you run this within Docker containers.
  • Database server
    This is the relational database for your application that stores information about owners, pets, vets, and visits. For this post, use MySQL RDS.

I decided not to put the database inside a container, as containers were designed for applications and are transient in nature. The choice was made even easier because you have a fully managed database service available with Amazon RDS.

RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity that you request to installing the database software. After your database is up and running, RDS automates common administrative tasks, such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.
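
For reference, provisioning such a database is a single API call. The following is a minimal boto3 sketch with hypothetical identifiers and credentials, not the exact configuration used in this post:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="petclinic-db",    # hypothetical instance name
    DBInstanceClass="db.t2.medium",
    Engine="mysql",
    MasterUsername="admin",                 # placeholder credentials; keep real
    MasterUserPassword="change-me-please",  # secrets in Parameter Store instead
    AllocatedStorage=20,                    # storage in GiB
    MultiAZ=True,  # synchronous cross-AZ replication with automatic failover
)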

Walkthrough

You can find the code for the example covered in this post at amazon-ecs-java-microservices on GitHub.

Prerequisites

You need the following to walk through this solution:

  • An AWS account
  • An access key and secret key for a user in the account
  • The AWS CLI installed

Also, install the latest versions of the following:

  • Java
  • Maven
  • Python
  • Docker

Step 1: Move the existing Java Spring application to a container deployed using Amazon ECS

First, move the existing monolith application to a container and deploy it using Amazon ECS. This is a great first step because you get some benefits even before breaking the monolith apart:

  • An improved pipeline. The container allows an engineering organization to create a standard pipeline for the application lifecycle.
  • No mutations to machines.

You can find the monolith example at 1_ECS_Java_Spring_PetClinic.

Container deployment overview

The following diagram is an overview of what the setup looks like for Amazon ECS and related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The load balancer that distributes requests across all available ports and instances registered in the application’s target group using round-robin.
  • The target group that is updated by Amazon ECS to always have an up-to-date list of all the service containers in the cluster. This includes the port on which they are accessible.
  • One Amazon ECS cluster that hosts the container for the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Each container has a single application process that is bound to port 8080 within its namespace. In reality, each container is exposed on a different, randomly assigned port on the host.

The architecture is containerized but still monolithic, because each container has the same features as all the others.

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon EC2 Container Registry (Amazon ECR) repository for the application.
  • A service/task definition that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers, so that the application can connect to the MySQL RDS instance.

I have automated setup with the 1_ECS_Java_Spring_PetClinic/ecs-cluster.cf AWS CloudFormation template and a Python script.

The Python script calls the CloudFormation template for the initial setup of the VPC, Amazon ECS cluster, and RDS instance. It then extracts the outputs from the template and uses those for API calls to create Amazon ECR repositories, tasks, services, Application Load Balancer, and target groups.
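
The script itself is in the repository; its core pattern looks roughly like this sketch (the stack and repository names here are hypothetical):

import boto3

cfn = boto3.client("cloudformation")
ecr = boto3.client("ecr")

# Create the VPC, Amazon ECS cluster, and RDS instance, then wait for completion.
with open("ecs-cluster.cf") as f:
    cfn.create_stack(
        StackName="petclinic-infra",      # hypothetical stack name
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
    )
cfn.get_waiter("stack_create_complete").wait(StackName="petclinic-infra")

# Extract the template outputs for use in subsequent API calls.
stack = cfn.describe_stacks(StackName="petclinic-infra")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}

# Use the outputs when creating the remaining resources, e.g. the ECR repository.
ecr.create_repository(repositoryName="petclinic")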

Environment variables and Spring properties binding

As part of the Python script, you pass in a number of environment variables to the container as part of the task/container definition:

'environment': [
    {
        'name': 'SPRING_PROFILES_ACTIVE',
        'value': 'mysql'
    },
    {
        'name': 'SPRING_DATASOURCE_URL',
        'value': my_sql_options['dns_name']
    },
    {
        'name': 'SPRING_DATASOURCE_USERNAME',
        'value': my_sql_options['username']
    },
    {
        'name': 'SPRING_DATASOURCE_PASSWORD',
        'value': my_sql_options['password']
    }
],

The preceding environment variables work in concert with the Spring property system. The value in the variable SPRING_PROFILES_ACTIVE makes Spring use the MySQL version of the application property file. The other environment variables override the following properties in that file:

  • spring.datasource.url
  • spring.datasource.username
  • spring.datasource.password

Optionally, you can also encrypt sensitive values by using Amazon EC2 Systems Manager Parameter Store. Instead of handing in the password, you pass in a reference to the parameter and fetch the value as part of the container startup. For more information, see Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks.
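
As a minimal sketch of that pattern, assuming a SecureString parameter named /petclinic/db-password has already been created (a hypothetical name), the container startup code could fetch the secret like this:

import boto3

ssm = boto3.client("ssm")

# Fetch and decrypt the secret at container startup instead of
# passing the plaintext password in the task definition.
password = ssm.get_parameter(
    Name="/petclinic/db-password",  # hypothetical parameter name
    WithDecryption=True,
)["Parameter"]["Value"]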

Spotify Docker Maven plugin

Use the Spotify Docker Maven plugin to create the image and push it directly to Amazon ECR. This lets you build and push the image as part of the regular Maven build, integrating image generation into the overall build process. Use an explicit Dockerfile as input to the plugin.

# Minimal Alpine Linux base image with Oracle JDK 8
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
# Copy the Spring Boot fat jar produced by the Maven build into the image
ADD spring-petclinic-rest-1.7.jar app.jar
# Touch the jar so its modification time is the image build time
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
# Use /dev/urandom as the entropy source to avoid slow startup waiting on /dev/random
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

The Python script discussed earlier uses the AWS CLI to authenticate you with AWS. The script places the token in the appropriate location so that the plugin can work directly against the Amazon ECR repository.

Test setup

You can test the setup by running the Python script:
python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:
curl <your endpoint from output above>/owner

You can clean this up before going to the next section:
python setup.py -m cleanup -r <your region>

Step 2: Converting the monolith into microservices running on Amazon ECS

The second step is to convert the monolith into microservices. For a real application, you would likely not do this in one step, but rather re-architect the application piece by piece. You would continue to run your monolith, which would keep getting smaller as each piece is broken apart.

By migrating to microservices, you gain four benefits:

  • Isolation of crashes
    If one microservice in your application is crashing, then only that part of your application goes down. The rest of your application continues to work properly.
  • Isolation of security
    When microservice best practices are followed, the result is that if an attacker compromises one service, they only gain access to the resources of that service. They can’t horizontally access other resources from other services without breaking into those services as well.
  • Independent scaling
    When features are broken out into microservices, then the amount of infrastructure and number of instances of each microservice class can be scaled up and down independently.
  • Development velocity
    In a monolith, adding a new feature can potentially impact every other feature that the monolith contains. On the other hand, a proper microservice architecture has new code for a new feature going into a new service. You can be confident that any code you write won’t impact the existing code at all, unless you explicitly write a connection between two microservices.

Find the microservices example at 2_ECS_Java_Spring_PetClinic_Microservices.
You break apart the Spring Pet Clinic application by creating a microservice for each REST API operation, as well as creating one for the system services.

Java code changes

Comparing the project structure between the monolith and the microservices version, you can see that each service is now its own separate build.
First, the monolith version:

You can clearly see how each API operation is its own subpackage under the org.springframework.samples.petclinic package, all part of the same monolithic application.
This changes as you break it apart in the microservices version:

Now, each API operation is its own separate build, which you can build independently and deploy. You have also duplicated some code across the different microservices, such as the classes under the model subpackage. This is intentional as you don’t want to introduce artificial dependencies among the microservices and allow these to evolve differently for each microservice.

The dependencies among the API operations also become more loosely coupled. In the monolithic version, the components are tightly coupled and use object-based invocation.

Here is an example of this from the OwnerController operation, where the class is directly calling PetRepository to get information about pets. PetRepository is the Repository class (Spring data access layer) to the Pet table in the RDS instance for the Pet API:

@RestController
class OwnerController {

    @Inject
    private PetRepository pets;
    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> visitList.addAll(pet.getVisits()));
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }
}

In the microservice version, call the Pet API operation rather than PetRepository directly. Decouple the components by using interprocess communication; in this case, the REST API. This provides for fault tolerance and disposability.

@RestController
class OwnerController {

    @Value("#{environment['SERVICE_ENDPOINT'] ?: 'localhost:8080'}")
    private String serviceEndpoint;

    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> {
            logger.info(getPetVisits(pet.getId()).toString());
            visitList.addAll(getPetVisits(pet.getId()));
        });
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }

    private List<Visit> getPetVisits(int petId){
        List<Visit> visitList = new ArrayList<Visit>();
        RestTemplate restTemplate = new RestTemplate();
        Pet pet = restTemplate.getForObject("http://"+serviceEndpoint+"/pet/"+petId, Pet.class);
        logger.info(pet.getVisits().toString());
        return pet.getVisits();
    }
}

You now have an additional method that calls the API. You also pass in the service endpoint that should be called, so that you can easily inject dynamic endpoints based on the current deployment.

Container deployment overview

Here is an overview of what the setup looks like for Amazon ECS and the related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The Application Load Balancer that inspects the client request. Based on routing rules, it directs the request to an instance and port from the target group that matches the rule.
  • The Application Load Balancer that has a target group for each microservice. The target groups are used by the corresponding services to register available container instances. Each target group has a path, so when you call the path for a particular microservice, it is mapped to the correct target group. This allows you to use one Application Load Balancer to serve all the different microservices, accessed by the path. For example, https://<load balancer endpoint>/owner/* would be mapped and directed to the Owner microservice.
  • One Amazon ECS cluster that hosts the containers for each microservice of the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Because you are running multiple containers on the same instances, use dynamic port mapping to avoid port clashing. By using dynamic port mapping, the container is allocated an anonymous port on the host to which the container port (8080) is mapped. The anonymous port is registered with the Application Load Balancer and target group so that traffic is routed correctly.
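
In the task definition, dynamic port mapping is requested by setting the host port to 0. Here is a hedged boto3 sketch with hypothetical names and a placeholder image URI:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="owner-service",  # hypothetical microservice name
    containerDefinitions=[{
        "name": "owner",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/owner:1.7",  # placeholder
        "memory": 512,
        "portMappings": [{
            "containerPort": 8080,  # the port the application binds to
            "hostPort": 0,          # 0 tells ECS to assign a random host port
        }],
    }],
)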

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon ECR repository for each microservice.
  • A service/task definition per microservice that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers. That way, the application can connect to the MySQL RDS instance.

I have again automated setup with the 2_ECS_Java_Spring_PetClinic_Microservices/ecs-cluster.cf CloudFormation template and a Python script.

The CloudFormation template remains the same as in the previous section. In the Python script, you are now building five different Java applications, one for each microservice (also includes a system application). There is a separate Maven POM file for each one. The resulting Docker image gets pushed to its own Amazon ECR repository, and is deployed separately using its own service/task definition. This is critical to get the benefits described earlier for microservices.

Here is an example of the POM file for the Owner microservice:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.springframework.samples</groupId>
    <artifactId>spring-petclinic-rest</artifactId>
    <version>1.7</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.2.RELEASE</version>
    </parent>
    <properties>
        <!-- Generic properties -->
        <java.version>1.8</java.version>
        <docker.registry.host>${env.docker_registry_host}</docker.registry.host>
    </properties>
    <dependencies>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <!-- Spring and Spring Boot dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-cache</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Databases - Uses HSQL by default -->
        <dependency>
            <groupId>org.hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- caching -->
        <dependency>
            <groupId>javax.cache</groupId>
            <artifactId>cache-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </dependency>
        <!-- end of webjars -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.13</version>
                <configuration>
                    <imageName>${env.docker_registry_host}/${project.artifactId}</imageName>
                    <dockerDirectory>src/main/docker</dockerDirectory>
                    <useConfigFile>true</useConfigFile>
                    <registryUrl>${env.docker_registry_host}</registryUrl>
                    <!--dockerHost>https://${docker.registry.host}</dockerHost-->
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                    <forceTags>false</forceTags>
                    <imageTags>
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Test setup

You can test this by running the Python script:

python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:

curl <your endpoint from output above>/owner

Conclusion

Migrating a monolithic application to a containerized set of microservices can seem like a daunting task. Following the steps outlined in this post, you can begin to containerize monolithic Java apps, taking advantage of the container runtime environment, and beginning the process of re-architecting into microservices. On the whole, containerized microservices are faster to develop, easier to iterate on, and more cost effective to maintain and secure.

This post focused on the first steps of microservice migration. You can learn more about optimizing and scaling your microservices with components such as service discovery, blue/green deployment, circuit breakers, and configuration servers at http://aws.amazon.com/containers.

If you have questions or suggestions, please comment below.

Burner laptops for DEF CON

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/burner-laptops-for-def-con.html

Hacker summer camp (Defcon, Blackhat, BSidesLV) is upon us, so I thought I’d write up some quick notes about bringing a “burner” laptop. A Chromebook is your best choice in terms of security, but I need Windows/Linux tools, so I got a Windows laptop.

I chose the Asus e200ha for $199 from Amazon with free (and fast) shipping. There are similar notebooks with roughly the same hardware and price from other manufacturers (HP, Dell, etc.), so I’m not sure how this compares against those other ones. However, it fits my needs as a “burner” laptop, namely:

  • cheap
  • lasts 10 hours easily on battery
  • weighs 2.2 pounds (1 kilogram)
  • 11.6 inch and thin

Some other specs are:

  • 4 gigs of RAM
  • 32 gigs of eMMC flash memory
  • quad core 1.44 GHz Intel Atom CPU
  • Windows 10
  • free Microsoft Office 365 for one year
  • good, large keyboard
  • good, large touchpad
  • USB 3.0
  • microSD
  • WiFi ac
  • no fans, completely silent

There are compromises, of course.

  • The Atom CPU is slow, though it’s only noticeable when churning through heavy webpages. Adblocking addons or Brave are a necessity. Most things are usably fast, such as using Microsoft Word.
  • Crappy sound and video, though VLC does a fine job playing movies with headphones on the airplane. Using it in bright sunlight will be difficult.
  • micro-HDMI; keep in mind that if you intend to do presos from it, you’ll need an HDMI adapter.
  • It has limited storage: 32 gigs in theory, about half that usable.
  • It uses a special Windows 10 compressed install that you can’t actually upgrade without a completely new install. It doesn’t have the latest Windows 10 Creators update. I lost a gig thinking I could compress system files.

Copying files across the 802.11ac WiFi to the disk was quite fast, several hundred megabits-per-second. The eMMC isn’t as fast as an SSD, but it’s a lot faster than typical SD card speeds.

The first thing I did once I got the notebook was to install the free VeraCrypt full disk encryption. The CPU has AES acceleration, so it’s fast. There is a problem with the keyboard driver during boot that makes it really hard to enter long passwords — you have to carefully type one key at a time to prevent extra keystrokes from being entered.

You can’t really install Linux on this computer, but you can use virtual machines. I installed VirtualBox and downloaded the Kali VM. I had some problems attaching USB devices to the VM. First of all, VirtualBox requires a separate downloaded extension to get USB working. Second, it conflicts with USBpcap that I installed for Wireshark.

It comes with one year of free Office 365. Obviously, Microsoft is hoping to hook the user into a longer term commitment, but in practice next year at this time I’d get another burner $200 laptop rather than spend $99 on extending the Office 365 license.

Let’s talk about the CPU. It’s Intel’s “Atom” processor, not their mainstream (Core i3 etc.) processor. Even though it has roughly the same GHz as the processor in an 11-inch MacBook Air and twice the cores, it’s noticeably and painfully slower. This is especially noticeable on ad-heavy web pages, while other things seem to work just fine. It has hardware acceleration for most video formats, though I had trouble getting Netflix to work.

The tradeoff for a slow CPU is phenomenal battery life. It seems to last forever on battery. It’s really pretty cool.

Conclusion

A Chromebook is likely more secure, but for my needs, this $200 laptop is perfect.

Google Removed 2.5 Billion ‘Pirate’ Search Results

Post Syndicated from Ernesto original https://torrentfreak.com/google-removed-2-5-billion-pirate-search-results-170706/

Google is coping with a continuous increase in takedown requests from copyright holders, which target pirate sites in search results.

Just a few years ago the search engine removed ‘only’ a few thousand URLs per day, but this has since grown to millions. When added up, the numbers are truly staggering.

In its transparency report, Google now states that it has removed 2.5 billion reported links for alleged copyright infringement. This is roughly 90 percent of all requests the company received.

The chart below breaks down the takedown requests into several categories. In addition to the URLs that were removed, the search engine also received 154 million duplicate URLs and 25 million invalid URLs.

Another 80 million links remain in search results because they can’t be classified as copyright infringing, according to Google.

Google’s takedown overview

The 2.5 billion removed links are spread out over 1.1 million websites. File-storage service 4shared takes the crown with 64 million targeted URLs, followed at a distance by mp3toys.xyz, rapidgator.net, uploaded.net, and chomikuj.pl.

While rightsholders have increased their takedown efforts over the years, the major entertainment industry groups are still not happy with the current state of Google’s takedown process.

One of the main complaints has been that content which Google de-lists often reappears under new URLs.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down,” a BPI spokesperson told us last month.

Ideally, rightsholders would like Google to ensure that content “stays down” while blocking the most notorious pirate sites from search results entirely. Known ‘pirate’ sites such as The Pirate Bay have no place in search results, they argue.

Google, however, believes such broad measures will lead to all sorts of problems, including over-blocking, and maintains that the current system is working as the DMCA was intended.

The search engine did implement various other initiatives to counter piracy, including the downranking of pirate sites and promoting legal options in search results, which it details in its regularly updated “How Google Fights Piracy” report.

In addition, Google and various rightsholders have signed a voluntary agreement to address “domain hopping” by pirate sites and share data to better understand how users are searching for content. For now, however, this effort is limited to the UK.


New Lawsuit Demands ISP Blockades Against ‘Pirate’ Site Sci-Hub

Post Syndicated from Ernesto original https://torrentfreak.com/new-lawsuit-demands-isp-blockades-against-pirate-site-sci-hub-170629/

Founded more than 140 years ago, the American Chemical Society (ACS) is a leading source of academic publications in the field of chemistry.

The non-profit organization has around 157,000 members and researchers publish tens of thousands of articles a year in its peer-reviewed journals.

ACS derives a significant portion of its revenue from its publishing work, which is in large part behind a paywall. As such, it is not happy with websites that offer their copyrighted articles for free, such as Sci-Hub.

The deviant ‘pirate site’ believes that all scientific articles should be open to the public, as that’s in the best interest of science. While some academics are sympathetic to the goal, publishers share a different view.

Just last week Sci-Hub lost its copyright infringement case against Elsevier, and now ACS is following suit with a separate case. In a complaint filed in a Virginia District Court, the scientific society demands damages for Sci-Hub’s copyright and trademark infringements.

According to the filing, Sci-Hub has “stolen Plaintiff’s copyright-protected scientific articles and reproduced and distributed them on the Internet without permission.”

ACS points out that Sci-Hub is operating two websites that are nearly identical to the organization’s official site, located at pubs.acs.org.sci-hub.cc and acs.org.secure.sci-hub.cc. These are confusing to the public, they claim, and also an infringement of its copyrights and trademarks.

“The Pirated/Spoofed Site appears to almost completely replicate the content of Plaintiff’s website. For example, the Pirated/Spoofed Site replicates webpages on ACS’s history, purpose, news, scholarship opportunities, and budget,” the complaint (pdf) reads.

“Each of these pages on the Pirated/Spoofed Site contains ACS’s Copyrighted Works and the ACS Marks, creating the impression that the Pirated/Spoofed Site is associated with ACS.”

From the ACS complaint

By offering its articles for free and mimicking the ACS website, Sci-Hub is in direct competition with the scientific society. As a result, ACS claims to lose revenue.

“Defendants are attempting to divert users and revenues away from ACS by replicating and distributing ACS’s Copyrighted Works without authorization,” the complaint reads.

With the lawsuit, ACS hopes to recoup the money it claims to have lost. It’s likely that the total damages amount will run in the millions. However, if the defendants stay out of reach, this might be hard to collect.

Perhaps this is why the current lawsuit has included a request for a broader injunction against Sci-Hub. Not only does it ask for domain name seizures, but the scientific society also wants search engines, web hosting companies and general Internet providers to block access to the site.

“That those in privity with Defendants and those with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries cease facilitating access to any or all domain names and websites through which Defendants engage in unlawful access to, use, reproduction, and distribution of the ACS Marks or ACS’s Copyrighted Works,” it reads.

If granted, it would mean that Internet providers such as Comcast would have to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.

It might very well be that ACS is not expecting any compensation for the alleged copyright and trademark infringements, but that the broad injunction is their main goal. If that is the case, this case could turn out to be more crucial than it looks at first sight.


Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The CloudFormation stack resource requires the templates of the child stacks to be stored in an S3 bucket. The location of each template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stacks (EC2 instances in an Auto Scaling group, elastic load balancer).

Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance type for web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance type for database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable in the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. This updates the configuration files to use the location of the child template files. The location of the template files is provided as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?
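
The validate-env.py script itself is in the repository; conceptually, it does something like the following sketch, where the configuration keys (stack_name, outputs) are hypothetical:

import sys

import boto3
import yaml

# Load the expected environment specification (hypothetical structure).
with open("env-details.yml") as f:
    expected = yaml.safe_load(f)

cfn = boto3.client("cloudformation")
stack = cfn.describe_stacks(StackName=expected["stack_name"])["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

# Fail the CodeBuild test action if any resource deviates from the specification.
for key, value in expected.get("outputs", {}).items():
    if outputs.get(key) != value:
        print(f"Mismatch for {key}: expected {value}, got {outputs.get(key)}")
        sys.exit(1)

print("All resources conform to the specification.")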

Upon successful completion of the test action, CloudFormation deletes the test stack and proceeds to the UAT stage in the pipeline.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
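
For reference, creating and executing a change set takes only a couple of API calls. Here is a minimal boto3 sketch with hypothetical stack and change set names:

import boto3

cfn = boto3.client("cloudformation")

# Describe what would change in the production stack without applying it yet.
with open("master-stack.yml") as f:
    cfn.create_change_set(
        StackName="petclinic-prod",            # hypothetical stack name
        ChangeSetName="petclinic-prod-update",
        TemplateBody=f.read(),
        Capabilities=["CAPABILITY_IAM"],       # needed when the stack creates IAM resources
    )
cfn.get_waiter("change_set_create_complete").wait(
    StackName="petclinic-prod", ChangeSetName="petclinic-prod-update"
)

# After the manual approval, apply the change set.
cfn.execute_change_set(
    StackName="petclinic-prod", ChangeSetName="petclinic-prod-update"
)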

Setting up a continuous delivery pipeline

I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.

My repo contains the CloudFormation templates, configuration files, and build specification files described earlier.

The continuous delivery pipeline is ready just seconds after clicking Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline successfully enables continuous delivery.

Implementing a change in a nested stack

To make changes to a child stack in a nested stack (for example, to update a parameter value or to add or change resources), update the master stack. The changes must be made in the appropriate template or configuration files and then checked in to the AWS CodeCommit repository. This triggers the deployment process described above.

Conclusion

In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in codepipeline-nested-cfn GitHub repository.


About the author

Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

A Raspbian desktop update with some new programming tools

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/a-raspbian-desktop-update-with-some-new-programming-tools/

Today we’ve released another update to the Raspbian desktop. In addition to the usual small tweaks and bug fixes, the big new changes are the inclusion of an offline version of Scratch 2.0, and of Thonny (a user-friendly IDE for Python which is excellent for beginners). We’ll look at all the changes in this post, but let’s start with the biggest…

Scratch 2.0 for Raspbian

Scratch is one of the most popular pieces of software on Raspberry Pi. This is largely due to the way it makes programming accessible – while it is simple to learn, it covers many of the concepts that are used in more advanced languages. Scratch really does provide a great introduction to programming for all ages.

Raspbian ships with the original version of Scratch, which is now at version 1.4. A few years ago, though, the Scratch team at the MIT Media Lab introduced the new and improved Scratch version 2.0, and ever since we’ve had numerous requests to offer it on the Pi.

There was, however, a problem with this. The original version of Scratch was written in a language called Squeak, which could run on the Pi in a Squeak interpreter. Scratch 2.0, however, was written in Flash, and was designed to run from a remote site in a web browser. While this made Scratch 2.0 a cross-platform application, which you could run without installing any Scratch software, it also meant that you had to be able to run Flash on your computer, and that you needed to be connected to the internet to program in Scratch.

We worked with Adobe to include the Pepper Flash plugin in Raspbian, which enables Flash sites to run in the Chromium browser. This addressed the first of these problems, so the Scratch 2.0 website has been available on Pi for a while. However, it still needed an internet connection to run, which wasn’t ideal in many circumstances. We’ve been working with the Scratch team to get an offline version of Scratch 2.0 running on Pi.

Screenshot of Scratch on Raspbian

The Scratch team had created a website to enable developers to create hardware and software extensions for Scratch 2.0; this provided a version of the Flash code for the Scratch editor which could be modified to run locally rather than over the internet. We combined this with a program called Electron, which effectively wraps up a local web page into a standalone application. We ended up with the Scratch 2.0 application that you can find in the Programming section of the main menu.

Physical computing with Scratch 2.0

We didn’t stop there though. We know that people want to use Scratch for physical computing, and it has always been a bit awkward to access GPIO pins from Scratch. In our Scratch 2.0 application, therefore, there is a custom extension which allows the user to control the Pi’s GPIO pins without difficulty. Simply click on ‘More Blocks’, choose ‘Add an Extension’, and select ‘Pi GPIO’. This loads two new blocks, one to read and one to write the state of a GPIO pin.

Screenshot of new Raspbian iteration of Scratch 2, featuring GPIO pin control blocks.

The Scratch team kindly allowed us to include all the sprites, backdrops, and sounds from the online version of Scratch 2.0. You can also use the Raspberry Pi Camera Module to create new sprites and backgrounds.

This first release works well, although it can be slow for some operations; this is largely unavoidable for Flash code running under Electron. Bear in mind that you will need to have the Pepper Flash plugin installed (which it is by default on standard Raspbian images). As Pepper Flash is only compatible with the processor in the Pi 2 and Pi 3, it is unfortunately not possible to run Scratch 2.0 on the Pi Zero or the original models of the Pi.

We hope that this makes Scratch 2.0 a more practical proposition for many users than it has been to date. Do let us know if you hit any problems, though!

Thonny: a more user-friendly IDE for Python

One of the paths from Scratch to ‘real’ programming is through Python. We know that the transition can be awkward, and this isn’t helped by the tools available for learning Python. It’s fair to say that IDLE, the Python IDE, isn’t the most popular piece of software ever written…

Earlier this year, we reviewed every Python IDE that we could find that would run on a Raspberry Pi, in an attempt to see if there was something better out there than IDLE. We wanted to find something that was easier for beginners to use but still useful for experienced Python programmers. We found one program, Thonny, which stood head and shoulders above all the rest. It’s a really user-friendly IDE, which still offers useful professional features like single-stepping of code and inspection of variables.

Screenshot of Thonny IDE in Raspbian

Thonny was created at the University of Tartu in Estonia; we’ve been working with Aivar Annamaa, the lead developer, on getting it into Raspbian. The original version of Thonny works well on the Pi, but because the GUI is written using Python’s default GUI toolkit, Tkinter, the appearance clashes with the rest of the Raspbian desktop, most of which is written using the GTK toolkit. We made some changes to bring things like fonts and graphics into line with the appearance of our other apps, and Aivar very kindly took that work and converted it into a theme package that could be applied to Thonny.

Due to the limitations of working within Tkinter, the result isn’t exactly like a native GTK application, but it’s pretty close. It’s probably good enough for anyone who isn’t a picky UI obsessive like me, anyway! Have a look at the Thonny webpage to see some more details of all the cool features it offers. We hope that having a more usable environment will help to ease the transition from graphical languages like Scratch into ‘proper’ languages like Python.

New icons

Other than these two new packages, this release is mostly bug fixes and small version bumps. One thing you might notice, though, is that we’ve made some tweaks to our custom icon set. We wondered if the icons might look better with slightly thinner outlines. We tried it, and they did: we hope you prefer them too.

Downloading the new image

You can either download a new image from the Downloads page, or you can use apt to update:

sudo apt-get update
sudo apt-get dist-upgrade

To install Scratch 2.0:

sudo apt-get install scratch2

To install Thonny:

sudo apt-get install python3-thonny

One more thing…

Before Christmas, we released an experimental version of the desktop running on Debian for x86-based computers. We were slightly taken aback by how popular it turned out to be! This made us realise that this was something we were going to need to support going forward. We’ve decided we’re going to try to make all new desktop releases for both Pi and x86 from now on.

The version of this we released last year was a live image that could run from a USB stick. Many people asked if we could make it permanently installable, so this version includes an installer. This uses the standard Debian install process, so it ought to work on most machines. I should stress, though, that we haven’t been able to test on every type of hardware, so there may be issues on some computers. Please be sure to back up your hard drive before installing it. Unlike the live image, this will erase and reformat your hard drive, and you will lose anything that is already on it!

You can still boot the image as a live image if you don’t want to install it, and it will create a persistence partition on the USB stick so you can save data. Just select ‘Run with persistence’ from the boot menu. To install, choose either ‘Install’ or ‘Graphical install’ from the same menu. The Debian installer will then walk you through the install process.

You can download the latest x86 image (which includes both Scratch 2.0 and Thonny) from here or here for a torrent file.

One final thing

This version of the desktop is based on Debian Jessie. Some of you will be aware that a new stable version of Debian (called Stretch) was released last week. Rest assured – we have been working on porting everything across to Stretch for some time now, and we will have a Stretch release ready some time over the summer.

The post A Raspbian desktop update with some new programming tools appeared first on Raspberry Pi.

Court Grants Subpoenas to Unmask ‘TVAddons’ and ‘ZemTV’ Operators

Post Syndicated from Ernesto original https://torrentfreak.com/court-grants-subpoenas-to-unmask-tvaddons-and-zemtv-operators-170621/

Earlier this month we broke the news that third-party Kodi add-on ZemTV and the TVAddons library were being sued in a federal court in Texas.

In a complaint filed by American satellite and broadcast provider Dish Network, both stand accused of copyright infringement, facing up to $150,000 for each offense.

While the allegations are serious, Dish doesn’t know the full identities of the defendants.

To find out more, the company requested a broad range of subpoenas from the court, targeting Amazon, Github, Google, Twitter, Facebook, PayPal, and several hosting providers.

From Dish’s request

This week the court granted the subpoenas, which means that they can be forwarded to the companies in question. Whether that will be enough to identify the people behind ‘TVAddons’ and ‘ZemTV’ remains to be seen, but Dish has cast its net wide.

For example, the subpoena directed at Google covers any type of information that can be used to identify the account holder of [email protected], which is believed to be tied to ZemTV.

The information requested from Google includes IP address logs with session date and timestamps, but also covers “all communications,” including GChat messages from 2014 onwards.

Similarly, Twitter is required to hand over information tied to the accounts of the users “TV Addons” and “shani_08_kodi”, as well as other accounts linked to tvaddons.ag and streamingboxes.com. This also applies to the various tweets sent through these accounts.

The subpoena specifically mentions “all communications, including ‘tweets’, Twitter sent to or received from each Twitter Account during the time period of February 1, 2014 to present.”

From the Twitter subpoena

Similar subpoenas were granted for the other services, tailored towards the information Dish hopes to find there. For example, the broadcast provider also requests details of each transaction from PayPal, as well as all debits and credits to the accounts.

In some parts, the subpoenas appear to be quite broad. PayPal is asked to reveal information on any account with the credit card statement “Shani,” for example. Similarly, Github is required to hand over information on accounts that are ‘associated’ with the tvaddons.ag domain, which is referenced by many people who are not directly connected to the site.

The service providers in question still have the option to challenge the subpoenas or ask the court for further clarification. A full overview of all the subpoena requests is available here (Exhibit 2 and onwards), including all the relevant details. This also includes several letters to foreign hosting providers.

While Dish still appears to be keen to find out who is behind ‘TVAddons’ and ‘ZemTV,’ not much has been heard from the defendants in question.

ZemTV developer “Shani” shut down his addon soon after the lawsuit was announced, without mentioning it specifically. TVAddons, meanwhile, has been offline for well over a week, without any public explanation for the prolonged downtime.

The court’s order granting the subpoenas and letters of request is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago during March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

Given that it had taken BPI several years to reach 200 million, passing the quarter-billion mark only a few months later was all the more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, the last couple of million arriving in the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google places greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would reduce if a “take down, stay down” regime was implemented. The BPI says it’s difficult to present a precise figure but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter-billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code should come into effect on June 1, two weeks ago, so it seems likely that some effects will be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While it’s clear that groups like BPI have developed systems to cope with the huge numbers of takedown notices required in today’s environment, it’s equally clear that few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into a compromise, and considering the implications, that could be a long way off.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

More notes on US-CERTs IOCs

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/more-notes-on-us-certs-iocs.html

Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT.

IOCs are “indicators of compromise”, things you can look for in order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic.

A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up at a suspect’s cabin in the woods, they can note that there’s a “white” car parked in front.

But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly.

This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that wastes people’s time, with little to distinguish the useful from the useless.

An example: a few months ago, US-CERT published its GRIZZLY STEPPE report. Among other things, it listed IP addresses used by hackers, but gave no indication of which addresses were worth watching for and which were useless.

Some of these IP addresses were useful, pointing to servers the group had been using as command-and-control servers for a long time. Other IP addresses were more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because exits change randomly and have no lasting relationship to the attackers. If you care about those Tor IP addresses, what you should be watching is a dynamically updated list of Tor exit nodes, refreshed daily.
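
As a sketch of that approach: the Tor Project publishes a bulk exit list which you can re-fetch on a schedule. Something like the following works, assuming the torbulkexitlist endpoint (correct at the time of writing) stays where it is.

import urllib.request

# the Tor Project's bulk exit list; re-fetch daily rather than baking
# a stale snapshot of exit IPs into your sensors
TOR_EXIT_LIST = "https://check.torproject.org/torbulkexitlist"

with urllib.request.urlopen(TOR_EXIT_LIST) as resp:
    exits = {line.strip() for line in resp.read().decode().splitlines()
             if line.strip() and not line.startswith("#")}

print(len(exits), "Tor exit IPs currently published")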

And finally, they listed IP addresses belonging to Yahoo, because the attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised; it’s just that everyone passes things through them, like email.

A Vermont power plant blindly dumped all those IP addresses into its sensors. As a consequence, when an employee checked their Yahoo email the next morning, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid.

Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but for the same reason you’d note a white car — because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name.

But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”.

Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That is, generally, something you should be aware of. But the fact that CRASHOVERRIDE did this is wholly meaningless.

What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly.
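
To make that concrete, here is a minimal Python sketch of matching on the hash rather than the filename. It assumes the hash in the Dragos report is a SHA-1 (40 hex characters is the right length for one), and the scan root is just an example path.

import hashlib
import os

IOC_SHA1 = "79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a"

def sha1_of(path, bufsize=1 << 20):
    # hash the file contents in chunks so large files don't eat RAM
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

for root, _dirs, files in os.walk(r"C:\Windows"):  # example scan root
    for name in files:
        path = os.path.join(root, name)
        try:
            if sha1_of(path) == IOC_SHA1:
                print("IOC hit:", path)  # flagged by contents, not by name
        except OSError:
            pass  # unreadable file; skip it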

Luckily, the DHS also passes along some of the raw information provided by Dragos. But even then, there are problems: they provide it only in formatted form, as HTML, PDF, or Excel documents. This corrupts the original data so that it is no longer machine readable. For example, from their webpage, they have the following:

import “pe”
import “hash”

Among the problems is the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents confuse the number 0 with the letter O, as if the raw data had been scanned in from a printed document and OCRed.
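
The curly-quote damage, at least, is mechanical to undo before feeding rules to a parser. A quick Python sketch (the 0-versus-O confusion can’t be fixed this way; that needs the original machine-readable data):

# map Word's "smart" punctuation back to the ASCII a parser expects
MANGLED = {
    "\u201c": '"',   # left double quotation mark
    "\u201d": '"',   # right double quotation mark
    "\u2018": "'",   # left single quotation mark
    "\u2019": "'",   # right single quotation mark
}

def demangle(text):
    for bad, good in MANGLED.items():
        text = text.replace(bad, good)
    return text

print(demangle("import \u201cpe\u201d"))  # import "pe"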

If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but ultimately provides negative value by mutilating the content.

This, ultimately, causes a lot of harm. The press trusts their content, and so does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations; I meet them at hacker cons and am fascinated by their stories. But institutionally, they are dumbed down to the same level as these US-CERT reports, with the smart people marginalized.

There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. An example of what not to do is giving power to people like Obama’s cyberczar, Michael Daniel, who once claimed his lack of technical knowledge was a bonus because it allowed him to see the strategic picture instead of getting distracted by details.