Microsoft Denies Piracy Extortion Claims, Returns Fire

Post Syndicated from Ernesto original https://torrentfreak.com/microsoft-denies-piracy-extortion-claims-returns-fire-180416/

For many years, Microsoft and the Software Alliance (BSA) have carried out piracy investigations into organizations large and small.

Companies accused of using Microsoft software without permission usually get a letter asking them to pay up, or face legal consequences.

This also happened to Hanna Instruments, a Rhode Island-based company that sells analytical instruments. Last year, the company was accused of using Microsoft Office products without a proper license.

In a letter, BSA’s lawyers informed Hanna that it would face up to $4,950,000 in damages if the case went to court. Instead, however, they offered to settle the matter for $72,074.

Adding some extra pressure, BSA also warned that Microsoft could get a court order that would allow U.S. marshals to raid the company’s premises.

While most of these cases are resolved behind closed doors, this one escalated. After being repeatedly contacted by BSA’s lawyers, Hanna decided to take the matter to court, claiming that Microsoft and BSA were trying to ‘extort’ money on ‘baseless’ accusations.

“BSA, Microsoft, and their counsel have, without supplying one scintilla of evidence, issued a series of letters for the sole purpose of extorting inflated monetary damages,” the company informed the court.

Late last week Microsoft and BSA replied to the complaint. While the two companies admit that they reached out to Hanna and offered a settlement, they deny several other allegations, including the extortion claims.

Instead, the companies submit a counterclaim, backing up their copyright infringement accusations and demanding damages.

“Hanna has engaged and continues to engage in the unauthorized installation, reproduction, and distribution and other unlawful use of Microsoft Software on computers on its premises and has used unlicensed copies of Microsoft Software to conduct its business,” they write.

According to Microsoft and BSA, the Rhode Island company still uses unauthorized product keys to activate and install unlicensed Microsoft software.

Turning Hanna’s own evidence against itself, they argue that two product keys were part of a batch of an educational program in China — not for commercial use in the United States.

Microsoft/BSA counterclaim

Another key could be traced back to what appears to be a counterfeit store which Microsoft has since shut down.

“The materials provided by Hanna also indicate that it purchased at least one copy of Microsoft Software from BuyCheapSoftware.com, a now-defunct website that was sued by Microsoft for selling stolen, abused, and otherwise unauthorized decoupled product keys,” Microsoft and BSA write.

According to Hanna, BSA previously failed to provide evidence to prove that the company was using unlicensed keys. However, the counterclaim suggests that the initial accusations had merit.

Whether BSA’s tactic of bringing up millions of dollars in damages and a possible raid by U.S. Marshals is the best strategy to resolve such a matter is, of course, up for debate.

It could very well be that Hanna unknowingly bought counterfeit software. Perhaps this will come out as the case progresses. That said, it could also help if both sides simply had a frank conversation to see if they can make peace, without threats.

Microsoft and BSA’s reply and counterclaim are available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

‘Pirate’ Android App Store Operator Avoids Prison

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-android-app-store-operator-avoids-prison-180413/

Assisted by police in France and the Netherlands, the FBI took down the “pirate” Android stores Appbucket, Applanet and SnappzMarket in the summer of 2012.

During the years that followed several people connected to the Android app sites were arrested and indicted, and slowly but surely these cases are reaching their conclusions.

This week the Northern District Court of Georgia announced the sentencing of one of the youngest defendants. Aaron Buckley was fifteen when he started working on Applanet, and still a teenager when armed agents raided his house.

Years passed and a lot has changed since then, Buckley’s attorney informed the court before sentencing. The former pirate, who pleaded guilty to Conspiracy to Commit Copyright Infringement and Criminal Copyright Infringement, is a completely different person today.

Similar to many people who have a run-in with the law, life wasn’t always easy for him. Computers offered a welcome escape but also dragged Buckley into trouble, something he now deeply regrets.

Following the indictment, things started to change. The Applanet operator picked up his life, away from the computer, and also got involved in community work. Among other things, he plays a leading role in a popular support community for LGBT teenagers.

Given the tough circumstances of his personal life, which we won’t elaborate on, his attorney requested a downward departure from the regular sentencing guidelines, to allow for lesser punishment.

After considering all the options, District Court Judge Timothy C. Batten agreed to a lower sentence. Unlike some other pirate app store operators, who must spend years in prison, Buckley will not be incarcerated.

Instead, the Applanet operator, who is now in his mid-twenties, will be put on probation for three years, including a year of home confinement.

The sentence (pdf)

In addition, he has to perform 20 hours of community service and work towards passing a General Educational Development (GED) exam.

Living for years with the prospect of a possible prison sentence is tough. Given the circumstances, this sentence must be a huge relief.

TorrentFreak contacted Buckley, who informed us that he is happy with the outcome and ready to work on a bright future.

“I really respect the government and the judge in their sentencing and am extremely grateful that they took into account all concerns of my health and life situation in regards to possible sentences,” he tells us.

“I am just glad to have another chance to use my time and skills to hopefully contribute to society in a more positive way as much as I am capable thanks to the outcome of the case.”

Time to move on.

Major Pirate Site Operators’ Sentences Increased on Appeal

Post Syndicated from Andy original https://torrentfreak.com/major-pirate-site-operators-sentences-increased-on-appeal-180330/

With The Pirate Bay, the most famous pirate site in Swedish history, still in full swing, a lesser-known streaming platform started to gain traction more than half a decade ago.

From humble beginnings, Swefilmer eventually grew to become Sweden’s most popular movie and TV show streaming site. At one stage it was credited alongside another streaming portal for serving up to 25% of all online video streaming in Sweden.

But in 2015, everything came crashing down. An operator of the site in his early twenties was raided by local police and arrested. An older Turkish man, who was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers, was later arrested in Germany.

Their activities between November 2013 and June 2015 landed them an appearance before the Varberg District Court last January, where they were accused of making more than $1.5m in advertising revenue from copyright infringement.

The prosecutor described the site as being like “organized crime”. The then 26-year-old was described as the main player behind the site, with the then 23-year-old playing a much smaller role. The latter received an estimated $4,000 of the proceeds, the former was said to have pocketed more than $1.5m.

As expected, things didn’t go well. The older man, who was described as leading a luxury lifestyle, was convicted of 1,044 breaches of copyright law and serious money laundering offenses. He was sentenced to three years in prison and ordered to forfeit 14,000,000 SEK (US$1.68m).

Due to his minimal role, the younger man was given probation and ordered to complete 120 hours of community service. Speaking with TorrentFreak at the time, the 23-year-old said he was relieved at the relatively light sentence but noted it may not be over yet.

Indeed, as is often the case with these complex copyright prosecutions, the matter found itself at the Court of Appeal of Western Sweden. On Wednesday its decision was handed down and it’s bad news for both men.

“The Court of Appeal, like the District Court, judges the men for breach of copyright law,” the Court said in a statement.

“They are judged to have made more than 1,400 copyrighted films available through the Swefilmer streaming service, without obtaining permission from copyright holders. One of the men is also convicted of gross money laundering because he received revenues from the criminal activity.”

In respect of the now 27-year-old, the Court decided to hand down a much more severe sentence, extending the term of imprisonment from three to four years.

There was some better news in respect of the amount he has to forfeit to the state, however. The District Court set this amount at 14,000,000 SEK (US$1.68m) but the Court of Appeal reduced it to ‘just’ 4,000,000 SEK (US$482,280).

The younger man’s conditional sentence was upheld but community service was replaced with a fine of 10,000 SEK (US$1,200). Also, along with his accomplice, he must now pay significant damages to a Norwegian plaintiff in the case.

“Both men will jointly pay damages of NOK 2.2 million (US$283,000) together with interest to Nordisk Film A/S for copyright infringement in one of the films posted on the website,” the Court writes in its decision.

But even now, the matter may not be closed. Ansgar Firsching, the older man’s lawyer, told SVT that the case could go all the way to the Supreme Court.

“I have informed my client about the content of the judgment and it is highly likely that he will turn to the Supreme Court,” Firsching said.

It appears that the 27-year-old will argue that at the time of the alleged offenses, merely linking to copyrighted content was not a criminal offense but whether this approach will succeed is seriously up for debate.

While linking was previously considered by some to sit in a legal gray area, the District Court drew heavily on the GS Media ruling handed down by the European Court of Justice in September 2016.

In that case, the EU Court found that those who, without a financial motive, post links to content they do not know is infringing generally do not commit infringement. The Swefilmer case doesn’t immediately appear to fit either of those parameters.

Dotcom Wins Privacy Breach Case Against New Zealand Government

Post Syndicated from Ernesto original https://torrentfreak.com/dotcom-wins-privacy-breach-case-against-new-zealand-government-180326/

Following the Megaupload shutdown and the raid on Kim Dotcom’s mansion, many hours have been spent on the case in courts around the world.

While Dotcom and several of his former colleagues were targeted for alleged copyright crimes, thus far the major battles have been focused on other legal aspects of the case.

In a complaint filed at the Human Rights Tribunal, Dotcom accused the New Zealand Government of improperly withholding information. In 2015 Dotcom asked 28 ministers and several government departments to disclose information they held on him, without result.

The requests were labeled as “urgent” due to Dotcom’s pending legal case, but then-Attorney General Chris Finlayson denied them as being vexatious and without sufficient grounds.

Today the Human Rights Tribunal ruled that, by denying the requests, the Crown was “in clear breach of its obligations under the Privacy Act,” awarding the Megaupload founder $90,000 in damages for “loss of dignity or injury to feelings.”

While the financial windfall must be welcome, Dotcom also sees this ruling as a big victory in the grander scheme of things. According to the New Zealand entrepreneur, it means that the U.S. extradition bid is dead in the water.

“What does the Human Rights Tribunal Judgement mean for my Extradition case? It is OVER!” Dotcom just tweeted.

“By unlawfully withholding information that could have helped my case the former Attorney General of New Zealand has perverted the course of Justice,” he adds.

It’s over…?

In addition to awarding damages, the ruling also requires the ministers and Government to comply with the original requests, as Newshub writes.

The Tribunal’s decision is a clear win for Dotcom. While it doesn’t automatically end the extradition case, going forward it certainly doesn’t hurt the position of Megaupload’s founder.

Who it could hurt, according to Dotcom, is New Zealand’s Privacy Commissioner John Edwards.

“I call for the immediate resignation of the Privacy Commissioner of New Zealand for his complicity with the former Attorney General and Crown Law in unlawfully withholding information that New Zealanders were legally entitled to,” Dotcom tweets.

The Privacy Commissioner retweeted Dotcom’s request without commenting on it, which drew another jab from Dotcom.

“I appreciate the acknowledgment. The Human Rights Tribunal judgment makes you look utterly incompetent at best or co-conspiratorial at worst. Which is it? Either way, you’re done,” Dotcom added.

A copy of the Human Rights Tribunal ruling is available here (pdf).

Introducing the B2 Snapshot Return Refund Program

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/b2-snapshot-return-refund-program/

B2 Snapshot Return Refund Program

What Is the B2 Snapshot Return Refund Program?

Backblaze’s mission is making cloud storage astonishingly easy and affordable. That guides our focus — making our customers’ data more usable.

Today, we’re pleased to introduce a trial of the B2 Snapshot Return Refund program. B2 customers have long been able to create a Snapshot of their data and order a hard drive with that data sent via FedEx anywhere in the world. Starting today, if the customer sends the drive back to Backblaze within 30 days, they will get a full refund.

This new feature is available automatically for B2 customers when they order a Snapshot. There are no extra buttons to push or boxes to check — just send back the drive within 30 days and we’ll refund your money. To put it simply, we are offering the cloud storage industry’s only refundable rapid data egress service.

You Shouldn’t be Afraid to Use Your Own Data

Last week, we cut the price of B2 downloads in half — from 2¢ per GB to 1¢ per GB. That 50% reduction makes B2’s download price 1/5 that of Amazon’s S3 (with B2 storage pricing already 1/4 that of S3). The price reduction and today’s introduction of the B2 Snapshot Return Refund program are deliberate moves to eliminate the industry’s biggest barrier to entry — the cost of using data stored in the cloud. Storage vendors who make it expensive to restore data, or who impose time delays on access, are reducing the usefulness of your data. We believe this is antithetical to encouraging the use of the cloud in the first place.
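To make the comparison concrete, here is a quick back-of-the-envelope sketch in Python. The B2 rate is the 1¢/GB figure quoted above; the S3 figure is not a quoted price but is inferred from the “1/5 the price” comparison, so treat it as an assumption.

```python
# Back-of-the-envelope download-cost comparison.
# B2's new download rate is $0.01/GB (quoted above); the S3 rate below is
# an assumption inferred from the "1/5 the price" comparison, not a quote.
B2_DOWNLOAD_PER_GB = 0.01
S3_DOWNLOAD_PER_GB = B2_DOWNLOAD_PER_GB * 5  # implied by the 1/5 ratio

def download_cost(gigabytes: float, rate_per_gb: float) -> float:
    """Dollar cost to download the given number of gigabytes."""
    return gigabytes * rate_per_gb

# Example: restoring a 500 GB archive.
print(f"B2: ${download_cost(500, B2_DOWNLOAD_PER_GB):.2f}")
print(f"S3: ${download_cost(500, S3_DOWNLOAD_PER_GB):.2f}")
```

At these rates, a 500 GB restore runs about $5 from B2 versus about $25 at the assumed S3 rate.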

Learning From Our Customers

Our Computer Backup product already has a Restore Return Refund program. It’s incredibly popular, and we enjoy the almost daily “you just saved my bacon” letters that come back with the returned hard drives. Our customer surveys have repeatedly demonstrated that the ability to get data back is one of the things that has made our Computer Backup service one of the most popular in the industry. So, it made sense to us that our B2 customers could use a similar program.

There are many ways B2 customers can benefit from the B2 Snapshot Return Refund program; here is a typical scenario.

Media and Entertainment Workflow Based Snapshots

Businesses in the Media and Entertainment (M&E) industry tend to have large quantities of digital media, and the amount of data will continue to increase in the coming years with more 4K and 8K cameras coming into regular use. When an organization needs to deliver or share that data, they typically have to manually download data from their internal storage system and copy it to a thumb drive or hard drive, or perhaps create an LTO tape. Once that is done, they take their storage device, label it, and mail it to their customer. Not only is this practice costly, time-consuming, and potentially insecure, it doesn’t scale well with larger amounts of data.

With just a few clicks, you can easily distribute or share your digital media if it is stored in the B2 Cloud. Here’s how the process works:

  1. Log in to your Backblaze B2 account.
  2. Navigate to the bucket where the data is located.
  3. Select the files, or the entire bucket, you wish to send and create a “Snapshot.”
  4. Once the Snapshot is complete you have three choices:
    • Download the Snapshot and pay $0.01/GB for the download
    • Have Backblaze copy the Snapshot to an external hard drive and FedEx it anywhere in the world. This stores up to 3.5 TB and costs $189.00. Return the hard drive to Backblaze within 30 days and you’ll get your $189.00 back.
    • Have Backblaze copy the Snapshot to a flash drive and FedEx it anywhere in the world. This stores up to 110 GB and costs $99.00. FedEx shipping to the specified location is included. Return the flash drive to Backblaze within 30 days and you’ll get your $99.00 back.

You can always keep the hard drive or flash drive and Backblaze, of course, will keep your money.

Each drive containing a Snapshot is encrypted. The encryption key can be found in your Backblaze B2 account after you log in. The FedEx tracking number is there as well. When the hard drive arrives at its destination, you can provide the encryption key to the recipient and they’ll be able to access the files. Note that the encryption key must be entered each time the hard drive is started, so the data remains protected even if the hard drive is returned to Backblaze.

The B2 Snapshot Return Refund program supports Snapshots as large as 3.5 terabytes. That means you can send about 50 hours of 4K video to a client or partner by selecting the hard drive option. If you select the flash drive option, a Snapshot can be up to 110 gigabytes, which is about 1 hour and 45 minutes of 4K video.
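Those video-capacity figures are easy to sanity-check. Here is a sketch assuming a hypothetical high-quality 4K bitrate of roughly 150 Mbps (an assumption on our part; real-world bitrates vary widely with codec and content):

```python
# Sanity check of the 4K-video capacity figures above.
# The 150 Mbps bitrate is an assumed figure for high-quality 4K footage;
# actual bitrates vary widely with codec and content.
ASSUMED_4K_MBPS = 150

def hours_of_video(capacity_gb: float, bitrate_mbps: float = ASSUMED_4K_MBPS) -> float:
    """Hours of video that fit in capacity_gb at the given bitrate."""
    gb_per_hour = bitrate_mbps * 3600 / 8 / 1000  # megabits/s -> GB/hour
    return capacity_gb / gb_per_hour

print(round(hours_of_video(3500), 1))  # hard drive option, ~3.5 TB
print(round(hours_of_video(110), 1))   # flash drive option, 110 GB
```

At that assumed bitrate the hard drive option works out to roughly 50 hours and the flash drive to a bit over an hour and a half, which is in the same ballpark as the figures above.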

While the example uses an M&E workflow, any workflow requiring the exchange or distribution of large amounts of data across distinct geographies will benefit from this service.

This is a Trial Program

Backblaze fully intends to offer the B2 Snapshot Return Refund Program for a long time. That said, there is no program like this in the industry and so we want to put some guardrails on it to ensure we can offer a sustainable program for all. Thus, the “fine print”:

  • Minimum Snapshot Size — a Snapshot must be greater than 10 GB to qualify for this program. Why? You can download a 10 GB Snapshot in a few minutes; why pay us to do the same thing and have it take a couple of days?
  • The 30 Day Clock — The clock starts on the day the drive is marked as delivered to you by FedEx and the clock ends on the date postmarked on the package we receive. If that’s 30 days or less, your refund will be granted.
  • 5 Drive Refunds Per Year — We are initially setting a limit of 5 drive refunds per B2 account per year. By placing a cap on the number of drive refunds per year, we are able to provide a service that is responsive to our entire client base. We expect to change or remove this limit once we have enough data to understand the demand and can make sure we are staffed properly.

It is Your Data — Use It

Our industry has a habit of charging little to store data and then usurious amounts to get it back. There are certainly real costs involved in data retrieval; we outlined them in our post on the Cost of Cloud Storage. But the rates the industry charges for data retrieval are clearly strategic moves to try to lock customers in. To us, that runs counter to doing our part to make data useful and our customers’ lives easier. That viewpoint drove our efforts to lower our download pricing and to create this program.

We hope you enjoy the B2 Snapshot Return Refund program. If you have a moment, please tell us in the comments below how you might use it!

The post Introducing the B2 Snapshot Return Refund Program appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Dotcom Affidavit Calls For Obama to Give Evidence in Megaupload Case

Post Syndicated from Andy original https://torrentfreak.com/dotcom-affidavit-calls-for-obama-to-give-evidence-in-megaupload-case-180320/

For more than six years since the raid on Megaupload, founder Kim Dotcom has insisted that the case against him, his co-defendants, and his company, was politically motivated.

The serial entrepreneur states unequivocally that former president Barack Obama’s close ties to Hollywood were the driving force.

Later today, Obama will touch down for a visit to New Zealand. In what appears to be a tightly managed affair, with heavy restrictions placed on the media and publicity, it seems clear that Obama wants to maintain control over his social and business engagements in the country.

But of course, New Zealand is home to Kim Dotcom and as someone who feels wronged by the actions of the former administration, he is determined to use this opportunity to shine more light on Obama’s role in the downfall of his company.

In a statement this morning, Dotcom reiterated his claims that attempts to have him extradited to the United States have no basis in law, chiefly due to the fact that the online dissemination of copyright-protected works by Megaupload’s users is not an extradition offense in New Zealand.

But Dotcom also attacks the politics behind his case, arguing that the Obama administration was under pressure from Hollywood to do something about copyright enforcement or risk losing financial support.

In connection with his case, Dotcom is currently suing the New Zealand government for billions of dollars, so while Obama is in town, Dotcom is demanding that the former president give evidence.

Dotcom’s case is laid out in a highly-detailed sworn affidavit dated March 19, 2018. The Megaupload founder explains that Hollywood has historically been a major benefactor of the Democrats so when seeking re-election for a further term, the Democrats were under pressure from the movie companies to make an example of Megaupload and Dotcom.

Dotcom notes that while he was based in Hong Kong, extradition to the US would have been challenging. So, with Dotcom seeking residence in New Zealand, a plot was hatched to allow him into the country, despite the New Zealand government knowing that a criminal prosecution lay in wait for him. Dotcom says that by doing Hollywood this favor, New Zealand could become a favored destination for US filmmakers.

“The interests of the United States and New Zealand were therefore perfectly aligned. I provided the perfect opportunity for New Zealand to facilitate the United States’ show of force on copyright enforcement,” Dotcom writes.

Citing documents obtained from Open Secrets, Dotcom shows how the Democrats took an 81% share of more than $46m donated to political parties in the US during the 2008 election cycle. In the 2010 cycle, 76% of more than $24m went to the Democrats and in 2012, they scooped up 78% of more than $56m.

Dotcom then recalls the attempts at passing the Stop Online Piracy Act (SOPA), which would have shifted the enforcement of copyright onto ISPs, assisting Hollywood greatly. Ultimately, Congressional support for the proposed legislation was withdrawn and Dotcom recalls this was followed by a public threat from the MPAA to withdraw campaign contributions on which the Democrats were especially reliant.

“The message to the White House was plain: do not expect funding if you do not advance the MPAA’s legislative agenda. On 20 January 2012, the day after this statement, I was arrested,” Dotcom notes.

Describing Megaupload as a highly profitable and innovative platform that highlighted copyright owners’ failure to keep up with the way in which content is now consumed, Dotcom says it made the perfect target for the Democrats.

Convinced the party was at the root of his prosecution, he utilized his connections in Hong Kong to contact Thomas Hart, a lawyer and lobbyist in Washington, D.C. with strong connections to the Democrats and the White House.

Dotcom said a telephone call between him and Mr Hart revealed that then Vice President Joe Biden was at the center of Dotcom’s prosecution but that Obama was dissatisfied with the way things had been handled.

“Biden did admit to have… you know, kind of started it, you know, along with support from others but it was Biden’s decision…,” Hart allegedly said.

“What he [President Obama] expressed to me was a growing concern about the matter. He indicated an awareness of that it had not gone well, that it was more complicated than he thought, that he will turn his attention to it more prominently after November.”

Dotcom says that Obama was “questioning the whole thing,” a suggestion that he may not have been fully committed to the continuing prosecution.

The affidavit then lists a whole series of meetings in 2011, documented in the White House visitor logs. They include meetings with then United States Attorney Neil McBride, various representatives from Hollywood, MPAA chief Chris Dodd, Mike Ellis of the MPA (who was based in Hong Kong and had met with New Zealand’s then Minister of Justice, Simon Power) and the Obama administration.

In summary, Dotcom suggests there was a highly organized scheme against him, hatched between Hollywood and the Obama administration, that had the provision of funds to win re-election at its heart.

From there, an intertwined agreement was reached at the highest levels of both the US and New Zealand governments where the former would benefit through tax concessions to Hollywood (and a sweetening of relations between the countries) and the latter would benefit financially through investment.

All New Zealand had to do was let Dotcom in for a while and then hand him over to the United States for prosecution. And New Zealand definitely knew that Dotcom was wanted by the US. Emails obtained by Dotcom concerning his residency application show that clearly.

“Kim DOTCOM is not of security concern but is likely to soon become the subject of a joint FBI / NZ Police criminal investigation. We have passed this over to NZ Police,” one of the emails reads. Another, well over a year before the raid, also shows the level of knowledge.

Bad but wealthy, so we have plans for him…

With “political pressure” to grant Dotcom’s application in place, Immigration New Zealand finally gave the Megaupload founder the thumbs-up on November 1, 2010. Dotcom believes that New Zealand was concerned he may have walked away from his application.

“This would have been of grave concern to the Government, which, at that time, was in negotiations with Hollywood lobby,” his affidavit reads.

“The last thing they would have needed at that delicate stage of the negotiations was for me to walk away from New Zealand and return to Hong Kong, where extradition would be more difficult. I believe that this concern is what prompted the ‘political pressure’ that led to my application finally being granted despite the presence of factors that would have caused anyone else’s application to have been rejected.”

Dotcom says that after being granted residency, there were signs things weren’t going to plan for him. The entrepreneur applied to buy his now-famous former mansion for NZ$37m, an application that was initially approved. However, after being passed to Simon Power, the application was denied.

“It would appear that, although my character was apparently good enough for me to be granted residence in November 2010, in July 2011 it was not considered good enough for me to buy property in New Zealand,” Dotcom notes.

“The Honourable Mr Power clearly did not want me purchasing $37 million of real estate, presumably because he knew that the United States was going to seek forfeiture of my assets and he did not want what was then the most expensive property in New Zealand being forfeited to the United States government.”

Of course, Dotcom concludes by highlighting the unlawful spying by New Zealand’s GCSB spy agency and the disproportionate use of force displayed by the police when they raided him in 2012 using dozens of armed officers. This, combined with all of the above, means that questions about his case must now be answered at the highest levels. With Obama in town, there’s no time like the present.

“As the evidence above demonstrates, this improper purpose which was then embraced by the New Zealand authorities, originated in the White House under the Obama administration. It is therefore necessary to examine Mr Obama in this proceeding,” Dotcom concludes.

Press blackouts aside, it appears that Obama has rather a lot of golf lined up for the coming days. Whether he’ll have any time to answer Dotcom’s questions is one thing, but whether he’ll even be asked to is perhaps the most important point of all.

The full affidavit and masses of supporting evidence can be found here.

How The Pirate Bay Helped Spotify Become a Success

Post Syndicated from Ernesto original https://torrentfreak.com/how-the-pirate-bay-helped-spotify-become-a-success-180319/

When Spotify launched its first beta in the fall of 2008, many people were blown away by its ease of use.

With the option to stream millions of tracks supported by an occasional ad, or free of ads for a small subscription fee, Spotify offered something that was more convenient than piracy.

In the years that followed, Spotify rolled out its music service in more than 60 countries, amassing over 160 million users. While the service is often billed as a piracy killer, ironically, it also owes its success to piracy.

As a teenager, Spotify founder and CEO Daniel Ek was fascinated by Napster, which triggered a piracy revolution in the late nineties. Napster made all the music in the world accessible in a few clicks, something Spotify also set out to do a few years later, legally.

“I want to replicate my first experience with piracy,” Ek told Businessweek years ago. “What eventually killed it was that it didn’t work for the people participating with the content. The challenge here is about solving both of those things.”

While the technical capabilities were certainly there, the main stumbling block was getting the required licenses. The music industry hadn’t had a lot of good experiences with the Internet a decade ago, so there was plenty of hesitation.

The same was true of Sweden, where The Pirate Bay had just gained a lot of traction. There was a pro-sharing culture being cultivated by Piratbyrån, Swedish for the Piracy Bureau, which was the driving force behind the torrent site in the early days.

After the first Pirate Bay raid in 2006, thousands of people gathered in the streets of Stockholm to declare their support for the site and their right to share.

Pro-piracy protest in Stockholm (Jon Åslund, CC BY 2.5)

Interestingly, however, this pro-piracy climate turned out to be in Spotify’s favor. In a detailed feature in the Swedish publication Breakit, Per Sundin, CEO of Sony BMG at the time, suggested that The Pirate Bay helped Spotify.

“If Pirate Bay had not existed or made such a mess in the market, I don’t think Spotify would have seen the light of the day. You wouldn’t get the licenses you wanted,” Sundin said.

With music industry revenues dropping, record labels had to fire hundreds of people. They were becoming desperate and were looking for change, something Spotify was promising.

At the time, the idea of having millions of songs readily and legally available was totally new. Many immediately saw it as an “alternative to music piracy” and even Pirate Bay founder Peter Sunde was impressed.

“It was great. It was always what was missing in the pirate services, that intuitive interface,” Sunde told Breakit.

Sunde also believed that The Pirate Bay and all the buzz around piracy in Sweden was a great boon to Spotify. But while the latter turned into a billion-dollar business that’s about to go public, Sunde and the other TPB founders still owe the labels millions in damages.

“Without file-sharing, The Pirate Bay and the political work done by Piratbyrån, it was not possible to get the licensing agreements Spotify received,” Sunde said. “Sometimes I think I should have received 10, 20 or 30 percent of Spotify, as a thank you for the help.”

In addition to creating the right climate for the major record labels to get on board, The Pirate Bay also appears to have been of more practical assistance.

When Spotify first launched several people noticed that some tracks still had tags from pirate groups such as FairLight in the title. Those are not the files you expect the labels to offer, but files that were on The Pirate Bay.

Also, Spotify mysteriously offered music from a band that decided to share their music on The Pirate Bay instead of the usual outlets. There’s only one place those tracks could have originated from.

The Pirate Bay.


Founder of Fan-Made Subtitle Site Loses Copyright Infringement Appeal

Post Syndicated from Andy original https://torrentfreak.com/founder-of-fan-made-subtitle-site-lose-copyright-infringement-appeal-180318/

For millions of people around the world, subtitles are the only way to enjoy media in languages other than that in the original production. For the deaf and hard of hearing, they are absolutely essential.

Movie and TV show companies tend to be quite good at providing subtitles eventually but, in line with other restrictive practices associated with their industry, it can often mean a long wait for the consumer, particularly in overseas territories.

For this reason, fan-made subtitles have become somewhat of a cottage industry in recent years. Where companies fail to provide subtitles quickly enough, fans step in and create them by hand. This has led to the rise of a number of subtitling platforms, including the now widely recognized Undertexter.se in Sweden.

The platform had its roots back in 2003 but first hit the headlines in 2013 when Swedish police caused an uproar by raiding the site and seizing its servers.

“The people who work on the site don’t consider their own interpretation of dialog to be something illegal, especially when we’re handing out these interpretations for free,” site founder Eugen Archy said at the time.

Vowing to never give up in the face of pressure from the authorities, anti-piracy outfit Rättighetsalliansen (Rights Alliance), and companies including Nordisk Film, Paramount, Universal, Sony and Warner, Archy said that the battle over what began as a high school project would continue.

“No Hollywood, you played the wrong card here. We will never give up, we live in a free country and Swedish people have every right to publish their own interpretations of a movie or TV show,” he said.

It took four more years but in 2017 the Undertexter founder was prosecuted for distributing copyright-infringing subtitles while facing a potential prison sentence.

Things didn’t go well and last September the Attunda District Court found him guilty and sentenced the then 32-year-old operator to probation. In addition, he was told to pay 217,000 Swedish krona ($26,400) to be taken from advertising and donation revenues collected through the site.

Eugen Archy took the case to appeal, arguing that the Svea Hovrätt (Svea Court of Appeal) should acquit him of all the charges and dismiss or at least reduce the amount he was ordered to pay by the lower court. Needless to say, this was challenged by the prosecution.

On appeal, Archy agreed that he was the person behind Undertexter but disputed that the subtitle files uploaded to his site infringed on the plaintiffs’ copyrights, arguing they were creative works in their own right.

While to an extent that may have been the case, the Court found that the translations themselves depended on the rights connected to the original work, which were entirely held by the relevant copyright holders. While paraphrasing and parody might be allowed, pure translations are completely covered by the rights in the original and cannot be seen as new and independent works, the Court found.

The Svea Hovrätt also found that Archy acted intentionally, noting that in addition to administering the site and doing some translating work himself, it was “inconceivable” that he did not know that the subtitles made available related to copyrighted dialog found in movies.

In conclusion, the Court of Appeal upheld Archy’s copyright infringement conviction (pdf, Swedish) and sentenced him to probation, as previously determined by the Attunda District Court.

Last year, the legal status of user-created subtitles was also tested in the Netherlands. In response to local anti-piracy outfit BREIN forcing several subtitling groups into retreat, a group of fansubbers decided to fight back.

After raising their own funds, in 2016 the “Free Subtitles Foundation” (Stichting Laat Ondertitels Vrij – SLOV) took the decision to sue BREIN with the hope of obtaining a favorable legal ruling.

In 2017 it all fell apart when the Amsterdam District Court handed down its decision and sided with BREIN on each count.

The Court found that subtitles can only be created and distributed after permission has been obtained from copyright holders. Doing so outside these parameters amounts to copyright infringement.


Needed: Senior Software Engineer

Post Syndicated from Yev original https://www.backblaze.com/blog/needed-senior-software-engineer/

Want to work at a company that helps customers in 156 countries around the world protect the memories they hold dear? A company that stores over 500 petabytes of customers’ photos, music, documents and work files in a purpose-built cloud storage system?

Well, here’s your chance. Backblaze is looking for a Sr. Software Engineer!

Company Description:

Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 – robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple – grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New Parent Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280

Want to know what you’ll be doing?

You will work on the server side APIs that authenticate users when they log in, accept the backups, manage the data, and prepare restored data for customers. And you will help build new features as well as support tools to help chase down and diagnose customer issues.

Must be proficient in:

  • Java
  • Apache Tomcat
  • Large scale systems supporting thousands of servers and millions of customers
  • Cross platform (Linux/Macintosh/Windows) — don’t need to be an expert on all three, but cannot be afraid of any

Bonus points for:

  • Cassandra experience
  • JavaScript
  • ReactJS
  • Python
  • Struts
  • JSPs

Looking for an attitude of:

  • Passionate about building friendly, easy-to-use interfaces and APIs.
  • Likes to work closely with other engineers, support, and sales to help customers.
  • Believes the whole world needs backup, not just English speakers in the USA.
  • Customer Focused (!!) — always focus on the customer’s point of view and how to solve their problem!

Required for all Backblaze Employees:

  • Good attitude and willingness to do whatever it takes to get the job done
  • Strong desire to work for a small, fast-paced company
  • Desire to learn and adapt to rapidly changing technologies and work environment
  • Rigorous adherence to best practices
  • Relentless attention to detail
  • Excellent interpersonal skills and good oral/written communication
  • Excellent troubleshooting and problem solving skills

This position is located in San Mateo, California, but we will also consider remote work as long as you’re no more than three time zones away and can come to San Mateo now and then.

Backblaze is an Equal Opportunity Employer.

If this sounds like you, follow these steps:

  1. Send an email to [email protected] with the position in the subject line.
  2. Include your resume.
  3. Tell us a bit about your programming experience.

The post Needed: Senior Software Engineer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Needed: Sales Development Representative!

Post Syndicated from Yev original https://www.backblaze.com/blog/needed-sales-development-representative/

At inception, Backblaze was a consumer company. Thousands upon thousands of individuals came to our website and gave us $5/mo to keep their data safe. But, we didn’t sell business solutions. It took us years before we had a sales team. In the last couple of years, we’ve released products that businesses of all sizes love: Backblaze B2 Cloud Storage and Backblaze for Business Computer Backup. Those businesses want to integrate Backblaze into their infrastructure, so it’s time to expand our sales team and hire our first dedicated outbound Sales Development Representative!

Company Description:
Founded in 2007, Backblaze started with a mission to make backup software elegant and provide complete peace of mind. Over the course of almost a decade, we have become a pioneer in robust, scalable, low-cost cloud backup. Recently, we launched B2 — robust and reliable object storage at just $0.005/GB/month. Part of our differentiation is being able to offer the lowest price of any of the big players while still being profitable.

We’ve managed to nurture a team oriented culture with amazingly low turnover. We value our people and their families. Don’t forget to check out our “About Us” page to learn more about the people and some of our perks.

We have built a profitable, high growth business. While we love our investors, we have maintained control over the business. That means our corporate goals are simple — grow sustainably and profitably.

Some Backblaze Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • New Parent Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office — located near Caltrain and Highways 101 & 280

For our first Sales Development Representative (SDR), we are looking for someone who is organized and has high energy and strong interpersonal communication skills. The ideal person will have a passion for sales, love to cold call, and find new ways to reach potential customers. Ideally the SDR will have 1-2 years of experience working in a fast-paced sales environment. We are looking for someone who knows how to manage their time and has top-class communication skills. It’s critical that our SDR is able to learn quickly when using new tools.

Additional Responsibilities Include:

  • Generate qualified leads, set up demos and outbound opportunities by phone and email.
  • Work with our account managers to pass along qualified leads and track them in salesforce.com.
  • Report internally on prospecting performance and identify potential optimizations.
  • Continuously fine-tune outbound messaging, both email and cold calls, to drive results.
  • Update and leverage salesforce.com and other sales tools to better track business and drive efficiencies.

Qualifications:

  • Bachelor’s degree (B.A.)
  • Minimum of 1-2 years of sales experience.
  • Excellent written and verbal communication skills.
  • Proven ability to work in a fast-paced, dynamic and goal-oriented environment.
  • Maintain a high sense of urgency and the entrepreneurial work ethic required to drive business outcomes, with exceptional attention to detail.
  • Positive, “can do” attitude; passionate and able to show commitment.
  • Fearless yet cordial personality: not afraid to make cold calls and introductions, yet personable enough to connect with potential Backblaze customers.
  • Articulate, with good listening skills.
  • Ability to set and manage multiple priorities.

What’s it like working with the Sales team?

The Backblaze sales team collaborates. We help each other out by sharing ideas, templates, and our customers’ experiences. When we talk about our accomplishments, there is no “I did this,” only “we.” We are truly a team.

We are honest with each other and with our customers, and we communicate openly. We aim to have fun by embracing crazy ideas and creative solutions. We try to think not outside the box, but with no boxes at all. Customers are the driving force behind the success of the company, and we care deeply about their success.

If this all sounds like you:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Tell us a bit about your sales experience.
  3. Include your resume.

The post Needed: Sales Development Representative! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

HDD vs SSD: What Does the Future for Storage Hold?

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/ssd-vs-hdd-future-of-storage/

SSD 60 TB drive

This is part one of a series.

Customers frequently ask us whether and when we plan to move our cloud backup and data storage to SSDs (Solid-State Drives). That’s not a surprising question considering the many advantages SSDs have over magnetic platter type drives, also known as HDDs (Hard-Disk Drives).

We’re a large user of HDDs in our data centers (currently 100,000 hard drives holding over 500 petabytes of data). We want to provide the best performance, reliability, and economy for our cloud backup and cloud storage services, so we continually evaluate which drives to use for operations and in our data centers. While we use SSDs for some applications, which we’ll describe below, there are reasons why HDDs will continue to be the primary drives of choice for us and other cloud providers for the foreseeable future.

HDDs vs SSDs

The laptop computer I am writing this on has a single 512GB SSD, which has become a common feature in higher-end laptops. An SSD’s advantages for a laptop are easy to understand: it is smaller than an HDD, faster, quieter, longer-lasting, and not susceptible to vibration or magnetic fields. It also has much lower latency and access times.

Today’s typical online price for a 2.5” 512GB SSD is $140 to $170. The typical online price for a 3.5” 512GB HDD is $44 to $65. That’s a pretty significant difference in price, but since the SSD helps make the laptop lighter, makes it more resistant to the inevitable shocks and jolts it will experience in daily use, and adds the benefits of faster booting, faster waking from sleep, and faster launching of applications and handling of big files, the extra cost for the SSD in this case is worth it.
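As a quick sanity check, the quoted prices work out to roughly a 2-4x per-gigabyte premium for the SSD. The arithmetic below uses only the price ranges cited above; actual street prices vary:

```python
# Per-gigabyte cost comparison using the article's quoted laptop-drive prices.

def per_gb(price_usd, capacity_gb):
    """Cost in dollars per gigabyte."""
    return price_usd / capacity_gb

ssd_low, ssd_high = per_gb(140, 512), per_gb(170, 512)
hdd_low, hdd_high = per_gb(44, 512), per_gb(65, 512)

print(f"SSD: ${ssd_low:.3f} to ${ssd_high:.3f} per GB")   # roughly $0.27-$0.33/GB
print(f"HDD: ${hdd_low:.3f} to ${hdd_high:.3f} per GB")   # roughly $0.09-$0.13/GB
print(f"SSD premium: {ssd_low / hdd_high:.1f}x to {ssd_high / hdd_low:.1f}x")
```

That premium is tolerable for a single laptop drive, which is why the trade-off discussed above usually favors the SSD there.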

Some of these SSD advantages, chiefly speed, also will apply to a desktop computer, so desktops are increasingly outfitted with SSDs, particularly to hold the operating system, applications, and data that is accessed frequently. Replacing a boot drive with an SSD has become a popular upgrade option to breathe new life into a computer, especially one that seems to take forever to boot or is used for notoriously slow-loading applications such as Photoshop.

We covered upgrading your computer with an SSD in our blog post SSD 101: How to Upgrade Your Computer With An SSD.

Data centers are an entirely different kettle of fish. The primary concerns for data center storage are reliability, storage density, and cost. While SSDs are strong in the first two areas, it’s the third where they are not yet competitive. At Backblaze we adopt higher density HDDs as they become available — we’re currently using both 10TB and 12TB drives (among other capacities) in our data centers. Higher density drives provide greater storage density per Storage Pod and Vault and reduce our overhead cost through less required maintenance and lower total power requirements. Comparable SSDs in those sizes would cost roughly $1,000 per terabyte, considerably higher than the corresponding HDD. Simply put, SSDs are not yet in the price range to make their use economical for the benefits they provide, which is the reason why we expect to be using HDDs as our primary storage media for the foreseeable future.

What Are HDDs?

HDDs have been around for over 60 years, since IBM introduced them in 1956. The first disk drive was the size of a car, stored a mere 3.75 megabytes, and cost $300,000 in today’s dollars.

IBM 350 Disk Storage System — 3.75MB in 1956

The 350 Disk Storage System was a major component of the IBM 305 RAMAC (Random Access Method of Accounting and Control) system, which was introduced in September 1956. It consisted of 40 platters and a dual read/write head on a single arm that moved up and down the stack of magnetic disk platters.

The basic mechanism of an HDD remains unchanged since then, though it has undergone continual refinement. An HDD uses magnetism to store data on a rotating platter. A read/write head is affixed to an arm that floats above the spinning platter reading and writing data. The faster the platter spins, the faster an HDD can perform. Typical laptop drives today spin at either 5400 RPM (revolutions per minute) or 7200 RPM, though some server-based platters spin at even higher speeds.
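The link between spindle speed and performance is easy to quantify: on average, the head must wait half a revolution for the right sector to rotate under it, so average rotational latency is (60 / RPM) / 2. A small sketch:

```python
# Average rotational latency: on average the head waits half a revolution
# before the requested sector passes under it.

def avg_rotational_latency_ms(rpm):
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2) * 1000  # milliseconds

for rpm in (5400, 7200, 15000):
    print(f"{rpm:>5} RPM: {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")
```

A 7200 RPM drive therefore averages about 4.2 ms of rotational latency alone, orders of magnitude slower than the microsecond-scale access times of flash.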

Exploded drawing of a hard drive

The platters inside the drives are coated with a magnetically sensitive film consisting of tiny magnetic grains. Data is recorded when a magnetic write-head flies just above the spinning disk; the write head rapidly flips the magnetization of one magnetic region of grains so that its magnetic pole points up or down, to encode a 1 or a 0 in binary code. If all this sounds like an HDD is vulnerable to shocks and vibration, you’d be right. They also are vulnerable to magnets, which is one way to destroy the data on an HDD if you’re getting rid of it.

The major advantage of an HDD is that it can store lots of data cheaply. One and two terabyte (1,024 and 2,048 gigabytes) hard drives are not unusual for a laptop these days, and 10TB and 12TB drives are now available for desktops and servers. Densities and rotation speeds continue to grow. However, if you compare the cost of common HDDs vs SSDs for sale online, the SSDs are roughly 3-5x the cost per gigabyte. So if you want cheap storage and lots of it, using a standard hard drive is definitely the more economical way to go.

What are the best uses for HDDs?

  • Disk arrays (NAS, RAID, etc.) where high capacity is needed
  • Desktops when low cost is priority
  • Media storage (photos, videos, audio not currently being worked on)
  • Drives with extreme number of reads and writes

What Are SSDs?

SSDs go back almost as far as HDDs, with the first semiconductor storage device compatible with a hard drive interface introduced in 1978, the StorageTek 4305.

Storage Technology 4305 SSD

The StorageTek 4305 was an SSD aimed at the IBM mainframe-compatible market. The STC 4305 was seven times faster than IBM’s popular 2305 HDD system (and also about half the price). It consisted of a cabinet full of charge-coupled devices and cost $400,000 for 45MB of capacity, with throughput speeds up to 1.5 MB/sec.

SSDs are based on a type of non-volatile memory called NAND (named for the Boolean operator “NOT AND,” and one of two main types of flash memory). Flash memory stores data in individual memory cells, which are made of floating-gate transistors. Though they are semiconductor-based memory, they retain their information when no power is applied to them — a feature that’s obviously a necessity for permanent data storage.

Samsung SSD 850 Pro

Compared to an HDD, SSDs have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. For most users, it’s the speed of an SSD that primarily attracts them. When discussing the speed of drives, what we are referring to is the speed at which they can read and write data.

For HDDs, the speed at which the platters spin strongly determines the read/write times. When data on an HDD is accessed, the read/write head must physically move to the location where the data was encoded on a magnetic section on the platter. If the file being read was written sequentially to the disk, it will be read quickly. As more data is written to the disk, however, it’s likely that the file will be written across multiple sections, resulting in fragmentation of the data. Fragmented data takes longer to read with an HDD as the read head has to move to different areas of the platter(s) to completely read all the data requested.

Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD, regardless of fragmentation.
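A toy model makes the fragmentation difference concrete. The seek time, access time, and transfer rates below are illustrative assumptions, not measurements from any particular drive:

```python
# Toy model: time to read a 100 MB file split into N fragments.
# Assumed figures (illustrative only): ~9 ms per HDD seek, ~150 MB/s HDD
# transfer, ~0.1 ms per SSD access, ~500 MB/s SSD transfer.

def hdd_read_ms(file_mb, fragments, seek_ms=9.0, mb_per_s=150.0):
    # Each fragment costs a head seek; transfer time itself is layout-independent.
    return fragments * seek_ms + (file_mb / mb_per_s) * 1000

def ssd_read_ms(file_mb, fragments, access_ms=0.1, mb_per_s=500.0):
    # Fragmentation adds only a negligible per-fragment access cost.
    return fragments * access_ms + (file_mb / mb_per_s) * 1000

for frags in (1, 50, 500):
    print(f"{frags:>3} fragments: HDD {hdd_read_ms(100, frags):7.1f} ms, "
          f"SSD {ssd_read_ms(100, frags):6.1f} ms")
```

Under these assumptions, heavy fragmentation multiplies the HDD read time several times over while barely moving the SSD figure, which matches the qualitative argument above.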

Samsung SSD 850 Pro (back)

Due to the way data is written and read on the drive, however, SSD cells can wear out over time. An SSD cell pushes electrons through a gate to set its state; this process wears on the cell and over time reduces its performance until the SSD wears out. This effect takes a long time, and SSDs have mechanisms to minimize it, such as the TRIM command. Flash memory writes an entire block of storage no matter how few pages within the block are updated, which requires reading and caching the existing data, erasing the block, and rewriting it. If an empty block is available, a write operation is much faster. The TRIM command, which must be supported by both the OS and the SSD, enables the OS to inform the drive which blocks are no longer needed, allowing the drive to erase blocks ahead of time so that empty blocks are available for subsequent writes.
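A minimal sketch of the write-amplification problem that TRIM mitigates, using an assumed 64-page block size (real page and block geometries vary by drive):

```python
# Why whole-block erasure hurts: updating one page in a full block forces
# the drive to rewrite every live page in that block. Block size here is
# an assumption for illustration.

PAGES_PER_BLOCK = 64

def physical_pages_written(updated_pages, block_has_free_pages):
    """Physical page writes needed to update `updated_pages` logical pages."""
    if block_has_free_pages:
        # TRIM kept pre-erased space available: write only the new data.
        return updated_pages
    # No pre-erased space: the whole block must be rewritten.
    return PAGES_PER_BLOCK

print(physical_pages_written(1, block_has_free_pages=True))    # 1 physical write
print(physical_pages_written(1, block_has_free_pages=False))   # 64 physical writes
```

In this sketch, a one-page update costs 64 physical writes without free space but only one with it, which is exactly the wear and latency saving TRIM is designed to preserve.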

The effect of repeated reading and erasing on an SSD is cumulative and an SSD can slow down and even display errors with age. It’s more likely, however, that the system using the SSD will be discarded for obsolescence before the SSD begins to display read/write errors. Hard drives eventually wear out from constant use as well, since they use physical recording methods, so most users won’t base their selection of an HDD or SSD drive based on expected longevity.

SSD circuit board

Overall, SSDs are considered far more durable than HDDs due to a lack of mechanical parts. The moving mechanisms within an HDD are susceptible to not only wear and tear over time, but to damage due to movement or forceful contact. If one were to drop a laptop with an HDD, there is a high likelihood that all those moving parts will collide, resulting in potential data loss and even destructive physical damage that could kill the HDD outright. SSDs have no moving parts so, while they hold the risk of a potentially shorter life span due to high use, they can survive the rigors we impose upon our portable devices and laptops.

What are the best uses for SSDs?

  • Notebooks and laptops, where performance, light weight, areal storage density, resistance to shock, and general ruggedness are desirable
  • Boot drives holding operating system and applications, which will speed up booting and application launching
  • Working files (media that is being edited: photos, video, audio, etc.)
  • Swap drives where SSD will speed up disk paging
  • Cache drives
  • Database servers
  • Revitalizing an older computer. If you’ve got a computer that seems slow to start up and slow to load applications and files, updating the boot drive with an SSD could make it seem, if not new, at least as if it just came back refreshed from spending some time on the beach.

Stay Tuned for Part 2 of HDD vs SSD

That’s it for part 1. In our second part we’ll take a deeper look at the differences between HDDs and SSDs, how both HDD and SSD technologies are evolving, and how Backblaze takes advantage of SSDs in our operations and data centers.

Here’s a tip on finding all the posts tagged with SSD on our blog: just follow https://www.backblaze.com/blog/tag/ssd/.

Don’t miss future posts on HDDs, SSDs, and other topics, including hard drive stats, cloud storage, and tips and tricks for backing up to the cloud.

The post HDD vs SSD: What Does the Future for Storage Hold? appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Dotcom: Obama Admitted “Mistakes Were Made” in Megaupload Case

Post Syndicated from Andy original https://torrentfreak.com/dotcom-obama-admitted-mistakes-were-made-in-megaupload-case-180301/

When Megaupload was forcefully shut down in 2012, it initially appeared like ‘just’ another wave of copyright enforcement action by US authorities.

When additional details began to filter through, the reality of what had happened was nothing short of extraordinary.

Not only were large numbers of Megaupload servers and millions of dollars seized, but Kim Dotcom’s home in New Zealand was subjected to a military-style raid comprised of helicopters and dozens of heavily armed special tactics police. The whole thing was monitored live by the FBI.

Few people who watched the events of that now-infamous January day unfold came to the conclusion this was a routine copyright-infringement case. According to Kim Dotcom, whose life had just been turned upside down, something of this scale must’ve filtered down from the very top of the US government. It was hard to disagree.

At the time, Dotcom told TorrentFreak that then-Vice President Joe Biden directed attorney Neil MacBride to target the cloud storage site and ever since the Megaupload founder has leveled increasingly serious allegations at officials of the former government of Barack Obama.

For example, Dotcom says that since the US would have difficulty gaining access to him in his former home of Hong Kong, the government of New Zealand was persuaded to welcome him in, knowing they would eventually turn him over to the United States. More recently he’s been turning up the pressure again, such as a tweet on February 20th which cast more light on that process.

“Joe Biden had a White House meeting with an ‘extradition expert’ who worked for Hong Kong police and a handful of Hollywood executives to discuss my case. A week prior to this meeting Neil MacBride hand-delivered his action plan to Biden’s chief of staff, also at the White House,” Dotcom wrote.

But this claim is just the tip of an extremely large iceberg that’s involved illegal spying on Dotcom in New Zealand and a dizzying array of legal battles that are set to go on for years to come. But perhaps of most interest now is that rather than wilting away under the pressure, Dotcom appears to be just warming up.

A few hours ago Dotcom commented on an article published in The Hill which revealed that Barack Obama will visit New Zealand in March, possibly to celebrate the opening of Air New Zealand’s new route to the U.S.

Rather than expressing disappointment, the Megaupload founder seemed pleased that the former president would be touching down next month.

“Great. I’ll have a Court subpoena waiting for him in New Zealand,” Dotcom wrote.

But that was merely an hors d’oeuvre; the main course was yet to come. And come it did.

“A wealthy Asian Megaupload shareholder hired a friend of the Obamas to enquire about our case. This person was recommended by a member of the Chinese politburo ‘if you want to get to Obama directly’. We did,” Dotcom revealed.

Dotcom says he’ll release a transcript detailing what Obama told his friend on March 21 when Obama arrives in town but in the meantime, he offered another little taster.

“Mistakes were made. It hasn’t gone well,” Obama reportedly told the person reporting back to Megaupload. “It’s a problem. I’ll see to it after the election.”

Of course, Obama’s position after the election was much different from what had gone before, but that didn’t stop Dotcom’s associates infiltrating the process aimed at keeping the Democrats in power.

“Our friendly Obama contact smuggled an @EFF lawyer into a re-election fundraiser hosted by former Vice President Joe Biden,” he revealed.

“When Biden was asked about the Megaupload case he bragged that it was his case and that he ‘took care of it’,” which is what Dotcom has been claiming all along.

On March 21, when Obama lands in New Zealand, Dotcom says he’ll be waiting.

“I’m looking forward to @BarackObama providing some insight into the political dimension of the Megaupload case when he arrives in the New Zealand jurisdiction,” he teased.

Better get the popcorn ready…


Best Practices for Running Apache Cassandra on Amazon EC2

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/

Apache Cassandra is a commonly used, high performance NoSQL database. AWS customers that currently maintain Cassandra on-premises may want to take advantage of the scalability, reliability, security, and economic benefits of running Cassandra on Amazon EC2.

Amazon EC2 and Amazon Elastic Block Store (Amazon EBS) provide secure, resizable compute capacity and storage in the AWS Cloud. Combined, they allow you to deploy Cassandra and scale capacity according to your requirements. Given the number of possible deployment topologies, it’s not always trivial to select the strategy most suitable for your use case.

In this post, we outline three Cassandra deployment options, as well as provide guidance about determining the best practices for your use case in the following areas:

  • Cassandra resource overview
  • Deployment considerations
  • Storage options
  • Networking
  • High availability and resiliency
  • Maintenance
  • Security

Before we jump into best practices for running Cassandra on AWS, we should mention that we have many customers who decided to use DynamoDB instead of managing their own Cassandra cluster. DynamoDB is fully managed, serverless, and provides multi-master cross-region replication, encryption at rest, and managed backup and restore. Integration with AWS Identity and Access Management (IAM) enables DynamoDB customers to implement fine-grained access control for their data security needs.

Several customers who have been using large Cassandra clusters for many years have moved to DynamoDB to eliminate the complications of administering Cassandra clusters and maintaining high availability and durability themselves. Gumgum.com is one customer who migrated to DynamoDB and observed significant savings. For more information, see Moving to Amazon DynamoDB from Hosted Cassandra: A Leap Towards 60% Cost Saving per Year.

AWS provides options, so you’re covered whether you want to run your own NoSQL Cassandra database, or move to a fully managed, serverless DynamoDB database.

Cassandra resource overview

Here’s a short introduction to standard Cassandra resources and how they are implemented with AWS infrastructure. If you’re already familiar with Cassandra or AWS deployments, this can serve as a refresher.

Cluster
  • Cassandra: A single Cassandra deployment. This typically consists of multiple physical locations, keyspaces, and physical servers.
  • AWS: A logical deployment construct that maps to an AWS CloudFormation StackSet, which consists of one or many CloudFormation stacks to deploy Cassandra.

Datacenter
  • Cassandra: A group of nodes configured as a single replication group.
  • AWS: A logical deployment construct. A datacenter is deployed with a single CloudFormation stack consisting of Amazon EC2 instances, networking, storage, and security resources.

Rack
  • Cassandra: A collection of servers. A datacenter consists of at least one rack, and Cassandra tries to place the replicas on different racks.
  • AWS: A single Availability Zone.

Server/node
  • Cassandra: A physical or virtual machine running Cassandra software.
  • AWS: An EC2 instance.

Token
  • Cassandra: Conceptually, the data managed by a cluster is represented as a ring. The ring is then divided into ranges equal to the number of nodes, with each node responsible for one or more ranges of the data. Each node is assigned a token, which is essentially a random number from the range; the token value determines the node’s position in the ring and its range of data.
  • AWS: Managed within Cassandra.

Virtual node (vnode)
  • Cassandra: Responsible for storing a range of data. Each vnode receives one token in the ring. By default, each node is assigned 256 tokens (vnodes), which are uniformly distributed across all servers in the Cassandra datacenter.
  • AWS: Managed within Cassandra.

Replication factor
  • Cassandra: The total number of replicas across the cluster.
  • AWS: Managed within Cassandra.
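The token and vnode concepts above can be sketched in a few lines of Python. This is a deliberately simplified ring: real Cassandra uses the Murmur3 partitioner over a 64-bit range, and the node names and token values here are made up for illustration.

```python
import bisect
import hashlib

def token_for(key, ring_size=2**32):
    # Hash a partition key to a position on the ring. Real Cassandra uses
    # Murmur3 over a 64-bit signed range; MD5 here is only for illustration.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % ring_size

def build_ring(node_tokens):
    # node_tokens maps a node name to the tokens its vnodes own;
    # sorting by token gives each vnode its position on the ring.
    return sorted((t, n) for n, tokens in node_tokens.items() for t in tokens)

def owner(ring, key_token):
    # A key belongs to the first vnode whose token is >= the key's token,
    # wrapping around to the start of the ring if necessary.
    tokens = [t for t, _ in ring]
    idx = bisect.bisect_left(tokens, key_token) % len(ring)
    return ring[idx][1]
```

Because ownership is determined purely by token positions, adding or removing a node only moves the ranges adjacent to its tokens, which is what makes the ring rebalance incrementally.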

Deployment considerations

One of the many benefits of deploying Cassandra on Amazon EC2 is that you can automate many deployment tasks. In addition, AWS includes services, such as CloudFormation, that allow you to describe and provision all your infrastructure resources in your cloud environment.

We recommend orchestrating each Cassandra ring with one CloudFormation template. If you are deploying in multiple AWS Regions, you can use a CloudFormation StackSet to manage those stacks. All the maintenance actions (scaling, upgrading, and backing up) should be scripted with an AWS SDK. These may live as standalone AWS Lambda functions that can be invoked on demand during maintenance.
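As a rough illustration of what such a template might contain, the sketch below builds a minimal CloudFormation document in Python. The resource name, instance type, Availability Zone, and AMI ID are placeholders of our own, not values from any real deployment.

```python
import json

def cassandra_node_stack(instance_type="i3.2xlarge", az="us-east-1a"):
    # Skeleton of a CloudFormation template describing one Cassandra node.
    # "ami-PLACEHOLDER" is not a real image ID; substitute your own.
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Single Cassandra node (illustrative only)",
        "Resources": {
            "CassandraNode": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "InstanceType": instance_type,
                    "AvailabilityZone": az,
                    "ImageId": "ami-PLACEHOLDER",
                },
            }
        },
    }

# Serialize to the JSON you would hand to CloudFormation or a StackSet.
template = json.dumps(cassandra_node_stack(), indent=2)
```

A production template would add the networking, storage, and security resources discussed later in this post; the point here is only that the whole ring becomes data you can version and deploy repeatably.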

You can get started by following the Cassandra Quick Start deployment guide. Keep in mind that this guide does not address the requirements to operate a production deployment and should be used only for learning more about Cassandra.

Deployment patterns

In this section, we discuss various deployment options available for Cassandra in Amazon EC2. A successful deployment starts with thoughtful consideration of these options. Consider the amount of data, network environment, throughput, and availability.

  • Single AWS Region, 3 Availability Zones
  • Active-active, multi-Region
  • Active-standby, multi-Region

Single AWS Region, 3 Availability Zones

In this pattern, you deploy the Cassandra cluster in one AWS Region and three Availability Zones. There is only one ring in the cluster. By using EC2 instances in three zones, you ensure that the replicas are distributed uniformly in all zones.

To ensure the even distribution of data across all Availability Zones, we recommend that you distribute the EC2 instances evenly in all three Availability Zones. The number of EC2 instances in the cluster is a multiple of three (the replication factor).

This pattern is suitable in situations where the application is deployed in one Region or where deployments in different Regions should be constrained to the same Region because of data privacy or other legal requirements.
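The “multiple of three” guidance can be checked with a small placement sketch. The AZ names below are illustrative, and round-robin stands in for whatever placement logic your deployment automation uses.

```python
from collections import Counter
from itertools import cycle

def place_nodes(node_count, azs=("us-east-1a", "us-east-1b", "us-east-1c")):
    # Round-robin placement keeps the per-AZ node counts as even as possible;
    # a node count that is a multiple of len(azs) makes them exactly even.
    placement = [az for az, _ in zip(cycle(azs), range(node_count))]
    return Counter(placement)
```

With 9 nodes each zone gets exactly 3; with 10 nodes one zone carries an extra node, which skews data distribution when the replication factor is 3.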

Pros:
  • Highly available; can sustain the failure of one Availability Zone.
  • Simple deployment.

Cons:
  • Does not protect in a situation when many of the resources in a Region are experiencing intermittent failure.

Active-active, multi-Region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster are deployed in more than one Region.

Pros:
  • No data loss during failover.
  • Highly available; can sustain failures even when many of the resources in a Region are experiencing intermittent failures.
  • Read/write traffic can be localized to the Region closest to the user, for lower latency and higher performance.

Cons:
  • High operational overhead.
  • The second Region effectively doubles the cost.

Active-standby, multi-Region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

However, the second Region does not receive traffic from the applications. It only functions as a secondary location for disaster recovery reasons. If the primary Region is not available, the second Region receives traffic.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster require a low recovery point objective (RPO) and recovery time objective (RTO).

Pros:
  • No data loss during failover.
  • Highly available; can sustain the failure or partitioning of one whole Region.

Cons:
  • High operational overhead.
  • High latency for writes replicated for eventual consistency.
  • The second Region effectively doubles the cost.

Storage options

In on-premises deployments, Cassandra uses local disks to store data. On Amazon EC2, there are two storage options:

  • Instance store (ephemeral storage attached to the instance)
  • Amazon EBS (network-attached, persistent volumes)

Your choice of storage is closely related to the type of workload supported by the Cassandra cluster. Instance store works best for most general purpose Cassandra deployments. However, in certain read-heavy clusters, Amazon EBS is a better choice.

The choice of instance type is generally driven by the type of storage:

  • If ephemeral storage is required for your application, a storage-optimized (I3) instance is the best option.
  • If your workload requires Amazon EBS, it is best to go with compute-optimized (C5) instances.
  • Burstable instance types (T2) don’t offer good performance for Cassandra deployments.

Instance store

Ephemeral storage is local to the EC2 instance and may provide high input/output operations per second (IOPS) depending on the instance type. An SSD-based instance store can support up to 3.3M IOPS on I3 instances. This high performance makes it an ideal choice for transactional or write-intensive applications such as Cassandra.

In general, instance storage is recommended for transactional, large, and medium-size Cassandra clusters. For a large cluster, read/write traffic is distributed across a higher number of nodes, so the loss of one node has less of an impact. However, for smaller clusters, a quick recovery for the failed node is important.

As an example, in a cluster with 100 nodes (and a replication factor of 3), the loss of one node represents a 3.33% loss of capacity. Similarly, in a cluster with 10 nodes, the loss of one node represents a 33% loss of capacity.

Comparing instance store and Amazon EBS:

IOPS (translates to higher query performance)
  • Instance store: up to 3.3M IOPS on I3 instances.
  • Amazon EBS: 80K IOPS per instance; 10K IOPS per gp2 volume; 32K IOPS per io1 volume.
  • Comments: Higher IOPS results in higher query performance on each host. However, Cassandra scales well horizontally, so in general we recommend scaling horizontally first, then scaling vertically to mitigate specific issues. Note: 3.3M IOPS is observed with 100% random reads with a 4-KB block size on Amazon Linux.

AWS instance types
  • Instance store: I3.
  • Amazon EBS: compute optimized (C5).
  • Comments: Being able to choose between different instance types is an advantage in terms of CPU, memory, etc., for horizontal and vertical scaling.

Backup/recovery
  • Instance store: custom tooling.
  • Amazon EBS: basic building blocks are available from AWS.
  • Comments: Amazon EBS offers a distinct advantage here; it is a small engineering effort to establish a backup/restore strategy. a) In case of an instance failure, the EBS volumes from the failing instance are attached to a new instance. b) In case of an EBS volume failure, the data is restored by creating a new EBS volume from the last snapshot.

Amazon EBS

EBS volumes offer higher resiliency, and IOPS can be configured based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. EBS volumes can support up to 32K IOPS per volume and up to 80K IOPS per instance in a RAID configuration. They have an annualized failure rate (AFR) of 0.1–0.2%, which makes EBS volumes 20 times more reliable than typical commodity disk drives.

The primary advantage of using Amazon EBS in a Cassandra deployment is that it reduces data-transfer traffic significantly when a node fails or must be replaced. The replacement node joins the cluster much faster. However, Amazon EBS could be more expensive, depending on your data storage needs.

Cassandra has built-in fault tolerance, replicating data to partitions across a configurable number of nodes. Not only can it withstand node failures; if a node fails, it can also recover by copying data from other replicas into a new node. Depending on your application, this could mean copying tens of gigabytes of data, which adds delay to the recovery process, increases network traffic, and could impact the performance of the Cassandra cluster during recovery.

Data stored on Amazon EBS is persisted in case of an instance failure or termination. The node’s data stored on an EBS volume remains intact and the EBS volume can be mounted to a new EC2 instance. Most of the replicated data for the replacement node is already available in the EBS volume and won’t need to be copied over the network from another node. Only the changes made after the original node failed need to be transferred across the network. That makes this process much faster.

EBS volumes can be snapshotted periodically, so if a volume fails, a new volume can be created from the last known good snapshot and attached to a new instance. This is faster than creating a new volume and copying all the data to it.

Most Cassandra deployments use a replication factor of three. However, Amazon EBS does its own replication under the covers for fault tolerance. In practice, EBS volumes are about 20 times more reliable than typical disk drives, so it is possible to go with a replication factor of two. This not only saves cost, but also enables deployments in a Region that has only two Availability Zones.
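A back-of-envelope sketch of why a replication factor of two can be defensible on EBS. This assumes volume failures are independent events, which is optimistic: correlated failures (an AZ outage, a bad deploy) are exactly what the multi-AZ and multi-Region patterns above exist to cover.

```python
def simultaneous_loss_probability(afr, replicas):
    # Back-of-envelope only: treats each replica's annual failure as an
    # independent event, which real correlated failures may violate.
    return afr ** replicas

# With the 0.1-0.2% AFR quoted above, both EBS-backed replicas of a range
# failing in the same year is on the order of a few in a million.
worst_case = simultaneous_loss_probability(0.002, 2)
```

The same arithmetic with a commodity-disk AFR of a few percent is why bare disks need three replicas where EBS may get away with two.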

EBS volumes are recommended for read-heavy, small clusters (fewer nodes) that store a large amount of data. Keep in mind that Amazon EBS provisioned IOPS can get expensive; general purpose (gp2) EBS volumes work best when sized for the required performance.

Networking

If your cluster is expected to receive high read/write traffic, select an instance type that offers 10 Gb/s networking performance. As an example, i3.8xlarge and c5.9xlarge both offer 10 Gb/s networking performance. A smaller instance type in the same family delivers relatively lower networking throughput.

Cassandra generates a universally unique identifier (UUID) for each node, based on the instance’s IP address. This UUID is used for distributing vnodes on the ring.

In an AWS deployment, an IP address is assigned automatically when an EC2 instance is created. If a node is replaced, the replacement instance receives a new IP address; the data distribution changes and the whole ring has to be rebalanced, which is not desirable.

To preserve the assigned IP address, use a secondary elastic network interface (ENI) with a fixed IP address. Before swapping an EC2 instance with a new one, detach the secondary network interface from the old instance and attach it to the new one. This way, the UUID remains the same and there is no change in the way that data is distributed in the cluster.
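Sketched as an ordered plan, the swap looks like the following. This is a minimal sketch: the action names mirror the EC2 API operations you would drive via an AWS SDK, and the interface and instance IDs are hypothetical.

```python
def eni_swap_plan(eni_id, old_instance_id, new_instance_id):
    # The order matters: detach the secondary interface from the old instance
    # first, then attach it to the replacement, so the fixed private IP (and
    # with it the node's identity in the ring) carries over.
    return [
        ("DetachNetworkInterface", {"NetworkInterfaceId": eni_id,
                                    "InstanceId": old_instance_id}),
        ("AttachNetworkInterface", {"NetworkInterfaceId": eni_id,
                                    "InstanceId": new_instance_id,
                                    "DeviceIndex": 1}),
    ]
```

DeviceIndex 1 reflects that this is a secondary interface; the instance’s primary interface (index 0) is created and destroyed with the instance itself.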

If you are deploying in more than one region, you can connect the two VPCs in two regions using cross-region VPC peering.

High availability and resiliency

Cassandra is designed to be fault-tolerant and highly available during multiple node failures. In the patterns described earlier in this post, you deploy Cassandra to three Availability Zones with a replication factor of three. Even though it limits the AWS Region choices to the Regions with three or more Availability Zones, it offers protection for the cases of one-zone failure and network partitioning within a single Region. The multi-Region deployments described earlier in this post protect when many of the resources in a Region are experiencing intermittent failure.

Resiliency is ensured through infrastructure automation. The deployment patterns all require a quick replacement of the failing nodes. In the case of a regionwide failure, when you deploy with the multi-Region option, traffic can be directed to the other active Region while the infrastructure is recovering in the failing Region. In the case of unforeseen data corruption, the standby cluster can be restored with point-in-time backups stored in Amazon S3.

Maintenance

In this section, we look at ways to ensure that your Cassandra cluster is healthy:

  • Scaling
  • Upgrades
  • Backup and restore

Scaling

Cassandra is horizontally scaled by adding more instances to the ring. We recommend doubling the number of nodes in a cluster to scale up in one scale operation. This leaves the data homogeneously distributed across Availability Zones. Similarly, when scaling down, it’s best to halve the number of instances to keep the data homogeneously distributed.

Cassandra is vertically scaled by increasing the compute power of each node. Larger instance types have proportionally bigger memory. Use deployment automation to swap instances for bigger instances without downtime or data loss.

Upgrades

All three types of upgrades (Cassandra, operating system patching, and instance type changes) follow the same rolling upgrade pattern.

In this process, you start with a new EC2 instance and install software and patches on it. Thereafter, remove one node from the ring. For more information, see Cassandra cluster Rolling upgrade. Then, you detach the secondary network interface from one of the EC2 instances in the ring and attach it to the new EC2 instance. Restart the Cassandra service and wait for it to sync. Repeat this process for all nodes in the cluster.
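The rolling pattern above can be sketched as a plan generator. The node names are hypothetical, and the steps are the ones described in this section, one node at a time.

```python
def rolling_upgrade_plan(nodes):
    # Upgrade one node at a time, so the ring never loses more than one
    # member and quorum reads/writes keep succeeding throughout.
    steps = []
    for node in nodes:
        steps += [
            f"provision and patch a replacement for {node}",
            f"remove {node} from the ring",
            f"move the secondary network interface from {node} to the replacement",
            f"start Cassandra on the replacement for {node} and wait for sync",
        ]
    return steps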

Backup and restore

Your backup and restore strategy is dependent on the type of storage used in the deployment. Cassandra supports snapshots and incremental backups. When using instance store, a file-based backup tool works best. Customers use rsync or other third-party products to copy data backups from the instance to long-term storage. For more information, see Backing up and restoring data in the DataStax documentation. This process has to be repeated for all instances in the cluster for a complete backup. These backup files are copied back to new instances to restore. We recommend using S3 to durably store backup files for long-term storage.
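For instance store deployments, the first step of any file-based backup is finding what `nodetool snapshot` produced. A small sketch, assuming the on-disk layout of Cassandra 3.x, where snapshots live under `<data_dir>/<keyspace>/<table>/snapshots/<tag>/`:

```python
from pathlib import Path

def snapshot_files(data_dir):
    # `nodetool snapshot` places hard links to SSTable files under
    # <data_dir>/<keyspace>/<table>/snapshots/<tag>/ ; collect them so a
    # tool such as rsync can ship them to long-term storage like Amazon S3.
    return sorted(str(p) for p in Path(data_dir).glob("*/*/snapshots/*/*")
                  if p.is_file())
```

Live SSTables sit one directory up and are deliberately excluded; only the snapshot hard links represent a consistent point-in-time copy.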

For Amazon EBS based deployments, you can enable automated snapshots of EBS volumes to back up volumes. New EBS volumes can be easily created from these snapshots for restoration.

Security

We recommend that you think about security in all aspects of deployment. The first step is to ensure that the data is encrypted at rest and in transit. The second step is to restrict access to unauthorized users. For more information about security, see the Cassandra documentation.

Encryption at rest

Encryption at rest can be achieved by using EBS volumes with encryption enabled. Amazon EBS uses AWS KMS for encryption. For more information, see Amazon EBS Encryption.

Instance store–based deployments require using an encrypted file system or an AWS partner solution. If you are using DataStax Enterprise, it supports transparent data encryption.

Encryption in transit

Cassandra uses Transport Layer Security (TLS) for client and internode communications.
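In cassandra.yaml this looks roughly like the following. This is a sketch: the keystore paths and passwords are placeholders, and the exact option names should be checked against your Cassandra version.

```yaml
server_encryption_options:          # node-to-node traffic
  internode_encryption: all
  keystore: /etc/cassandra/conf/keystore.jks      # placeholder path
  keystore_password: REPLACE_ME
  truststore: /etc/cassandra/conf/truststore.jks  # placeholder path
  truststore_password: REPLACE_ME

client_encryption_options:          # client-to-node traffic
  enabled: true
  keystore: /etc/cassandra/conf/keystore.jks      # placeholder path
  keystore_password: REPLACE_ME
```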

Authentication

The security mechanism is pluggable, which means that you can easily swap out one authentication method for another. You can also provide your own method of authenticating to Cassandra, such as Kerberos tickets, or store passwords in a different location, such as an LDAP directory.

Authorization

The authorizer that’s plugged in by default is org.apache.cassandra.auth.AllowAllAuthorizer, which performs no permission checking. Cassandra also provides a role-based access control (RBAC) capability, which allows you to create roles and assign permissions to them.
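With the CassandraAuthorizer enabled instead, roles and grants are managed in CQL, roughly like this (the keyspace and role names are made up for illustration):

```sql
-- Requires authorizer: CassandraAuthorizer in cassandra.yaml.
CREATE ROLE readonly;
GRANT SELECT ON KEYSPACE metrics TO readonly;

CREATE ROLE analyst WITH LOGIN = true AND PASSWORD = 'REPLACE_ME';
GRANT readonly TO analyst;   -- roles can be granted to other roles
```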

Conclusion

In this post, we discussed several patterns for running and managing Cassandra databases on Amazon EC2. AWS also provides managed offerings for a number of databases. To learn more, see Purpose-built databases for all your application needs.

If you have questions or suggestions, please comment below.


Additional Reading

If you found this post useful, be sure to check out Analyze Your Data on Amazon DynamoDB with Apache Spark and Analysis of Top-N DynamoDB Objects using Amazon Athena and Amazon QuickSight.


About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS enterprise and strategic customers. His interests extend to technologies such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.

Provanshu Dey is a Senior IoT Consultant with AWS Professional Services. He works on highly scalable and reliable IoT, data, and machine learning solutions with our customers. In his spare time, he enjoys spending time with his family and tinkering with electronics and gadgets.

Hollywood Commissioned Tough Jail Sentences for Online Piracy, ISP Says

Post Syndicated from Andy original https://torrentfreak.com/hollywood-commissioned-tough-jail-sentences-for-online-piracy-isp-says-180227/

According to local prosecutors who have handled many copyright infringement cases over the past decade, Sweden is nowhere near tough enough on those who commit online infringement.

With this in mind, the government sought advice on how such crimes should be punished, not only more severely, but also in proportion to the damages alleged to have been caused by defendants’ activities.

The corresponding report was returned to Minister for Justice Heléne Fritzon earlier this month by Council of Justice member Dag Mattsson. The paper proposed a new tier of offenses that should receive special punishment when there are convictions for large-scale copyright infringement and “serious” trademark infringement.

Partitioning the offenses into two broad categories, the report envisions those found guilty of copyright infringement or trademark infringement “of a normal grade” may be sentenced to fines or imprisonment up to a maximum of two years. For those at the other end of the scale, engaged in “cases of gross crimes”, the penalty sought is a minimum of six months in prison and not more than six years.

The proposals have been criticized by those who feel that copyright infringement shouldn’t be put on a par with more serious and even potentially violent crimes. On the other hand, tools to deter larger instances of infringement have been welcomed by entertainment industry groups, who have long sought more robust sentencing options in order to protect their interests.

In the middle, however, are Internet service providers such as Bahnhof, who are often dragged into the online piracy debate due to the allegedly infringing actions of some of their customers. In a statement on the new proposals, the company is clear on why Sweden is preparing to take such a tough stance against infringement.

“It’s not a daring guess that media companies are asking for Sweden to tighten the penalty for illegal file sharing and streaming,” says Bahnhof lawyer Wilhelm Dahlborn.

“It would have been better if the need for legislative change had taken place at EU level and co-ordinated with other similar intellectual property legislation.”

Bahnhof chief Jon Karlung, who is never afraid to speak his mind on such matters, goes a step further. He believes the initiative amounts to a gift to the United States.

“It’s nothing but a commission from the American film industry,” Karlung says.

“I do not mind them going for their goals in court and trying to protect their interests, but it does not mean that the state, the police, and ultimately taxpayers should put mass resources on it.”

Bahnhof notes that the proposals for the toughest extended jail sentences aren’t directly aimed at petty file-sharers. However, the introduction of a new offense of “gross crime” means that the limitation period shifts from the current five years to ten.

It also means that due to the expansion of prison terms beyond two years, secret monitoring of communications (known as HÖK) could come into play.

“If the police have access to HÖK, it can be used to get information about which individuals are file sharing,” warns Bahnhof lawyer Wilhelm Dahlborn.

“One can also imagine a scenario where media companies increasingly report crime as gross in order to get the police to do the investigative work they have previously done. Harder punishments to tackle file-sharing also appear very old-fashioned and equally ineffective.”

As noted in our earlier report, the new proposals also include measures that would enable the state to confiscate all kinds of property, both physical items and more intangible assets such as domain names. Bahnhof also takes issue with this, noting that domains are not the problem here.

“In our opinion, it is not the domain name which is the problem, it is the content of the website that the domain name points to,” the company says.

“Moreover, confiscation of a domain name may conflict with constitutional rules on freedom of expression in a way that is very unfortunate. The issues of freedom of expression and why copyright infringement is to be treated differently haven’t been addressed much in the investigation.”

Under the new proposals, damage to rightsholders and monetary gain by the defendant would also be taken into account when assessing whether a crime is “gross” or not. This raises questions as to what extent someone could be held liable for piracy when a rightsholder maintains damage was caused yet no profit was generated.


Pirate Site Operators’ Jail Sentences Overturned By Court of Appeal

Post Syndicated from Andy original https://torrentfreak.com/pirate-site-operators-jail-sentences-overturned-by-court-of-appeal-180226/

With The Pirate Bay proving to be somewhat of an elusive and irritating target, in 2014 police took on a site capturing an increasing portion of the Swedish pirate market.

Unlike The Pirate Bay which uses torrents, Dreamfilm was a portal for streaming content and it quickly grew alongside the now-defunct Swefilmer to dominate the local illicit in-browser viewing sector. But after impressive growth, things came to a sudden halt.

In January 2015, Dreamfilm announced that the site would be shut down after one of its administrators was detained by the authorities and interrogated. A month later, several more Sweden-based sites went down including the country’s second largest torrent site Tankefetast, torrent site PirateHub, and streaming portal Tankefetast Play (TFPlay).

Anti-piracy group Rights Alliance described the four-site network as one of “Europe’s leading players for illegal file sharing and streaming.”

Image published by Dreamfilm after the raid

After admitting they’d been involved in the sites but insisting they’d committed no crimes, four men aged between 21 and 31 appeared in court last year charged with copyright infringement. It didn’t go well.

The Linköping District Court found them guilty and decided they should all go to prison, with the then 23-year-old founder receiving the harshest sentence of 10 months, a member of the Pirate Party who reportedly handled advertising receiving 8 months, and two others getting six months each. On top, they were ordered to pay damages of SEK 1,000,000 ($122,330) to film industry plaintiffs.

Like many similar cases in Sweden, the case went to appeal and late last week the court handed down its decision which amends the earlier decision in several ways.

First, the Hovrätten (Court of Appeal) agreed with the District Court’s ruling that the defendants had used dreamfilm.se, tfplay.org, tankafetast.com and piratehub.net as platforms to deliver movies stored on Russian servers to the public.

One defendant owned the domains, another worked as a site supervisor, while the other pair worked as a programmer and in server acquisition, the Court said.

Dagens Juridik reports that the defendants argued that the websites were not a prerequisite for people to access the films, and therefore they had not been made available to a new market.

However, the Court of Appeal agreed with the District Court’s assessment that the links meant that the movies had been made available to a “new audience”, which under EU law means that a copyright infringement had been committed. As far as the samples presented in the case would allow, the men were found to have committed between 45 and 118 breaches of copyright law.

The Court also found that the website operation had a clear financial motive, delivering movies to the public for free while earning money from advertising.

While agreeing with the District Court on most points, the Court of Appeals decided to boost the damages award from SEK 1,000,000 ($122,330) to SEK 4,250,000 ($519,902). However, there was much better news in respect of the prison sentences.

Taking into consideration the young age of the men (who before this case had no criminal records) and the unlikely event that they would offend again, the Court decided that none would have to go to prison as previously determined.

Instead, all of the men were handed conditional sentences with two ordered to pay daily fines, which are penalties based on the offender’s daily personal income.

Last week it was reported that Sweden is preparing to take a tougher line with large-scale online copyright infringers. Proposals currently with the government foresee a new crime of “gross infringement” under both copyright and trademark law, which could lead to sentences of up to six years in prison.


Most Users of Exclusive Torrent Site Also Pay For Services Like Netflix or Prime

Post Syndicated from Andy original https://torrentfreak.com/most-users-of-exclusive-torrent-site-also-pay-for-services-like-netflix-or-prime-180225/

Despite a notable move to unlicensed streaming portals, millions of people still use public torrent sites every day to obtain the latest movies and TV shows. The process is easy, relatively quick, and free.

While these open-to-all platforms are undoubtedly popular, others prefer to use so-called ‘private trackers’, torrent sites with a private members’ club feel. Barriers to entry are much higher and many now require either an invitation from someone who is already a member or the passing of what amounts to an entrance exam.

Once accepted as a member, however, the rewards can be great. While public sites are a bit of a free-for-all, private trackers tend to take control of the content on offer, weeding out poor quality releases and ensuring only the best reach the user. Seeders are also plentiful, meaning that downloads complete in the fastest times.

On the flipside, some of the most exclusive trackers are almost impossible to join. A prime example is HDBits, a site that at last count wouldn’t accept more than 21,000 users yet keeps actual memberships down to around the 18,000 mark. Invites are extremely rare and those already inside tend to guard their accounts with their lives.

Second chances are rare on a site indexing more than 234,000 high-quality releases seeded by more than 950,000 peers, with one of the broadest selections of Blu-ray offerings around. That’s what makes the results of a survey currently being carried out on the site all the more remarkable.

In a poll launched by site staff, HDBits members – who by definition are already part of one of the most exclusive pirate haunts around – were asked whether they also pay for legal streaming services such as Netflix, Hulu or Amazon Prime.

At the time of writing more than 5,300 members have responded, with a surprising 57% (3,036) stating that they do indeed subscribe to at least one legal streaming service. When questioned on usage, more than a quarter of respondents said they actually use the legal service MORE than they use HDBits, which for a site of that caliber is quite a revelation.

HDBits poll – 57% of pirates pay for legal services

Keeping in mind that the site is creeping towards a quarter of a million torrents and is almost impossible to get into, it’s perhaps no surprise that unscrupulous people with access to an invitation on the site are selling them (against the site’s wishes) for up to $350 each online.

Let that sink in. For access to a pirate service, people are being asked to pay the equivalent of three years’ worth of Netflix subscriptions. Yet of those that are already members, more than a quarter use their Netflix, Hulu or Amazon Prime account more than they do HDBits. That’s a huge feather in the cap for the legal platforms that have nowhere near the selection that HDBits does.

One commenter in the HDBits survey thread gave his opinion on why Netflix might be winning the war.

“A thread several years ago like this was why I bought Netflix stock. Stunned not just that people here would actually pay for streaming 1 year old content in poor quality, but that almost everyone seemed to be doing it. If Netflix can win over [HDBits] then it is clearly a solution that will win over everyone,” he wrote.

Of course, perhaps the most important thing here is that even the most hardcore pirates have no problem purchasing official content when the environment is right.

Unlike other surveys that can scare people away from admitting they’re breaking the law, most people on HDBits have nothing to hide from their peers. They know they’re pirates and aren’t afraid to admit it, yet almost 60% of them are happy to pay for legal content on top.

Entertainment companies often like to put pirates in one box and legitimate customers in another. Once again, it’s clear that such neat dividing lines rarely exist.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s just ownership of a small open-source project. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, and compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what the last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
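To make the safe-by-default idea concrete, here is a minimal sketch (in Python, with a helper name of my own invention, not from any particular framework) of a query API that only accepts parameterized statements, so that SQL injection becomes hard to write by accident:

```python
import sqlite3

def safe_query(conn, sql, params=()):
    """Run a parameterized query; all values must arrive via `params`.

    A crude guard rejects queries that look string-formatted, nudging
    engineers toward placeholders without them having to think about it.
    """
    if "%" in sql or "{" in sql:
        raise ValueError("build queries with placeholders, not string formatting")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice' OR '1'='1",))

# The hostile-looking value is treated as data, not as SQL.
rows = safe_query(conn, "SELECT name FROM users WHERE name = ?",
                  ("alice' OR '1'='1",))
```

The point is not the guard itself but the shape of the API: when the easy path is the safe one, most engineers never need to reason about the underlying bug class.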

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
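As a client-facing illustration of such containment, a Content Security Policy can stop an injected script from executing even when an XSS bug slips through review. A tiny sketch of assembling such a header (the helper is hypothetical; real frameworks ship their own):

```python
def build_csp(directives):
    """Serialize a dict of CSP directives into a header value."""
    return "; ".join(f"{name} {' '.join(sources)}"
                     for name, sources in directives.items())

# Deny everything by default; only load scripts and styles from our origin.
header = build_csp({
    "default-src": ["'none'"],
    "script-src": ["'self'"],
    "style-src": ["'self'"],
})
# A server would send this as:  Content-Security-Policy: <header>
```

With a policy like this, an attacker who manages to inject markup still cannot run inline script – the damage stays contained.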

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.
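A first pass over such bounty data can be as simple as bucketing reports by vulnerability class to see which ones recur – a sketch with made-up records (a real program would pull these from its tracker’s API):

```python
from collections import Counter

# Hypothetical bounty reports; "class" is the vulnerability category.
reports = [
    {"id": 101, "class": "xss",  "component": "search"},
    {"id": 102, "class": "xss",  "component": "profile"},
    {"id": 103, "class": "sqli", "component": "billing"},
    {"id": 104, "class": "xss",  "component": "search"},
    {"id": 105, "class": "ssrf", "component": "importer"},
]

# Classes that keep recurring hint at a missing framework-level defense
# (e.g. a cluster of XSS suggests adopting contextual autoescaping).
by_class = Counter(r["class"] for r in reports)
systemic = [cls for cls, n in by_class.most_common() if n >= 2]
```

Re-running the same tally after a hardening project lands gives a rough but honest measure of whether the remediation actually moved the needle.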

Kim Dotcom Begins New Fight to Avoid Extradition to United States

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-begins-new-fight-to-avoid-extradition-to-united-states-180212/

More than six years ago in January 2012, file-hosting site Megaupload was shut down by the United States government and founder Kim Dotcom and his associates were arrested in New Zealand.

What followed was an epic legal battle to extradite Dotcom, Mathias Ortmann, Finn Batato, and Bram van der Kolk to the United States to face several counts including copyright infringement, racketeering, and money laundering. Dotcom has battled the US government every inch of the way.

The most significant matters include the validity of the search warrants used to raid Dotcom’s Coatesville home on January 20, 2012. Despite a prolonged trip through the legal system, in 2014 the Supreme Court dismissed Dotcom’s appeals that the search warrants weren’t valid.

In 2015, the District Court ruled that Dotcom and his associates were eligible for extradition. A subsequent appeal to the High Court failed in February 2017 when, despite finding that communicating copyright-protected works to the public is not a criminal offense in New Zealand, a judge again ruled in favor of extradition.

Of course, Dotcom and his associates immediately filed appeals and today in the Court of Appeal in Wellington, their hearing got underway.

Lawyer Grant Illingworth, representing Van der Kolk and Ortmann, told the Court that the case had “gone off the rails” during the initial 10-week extradition hearing in 2015, arguing that the case had merited “meaningful” consideration by a judge, something which failed to happen.

“It all went wrong. It went absolutely, totally wrong,” Mr. Illingworth said. “We were not heard.”

As expected, Illingworth underlined the belief that under New Zealand law, a person may only be extradited for an offense that could be tried in a criminal court locally. His clients’ cases do not meet that standard, the lawyer argued.

Turning back the clocks more than six years, Illingworth again raised the thorny issue of the warrants used to authorize the raids on the Megaupload defendants.

It had previously been established that New Zealand’s GCSB intelligence service had illegally spied on Dotcom and his associates in the lead up to their arrests. However, that fact was not disclosed to the District Court judge who authorized the raids.

“We say that there was misleading conduct at this stage because there was no reference to the fact that information had been gathered illegally by the GCSB,” he said.

But according to Justice Forrest Miller, even if this defense argument holds up, the High Court had already found there was a prima facie case to answer “with bells on”.

“The difficulty that you face here ultimately is whether the judicial process that has been followed in both of the courts below was meaningful, to use the Canadian standard,” Justice Miller said.

“You’re going to have to persuade us that what Justice Gilbert [in the High Court] ended up with, even assuming your interpretation of the legislation is correct, was wrong.”

Although the US seeks to extradite Dotcom and his associates on 13 charges, including racketeering, copyright infringement, money laundering and wire fraud, the Court of Appeal previously confirmed that extradition could be granted based on just some of the charges.

The stakes couldn’t be much higher. The FBI says that the “Megaupload Conspiracy” earned the quartet $175m and if extradited to the US, they could face decades in jail.

While Dotcom was not in court today, he has been active on Twitter.

“The court process went ‘off the rails’ when the only copyright expert Judge in NZ was >removed< from my case and replaced by a non-tech Judge who asked if Mega was ‘cow storage’. He then simply copy/pasted 85% of the US submissions into his judgment," Dotcom wrote.

Dotcom also appeared to question the suitability of judges at both the High Court and Court of Appeal for the task in hand.

“Justice Miller and Justice Gilbert (he wrote that High Court judgment) were business partners at the law firm Chapman Tripp which represents the Hollywood Studios in my case. Both Judges are now at the Court of Appeal. Gilbert was promoted shortly after ruling against me,” Dotcom added.

Dotcom is currently suing the New Zealand government for billions of dollars in damages over the warrant which triggered his arrest and the demise of Megaupload.

The hearing is expected to last up to two-and-a-half weeks.
