Tag Archives: reputation

The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After the site taunted Hollywood and the music industry with its refusal to capitulate, endless legal action that it would ordinarily have been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Seemingly immune to every anti-piracy technique under the sun, the platform’s continued existence in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. For right or wrong, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and fill my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in every frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Team DIMENSION Returns to The Piracy Scene

Post Syndicated from Ernesto original https://torrentfreak.com/team-dimension-returns-to-the-piracy-scene-170608/

In April, one of the best known TV Scene groups suddenly disappeared.

DIMENSION has been a high profile name for over a decade, both in the Scene and on torrent sites, good for tens of thousands of TV-show releases.

Nearly two months had passed since the sudden disappearance and most followers had already said their virtual goodbyes. Out of nowhere, however, several new DIMENSION releases began popping up this week.

It started with a Gotham episode on Tuesday, followed by Angie Tribeca and Pretty Little Liars. The sudden reappearance came without a public explanation, but it’s pretty clear that the group is back in full swing.

DIMENSION returns

The question remains why the group was absent for so long and if the old crew is intact. TorrentFreak spoke to a source who says that the leader and several top members are no longer with the group.

A recent Scene notice titled “Farewell.To.Team-DIMENSION” appeared to confirm that there were internal struggles in the group. However, this appears to be fake, as it was copied from an earlier notice.

Still, there is no doubt that DIMENSION (and the associated LOL “group,” which releases the SD versions) has picked up where it left off a few weeks ago, with new TV releases coming out on a regular basis.

And while reputation is key in the Scene, the average downloader probably couldn’t care less about internal troubles and politics.

They just want their TV fix.

Pirate responses to the comeback

Update: The Scene notice referred to in this article is fake; we have updated the article to reflect this.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Online Platforms Should Collaborate to Ban Piracy and Terrorism, Report Suggests

Post Syndicated from Andy original https://torrentfreak.com/online-platforms-collaborate-ban-piracy-terrorism-report-suggests-170608/

With deep ties to the content industries, the Digital Citizens Alliance periodically produces reports on Internet piracy. It has published reports on cyberlockers and tried to blame Cloudflare for the spread of malware, for example.

One of the key themes pursued by DCA is that Internet piracy is inextricably linked to a whole bunch of other online evils and that tackling the former could deliver a much-needed body blow to the latter.

Its new report, titled ‘Trouble in Our Digital Midst’, takes this notion and runs with it, bundling piracy with everything from fake news to hacking, to malware and brand protection, to the sextortion of “young girls and boys” via their computer cameras.

The premise of the report is that cybercrime as a whole is undermining America’s trust in the Internet, noting that 64% of US citizens say that their trust in digital platforms has dropped in the last year. Given the topics under the spotlight, it doesn’t take long to see where this is going – Internet platforms like Google, Facebook and YouTube must tackle the problem.

“When asked, ‘In your opinion, are digital platforms doing enough to keep the Internet safe and trustworthy, or do they need to do more?’ a staggering 75 percent responded that they need to do more to keep the Internet safe,” the report notes.

It’s abundantly clear that the report is mostly about piracy but a lot of effort has been expended to ensure that people support its general call for the Internet to be cleaned up. By drawing attention to things that even most pirates might find offensive, it’s easy to find more people in agreement.

“Nearly three-quarters of respondents see the pairing of brand name advertising with offensive online content – like ISIS/terrorism recruiting videos – as a threat to the continued trust and integrity of the Internet,” the report notes.

Of course, this is an incredibly sensitive topic. When big brand ads turned up next to terrorist recruiting videos on YouTube, there was an almighty stink, and rightly so. However, at every turn, the DCA report manages to weave the issue of piracy into the equation, noting that the problem includes the “$200 million in advertising that shows up on illegal content theft websites often unbeknownst to the brands.”

The overriding theme is that platforms like Google, Facebook, and YouTube should be able to tackle all of these problems in the same way. Filtering out a terrorist video is the same as removing a pirate movie. And making sure that ads for big brands don’t appear alongside terrorist videos will be just as easy as starving pirates of revenue, the suggestion goes.

But if terrorism doesn’t grind your gears, what about fake news?

“64 percent of Americans say that the Fake News issue has made them less likely to trust the Internet as a source of information,” the report notes.

At this juncture, Facebook gets a gentle pat on the back for dealing with fake news and employing 3,000 people to monitor for violent videos being posted to the network. This shows that the company “takes seriously” the potential harm bad actors pose to Internet safety. But in keeping with the theme running throughout the report, it’s clear DCA are carefully easing in the thin end of the wedge.

“We are at only the beginning of thinking through other kinds of illicit and illegal activity happening on digital platforms right now that we must gain or re-gain control over,” DCA writes.

Quite. In the very next sentence, the group goes on to warn about the sale of drugs and stolen credit cards, adding that the sale of illicit streaming devices (modified Kodi boxes etc) is actually an “insidious yet effective delivery mechanism to infect computers with malware such as Remote Access Trojans.”

Both Amazon and Facebook receive praise in the report for their recent banning of augmented Kodi devices, but their actions are actually framed as the companies protecting their own reputations, rather than the interests of the media groups that have been putting them under pressure.

“And though this issue underscores the challenges faced by digital platforms – not all of which act with the same level of responsibility – it also highlights the fact digital platforms can and will step up when their own brands are at stake,” the report reads.

But pirate content and Remote Access Trojans through Kodi boxes are only the beginning. Pirate sites are playing a huge part as well, DCA claims, with one in three “content theft websites” exposing people to identity theft, ransomware, and sextortion via “the computer cameras of young girls and boys.”

Worse still, if that were possible, the lack of policing by online platforms means that people are able to “showcase live sexual assaults, murders, and other illegal conduct.”

DCA says that with all this in mind, Americans are looking for online digital platforms to help them. The group claims that citizens need proactive protection from these ills and want companies like Facebook to take similar steps to those taken when warning consumers about fake news and violent content.

So what can be done to stop this tsunami of illegality? According to DCA, platforms like Google, Facebook, YouTube, and Twitter need to up their game and tackle the problem together.

“While digital platforms collaborate on policy and technical issues, there is no evidence that they are sharing information about the bad actors themselves. That enables criminals and bad actors to move seamlessly from platform to platform,” DCA writes.

“There are numerous examples of industry working together to identify and share information about exploitive behavior. For example, casinos share information about card sharks and cheats, and for decades the retail industry has shared information about fraudulent credit cards. A similar model would enable digital platforms and law enforcement to more quickly identify and combat those seeking to leverage the platforms to harm consumers.”

How this kind of collaboration could take place in the real world is open to interpretation but the DCA has a few suggestions of its own. Again, it doesn’t shy away from pulling people on side with something extremely offensive (in this case child pornography) in order to push what is clearly an underlying anti-piracy agenda.

“With a little help from engineers, digital platforms could create fingerprints of unlawful conduct that is shared across platforms to proactively block such conduct, as is done in a limited capacity with child pornography,” DCA explains.

“If these and other newly developed measures were adopted, digital platforms would have the information to enable them to make decisions whether to de-list or demote websites offering illicit goods and services, and the ability to stop the spread of illegal behavior that victimizes its users.”

The careful framing of the DCA report means that there’s something for everyone. If you don’t agree with them on tackling piracy, then their malware, fake news, or child exploitation angles might do the trick. It’s quite a clever strategy but one that the likes of Google, Facebook, and YouTube will recognize immediately.

And they need to – because apparently, it’s their job to sort all of this out. Good luck with that.

The full report can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Symantec Patent Protects Torrent Users Against Malware

Post Syndicated from Ernesto original https://torrentfreak.com/symantec-patent-protects-torrent-users-against-malware-170606/

In recent years we have documented a wide range of patent applications, several of which had a clear anti-piracy angle.

Symantec Corporation, known for the popular anti-virus software Norton Security, is taking a more torrent-friendly approach. At least, that’s what a recently obtained patent suggests.

The patent describes a system that can be used to identify fake torrents and malware-infected downloads, which are a common problem on badly-moderated torrent sites. Downloaders of these torrents are often redirected to scam websites or lured into installing malware.

Here’s where Symantec comes in with their automatic torrent moderating solution. Last week the company obtained a patent for a system that can rate the trustworthiness of torrents and block suspicious content to protect users.

“While the BitTorrent protocol represents a popular method for distributing files, this protocol also represents a common means for distributing malicious software. Unfortunately, torrent hosting sites generally fail to provide sufficient information to reliably predict whether such files are trustworthy,” the patent reads.

Unlike traditional virus scans, where the file itself is scanned for malicious traits, the patented technology uses a reputation score to make the evaluation.

The trustworthiness of torrents is determined by factors including the reputation of the original uploaders, torrent sites, trackers and other peers. For example, if an IP-address of a seeder is linked to several malicious torrents, it will get a low reputation score.

“For example, if an entity has been involved in several torrent transactions that involved malware-infected target files, the reputation information associated with the entity may indicate that the entity has a poor reputation, indicating a high likelihood that the target file represents a potential security risk,” Symantec notes.

In contrast, if a torrent is seeded by a user that only shares non-malicious files, the trustworthiness factor goes up.
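
The patent doesn’t spell out a concrete scoring model, but a rough sketch of the idea might look something like the following. Everything here – the function names, the equal weighting, the 0.4 threshold – is an illustrative assumption, not something taken from the filing.

```python
# Illustrative sketch of reputation-based torrent evaluation (not from the patent).

def entity_reputation(entity_id, history):
    """Return a 0.0-1.0 score based on how often this entity (uploader,
    tracker, peer IP, torrent site, ...) has been linked to malware."""
    seen = history.get(entity_id, {"clean": 0, "malicious": 0})
    total = seen["clean"] + seen["malicious"]
    if total == 0:
        return 0.5  # unknown entity: neutral score
    return seen["clean"] / total

def torrent_trustworthiness(uploader, tracker, peers, history):
    """Aggregate the reputation of every entity involved in the transaction."""
    scores = [entity_reputation(uploader, history),
              entity_reputation(tracker, history)]
    scores += [entity_reputation(ip, history) for ip in peers]
    return sum(scores) / len(scores)

def security_action(score, threshold=0.4):
    """Map a low trust score to one of the patent's 'security actions'."""
    if score < threshold:
        return "block"   # e.g. alert the user, quarantine, or drop the traffic
    return "allow"
```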

Reputation information

If a torrent file has a high likelihood of being linked to malware or other malicious content, the system can take appropriate “security actions.” This may be as simple as deleting the suspicious torrent, or a more complex response such as blocking all related network traffic.

“Examples of such security actions include, without limitation, alerting a user of the potential security risk, blocking access to the target file until overridden by the user, blocking network traffic associated with the torrent transaction, quarantining the target file, and/or deleting the target file,” Symantec writes.

Security actions

Symantec Corporation applied for the patent nearly four years ago, but thus far we haven’t seen it used in the real world.

Many torrent users would likely appreciate an extra layer of security, although they might be concerned about overblocking and possible monitoring of their download habits. This means that, for now, they will have to rely on site moderators, and most importantly, common sense.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MPAA Chief Praises Site-Blocking But Italians Love Piracy – and the Quality

Post Syndicated from Andy original https://torrentfreak.com/mpaa-chief-praises-site-blocking-but-italians-love-pirate-quality-170606/

After holding a reputation for being soft on piracy for many years, in more recent times Italy has taken a much tougher stance. The country now takes regular action against pirate sites and has a fairly aggressive site-blocking mechanism.

On Monday, the industry gathered in Rome and was presented with new data from local anti-piracy outfit FAPAV. The research revealed that while there has been some improvement over the past six years, 39% of Italians are still consuming illicit movies, TV shows, sporting events and other entertainment, at the rate of 669m acts of piracy every year.

While movie piracy is down 4% from 2010, the content most often consumed by pirates is still films, with 33% of the adult population engaging in illicit consumption during the past year.

The downward trend was not shared by TV shows, however. In the past seven years, piracy has risen to 22% of the population, up 13% on figures from 2010.

In keeping with the MPAA’s recent coding of piracy in 1.0, 2.0, and 3.0 variants (P2P as 1.0, streaming websites as 2.0, streaming devices/Kodi as 3.0), FAPAV said that Piracy 2.0 had become even more established recently, with site operators making considerable technological progress.

“The research tells us we can not lower our guard, we always have to work harder and with greater determination in communication and awareness, especially with regard to digital natives,” said FAPAV Secretary General, Bagnoli Rossi.

The FAPAV chief said that there needs to be emphasis in two areas. One, changing perceptions among the public over the seriousness of piracy via education and two, placing pressure on websites using the police, judiciary, and other law enforcement agencies.

“The pillars of anti-piracy protection are: the judicial authority, self-regulatory agreements, communication and educational activities,” said Rossi, adding that cooperation with Italy’s AGCOM had resulted in 94 sites being blocked over three years.

FAPAV research has traditionally focused on people aged 15 and up but the anti-piracy group believes that placing more emphasis on younger people (aged 10-14) is important since they also consume a lot of pirated content online. MPAA chief Chris Dodd, who was at the event, agreed with the sentiment.

“Today’s youth are the future of the audiovisual industry. Young people must learn to respect the people who work in film and television that in 96% of cases never appear [in front of camera] but still work behind the scenes,” Dodd said.

“It is important to educate and direct them towards legal consumption, which creates jobs and encourages investment. Technology has expanded options to consume content legally and at any time and place, but at the same time has given attackers the opportunity to develop illegal businesses.”

Despite large-scale site-blocking not being a reality in the United States, Dodd was also keen to praise Italy for its efforts while acknowledging the wider blocking regimes in place across the EU.

“We must not only act by blocking pirate sites (we have closed a little less than a thousand in Europe) but also focus on legal offers. Today there are 480 legal online distribution services worldwide. We must have more,” Dodd said.

The outgoing MPAA chief reiterated that movies, music, games and a wide range of entertainment products are all available online legally now. Nevertheless, piracy remains a “growing phenomenon” that has criminals at its core.

“Piracy is composed of criminal organizations, ready to steal sensitive data and to make illegal profits any way they can. It’s a business that harms the entire audiovisual market, which in Europe alone has a million working professionals. To promote the culture of legality means protecting this market and its collective heritage,” Dodd said.

In Italy, convincing pirates to go legal might be more easily said than done. Not only do millions download video every year, but the majority of pirates are happy with the quality too. 89% said they were pleased with the quality of downloaded movies while the satisfaction with TV shows was even greater with 91% indicating approval.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

RIAA Says Artists Don’t Need “Moral Rights,” Artists Disagree

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-says-artists-dont-need-moral-rights-artists-disagree-170521/

Most people who create something like to be credited for their work. Whether you make a video, song, photo, or blog post, it feels ‘right’ to receive recognition.

The right to be credited is part of the so-called “moral rights,” which are baked into many copyright laws around the world, adopted at the international level through the Berne Convention.

However, in the United States, this is not the case. The US didn’t sign the Berne Convention right away and opted out of the “moral rights” provision when it eventually joined.

Now that the U.S. Copyright Office is looking into ways to improve current copyright law, the issue has been brought to the forefront again. The Government recently launched a consultation to hear the thoughts of various stakeholders, which resulted in several noteworthy contributions.

As it turns out, both the MPAA and RIAA are against the introduction of statutory moral rights for artists. They believe that the current system works well and they fear that it’s impractical and expensive to credit all creators for their contributions.

The MPAA stresses that new moral rights may make it harder for producers to distribute their work and may violate the First Amendment rights of producers, artists, and third parties who wish to use the work of others.

In the movie industry, many employees are not credited for their work. They get paid, but can’t claim any “rights” to the products they create, something the MPAA wants to keep intact.

“Further statutory recognition of the moral rights of attribution and integrity risks upsetting this well-functioning system that has made the United States the unrivaled world leader in motion picture production for over a century,” they stress.

The RIAA has a similar view, although the central argument is somewhat different.

The US record labels say that they do everything they can to generate name recognition for their main artists. However, crediting everyone who’s involved in making a song, such as the writer, is not always a good idea.

“A new statutory attribution right, in addition to being unnecessary, would likely have significant unintended consequences,” the RIAA writes (pdf).

The RIAA explains that the music industry has weathered several dramatic shifts over the past two decades. They argue that the transition from physical to digital music – and later streaming – while being confronted with massive piracy, has taken its toll.

There are signs of improvement now, but if moral rights are extended, the RIAA fears that everything might collapse once again.

“After fifteen years of declining revenues, the recorded music industry outlook is finally showing signs of improvement. This fragile recovery results largely from growing consumer adoption of new streaming models..,” the RIAA writes.

“We urge the Office to avoid legislative proposals that could hamper this nascent recovery by injecting significant additional risk, uncertainty, and complexity into the recorded music business.”

According to the RIAA, it would be costly for streaming services to credit everyone who’s involved in the creative process. In addition, they simply might not have the screen real estate to pull this off.

“If a statutory attribution right suddenly required these services to provide attribution to others involved in the creative process, that would presumably require costly changes to their user interfaces and push them up against the size limitations of their display screens.”

This means less money for the artists and more clutter on the screen, according to the music group. Music fans probably wouldn’t want to see the list of everyone who worked on a song anyway, they claim.

“To continue growing, streaming services must provide a compelling product to consumers. Providing a long list of on-screen attributions would not make for an engaging or useful experience for consumers,” RIAA writes.

The streaming example is just one of the many issues that may arise, in the eyes of the record labels. They also expect problems with tracks that are played on the radio, or in commercials, where full credits are rarely given.

Interestingly, many of the artists the RIAA claims to represent don’t agree with the group’s comments.

Music Creators North America and The Future of Music Coalition, for example, believe that artists should have statutory moral rights. The latter group argues that, currently, small artists are often powerless against large corporations.

“Moral rights would serve to alleviate the powerlessness faced by creators who often must relinquish their copyright to make a living from their work. These creators should still be provided some right of attribution and integrity as these affect a creator’s reputation and ultimately livelihood.”

The Future of Music Coalition disagrees with the paternalistic perspective that the public isn’t interested in detailed information about the creators of music.

“While interest levels may vary, a significant portion of the public has a great interest in understanding who exactly contributed to the creation of works of art which they admire,” they write (pdf).

Knowing who’s involved requires attribution, so it’s crucial that this information becomes available, they argue.

“Music enthusiasts revel in the details of music they adore, but when care is not taken to document and preserve that information, those details can often be lost over time and eventually become unattainable.”

“To argue that the public generally has a homogenously disinterested opinion of creators is insulting both to the public and to creators,” The Future of Music Coalition adds.

The above shows that the rights of artists are clearly not always aligned with the interests of record labels.

Interestingly, the RIAA and MPAA do agree with major tech companies and civil rights groups such as EFF and Public Knowledge. These are also against new moral rights, albeit for different reasons.

It’s now up to the U.S. Copyright Office to determine if change is indeed required, or if everything will remain the same.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building a Competitive Moat: Turning Challenges Into Advantages

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/turning-challenges-into-advantages/

castle on top of a storage pod

In my previous post on how Backblaze got started, I mentioned that “just because we knew the right solution, didn’t mean that it was possible.” I’ll dig into that here. The right solution was to offer unlimited backup for $5 per month. The price of storage at the time, however, would have likely forced us to price our unlimited backup service at 2x – 5x that.

We were faced with a difficult challenge – compromise a fundamental feature of our product by removing the unlimited storage element, increase our price point in order to cover our costs but likely limit our potential customer base, seek funding in order to run at a loss while we built market share with a hope/prayer we could make a profit in the future, or find another way (huge unknown that might not have a solution). Below I’ll dig into the options that were available, the paths we tried, and how this challenge completely transformed our company and ended up being our greatest technological advantage.

Available Options:

Use a Storage Service

Originally we intended to build the backup application, but leave the back-end storage to others; likely Amazon S3. This had many advantages:

  1. We would not have to worry about the storage at all
  2. It would scale up or down as we needed it
  3. We would pay only for what we used

Especially as a small, bootstrapped company with limited resources – these were incredible benefits.

There was just one problem. At S3’s then-current pricing ($0.15/GB/month), a customer storing just 33 GB would cost us 100% of the $5 per month we would collect. Additionally, we would need to pay S3 transaction and download charges, along with our engineering, support, marketing, and other expenses. The conclusion: even if the average customer stored just 33 GB, it would cost us at least $10/month for a customer we were charging just $5/month.
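
To make the arithmetic explicit, here is the back-of-the-envelope calculation above as a tiny Python sketch. It covers storage cost only; the real picture was worse once transaction, download, and operating costs were added.

```python
# Break-even storage per customer on S3, using the figures quoted above.
s3_price_per_gb_month = 0.15   # S3 storage pricing circa 2007, $/GB/month
subscription_price = 5.00      # unlimited-backup price, $/month

break_even_gb = subscription_price / s3_price_per_gb_month
print(f"{break_even_gb:.1f} GB")   # ~33.3 GB consumes the entire $5/month
```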

In 2007, when we were getting started, there were a few other storage services available. But all were more expensive. Despite the fantastic benefits of using such a service, it simply didn’t work for us.

Buy Storage Systems

Buying storage systems didn’t have all the benefits of using a storage service – we would have to forecast need, buy in big blocks up front, manage data centers, etc. – but it seemed the second-best option. Companies such as EMC, NetApp, Dell, and others sold hundreds of petabytes of storage systems where they provide the servers, software, and support.

Alas, there were two problems: one temporary, the other permanent (and fatal). The temporary problem was that these systems were hundreds of thousands of dollars just to get started. This was challenging for us from a cash-flow perspective, but it was just a question of coming up with the cash. The permanent problem was that these systems cost ~$1,000/TB of storage. Hard drives were selling for ~$100/TB, so there was a 10x markup for the storage system. That markup eliminated pursuing this path. What if the average customer had 100 GB to store? It would take us 20 months to pay off the purchase. We weren’t sure how much data the average customer would have, but the scenarios we were running made it seem like a $5/month price point was unsustainable.
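
The same kind of sketch for purchased storage systems, again using only the numbers in this post (and ignoring data center space, bandwidth, and support), shows why the unit economics looked shaky.

```python
# Months of subscription revenue needed just to recover the hardware cost
# of storing one hypothetical 100 GB customer on a ~$1,000/TB system.
system_price_per_tb = 1000.0   # storage-system cost, $/TB (about 10x bare drives)
avg_customer_gb = 100.0        # hypothetical average customer
subscription_price = 5.00      # $/month

hardware_cost = system_price_per_tb * (avg_customer_gb / 1000.0)   # $100 of hardware
print(hardware_cost / subscription_price)                          # 20 months to pay it off
```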

Our Choices Were:

Don’t Offer the Right Solution

If it’s impossible to offer unlimited backup for $5/month, there are certainly choices. We could have raised the price to $10/month, made the backup less than unlimited, or closed up shop altogether. All doable, none ideal.

Raise Funding

Plenty of companies raise funding before they can be self-sustaining, and it can work out great for everyone. We had raised funding for a previous company and believed we could have done it for Backblaze. And raising funding would have taken care of the cash-flow issue if we chose to buy storage systems.

However, it would have left us with a business with negative unit economics – we would lose money on every customer, and the faster we grew, the more money we would lose. VCs do fund these types of companies often (many of the delivery companies today fall in this realm) with the idea that, at scale, you improve your cost structure and possibly also charge more. But it’s a dangerous game since not only is the business not self-sustaining, it inevitably must be significantly altered in order to survive.

Find a Way to Store Data for Less

If there were some way to store data for less, significantly less, it could all work. We had a tiny glimmer of hope that it would be possible: Since hard drives only cost ~$100/TB, if we could somehow use those drives without adding much overhead, that would be quite affordable.

“we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers.”

Our first decision was to not compromise our product by restricting the amount of storage. Although this would have been a much easier solution, it violated our core mission: Create a simple and inexpensive solution to backup all of your important data.

We had previously also decided not to raise funding to get started because we wanted to build a sustainable business from day one and build a culture that believes dollars come from customers. With those decisions made, we moved onto finding the best solution to fulfill our mission and create a viable company.

Experimentation

All we wanted was to attach hard drives to the Internet. If we could do that inexpensively, our backup application could store the data there and we could offer our unlimited backup service.

A hard drive needs to be connected to a server to be available on the Internet. It certainly wouldn’t be very cost effective to have one server for every hard drive, as the server costs would dominate the equation. Alternatively, trying to attach a lot of drives to a server resulted in needing expensive “enterprise” servers. The goal then became cost-efficiently attaching as many hard drives as possible to one server. According to its spec, USB is supposed to allow for 127 devices to be daisy-chained to a single port. We tried; it didn’t work. We considered Firewire, which could connect 63 devices, but the connectors are aimed at graphic designers and ended up too expensive on a unit-basis. Our next thought was to use small consumer-grade DAS (Direct-attached storage) devices and connect those to a server. We managed to attach 8 DAS devices with 4 drives each for a total of 32 hard drives connected to one server.

DAS units attached to a server
This worked well, but it was operationally challenging as none of these devices were meant to fit in a data center rack. Further complicating matters was that moving one of these setups required cabling 10 power cords, and separately moving 9 boxes. Fine at small scale, but very hard to scale up.

We realized that we didn’t need all the boxes, we just needed backplanes to connect the drives from the DAS boxes to the motherboard from the server. We found a different DAS box that supports port multipliers and took that backplane. How did we decide on that DAS box? Tim, co-founder & Chief Cloud Officer, remembers going to Fry’s and picking the box that looked “about right”.

That all laid the path for our eventual 45 drive design. The next thought was: If we could put all that in one box, it might be the solution we were looking for. The first iteration of this was a plywood box.

the first wooden storage pod

That eventually evolved into a steel server and what we refer to as a Storage Pod.

steel storage pod chassis

Building a Storage Platform

The Storage Pod became our key building block, but was just a tiny component of the ‘storage platform’. We had to write software that would run on each Storage Pod, software that would create redundancy between the Storage Pods, and central software and systems that would coordinate other aspects of the system to accept/load balance/validate/clean-up data. We had to find and train contract manufacturers to build the Storage Pods, find and negotiate data center space and bandwidth, set up processes to buy drives and track their reliability, hire people to maintain the systems, and set up the business processes to do all of this and more at scale.

All of this ended up taking tremendous technical effort, management engagement, and work from all corners of Backblaze. But it has also paid enormous dividends.

The Transformation

We started Backblaze thinking of ourselves as a backup company. In reality, we became a storage company with ‘backup’ as the first service we offered on our storage platform. Our backup service relies on the storage platform as, without the storage platform, we couldn’t offer unlimited backup. To enable the backup service, storage became the foundation of our company and is still what we live and breathe every day.

It didn’t just change how we built the service, it changed the fundamental DNA of the company.

Dividends

Creating our own storage platform was certainly hard. But it enabled us to offer our unlimited backup for a low price and do that while running a sustainable business.

“It didn’t just change how we built the service, it changed the fundamental DNA of the company.”

We felt that we had a service and price point that customers wanted, and we “unlocked” the way to let us build it. Having our storage platform also provides us with a deep connection to our customers and the storage community – we share how we build Storage Pods and how reliable hard drives in our environment have been. That content, in turn, helps bring awareness to Backblaze; the awareness helps establish the company as a tech leader; that reputation helps us recruit to our growing team and earns customers who are evaluating our solutions vs Storage Company X.

And after years of being a storage company with a backup service, and being asked all the time to just offer our storage directly, we launched our Backblaze B2 Cloud Storage service. We offer this raw storage at a price of $0.005/GB/month – less than a quarter of the price of S3.

If we had built our backup service on one of the existing storage services or storage systems, it would have been easier – but none of this would have been possible. This challenge, which we have spent a decade working to overcome, has also transformed our company and become our greatest technological advantage.

The post Building a Competitive Moat: Turning Challenges Into Advantages appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hiring a Content Director

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/hiring-content-director/


Backblaze is looking to hire a full time Content Director. This role is an essential piece of our team, reporting directly to our VP of Marketing. As the hiring manager, I’d like to tell you a little bit more about the role, how I’m thinking about the collaboration, and why I believe this to be a great opportunity.

A Little About Backblaze and the Role

Since 2007, Backblaze has earned a strong reputation as a leader in data storage. Our products are astonishingly easy to use and affordable to purchase. We have engaged customers and an involved community that helps drive our brand. Our audience numbers in the millions and our primary interaction point is the Backblaze blog. We publish content for engineers (data infrastructure, topics in the data storage world), consumers (how to’s, merits of backing up), and entrepreneurs (business insights). In all categories, our Content Director drives our earned position as a leader.

Backblaze has a culture focused on being fair and good (to each other and our customers). We have created a sustainable business that is profitable and growing. Our team places a premium on open communication, being cleverly unconventional, and helping each other out. The Content Director, specifically, balances our needs as a commercial enterprise (at the end of the day, we want to sell our products) with the custodianship of our blog (and the trust of our audience).

There’s a lot of ground to be covered at Backblaze. We have three discrete business lines:

  • Computer Backup -> a 10 year old business focusing on backing up consumer computers.
  • B2 Cloud Storage -> Competing with Amazon, Google, and Microsoft… just at ¼ of the price (but with the same performance characteristics).
  • Business Backup -> Both Computer Backup and B2 Cloud Storage, but focused on SMBs and enterprise.

The Best Candidate Is…

An excellent writer – possessing a solid academic understanding of writing, the creative process, and delivering against deadlines. You know how to write with multiple voices for multiple audiences. We do not expect our Content Director to be a storage infrastructure expert; we do expect a facility with researching topics, accessing our engineering and infrastructure team for guidance, and generally translating the technical into something easy to understand. The best Content Director must be an active participant in the business, strategy, and editorial debates and then must execute with ruthless precision.

Our Content Director’s “day job” is making sure the blog is running smoothly and the sales team has compelling collateral (emails, case studies, white papers).

Specifically, the Perfect Content Director Excels at:

  • Creating well researched, elegantly constructed content on deadline. For example, each week, 2 articles should be published on our blog. Blog posts should rotate to address the constituencies for our 3 business lines – not all blog posts will appeal to everyone, but over the course of a month, we want multiple compelling pieces for each segment of our audience. Similarly, case studies (and outbound emails) should be tailored to our sales team’s proposed campaigns / audiences. The Content Director creates ~75% of all content but is responsible for editing 100%.
  • Understanding organic methods for weaving business needs into compelling content. The majority of our content (but not EVERY piece) must tie to some business strategy. We hate fluff and hold our promotional content to a standard of being worth someone’s time to read. To be effective, the Content Director must understand the target customer segments and use cases for our products.
  • Straddling both Consumer & SaaS mechanics. A key part of the job will be working to augment the collateral used by our sales team for both B2 Cloud Storage and Business Backup. This content should be compelling and optimized for converting leads. And our foundational business line, Computer Backup, deserves to be nurtured and grown.
  • Product marketing. The Content Director “owns” the blog, but also assists in writing case studies / white papers and creating collateral (email, trade show). Each of these things has a variety of calls to action and audiences. Direct experience is a plus; experience that will plausibly translate to these areas is a requirement.
  • Articulating views on storage, backup, and cloud infrastructure. Not everyone has experience with this. That’s fine, but if you do, it’s strongly beneficial.

A Thursday In The Life:

  • Coordinate Collaborators – We are a deliverables-driven culture, not a meeting-driven one. We expect you to collaborate with internal blog authors and the occasional guest poster.
  • Collaborate with Design – Ensure imagery for upcoming posts / collateral are on track.
  • Augment Sales team – Lock content for next week’s outbound campaign.
  • Self directed blog agenda – Feedback for next Tuesday’s post is addressed, next Thursday’s post is circulated to marketing team for feedback & SEO polish.
  • Review Editorial calendar, make any changes.

Oh! And We Have Great Perks:

  • Competitive healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee & fully stocked Micro kitchen
  • Catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus
  • Normal work hours
  • Get to bring your pets into the office
  • San Mateo Office – located near Caltrain and Highways 101 & 280.

Interested in Joining Our Team?

Send us an email to [email protected] with the subject “Content Director”. Please include your resume and 3 brief abstracts for content pieces.
Some hints for each of your three abstracts:

  • Create a compelling headline
  • Write clearly and concisely
  • Be brief, each abstract should be 100 words or less – no longer
  • Target each abstract to a different specific audience that is relevant to our business lines

Thank you for taking the time to read and consider all this. I hope it sounds like a great opportunity for you or someone you know. Principals only need apply.

The post Hiring a Content Director appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

RFD: the alien abduction prophecy protocol

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/05/rfd-alien-abduction-prophecy-protocol.html

“It’s tough to make predictions, especially about the future.”
– variously attributed to Yogi Berra and Niels Bohr

Right. So let’s say you are visited by transdimensional space aliens from outer space. There’s some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what’s to come.

But there’s a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you’re lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well… if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in the possession of this very file back in 2017; otherwise, you would not have known its hash. Voila – a commitment scheme!
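
As a minimal sketch of the commitment step (in Python, assuming the prophecy lives in a file called prophecy.txt; the filename is just for illustration):

    import hashlib

    # Commit: hash the prophecy and publish only the digest somewhere public.
    with open("prophecy.txt", "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print("publish this today:", digest)

    # Reveal (in 2032): publish prophecy.txt itself and let anyone recompute
    # the digest and compare it against the value posted years earlier.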

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover somewhere around 15 years – so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hashing functions, or increase the computational complexity of the hash by running it in a loop. There are also some less-known hash functions, such as SPHINCS, that are designed with different trade-offs in mind and may offer longer-term security guarantees.
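
A rough sketch of both kludges, combining two unrelated hash functions and then iterating the result; the round count is an arbitrary knob, not a recommendation:

    import hashlib

    def hardened_commitment(data: bytes, rounds: int = 1_000_000) -> str:
        # Combine two unrelated hash functions so that breaking one of them
        # is not enough to forge a colliding prophecy.
        state = hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()
        # Iterate to raise the cost of computing any candidate hash.
        for _ in range(rounds):
            state = hashlib.sha512(state).digest()
        return state.hex()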

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization – whether of the pen-and-paper or the PKI-based variety – offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make – especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 – thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time “price tag” to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the “prediction spam” attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.
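
For illustration, here is a toy hash-based proof of work; the difficulty is an arbitrary placeholder that would need tuning to whatever CPU-time “price tag” you want to claim:

    import hashlib
    from itertools import count

    def prove(commitment: str, difficulty_bits: int = 24) -> int:
        # Find a nonce such that SHA-256(commitment || nonce) starts with
        # `difficulty_bits` zero bits: expensive to find, cheap to verify.
        target = 1 << (256 - difficulty_bits)
        for nonce in count():
            h = hashlib.sha256(f"{commitment}:{nonce}".encode()).digest()
            if int.from_bytes(h, "big") < target:
                return nonce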

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight of hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

Blizzard Beats “Cheat” Maker, Wins $8.5 Million Copyright Damages

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-beats-cheat-maker-wins-85-million-copyright-damages-170403/

While most gamers do their best to win fair and square, there are always those who try to cheat themselves to victory.

With the growth of the gaming industry, the market for “cheats,” “hacks” and bots has also grown spectacularly. The German company Bossland is one of the frontrunners in this area.

Bossland created cheats and bots for several Blizzard games including World of Warcraft, Diablo 3, Heroes of the Storm, Hearthstone, and Overwatch, handing its users an unfair advantage over the competition. Blizzard is not happy with these and the two companies have been battling in court for quite some time, both in the US and Germany.

Last week a prominent US case came to a conclusion in the California District Court. Because Bossland decided not to represent itself, it was a relatively easy win for Blizzard, which was awarded several million dollars in copyright damages.

The court agreed that hacks developed by Bossland effectively bypassed Blizzard’s cheat protection technology “Warden,” violating the DMCA. By reverse engineering the games and allowing users to play modified versions, Bossland infringed Blizzard’s copyrights and allowed its users to do the same.

“Bossland materially contributes to infringement by creating the Bossland Hacks, making the Bossland Hacks available to the public, instructing users how to install and operate the Bossland Hacks, and enabling users to use the software to create derivative works,” the court’s order reads (pdf).

The WoW Honorbuddy

The infringing actions are damaging to the game maker as they render its anti-cheat protection ineffective. The cheaters, subsequently, ruin the gaming experience for other players who may lose interest, causing additional damage.

“Blizzard has established a showing of resulting damage or harm because Blizzard expends a substantial amount of money combating the use of the Bossland Hacks to ensure fair game play,” the court writes.

“Additionally, players of the Blizzard Games lodge complaints against cheating players, which has caused users to grow dissatisfied with the Blizzard Games and cease playing. Accordingly, the in-game cheating also harms Blizzard’s goodwill and reputation.”

As a result, the court grants the statutory copyright damages Blizzard requested for 42,818 violations within the United States, totaling $8,563,600. In addition, the game developer is entitled to $174,872 in attorneys’ fees.

To prevent further damage, Bossland is also prohibited from marketing or selling its cheats in the United States. This applies to hacks including “Honorbuddy,” “Demonbuddy,”
“Stormbuddy,” “Hearthbuddy,” and “Watchover Tyrant,” as well as any other software designed to exploit Blizzard games.

While it’s a hefty judgment, the order doesn’t really come as a surprise given that the German cheat maker failed to defend itself.

Bossland CEO Zwetan Letschew previously informed TorrentFreak that his company would continue the legal battle after the issue of a default judgment. Whatever the outcome, the cheats will remain widely available outside of the US for now.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Third of EU Citizens OK With Piracy When There Are No Legal Options

Post Syndicated from Andy original https://torrentfreak.com/third-of-eu-citizens-ok-with-piracy-when-there-are-no-legal-options-170327/

The European Union Intellectual Property Office has published the findings of a new study commissioned from Deloitte which aims to better understand how EU citizens perceive intellectual property issues.

The report is the product of 26,500 interviews with citizens aged 15 and over and paints a fairly positive picture for rightsholders and other businesses that rely on the exploitation of intellectual property.

The striking headline figure is that 97% of respondents believe that content creators should be able to protect their rights in order to get paid for their work. Alongside almost total support for IP rights, an impressive 83% indicate they would prefer to access digital content through legal services when there is an affordable option available.

Across the EU, just 10% of respondents said they’d deliberately obtained content from illegal sources during the past 12 months, a figure that jumps to 27% among 15 to 24-year-olds. A similar survey carried out in 2013 produced close to the same results.

But while 10% is the average percentage of pirates across all EU countries, several major EU members buck the trends in interesting ways.

France, for example, has many years’ experience of the state-sponsored Hadopi “three strikes” anti-piracy program. With millions of notices sent to ISP subscribers, the program was supposed to educate citizens away from piracy. However, 15% of French citizens admit to downloading or streaming from illegal sources, five percentage points higher than the EU average.

In Germany, where copyright trolls have been running rampant for many years and claiming a deterrent effect, just 7% say they download or stream from illegal sources. While this below-average figure might seem to support that claim, the same percentage is shared with Italy, where there is no trolling or state-sponsored anti-piracy scheme.

In Spain, a country that is trying to shake off a reputation of being a piracy haven, 16% of citizens admit to online piracy. That’s double the 8% of UK citizens who admit to consuming unauthorized content online.

As usual, however, there are significant gray areas when it comes to content consumption and whether or not people can be labeled as hardcore pirates.

Just under a third (32%) of those surveyed said they access content online, whether that’s from a legal or illegal source. Under a quarter (22%) say they use only authorized services. Just 5% use illegal sources alone and 5% said they use a mix of paid lawful and illegal sources.

“This suggests that respondents are willing to switch between legal and illegal sources in order to gain access to content,” the study found.

Also of interest are the significant numbers of citizens who feel that piracy is acceptable under particular sets of circumstances.

A not insignificant 35% of respondents said that it’s acceptable to obtain content illegally as long as it’s only for personal use. Since millions of citizens are already taxed via a private copying levy, the notion that copying for yourself is acceptable shouldn’t come as too much of a surprise, although the charge itself applies to blank media, not illegal downloads.

Interestingly, close to a third (31%) believe that it’s acceptable to obtain content illegally if there are no immediately available legal alternatives. So, if a distributor chooses to bring content late to a region or makes content otherwise difficult to obtain, millions believe it’s ok for citizens to help themselves. While that’s probably a concern for rightsholders, it’s a problem that can be fixed.

Overall, an encouraging 71% of pirate respondents said they would stop obtaining content from illegal sources if there was an accessible and affordable legal alternative. Around 20% said they would not necessarily go legal, even if there was an available and affordable option.

“The availability of affordable content from legal offers as the top reason for stopping the behavior is most strongly cited by respondents in the following categories: respondents aged 25 to 39 (74 %), employed (76 %), living in large urbanized cities (75 %), and the most educated (72 %), which is in line with the profile of a typical online user,” the survey notes.

Close to 30% believe that being better informed could help them back away from illegal sources while just 5% said they could never be stopped, no matter what.

But while many consumers want to “do the right thing”, there appears to be confusion when it comes to assessing whether an online service is legal or not. Almost a quarter (24%) of Europeans surveyed said they’d questioned whether an online source was legal, a five-point increase over the earlier 2013 study.

That being said, there’s a perception that legal services can provide a better product. When comparing the quality of content offered on legal and illegal platforms, 69% said that licensed services come out on top, an opinion shared by illegal downloaders and legal consumers alike.

However, when it comes to diversity of content, just over half of respondents (56%) said that legal services do a better job, a figure that drops to 45% among those who illegally download some content. Making a broader range of content available online could address this particularly lukewarm response.

António Campinos, Executive Director of EUIPO, said that the results of the survey show that EU citizens generally have respect for intellectual property but there is still room for improvement.

“Overall, we see that support for IP rights is high among EU citizens,” he said.

“But we also see that more needs to be done to help young people in particular understand the importance of IP to our economy and society, especially now, when encouraging innovation and creativity is increasingly the focus of economic policy across our European Union.”

The full report can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Demonoid Returns After Two Months Downtime

Post Syndicated from Ernesto original https://torrentfreak.com/demonoid-returns-two-months-downtime-170320/

demonoid-logoWe’ve written the phrase “Demonoid is Back” quite a few times already on TorrentFreak.

The site has established a reputation as the “comeback kid” due to its ‘habit’ of going offline for weeks or even months, and then reappearing in its full glory as if nothing ever happened.

Over the past several weeks, the site has been offline once again. In late January, the site went down due to hosting and internal problems and was forced to remain offline longer than expected.

According to Deimos, the site’s founder, there was some disagreement with the person who handled most of the technical aspects of the tracker. After both sides failed to reach an agreement, he saw no other option than to take control again.

“I gave control to the wrong guys while the problems started, but it’s time to control stuff again,” Deimos told us earlier.

This weekend the site came back online and it’s currently accessible through Demonoid.pw as well as the Dnoid.me domain name. The site is running on new hardware, and there may still be a few bugs, but otherwise it’s fully operational.

TorrentFreak reached out to Deimos this weekend, who informed us that the site was restored from a recent database backup. So, everyone should be able to access his or her old account as usual.

In an update posted on the site, Demonoid’s operator is also surprisingly open about what happened, mentioning the aforementioned internal problems and personal issues as a reason for the prolonged downtime.

“After some long downtime caused mainly due to some problems with some of the people we made the mistake of trusting, and some personal problems regarding the health of a family member that drained the time and money destined to move the site elsewhere, we are finally back with you,” the update reads.

Now that the site is back, it’s clear that one of the oldest BitTorrent communities will live on. But who would have expected anything different from the comeback kid?

Demonoid is back

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Maximising site performance: 5 key considerations

Post Syndicated from Davy Jones original https://www.anchor.com.au/blog/2017/03/maximising-site-performance-key-considerations/

The ongoing performance of your website or application is an area where ‘not my problem’ can be a recurring sentiment from all stakeholders. It’s not just a case of getting your shiny new website or application onto the biggest, specced-up dedicated server or cloud instance that money can buy; there are many factors that can influence the performance of your website that you, yes you, need to make friends with.

The relationship between site performance and business outcomes

Websites have evolved into web applications, starting out as simple text in HTML format and growing into complex, ‘rich’ multimedia content requiring buckets of storage and computing power. Your server needs to run complex scripts and processes, and serve up content to global visitors because, let’s face it, you probably have customers everywhere (or at least plan to achieve a global customer base). It is a truth universally acknowledged that the performance of your website is directly related to customer experience, so underestimating the impact of poor site performance will negatively affect your brand reputation, sales revenue and business outcomes, jeopardising your business’s success.

Site performance stakeholders

There is an increasing range of literature around the growing importance of optimising site performance for maximum customer experience but who is responsible for owning the customer site experience? Is it the marketing team, development team, digital agency or your hosting provider? The short answer is that all of the stakeholders can either directly or indirectly impact your site performance.

Let’s explore this shared responsibility in more detail by breaking it down into five areas that affect a website’s performance.

5 key site performance considerations

In order to truly appreciate the performance of your website or application, you must take into consideration 5 key areas that affect your website’s ability to run at maximum performance:

  1. Site Speed
  2. Reliability and availability
  3. Code Efficiency
  4. Scalability
  5. Development Methodology

1. Site Speed

Site speed is the most critical metric. We all know and have experienced the frustration of “this site is slow, it takes too long to load!”. It’s the main (and sometimes, only) metric that most people would think about when it comes to the performance of a web application.

But what does it mean for a site to be slow? Well, it usually comes down to these factors:

a. The time it takes for the server to respond to a visitor requesting a page.
b. The time it takes to download all necessary content to display the website.
c.  The time it takes for your browser to load and display all the content.

Usually, the hosting provider looks after (a), while the developers look after (b) and (c), as those points are directly related to the web application.
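
To get a rough feel for (a) and (b), a quick check from a small script is often enough. A minimal sketch in Python, assuming the requests library and a placeholder URL:

    import requests

    url = "https://www.example.com/"   # placeholder URL

    response = requests.get(url)

    # (a), roughly: time until the server started answering the request
    print("server response time:", response.elapsed.total_seconds(), "seconds")

    # (b), partially: the size of the page itself; every linked asset
    # (CSS, JavaScript, images) adds its own download on top of this
    print("page size:", len(response.content), "bytes")

Point (c) depends on the visitor’s browser and is best measured with the browser’s own developer tools.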

2. Reliability and availability

Reliability and availability go hand-in-hand.

There’s no point in having a fast website if it’s not *reliably* fast. What do we mean by that?

Well, would you be happy if your website was only fast sometimes? If your Magento retail store is lightning fast when you are the only one using it, but becomes unresponsive during a sale, then the service isn’t performing up to scratch. The hosting provider has to provide you with a service that stays up, and can withstand the traffic going to it.

Outages are also inevitable, as 100% uptime is a myth. But with some clever infrastructure designs, we can minimise downtime as close to zero as we can get! Here at Anchor, our services are built with availability in mind. If your service is inaccessible, then it’s not reliable.

Our multitude of hosting options on offer such as VPS, dedicated and cloud are designed specifically for your needs. Proactive and reactive support, and hands-on management means your server stays reliable and available.

We know some businesses are concerned about the very public outage of AWS in the US recently; however, AWS have taken action across all regions to prevent this from occurring again. AWS’s detailed response can be found at S3 Service Disruption in the Northern Virginia (US-EAST-1) Region.

As an advanced consulting partner with Amazon Web Services (AWS), we can guide customers through the many AWS configurations that will deliver the reliability required.  Considerations include utilising multiple availability zones, read-only replicas, automatic backups, and disaster recovery options such as warm standby.  

3. Code Efficiency

Let’s talk about the efficiency of a codebase: the innards of the application.

The code of an application determines how hard the CPU (the brain of your computer) has to work to process all the things the application wants to be able to do. The more work your application performs, the harder the CPU has to work to keep up.

In short, you want code to be efficient, and not have to do extra, unnecessary work. Here is a quick example:

# Example 1:    2 + 2 = 4

# Example 2:    ( ( ( 1 + 5 ) / 3 ) * 1 ) + 2 = 4

The end result is the same, but the first example gets straight to the point. It’s much easier to understand and faster to process. Efficient code means the server is able to do more with the same amount of resources, and most of the time it would also be faster!

We work with many code-efficient partners who create awesome sites that drive conversions. Get in touch if you’re looking for a code-efficient developer; we’d be happy to suggest one of our tried and tested partners.

4. Scalability

Accurately predicting the spikes in traffic to your website or application is tricky business.  Over or under-provisioning of infrastructure can be costly, so ensuring that your build has the potential to scale can help your website or application to optimally perform at all times.  Scaling up involves adding more resources to the current systems. Scaling out involves adding more nodes. Both have their advantages and disadvantages. If you want to know more, feel free to talk to any member of our sales team to get started.

If you are using a public cloud infrastructure like Amazon Web Services (AWS), there are several ways that scalability can be built into your infrastructure from the start. Clusters are at the heart of scalability, and a number of tools can optimise your cluster efficiency, such as Amazon CloudWatch, which can trigger scaling activities, and Elastic Load Balancing, which directs traffic across the instances in your Auto Scaling group. For developers wanting complete control over AWS resources, Elastic Beanstalk may be more appropriate.
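
As a hedged sketch of that pattern using boto3, with the group name, policy name and threshold all placeholders (a production setup would more likely manage this through CloudFormation or similar):

    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # A simple scale-out policy: add one instance to the group.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",        # placeholder group name
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # A CloudWatch alarm that fires the policy when average CPU across
    # the group stays above 70% for two consecutive 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )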

5. Development Methodology

Development methodologies describe the process of what needs to happen in order to introduce changes to software. A commonly used methodology nowadays is the ‘DevOps’ methodology.

What is DevOps?

It’s the union of Developers and IT Operations teams working together to achieve a common goal.

How can it improve your site’s performance?

Well, DevOps is a way of working, a culture that introduces close collaboration between the two teams of Developers and IT Operations in a single workflow.   By integrating these teams the process of creating, testing and deploying software applications can be streamlined. Instead of each team working in a silo, cross-functional teams work together to efficiently solve problems to get to a stable release faster. Faster releases mean that your website or application gets updates more frequently and updating your application more frequently means you are faster to fix bugs and introduce new features. Check out this article ‘5 steps to prevent your website getting hacked‘ for more details. 

The point is that the faster you can update your applications, the faster you can respond to any changes in your situation.  So if DevOps has the potential to speed up delivery and improve your site or application performance, why isn’t everyone doing it?

Simply put, any change can be hard. And for a DevOps approach to be effective, each team involved needs to find new ways of working harmoniously with other teams toward a common goal. It’s not just a process change that is needed; toolsets, communication and company culture also need to be addressed.

The Anchor team love putting new tools through their paces.  We love to experiment and iterate on our processes in order to find one that works with our customers. We are experienced in working with a variety of teams, and love to challenge ourselves. If you are looking for an operations team to work with your development team, get in touch.

***
If your site is running slow or you are experiencing downtime, we can run a free hosting check up on your site and highlight the ‘quick wins’ on your site to boost performance.

The post Maximising site performance: 5 key considerations appeared first on AWS Managed Services by Anchor.

Amazon SES Can Now Automatically Warm Up Your Dedicated IP Addresses

Post Syndicated from Cristian Smochina original https://aws.amazon.com/blogs/ses/amazon-ses-can-now-automatic-warm-up-your-dedicated-ip-addresses/

The SES team is pleased to announce that, starting today, Amazon SES can automatically warm up your new dedicated IP addresses. Before the automatic warm-up feature was available, Amazon SES customers who leased dedicated IPs implemented their own warm-up mechanisms. Customers gradually increased email sending through a new dedicated IP before using the dedicated IP to its full capacity. This blog post explains how Amazon SES warms up your dedicated IPs, and how to enable the warm-up feature.

Why do I have to warm up my dedicated IPs?

You must warm up your dedicated IPs before you send a high volume of emails. Many receiving ISPs do not accept emails from an IP that suddenly sends a large volume of email. ISPs perceive this behavior as an indicator of abuse and a possible source of spam. To avoid emails getting dropped or having your sending severely throttled, warm up your IPs by gradually increasing the volume of emails you send through a new IP address. You can find more guidance about the warm up process in the developer guide.

How does Amazon SES warm up my dedicated IPs?

After you enable automatic warm-up, Amazon SES limits the maximum number of emails that you send daily through your new dedicated IP addresses according to a predefined warm-up plan. This automated warm-up process takes up to 45 days. The process ensures that traffic through the newly leased dedicated IP address is gradually increased to establish a positive reputation with receiving ISPs. The maximum daily amount of mail increases from the first day until a maximum of 50,000 emails can be sent from an IP.
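
Purely as an illustration of what a gradual ramp looks like (this is not the actual Amazon SES warm-up plan, which is predefined by the service), a geometric ramp from a small first-day volume to the 50,000-per-day ceiling over 45 days could be sketched as:

    # Hypothetical warm-up ramp: NOT the real Amazon SES schedule, just an
    # illustration of gradually increasing daily limits over 45 days.
    FIRST_DAY = 50
    CEILING = 50_000
    DAYS = 45

    growth = (CEILING / FIRST_DAY) ** (1 / (DAYS - 1))
    for day in range(1, DAYS + 1):
        daily_limit = min(CEILING, round(FIRST_DAY * growth ** (day - 1)))
        print(f"day {day:2d}: up to {daily_limit} emails")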

When do I have to enable the automatic warm-up?

By default, automatic warm-up is enabled for your account. All newly leased dedicated IP addresses are placed in the automatic warm-up plan. You can disable the automatic warm-up from the Dedicated IPs page in the Amazon SES console. If you are already using dedicated IPs to send emails, go to the Amazon SES console to turn this feature on to take advantage of automatic warm-up.

(Screenshot: the Dedicated IPs page in the Amazon SES console, where automatic warm-up can be toggled.)

Note: disabling automatic warm-up stops the warm-up process. All of your IP addresses will be considered fully warmed up. Re-enabling automatic warm-up does not start the warm-up for the dedicated IPs already allocated to your Amazon SES account.

What happens with the emails sent beyond the daily maximum limit from the warm-up plan?

If you enabled automatic warm-up and you are leasing dedicated IPs for the first time, then all emails that you send beyond the pre-planned daily warm-up plan are sent through shared IPs instead. This means that during the warm-up period, Amazon SES uses your dedicated and shared IPs from the Amazon SES IP pools to send your emails. After the warm-up is complete, Amazon SES sends emails only through your dedicated IPs, and the maximum number of emails you can send is limited by your daily email sending quota. For more information, see Managing Your Amazon SES Sending Limits in the Amazon SES Developer Guide.

If you are an existing dedicated IP customer requesting additional dedicated IPs, emails beyond the daily maximum limit per dedicated IP in the warm-up plan are sent only through dedicated IPs already allocated to your account.

Does automatic warm-up incur extra cost?

No. See the Amazon SES pricing page for dedicated IP pricing information.

We hope you find this feature useful! If you have any questions or comments, let us know in the SES Forum or in the comment section of the blog.

Demonoid is Still Down, But Not Out

Post Syndicated from Ernesto original https://torrentfreak.com/demonoid-is-still-down-but-not-out/

demonoid-logoAs one of the oldest torrent communities around, Demonoid has run into quite a few rough patches over the years.

Whether it’s media industry pressure, lawsuits, blocking orders, hosting problems or police investigations, Demonoid has seen it all.

The site has established a reputation as the “comeback kid,” due to its tendency to go offline for weeks or even months, and then reappear in full glory as if nothing ever happened.

Over the past weeks, the site has been on a downswing once again. In late January, the site went down due to hosting problems and it has remained offline since. While a server migration can take some time, well over a month is quite unusual.

TorrentFreak contacted Demonoid through the official Twitter account and spoke to someone who identified himself as “Deimos,” the site’s original founder.

He informed us that there are some internal issues that caused a problem. According to Deimos, there was some disagreement with the person who handled most of the technical aspects of the operation.

Over the past several weeks both parties tried to come to an agreement but without result, meaning that Deimos has decided to take back control. New hardware is on the way and he hopes that the site will be back online in the near future.

“I hoped things worked out with the person in question, but this doesn’t appear to be an option. So, we ordered some new servers and we are waiting for the arrival, initial setup and whatnot,” Deimos says.

“I gave control to the wrong guys while the problems started, but it’s time to control stuff again.”

When the site returns it will still be hosted on the recent Dnoid.me domain. All user data is safe and intact as well, so the site will make a full comeback just as it has done before.

For now, Demonoid users have no other option than to wait until the site returns. For some, this is easier said than done. While the current Demonoid community is a bit smaller than it was at its height, it’s still a prime location for users who are sharing more obscure content that’s hard to find on public sites.

But then again, Demonoid users are not new to long downtime stretches.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Some notes on space heaters (GPU rigs)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/02/some-notes-on-space-heaters-gpu-rigs.html

So I carried my GPU rig up to my bedroom to act as a space heater. I thought I’d write some notes on it.

This is a “GPU rig”, containing five graphics cards. Graphics cards have highly parallel processors (GPUs) with roughly 10 times the performance of a CPU — but only for highly parallel problems.

Two such problems are password cracking [*] and cryptocurrency mining.

Password cracking is something cybersecurity professionals regularly need to do. When doing a pentest, or assessment, we’ll get lists of passwords we need to crack. Having a desktop computer with a couple of graphics cards is a useful thing to have.

There are three popular cryptocurrencies: Bitcoin, Ethereum, and ZCash. Everyone is using ASICs for Bitcoin, so you can’t mine them on a GPU any more, but GPUs are still useful for Ethereum and ZCash.

The trick to building a rig with lots of GPUs is to get PCIe 1x extenders, so that you can mount the cards far away from the motherboard for better cooling. They cost around $10 each. You then need to buy a motherboard with lots of PCIe slots. One with lots of 1x slots will do — we don’t need a lot of bandwidth to the cards.

You then need to buy either a single high-end power supply, or team together two cheaper power supplies. The limitation will be the power from the wall socket, which ranges from around 1600 watts to 1900 watts.

If you don’t want to build a rig, but just stick one or two GPUs in your desktop computer, then here are some things to consider.

There are two vendors of GPUs: nVidia and AMD/Radeon. While nVidia has a better reputation for games and high-end supercomputer math (floating point), Radeons have been better with the integer math used for crypto. So you want Radeon cards.

Older cards work well. The 5-year-old Radeon 7970 is actually a pretty good card for this sort of work. You may find some for free from people discarding them in favor of newer cards for gaming.

If buying newer cards, the two you care about are either the Radeon R9 Fury/Nano, or the Radeon RX 470/480. The Fury/Nano cards are slightly faster, the RX 470/480 are more power efficient.

You want to get at least 4 gigabytes of memory per card, which I think is what they come with anyway. You might consider 8 gigabytes. The reason for this is that Ethereum is designed to keep increasing memory requirements, to avoid the way ASICs took over in Bitcoin mining. At some point in the future, 4 gigabytes won’t be enough and you’ll need 8 gigabytes. This is many years away, but seeing how old cards remain competitive for many years, it’s something to consider.

With all this said, if you’ve got a desktop and want to add a card, or if you want to build a rig, then I suggest the following card:

  • AMD Radeon RX 480 w/ 8 gigs of RAM for $199 at Newegg [*]

A few months from now, things will of course change, but it’s a good choice for now. This is especially useful for rigs: 6 Fury cards in a rig risk overloading the wall socket, so that somebody innocently turning on a light could crash your rig. In contrast, a rig with 6 RX480 cards fits well within the power budget of a single socket.

Now let’s talk software. For password cracking, get Hashcat. For mining, choose a mining pool, and they’ll suggest software. The resources at zcash.flypool.org are excellent for either Windows or Linux mining. Though, on Windows, I couldn’t get mining to work unless I also went back to older video drivers, which was a pain.

Let’s talk electrical power consumption. Mining profitability is determined by your power costs. Where I live, power costs $0.05/kwh, except during summer months (June through September). This is extremely cheap. In California, power costs closer to $0.15/kwh. The difference is significant. I make a profit at the low rates, but would lose money at the higher rates.
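
To put numbers on that, here’s the back-of-the-envelope math; the 1,000-watt figure is my assumption for a loaded multi-GPU rig, not a measurement:

    rig_watts = 1000           # assumed draw for a loaded multi-GPU rig
    hours_per_day = 24
    kwh_per_day = rig_watts / 1000 * hours_per_day

    for rate in (0.05, 0.15):  # $/kWh: cheap power vs. California-ish power
        print(f"${rate:.2f}/kWh -> ${kwh_per_day * rate:.2f}/day, "
              f"${kwh_per_day * rate * 30:.2f}/month")

At $0.05/kWh that works out to roughly $1.20 a day in electricity; at $0.15/kWh it is roughly $3.60 a day, which is why the same rig can be profitable in one place and a money loser in another.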

Because everyone else is doing it, you can never make money at mining. Sure, if you run the numbers now, you may convince yourself that you’ll break even after a year, but that assumes everything stays static. Other people are making the same calculation, and will buy new GPUs to enter the market, driving down returns, so nobody ever breaks even. The only time mining is profitable is when the price of the cryptocurrency goes up — but in that case, you’d have made even more money just buying the cryptocurrency instead of a mining rig.

The reason I mine is that I do it over TOR, the Onion Router that hides my identity. This mines clean, purely anonymous Bitcoins that I can use (in theory) to pay for hosting services anonymously. I haven’t done that yet, but I’ll have several Bitcoins worth of totally anonymous currency to use if I ever need them.

Otherwise, I use the rig for password cracking. The trick to password cracking is being smarter (knowing what to guess), not being more powerful. But having a powerful rig helps, too, especially since I can just start it up and let it crunch unattended for days.

Conclusion

You probably don’t want to build a rig, unless you are geek like me who enjoys playing with technology. You’ll never earn back what you invest in it, and it’s a lot of hassle setting up.

On the other hand, if you have a desktop computer, you might want to stick in an extra graphics card for password cracking and mining in the background. This is especially true if you want to generate anonymous Bitcoin.

Italy Blocks 290 ‘Pirate’ Movie & TV Show Domains in 4 Months

Post Syndicated from Andy original https://torrentfreak.com/italy-blocks-290-pirate-movie-tv-show-domains-in-4-months-170222-170222/

For so many years, Italy developed a reputation for doing little to stop the spread of infringing content online. As a result, pirate sites flourished and millions of citizens decided that paying for content was a thing of the past.

In recent times, things have changed. Italy now has one of the toughest anti-piracy regimes in Europe and regularly launches new actions, often targeting multiple sites in coordinated operations.

This week marks the start of another, with the Special Command Units of the Guardia di Finanza (GdF), a militarized police force under the authority of the Minister of Economy and Finance, acting on the orders of the Public Prosecutor of Rome.

Following the signing of a special decree issued by the Court of Rome, the GdF targeted the domains of 41 websites alleged to be involved in the distribution of first-run movies such as The Magnificent 7, Suicide Squad, and Legend of Tarzan.

Within the batch were many sites streaming live sporting events, such as soccer matches broadcast by the Premier League, Ligue 1, Bundesliga, La Liga and the Champions League. Those who transmitted motor racing events were also in the frame, after drawing fire from Formula 1 and Moto GP broadcasters.

All domain names will be blocked by local ISPs or potentially seized, if within reach of local authorities.

Authorities report that in common with an operation carried out earlier this month, two anti-piracy strategies were employed, the so-called “follow-the-money” approach (whereby site owners are identified via payments made by advertisers and similar business partners) and the reportedly newer “follow-the-hosting” angle.

Investigating site hosts has been a core anti-piracy strategy for many years so precisely what’s new about this recent effort isn’t clear. However, much is being made of the ability to discover the true location of sites that attempt to hide behind various anonymization techniques available via the cloud.

Whether a true breakthrough has been made is hard to decipher since local authorities have a tendency to be a little dramatic. Nevertheless, there can be no doubts over their commitment.

According to Fulvia Sarzana, a lawyer with the Sarzana and Partners law firm which specializes in Internet and copyright disputes, a total of 290 sites have been targeted by court injunctions in the past four months alone.

Back in November, a landmark action to block more than 150 sites involved in the unauthorized streaming of movies and sports took place following the signing of a mass injunction by a judge in Rome. It was the biggest single blocking action in Italy since measures began in 2008.

Then, in early February, authorities widened their net further still, with a somewhat unusual campaign targeting sites that offered unauthorized digital versions of dozens of national newspapers and magazines including Cosmopolitan and Vanity Fair.

With the latest blockades, Italy is a true front-runner among European site-blocking nations. With many hundreds of domains now the subject of an injunction, the country sits firmly among the top three blocking countries, alongside the UK and Portugal.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

CloudFlare Puts Pirate Sites on New IP Addresses, Avoids Cogent Blockade

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-puts-pirate-sites-on-new-ip-addresses-avoids-cogent-blockade-170215/

Last week the news broke that Cogent, which operates one of the largest Internet backbone networks, blackholed IP-addresses that were linked to several notorious sites including The Pirate Bay.

As a result of this action, people from all over the world were unable to get to their favorite download or streaming portals.

The blocking intervention is quite controversial, not least because the IP-addresses in question don’t belong to the sites themselves, but to the popular CDN provider CloudFlare.

While CloudFlare hasn’t publicly commented on the issue yet, it now appears to have taken countermeasures. A little while ago the company moved The Pirate Bay and many other sites such as Primewire, Popcorn-Time.se, and Torrentz.cd to a new set of IP-addresses.

As of yesterday, the sites in question have been assigned the IP-addresses 104.31.16.3 and 104.31.17.3, still grouped together. Most, if not all, of the sites are blocked by court order in the UK, so this is presumably done to prevent ISPs from overblocking ‘regular’ CloudFlare subscribers.

TPB accessible on the new CloudFlare IP-address

Since Cogent hasn’t blackholed the new addresses yet, the sites are freely accessible on their network once again. At the same time, the old CloudFlare IP-addresses remain blocked.

Old CloudFlare IP-addresses remain blocked

TorrentFreak spoke to the operator of one of the sites involved who said that he made no changes on his end. CloudFlare didn’t alert the site owner about the issue either.

We contacted CloudFlare yesterday asking for a comment on the situation, but the company could not give an official response at the time.

It seems likely that the change of IP-addresses is an intentional response from CloudFlare to bypass the blocking. The company has a reputation for fighting overreach and keeping its subscribers online, so it would be fitting.

The next question that comes to mind is will Cogent respond, and if so, how? Or has the underlying issue perhaps been resolved in another way?

If their original blockade was meant to block one or more of the sites involved, will they then also block the new IP-addresses? Only time will tell.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Why your compliance scan keeps failing

Post Syndicated from Christian Larsen original https://www.anchor.com.au/blog/2017/02/compliance-scan-failing/

At Anchor, a large number of our customers seek various certifications for their hosted properties. These commonly include PCI, IRAP and many others.

The business drivers for undertaking a certification programme vary between customers, but often involve the need to meet some form of regulatory or industry compliance requirement. Whilst it may not be the initial driver, the fundamental goal of many of these activities is to reduce business risk by improving security posture.

We’ve become accustomed to dealing with auditors to assist our customers in undertaking certification initiatives. Unfortunately, just as in any industry, the quality of service, analysis and rigour that these entities employ can vary wildly.

Automated scans

Auditors commonly use automated tooling as an initial mechanism when assessing the compliance of a hosted infrastructure against the certification that their customer is attempting to achieve. This is a perfectly valid approach; automation increases consistency and reduces incidences of error as well as effort expended.

These tools will often expose a number of non-compliances, many of which are perfectly valid in context of the certification against which they are testing. Unfortunately, in our experience, there are often a large number of invalid non-compliance items on these reports.

Many of these non-compliance items relate to the use of outdated software, but auditor assumptions as to what version numbers actually mean can often be faulty.

Keeping software up to date

Without doubt, outdated and unpatched software is a huge security risk. Keeping software up to date is one of the most basic tenets of any mature security posture. As software becomes more prolific in our lives, so do the number of bugs and potential security vulnerabilities that come with it.

The Common Vulnerabilities and Exposures system is the industry standard mechanism for tracking known security flaws in software. Each vulnerability is assigned an ID by the Mitre corporation, which can be later referenced by software developers and operators to assess their security exposure. Software vendors will use this data as input into their development processes to associate bug fixes with the next release that addresses them.

As a consequence, consumers of affected software can then reference the software change logs or documentation to identify at what point in time, and in what version, the software was fixed for the particular vulnerability they are concerned with. The more recent the vulnerability, the more likely it is that a more recent version of the software will address it.

With this in mind, a reasonable expectation is that to attain the latest security fixes, one must update to the latest available version of the affected software. Many auditor tools make the same assumption; unfortunately, this line of thought is naive.

Problems with updating software

One of the problems associated with updating to the latest available version of software is that vendors generally do not just release new security fixes. They’ll often (almost always) couple in new features, some of which will also change the functionality of said software.

It is these changes in functionality that represent a problem for many organisations. Updating and patching software becomes a monolithic and arduous task because it is difficult to test, validate and orchestrate at even small scales. Changes to functionality of interfaces can potentially break production services. The unfortunate consequence is that patching becomes an irregular task, which only serves to increase the organisation’s length of exposure to publicly known security vulnerabilities. From a business perspective, uptime will generally take precedence over security.

As a matter of policy, Anchor updates and patches all customer services on a weekly basis. For a large, heterogeneous fleet of services such as ours, given the prior constraints, this may seem like a mammoth task. Whilst not trivial at scale, it is actually not at all a disruptive activity for our customers. In fact, this author can count on one hand the number of incidents caused by Anchor’s patching practices, and all have been resolved within the bounds of the associated maintenance window.

Backporting

Anchor is able to achieve this record by taking advantage of others’ work. Wherever possible, the software we make available to customers is that provided and packaged by the operating system (OS) vendor. We rely heavily on the OS vendors and their incredible track record for releasing stable, reliable updates that address security concerns.

For many OS vendors, there exists a policy that software within a major release should remain stable; this means that there should be no or minimal changes to packaged software for the lifetime of the release. Red Hat Enterprise Linux (RHEL) is one such example with a stellar reputation. RHEL has a published application compatibility policy.

Stable software, however, should not be stagnant software. To ensure that the OS vendor’s packaged software remains secure, they will backport security fixes into their products from upstream software projects.

From Red Hat’s backporting documentation:

We use the term backporting to describe the action of taking a fix for a security flaw out of the most recent version of an upstream software package and applying that fix to an older version of the package we distribute.
When we backport security fixes, we:

  • identify the fixes and isolate them from any other changes
  • make sure the fixes do not introduce unwanted side effects
  • apply the fixes to our previously released versions

Red Hat and many other vendors will publicly document their security advisories so that users can assess the vulnerability of the software they have been provided. It is also possible to search by CVE identifier.
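
On an RPM-based system such as RHEL, one practical way to verify that a backported fix is present is to search the installed package’s changelog for the CVE identifier rather than trusting the version number. A rough sketch; the package name and CVE ID below are placeholders only:

    import subprocess

    def has_cve_fix(package: str, cve_id: str) -> bool:
        # The package changelog records backported fixes by CVE ID even
        # though the package's version number never changes.
        changelog = subprocess.run(
            ["rpm", "-q", "--changelog", package],
            capture_output=True, text=True, check=True,
        ).stdout
        return cve_id in changelog

    print(has_cve_fix("openssl", "CVE-2016-2107"))   # placeholder package and CVE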

By taking advantage of the OS vendor’s existing security practice, Anchor is able to regularly update our customer’s infrastructure whilst ensuring stability and security.

The caveat to this approach, however, is that the software versions listed as being in use register as vulnerable and outdated to many naive auditor tools.

Ignoring auditor advice

A common consequence of an initial audit is a request to update to the most recent version of the software identified, without regard for current security practices. A list of CVE IDs that scanning tools associate with the currently used software will also be supplied.

Again, from Red Hat’s backporting documentation:

“Backporting has a number of advantages for customers, but it can create confusion when it is not understood. Customers need to be aware that just looking at the version number of a package will not tell them if they are vulnerable or not.
Also, some security scanning and auditing tools make decisions about vulnerabilities based solely on the version number of components they find. This results in false positives as the tools do not take into account backported security fixes.”



It is at this stage that a dispute must be filed with the findings; a tiresome but necessary process. Some auditors will record current security practice as a compensating control.

Most of the time, we are successful in such discussions, but there are many times when an auditor will not accept ‘No’ for an answer.

Why not just update to the latest version?

An obvious solution to appease a difficult auditor is to simply heed their advice and update the identified software to the latest version provided by the upstream project. This can in actuality be a terrible decision that may reduce overall security in the long term.

When you deviate from the software provided by the operating system vendor, you no longer receive the benefits of their maintenance. The burden falls onto you to regularly update, test and deploy the software, dealing with any changes in functionality along the way.

This practice is difficult at any scale. As a consequence, it is very likely to become an infrequent activity that will result in fewer updates and a greater length of exposure to any known vulnerabilities.

Anchor will always recommend to customers that they stick to OS vendor provided software packages where possible. It increases security, stability and reliability whilst reducing ongoing maintenance overhead.

In conclusion

Sticking with OS vendor packaged software is not the only option. One of the consequences of sticking to packaged software is that there will be times when you do actually require new features in current releases that aren’t available in the installed package — Remember that the OS vendor’s objective is to maintain stability, not increase functionality. In these circumstances, a trade off may be required to achieve the desired business outcome. There’s nothing wrong with deviating from established practice if you are aware of the risks and willing to accept responsibility.

Certification programmes can be taxing on all involved, but predominantly lead to positive outcomes once completed. Auditors, whilst not often popular, can be a necessary part of the process and are ultimately there to assist wherever they can — despite common wisdom, they’re not the enemy.

It’s important to have a good understanding of your own security practices and the risks you are willing to accept. With this knowledge in hand, you’ll be well prepared to have a productive conversation with your auditor and, with some luck, survive certification with your sanity intact.

The post Why your compliance scan keeps failing appeared first on AWS Managed Services by Anchor.