A Million ‘Pirate’ Boxes Sold in the UK During The Last Two Years

Post Syndicated from Andy original https://torrentfreak.com/a-million-pirate-boxes-sold-in-the-uk-during-the-last-two-years-170919/

With the devices hitting the headlines on an almost weekly basis, it probably comes as no surprise that ‘pirate’ set-top boxes are quickly becoming public enemy number one with video rightsholders.

Typically loaded with the legal Kodi software but augmented with third-party addons, these often Android-based pieces of hardware drag piracy out of the realm of the computer savvy and into the living rooms of millions.

One of the countries reportedly most affected by this boom is the UK. Consumption of these devices among the general public is said to have reached epidemic proportions, and anecdotal evidence suggests that names like Kodi and Showbox are now household words.

Today we have another report to digest, this time from the Federation Against Copyright Theft, or FACT as they’re often known. Titled ‘Cracking Down on Digital Piracy,’ the report provides a general overview of the piracy scene, tackling well-worn topics such as how release groups and site operators work, among others.

The report is produced by FACT after consultation with the Police Intellectual Property Crime Unit, Intellectual Property Office, Police Scotland, and anti-piracy outfit Entura International. It begins by noting that the vast majority of the British public aren’t involved in the consumption of infringing content.

“The most recent stats show that 75% of Brits who look at content online abide by the law and don’t download or stream it illegally – up from 70% in 2013. However, that still leaves 25% who do access material illegally,” the report reads.

The report quickly heads to the topic of ‘pirate’ set-top boxes which is unsurprising, not least due to FACT’s current focus as a business entity.

While it often positions itself alongside government bodies (which no doubt boosts its status with the general public), FACT is a private limited company serving The Premier League, another company desperate to stamp out the use of infringing devices.

Nevertheless, it’s difficult to argue with some of the figures cited in the report.

“At a conservative estimate, we believe a million set-top boxes with software added to them to facilitate illegal downloads have been sold in the UK in the last couple of years,” the Intellectual Property Office reveals.

Interestingly, given a growing tech-savvy public, FACT’s report notes that ready-configured boxes are increasingly coming into the country.

“Historically, individuals and organized gangs have added illegal apps and add-ons onto the boxes once they have been imported, to allow illegal access to premium channels. However more recently, more boxes are coming into the UK complete with illegal access to copyrighted content via apps and add-ons already installed,” FACT notes.

“Boxes are often stored in ‘fulfillment houses’ along with other illegal electrical items and sold on social media. The boxes are either sold as one-off purchases, or with a monthly subscription to access paid-for channels.”

While FACT press releases regularly blur the lines when people are prosecuted for supplying set-top boxes in general, it’s important to note that there are essentially two kinds of products on offer to the public.

The first relies on Kodi-type devices which provide ongoing free access to infringing content. The second involves premium IPTV subscriptions, which are a whole different level of criminality. Separating the two when reading news reports can be extremely difficult, but it’s hugely important to recognize the difference when assessing the kinds of sentences set-top box suppliers are receiving in the UK.

Nevertheless, FACT correctly highlights that the supply of both kinds of product is on the increase, with various parties recognizing the commercial opportunities.

“A significant number of home-grown British criminals are now involved in this type of crime. Some of them import the boxes wholesale through entirely legal channels, and modify them with illegal software at home. Others work with sophisticated criminal networks across Europe to bring the boxes into the UK.

“They then sell these boxes online, for example through eBay or Facebook, sometimes managing to sell hundreds or thousands of boxes before being caught,” the company adds.

The report notes that in some cases the sale of infringing set-top boxes is a cottage industry, with suppliers often working on their own or with small groups of friends and family. Inevitably, perhaps, larger-scale operations are reported to be part of networks with connections to other kinds of crime, such as dealing in drugs.

“In contrast to drugs, streaming devices provide a relatively steady and predictable revenue stream for these criminals – while still being lucrative, often generating hundreds of thousands of pounds a year, they are seen as a lower risk activity with less likelihood of leading to arrest or imprisonment,” FACT reports.

While there’s certainly the potential to earn large sums from ‘pirate’ boxes and premium IPTV services, operating on the “hundreds of thousands of pounds a year” scale in the UK would attract a lot of unwanted attention. That’s not to say it isn’t attracting that attention already, however.

Noting that digital piracy has evolved hugely over the past three or four years, the report says that the cases investigated so far are just the “tip of the iceberg” and that many other cases are in the early stages and will only become known to the public in the months and years ahead.

Indeed, the Intellectual Property Office hints that some kind of large-scale enforcement action may be on the horizon.

“We have identified a significant criminal business model which we have discussed and shared with key law enforcement partners. I can’t go into detail on this, but as investigations take their course, you will see the scale,” an IPO spokesperson reveals.

While details are necessarily scarce, a source familiar with this area told TF that he would be very surprised if the targets aren’t the growing handful of commercial UK-based IPTV re-sellers who offer full subscription TV services for a few pounds per month.

“They’re brazen. Watch this space,” he said.

FACT’s full report, Cracking Down on Digital Piracy, can be downloaded here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TVAddons: A Law Firm is Not Spying on Our Kodi Users

Post Syndicated from Andy original https://torrentfreak.com/tvaddons-a-law-firm-is-not-spying-on-our-kodi-users-170918/

A few months ago, TVAddons was without doubt the leading repository for third-party Kodi addons.

During March, the platform had 40 million unique users connected to the site’s servers, together transferring a petabyte of addons and updates.

In June, however, things started to fall apart. After news broke that the site was being sued in a federal court in Texas, TVAddons disappeared. It was assumed these events were connected, but it later transpired the platform was being sued in Canada as well, and that was the true reason for the downtime.

While it’s easy to be wise after the event, in hindsight it might’ve been better for the platform to go public about the Canadian matter quite a bit sooner than it did. Of course, there are always legal considerations that prevent early disclosure, but when popular sites disappear into a black hole, two plus two can quickly equal five when fed through the web’s rumor machine.

Things weren’t helped in July when it was discovered that the site’s former domains had been handed over to a Canada-based law firm. Again, no official explanation was forthcoming and again, people became concerned.

If this had been a plaintiff’s law firm, people would’ve had good reason to worry, since it would have been technically possible to spy on TVAddons’ users. However, as the truth began to filter out and court papers became available, it soon became crystal clear that simply wasn’t the case.

The bottom line, which is backed up by publicly available court papers, is that the law firm holding the old TVAddons domains is not the law firm suing TVAddons. Instead, it was appointed by the court to hold TVAddons’ property until the Canadian lawsuit is brought to a conclusion, whenever that might be.

“They have a legal obligation to protect our property at all cost, and prevent anyone (especially the law firm who is suing us) from gaining access to them,” says TVAddons.

“The law firm who is holding them is doing nothing more than protecting our property until the time that it will finally be returned after the appeal takes place.”

Unfortunately, assurances provided by TVAddons and information published by the court itself haven’t been enough to stop some people fearing the worst. While the facts have plenty of support on Twitter and Facebook, there also appears to be an element who would like to see TVAddons fail in its efforts to re-establish itself.

Only time will tell who will win that battle but in the meantime, TVAddons has tried to cover all the bases in an update post on its blog.

Ukraine Faces Call for US Trade Sanctions over Online Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/ukraine-faces-call-us-trade-sanctions-over-online-piracy-170918/

The International Intellectual Property Alliance (IIPA) is recommending that the U.S. Government should suspend Ukraine’s GSP trade benefits, claiming that the country doesn’t do enough to protect the interests of copyright holders.

Last year Ukraine enjoyed $53.7 million in unilateral duty-free benefits in the US, while US companies suffered millions of dollars in losses in Ukraine due to online piracy, they argue.

The IIPA, which includes a wide range of copyright groups including the MPAA, RIAA, BSA and ESA, characterizes the country as a safe harbor for pirate sites. While physical piracy was properly addressed ten years ago after a previous sanction, digital piracy remains rampant.

One of the main problems is that local hosting companies are offering their services to a wide variety of copyright-infringing websites. Without proper enforcement, more and more websites have moved their services there.

“By allowing these problems to fester for years, weak digital enforcement has resulted in an exponential increase in the number of illegal peer-to-peer (‘P2P’) hosting and website-based Internet piracy sites, including some of the world’s largest BitTorrent sites located in Ukraine,” IIPA writes.

“Some Internet pirates have purposefully moved their servers and operations to Ukraine in the past few years to take advantage of the current lawless situation. Many of these illegal services and sites target audiences throughout Europe and the United States.”

The copyright holders highlight the defunct ExtraTorrent site as an example but note that there are also many other torrent sites, pirate streaming sites, cyberlockers, and linking sites in Ukraine.

While pirate sites are hosted all over the world, the problem is particularly persistent in Ukraine because many local hosting companies fail to process takedown requests. This, despite repeated calls from copyright holders to work with them.

“Many of the websites offering pirated copyright materials are thriving in part because of the support of local ISPs,” IIPA writes.

“The copyright industries have, for years, sought private agreements with ISPs to establish effective mechanisms to take down illegal websites and slow illegal P2P traffic. In the absence of legislation, however, these voluntary efforts have generally not succeeded, although some ISPs will delete links upon request.”

In order to make real progress, the copyright holders call for new legislation to hold Internet services accountable and to make it easier to come after pirate sites that are hosted in Ukraine.

“Legislation is needed to institute proper notice and takedown provisions, including a requirement that service providers terminate access to individuals (or entities) that have repeatedly engaged in infringement, and the retention of information for law enforcement, as well as to provide clear third party liability regarding ISPs.”

In addition to addressing online piracy, IIPA further points out that the collecting societies in Ukraine are not functioning properly. At the moment there are 18 active and competing organizations, creating a chaotic situation where rightsholders are not properly rewarded, they suggest.

IIPA recommends that the U.S. Government accept its petition and suspend or withdraw Ukraine’s benefits until the country takes proper action.

Ukraine’s Government, for its part, informs the US Government that progress is being made. There are already several new laws in the works to improve intellectual property protection. The issue is one of the Government’s “key priorities,” they state, hoping to avert any sanctions.

IIPA’s full submission to the US Trade Representative is available here (pdf).

Kodi ‘Trademark Troll’ Has Interesting Views on Co-Opting Other People’s Work

Post Syndicated from Andy original https://torrentfreak.com/kodi-trademark-troll-has-interesting-views-on-co-opting-other-peoples-work-170917/

The Kodi team, operating under the XBMC Foundation, announced last week that a third-party had registered the Kodi trademark in Canada and was using it for their own purposes.

That person was Geoff Gavora, who had previously been in communication with the Kodi team, expressing how important the software was to his sales.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen said.

According to the Kodi team, Gavora began delisting Amazon ads placed by companies selling Kodi-enabled products, based on infringement of Gavora’s trademark rights.

“[O]nly Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” Betzen explained.

Predictably, Gavora’s move is being viewed as highly controversial, not least since he’s effectively claiming licensing rights in Canada over what should be a free and open source piece of software. TF obtained one of the notices Amazon sent to a seller of a Kodi-enabled device in Canada, following a complaint from Gavora.

Take down Kodi from Amazon, or pay Gavora

So who is Geoff Gavora and what makes him tick? Thanks to a 2016 interview with Ali Salman of the Rapid Growth Podcast, we have a lot of information from the horse’s mouth.

It all began in 2011, when Gavora began jailbreaking Apple TVs, loading them with XBMC, and selling them to friends.

“I did it as a joke, for beer money from my friends,” Gavora told Salman.

“I’d do it for $25 to $50 and word of mouth spread that I was doing this so we could load on this media center to watch content and online streams from it.”

Intro to the interview with Ali Salman

Soon, however, word of mouth caused the business to grow wings, Gavora claims.

“So they started telling people and I start telling people it’s $50, and then I got so busy so I start telling people it’s $75. I’m getting too busy with my work and with this. And it got to the point where I was making more jailbreaking these Apple TVs than I was at my career, and I wasn’t very happy at my career at that time.”

Jailbreaking was supposed to be a side thing to tide Gavora over until another job came along, but he had a problem – he didn’t come from a technical background. Nevertheless, what Gavora did have was a background in marketing and with a decent knowledge of how to succeed in customer service, he majored on that front.

Gavora had come to learn that while people wanted his devices, they weren’t very good at operating XBMC (Kodi’s former name) which he’d loaded onto them. With this in mind, he began offering web support and phone support via a toll-free line.

“I started receiving calls from New York, Dallas, and then Australia, Hong Kong. Everyone around the world was calling me and saying ‘we hear there’s some kid in Calgary, some young child, who’s offering tech support for the Apple TV’,” Gavora said.

But with things apparently going well, a wrench was soon thrown into the works when Apple released the third variant of its Apple TV and Gavora was unable to jailbreak it. This prompted him to market his own Linux-based set-top device, and his business, Raw-Media, grew from there.

While it seems likely that so-called ‘Raw Boxes’ were doing reasonably well with consumers, what was the secret of their success? Podcast host Salman asked Gavora for his ‘networking party 10-second pitch’, and the Canadian was happy to oblige.

“I get this all the time actually. I basically tell people that I sell a box that gives them free TV and movies,” he said.

This was met with laughter from the host, to which Gavora added, “That’s sort of the three-second pitch and everyone’s like ‘Oh, tell me more’.”

“Who doesn’t like free TV, come on?” Salman responded. “Yeah exactly,” Gavora said.

The image below, taken from a January 2016 YouTube unboxing video, shows one of the products sold by Gavora’s company.

Raw-Media Kodi Box packaging (note Kodi logo)

Bearing in mind the offer of free movies and TV, the tagline on the box, “Stop paying for things you don’t want to watch, watch more free tv!” initially looks quite provocative. That being said, both the device and Kodi are perfectly capable of playing plenty of legal content from free sources, so there’s no problem there.

What is surprising, however, is that the unboxing video shows the device being booted up, apparently already loaded with infamous third-party Kodi addons including PrimeWire, Genesis, Icefilms, and Navi-X.

The unboxing video showing the Kodi setup

Given that Gavora has registered the Kodi trademark in Canada and prints the official logo on his packaging, this runs counter to the official Kodi team’s aggressive stance towards boxes ready-configured with what they categorize as banned addons. Matters are compounded when one visits the product support site.

As seen in the image below, Raw-Media devices are delivered with a printed card in the packaging informing people where to get the after-sales services Gavora says he built his business upon. The cards advise people to visit No-Issue.ca, a site set up to offer text and video-based support to set-top box buyers.

No-Issue.ca (which is hosted on the same server as raw-media.ca and claimed officially as a sister site here) now redirects to No-Issue.is, as per a 2016 announcement. It has a fairly bland forum but the connected tutorial videos, found on No Issue’s YouTube channel, offer a lot more spice.

Registered under Gavora’s online nickname Gombeek (which is also used on the official Kodi forums), the channel is full of videos detailing how to install and use a wide range of addons.

The No-issue YouTube Channel tutorials

But while supplying tutorial videos is one thing, providing the actual software addons is another. Surprisingly, No-Issue does that too. Filed away under the URL http://solved.no-issue.is/ is a Kodi repository which distributes a wide range of addons, including many that specialize in infringing content, according to the Kodi team.

The No-Issue repository

A source familiar with Raw-Media’s devices informs TF that they’re no longer delivered with addons installed. However, tools hosted on No-Issue.is automate the installation process for the customer, with unlisted YouTube Videos (1,2) providing the instructions.

XBMC Foundation President Nathan Betzen says that situation isn’t ideal.

“If that really is his repo it is disappointing to see that Gavora is charging a fee or outright preventing the sale of boxes with Kodi installed that do not include infringing add-ons, while at the same time he is distributing boxes himself that do include the infringing add-ons like this,” Betzen told TF.

While the legality of this type of service is yet to be properly tested in Canada and may yet emerge as entirely permissible under local law, Gavora himself previously described his business as operating in a gray area.

“If I could go back in time four years, I would’ve been more aggressive in the beginning because there was a lot of uncertainty being in a gray market business about how far I could push it,” he said.

“I really shouldn’t say it’s a gray market because everything I do is completely above board, I just felt it was more gray market so I was a bit scared,” he added.

But, legality aside (which will be determined in due course through various cases 1,2), the situation is still problematic when it comes to the Kodi trademark.

The official Kodi team indicate they don’t want to be associated with any kind of questionable addon or even tutorials for the same. Nevertheless, several of the addons installed by No-Issue (including PrimeWire, cCloud TV, Genesis, Icefilms, MoviesHD, MuchMovies and Navi-X, to name a few), are present on the Kodi team’s official ban list.

The fact remains, however, that Gavora successfully registered the trademark in Canada (one month later it was transferred to a brand new company at the same address), and Kodi now have no control over the situation in the country, short of a settlement or some kind of legal action.

Kodi matters aside, though, we get more insight into Gavora’s attitudes towards intellectual property after learning that he studied gemology and jewelry at school. He’s a long-standing member of jewelry discussion forum Ganoskin.com (his profile links to Gavora.com, a domain Gavora owns, as per information supplied by Amazon).

Things get particularly topical in a 2006 thread titled “When your work gets ripped“. The original poster asked how people feel when their jewelry work gets copied and Gavora made his opinions known.

“I think that what most people forget to remember is that when a piece from Tiffany’s or Cartier is ripped off or copied they don’t usually just copy the work, they will stamp it with their name as well,” Gavora said.

“This is, in fact, fraud and they are deceiving clients into believing they are purchasing genuine Tiffany’s or Cartier pieces. The client is in fact more interested in purchasing from an artist than they are the piece. Laying claim to designs (unless a symbol or name is involved) is outrageous.”

Unless that ‘design’ is called Kodi, of course, then it’s possible to claim it as your own through an administrative process and begin demanding licensing fees from the public. That being said, Gavora does seem to flip back and forth a little, later suggesting that being copied is sometimes ok.

“If someone copies your design and produces it under their own name, I think one should be honored and revel in the fact that your design is successful and has caused others to imitate it and grow from it,” he wrote.

“I look forward to the day I see one of my original designs copied, that is the day I will know my design is a success.”

From their public statements, this opinion isn’t shared by the Kodi team in respect of their product. Despite the Kodi name, software and logo being all their own work, they now find themselves having to claw back rights in Canada, in order to keep the product free in the region. For now, however, that seems like a difficult task.

TorrentFreak wrote to Gavora and asked him why he felt the need to register the Kodi trademark, but we received no response. That means we didn’t get the chance to ask him why he’s taking down Amazon listings for other people’s devices, or about something else that came up in the podcast.

“My biggest weakness, I guess, is that I’m too ethical about how I do my business,” he said, referring to how he deals with customers.

Only time will tell how that philosophy will affect Gavora’s attitudes to trademarks and people’s desire not to be charged for using free, open source software.

UK Copyright Trolls Cite Hopeless Case to Make People Pay Up

Post Syndicated from Andy original https://torrentfreak.com/uk-copyright-trolls-cite-hopeless-case-to-make-people-pay-up-170916/

Our coverage of Golden Eye International dates back more than five years. Much like similar companies in the copyright troll niche, the outfit monitors BitTorrent swarms, collects IP addresses, and then heads off to court to obtain alleged pirates’ identities.

From there it sends letters threatening legal action, unless recipients pay a ‘fine’ of hundreds of pounds to settle an alleged porn piracy case. While some people pay up, others refuse to do so on the basis they are innocent, the ISP bill payer, or simply to have their day in court. Needless to say, a full-on court battle on the merits is never on the agenda.

Having gone quiet for an extended period of time, it was assumed that Golden Eye had outlived its usefulness as a ‘fine’ collection outfit. Just lately, however, there are signs that the company is having another go at reviving old cases against people who previously refused to pay.

A post on Slyck forums, which runs a support thread for people targeted by trolls, reveals the strategy.

“I dealt with these Monkeys last year. I spent 5 weeks practically arguing with them. They claim they have to prove it based on the balance of probability’s [sic]. I argue that they actually have to prove it was me,” ‘Matt’ wrote in August.

“It wasn’t me, and despite giving them reasonable doubt it wasn’t me. (I’m Gay… why would I be downloading straight porn?) They still persuaded it, trying to dismiss anything that cast any doubt on their claim. The emails finished how I figured they would…. They were going to send court documentation. It never arrived.”

After months of silence, at the end of August this year ‘Matt’ says Golden Eye got in touch again, suggesting that a conclusion to another copyright case might encourage him to cough up. He says that Golden Eye contacted him saying that someone settled out of court with TCYK, another copyright troll, for £1,000.

“My thoughts…Idiots and doubt it,” ‘Matt’ said. “Honestly, I almost cried I thought I had got rid of these trolls and they are back for round two.”

This wasn’t an isolated case. Another recipient of a Golden Eye threat also revealed getting contacted by the company, also with fresh pressure to pay.

“You may be interested to know that a solicitor, acting on behalf of Robert Kemble in a claim similar to ours but brought by TCYK LLC, entered into an agreement to settle the court case by paying £1,000,” Golden Eye told the individual.

“In view of the agreement reached in the Kemble case, we would invite you to reconsider your position as to whether you would like to reach settlement with us. We would point out, that, despite the terms of settlement in the Kemble case, we remain prepared to stand by our original offer of settlement with you, that is payment of £500.00.”

For someone who last corresponded with Golden Eye in January after repeated denials, new contact from the company would be worrying. It certainly affected this person negatively.

“I am now at a loss and don’t know what more I can do. I do not want to settle this, but also I cannot afford a solicitor. Any further advice would be gratefully appreciated as [i’m] now having panic attacks,” the person wrote.

After citing the Robert Kemble case, one might think that Golden Eye would be good enough to explain the full situation. They didn’t – so let’s help them a little bit in that respect, to help their targets make an informed decision.

Robert Kemble was a customer of Sky Broadband. TCYK, in conjunction with UK-based Hatton and Berkeley, sent a letter to Kemble in July 2015 asking him to pay a ‘fine’ for alleged Internet piracy of the Robert Redford movie The Company You Keep, way back in April 2013.

So far, so ordinary – but here’s the big deal.

Unlike the people being re-targeted by Golden Eye this time around, Kemble admitted in writing that infringement had been going on via his account.

In a response, Kemble told TCYK that he was shocked to receive their letter but after speaking to people in his household, had discovered that a child had been downloading films. He didn’t say that the Redford film was among them but he apologized to the companies all the same. Clearly, that wasn’t going to be enough.

In August 2015, TCYK wrote back to Kemble, effectively holding him responsible for other people’s actions while demanding a settlement of £600 to be paid to a third-party company, Ranger Bay Limited.

“The child who is responsible for the infringement should sign the undertakings in our letter to you. Please when replying specify clearly on the undertakings the child’s full name and age,” the company later wrote. Nice.

What took place next was a round of letter tennis between Kemble’s solicitor and those acting for TCYK, with the latter insisting that Kemble had already admitted infringement (or authorizing the same) and demanding around £2,000 to settle the case at this later stage.

With no settlement forthcoming, TCYK demanded £5,000 in the small claims court.

“The Defendant has admitted that his internet address has been used to infringe the Claimant’s copyright whereby, through the Defendant’s licencees’ use of the Defendant’s internet address, he acquired the Work and then communicated the Work in a digital form via the internet to the public without the license or consent of the Claimant,” the TCYK claim form reads.

TorrentFreak understands that the court process that followed didn’t center on the merits of the infringement case, but procedural matters over how the case was handled. On this front, Kemble failed in his efforts to have the case – which was heard almost a year ago – decided in his favor.

Now, according to Golden Eye at least, Kemble has settled with TCYK for £1,000, which is just £300 more than their final pre-court offer. Hardly sounds like good value for money.

The main point, though, is that this case wouldn’t have gotten anywhere near a court if Kemble hadn’t admitted liability of sorts in the early stages. This is a freak case in all respects and has no bearing on anyone’s individual case, especially those who haven’t admitted liability.

So, for people getting hounded again by Golden Eye now, remember the Golden Rule. If you’re innocent, by all means tell them so, and stick to your guns. But tell them anything else on top at your peril, or risk having it used against you.

MetalKettle Addon Repository Vulnerable After GitHub ‘Takeover’

Post Syndicated from Ernesto original https://torrentfreak.com/metalkettle-after-github-takeover-170915/

A few weeks ago MetalKettle, one of the most famous Kodi addon developers of recent times, decided to call it quits.

Worried about potential legal risks, he saw no other option than to halt all development of third-party Kodi addons.

Soon after this announcement, the developer proceeded to remove the GitHub account which was used to distribute his addons. However, he didn’t realize that this might not have been the best decision.

As it turns out, GitHub allows outsiders to re-register names of deleted accounts. While this might not be a problem in most cases, it can be disastrous when the accounts are connected to Kodi add-ons that are constantly pinging for new updates.

In essence, it means that the person who re-registered the GitHub account can load content onto the boxes of people who still have the MetalKettle repo installed. Quite a dangerous prospect, something MetalKettle realizes as well.
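To see why a re-registered account is so dangerous, it helps to look at how a Kodi repository addon works under the hood. The sketch below is hypothetical (the addon id, file layout, and URLs are illustrative, not MetalKettle’s actual files): a repo addon is little more than an XML file whose update URLs embed a GitHub username, so whoever controls that username controls what installed devices fetch next.

```python
# Hypothetical sketch of a Kodi repository addon definition. Devices with
# the repo installed periodically poll the URLs below for addon updates;
# every URL embeds the GitHub username, so a re-registered account can
# serve arbitrary content to those devices.
import xml.etree.ElementTree as ET

REPO_ADDON_XML = """\
<addon id="repository.metalkettle" name="MetalKettle Repo" version="1.0.0">
  <extension point="xbmc.addon.repository" name="MetalKettle Repo">
    <info>https://raw.githubusercontent.com/metalkettle/repo/master/addons.xml</info>
    <checksum>https://raw.githubusercontent.com/metalkettle/repo/master/addons.xml.md5</checksum>
    <datadir zip="true">https://raw.githubusercontent.com/metalkettle/repo/master/zips/</datadir>
  </extension>
</addon>
"""

def polled_urls(addon_xml):
    """Return every URL a device with this repo installed will poll."""
    root = ET.fromstring(addon_xml)
    extension = root.find("extension")
    return [child.text.strip() for child in extension]

for url in polled_urls(REPO_ADDON_XML):
    print(url)
```

Note that nothing in this mechanism verifies *who* owns the GitHub account today; the checksum file lives under the same username as the addon list, so it offers no protection against an account takeover.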

“Someone has re-registered metalkettle on github. So in theory could pollute any devices with the repo still installed,” he warned on Twitter.

“Warning : if any users have a metalkettle repo installed on their systems or within a build – please delete ASAP,” he added.

MetalKettle warning

It’s not clear what the intentions of the new MetalKettle user are on GitHub, if he or she has any at all. But, people should be very cautious and probably remove it from their systems.

The real MetalKettle, meanwhile, alerted TVAddons to the situation and they have placed the repository on their Indigo blacklist of banned software. This effectively disables the repository on devices with Indigo installed.

GitHub, for its part, may want to reconsider its removal policy. Perhaps it's smarter not to make old usernames available for registration, at least not for a while, as this is clearly a vulnerability.

This is also shown by another Kodi repo controversy that appeared earlier today. Another GitHub account that was reportedly deleted earlier resurfaced today, pushing a new version of the Exodus addon and other sources.

According to some, the GitHub account is operated by the original Exodus developers and perfectly safe, but others warn that the name was re-registered in bad faith.


Backblaze’s Upgrade Guide for macOS High Sierra

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/macos-high-sierra-upgrade-guide/

High Sierra

Apple introduced macOS 10.13 “High Sierra” at its 2017 Worldwide Developers Conference in June. On Tuesday, we learned we don’t have long to wait — the new OS will be available on September 25. It’s a free upgrade, and millions of Mac users around the world will rush to install it.

We understand. A new OS from Apple is exciting. But please, before you upgrade, we want to remind you to back up your Mac. You want your data to be safe from unexpected problems that can happen during the upgrade. We do, too. To make that easier, Backblaze offers this macOS High Sierra upgrade guide.

Why Upgrade to macOS 10.13 High Sierra?

High Sierra, as the name suggests, is a follow-on to the previous macOS, Sierra. Its major focus is the base OS itself, with significant improvements to the file system, video, graphics, and virtual/augmented reality that will support new capabilities in the future.

But don’t despair; there are also outward improvements that will be readily apparent to everyone when they boot the OS for the first time. We’ll cover both the inner and outer improvements coming in this new OS.

Under the Hood of High Sierra

APFS (Apple File System)

Apple has been rolling out its first file system upgrade for a while now. It’s already in iOS: now High Sierra brings APFS to the Mac. Apple touts APFS as a new file system optimized for Flash/SSD storage and featuring strong encryption, better and faster file handling, safer copying and moving of files, and other improved file system fundamentals.

We went into detail about the enhancements and improvements that APFS has over the previous file system, HFS+, in an earlier post. Many of these improvements, including enhanced performance, security and reliability of data, will provide immediate benefits to users, while others provide a foundation for future storage innovations and will require work by Apple and third parties to support in their products and services.

Most of us won’t notice these improvements, but we’ll benefit from better, faster, and safer file handling, which I think all of us can appreciate.

Video

High Sierra includes High Efficiency Video Encoding (HEVC, aka H.265), which preserves better detail and color while also introducing improved compression over H.264 (MPEG-4 AVC). Even existing Macs will benefit from the HEVC software encoding in High Sierra, but newer Mac models include HEVC hardware acceleration for even better performance.

MacBook Pro

Metal 2

macOS High Sierra introduces Metal 2, the next generation of Apple’s Metal graphics API, which was launched three years ago. Apple claims that Metal 2 provides up to 10x better performance in key areas. It provides near-direct access to the graphics processor (GPU), enabling the GPU to take control over key aspects of the rendering pipeline. Metal 2 will enhance the Mac’s capability for machine learning, and is the technology driving the new virtual reality platform on Macs.

audio video editor screenshot

Virtual Reality

We’re about to see an explosion of virtual reality experiences on both the Mac and iOS thanks to High Sierra and iOS 11. Content creators will be able to use apps like Final Cut Pro X, Epic Unreal 4 Editor, and Unity Editor to create fully immersive worlds that will revolutionize entertainment and education and have many professional uses, as well.

Users will want the new iMac with Retina 5K display or the upcoming iMac Pro to enjoy them, or any supported Mac paired with the latest external GPU and VR headset.

iMac and HTC virtual reality player

Outward Improvements

Siri

Siri logo

Expect a more natural voice from Siri in High Sierra. She or he will be less robotic, with greater expression and use of intonation in speech. Siri will also learn more about your preferences in things like music, helping you choose music that fits your taste and putting together playlists expressly for you. Expect Siri to be able to answer your questions about music-related trivia, as well.

Siri:  what does “scaramouche” refer to in the song Bohemian Rhapsody?

Photos

HD MacBook Pro screenshot

Photos has been redesigned with a new layout and new tools. A redesigned Edit view includes new tools for fine-tuning color and contrast and making adjustments within a defined color range. Some fun elements for creating special effects and memories have also been added. Photos now works with external apps such as Photoshop and Pixelmator. Compatibility with third-party extensions adds printing and publishing services to help get your photos out into the world.

Safari

Safari logo

Apple claims that Safari in High Sierra is the world’s fastest desktop browser, outperforming Chrome and other browsers in a range of benchmark tests. They’ve also added autoplay blocking for those pesky videos that play without your permission and tracking blocking to help protect your privacy.

Can My Mac Run macOS High Sierra 10.13?

All Macs introduced in mid 2010 or later are compatible. MacBook and iMac computers introduced in late 2009 are also compatible. You’ll need OS X 10.7.5 “Lion” or later installed, along with at least 2 GB RAM and 8.8 GB of available storage to manage the upgrade.
Some features of High Sierra require an internet connection or an Apple ID. You can check to see if your Mac is compatible with High Sierra on Apple’s website.
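If you'd rather check from the command line, a rough sketch is below. The `sw_vers` and `df` commands are standard on macOS; the version comparison is a generic dotted-number compare, not an Apple-provided check.

```shell
#!/bin/sh
# Rough pre-upgrade self-check; a sketch, not Apple's official tool.

# True (exit 0) if version $1 >= version $2, comparing dotted numbers.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)" = "$2" ]
}

# On a real Mac, feed in live values:
#   current=$(sw_vers -productVersion)
#   free_gb=$(df -g / | awk 'NR==2 {print $4}')
current="${1:-10.12.6}"   # sample value for illustration
if version_ge "$current" "10.7.5"; then
    echo "macOS $current meets the 10.7.5 minimum"
else
    echo "macOS $current is too old to upgrade directly"
fi
```

The same `version_ge` helper works for any dotted version string, so you can reuse it to check app versions against a developer's stated High Sierra minimum.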

Conquering High Sierra — What Do I Do Before I Upgrade?

Back Up That Mac!

It’s always smart to back up before you upgrade the operating system or make any other crucial changes to your computer. Upgrading your OS is a major change to your computer, and if anything goes wrong…well, you don’t want that to happen.

iMac backup screenshot

We recommend the 3-2-1 Backup Strategy to make sure your data is safe. What does that mean? Have three copies of your data. There’s the “live” version on your Mac, a local backup (Time Machine, another copy on a local drive or other computer), and an offsite backup like Backblaze. No matter what happens to your computer, you’ll have a way to restore the files if anything goes wrong. Need help understanding how to back up your Mac? We have you covered with a handy Mac backup guide.
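As a concrete (and deliberately tiny) sketch of the local half of 3-2-1, the script below copies a folder to a second location and verifies the copy by comparing per-file checksums. The paths and demo data are made up; the offsite copy is what a client like Backblaze handles for you.

```shell
#!/bin/sh
# Sketch: make a second local copy of a folder and verify it bit-for-bit.
set -e

# One line per file: POSIX cksum output, sorted by path.
checksum_tree() {
    (cd "$1" && find . -type f -exec cksum {} + | sort -k3)
}

# Copy src into dest, then confirm both trees have identical checksums.
backup_and_verify() {
    src="$1"; dest="$2"
    mkdir -p "$dest"
    cp -Rp "$src/." "$dest/"
    [ "$(checksum_tree "$src")" = "$(checksum_tree "$dest")" ]
}

# Demo on throwaway data:
SRC=$(mktemp -d); DEST=$(mktemp -d)/copy
echo "important report" > "$SRC/report.txt"
backup_and_verify "$SRC" "$DEST" && echo "local copy verified"
```

Real backup software does far more (versioning, scheduling, encryption), but the verify step is the point: a copy you've never checked isn't really a backup.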

Check for App and Driver Updates

This is when it helps to do your homework. Check with app developers or device manufacturers to find if their apps and devices have updates to work with High Sierra. Visit their websites or use the Check for Updates feature built into most apps (often found in the File or Help menus).

If you’ve downloaded apps through the Mac App Store, make sure to open them and click on the Updates button to download the latest updates.

Updating can be hit or miss when you’ve installed apps that didn’t come from the Mac App Store. To make it easier, visit the MacUpdate website. MacUpdate tracks changes to thousands of Mac apps.


Will Backblaze work with macOS High Sierra?

Yes. We’ve taken care to ensure that Backblaze works with High Sierra. We’ve already enhanced our Macintosh client to report the space available on an APFS container, and we plan to add support for more APFS capabilities that will enhance Backblaze in the future.

Of course, we’ll watch Apple’s release carefully for any last minute surprises. We’ll officially offer support for High Sierra once we’ve had a chance to thoroughly test the release version.


Set Aside Time for the Upgrade

Depending on the speed of your Internet connection and your computer, upgrading to High Sierra will take some time. You’ll be able to use your Mac straightaway after answering a few questions at the end of the upgrade process.

If you’re going to install High Sierra on multiple Macs, a time-and-bandwidth-saving tip came from a Backblaze customer who suggested copying the installer from your Mac’s Applications folder to a USB Flash drive (or an external drive) before you run it. The installer routinely deletes itself once the upgrade process is completed, but if you grab it before that happens you can use it on other computers.
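A minimal sketch of that tip, wrapped in a small function so the copy only happens if the installer is actually present. The application name below is the expected one for High Sierra, and the destination volume is hypothetical:

```shell
#!/bin/sh
# Sketch: stash the installer app before the upgrade deletes it.
stash_installer() {
    src="$1"; dest="$2"
    if [ -d "$src" ]; then
        cp -Rp "$src" "$dest/" && echo "installer saved to $dest"
    else
        echo "installer not found at $src" >&2
        return 1
    fi
}

# Typical invocation on a real Mac (paths may differ on your system):
# stash_installer "/Applications/Install macOS High Sierra.app" "/Volumes/MyUSB"
```

Run it after the download finishes but before you start the upgrade, since the installer removes itself once the upgrade completes.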

Where Do I get High Sierra?

Apple says that High Sierra will be available on September 25. Like other Mac operating system releases, Apple offers macOS 10.13 High Sierra for download from the Mac App Store, which is built into every Mac. As long as your Mac is supported and running OS X 10.7.5 “Lion” (released in 2012) or later, you can download and run the installer. It’s free. Thank you, Apple.

Better to be Safe than Sorry

Back up your Mac before doing anything to it, and make Backblaze part of your 3-2-1 backup strategy. That way your data is secure. Even if you have to roll back after an upgrade, or if you run into other problems, your data will be safe and sound in your backup.

Tell us How it Went

Are you getting ready to install High Sierra? Still have questions? Let us know in the comments. Tell us how your update went and what you like about the new release of macOS.

And While You’re Waiting for High Sierra…

While you’re waiting for Apple to release High Sierra on September 25, you might want to check out these other posts about using your Mac and Backblaze.

The post Backblaze’s Upgrade Guide for macOS High Sierra appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches/fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as AWS access and secret key, number of master and worker nodes, and instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows using the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster of up to 10 nodes can be created by registering a free account at Get Tectonic for Kubernetes. After signup, CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files, as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility like dig or a web-based dig interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
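One way to make that check repeatable is to diff the live NS records against the delegation set Route 53 printed above. The helper below is a sketch; the domain and name servers are the examples from this post, and `dig` comes from your OS's DNS utilities package.

```shell
#!/bin/sh
# Sketch: compare live NS records against the Route 53 delegation set.

# True if the two newline-separated NS lists match, ignoring order.
ns_match() {
    [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$2" | sort)" ]
}

expected="ns-1924.awsdns-48.co.uk.
ns-501.awsdns-62.com.
ns-1259.awsdns-29.org.
ns-749.awsdns-29.net."

# On a real setup, fetch the live records and compare:
# live=$(dig +short NS tectonic.kubernetes-aws.io)
# ns_match "$live" "$expected" && echo "delegation OK"
```

Order-insensitive comparison matters here because `dig` returns NS records in no particular order.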

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM role that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place update to the Kubernetes cluster, select the checkbox next to Automated Updates. It also enables update to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have the option to generate a new key. This allows you to later connect via SSH to the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all of Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use alpha support for cluster provisioning using the etcd operator. The etcd operator automates operations of the etcd master nodes for the cluster itself, as well as of etcd instances created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. At another time, you can download the Terraform assets by choosing Manually boot. This allows you to boot the cluster later.

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. The domain works successfully if the DNS resolution worked earlier, and it’s the address where the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console version are shown under Software Details. Cluster health shows All systems go, meaning that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. Other namespaces may be chosen from a menu item at the top of the screen. Different administration tasks, such as managing namespaces, listing nodes, and configuring RBAC, can be performed as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
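That URL fetches the macOS (darwin) binary. On Linux, the platform segment changes; the sketch below builds the URL for the current OS using the same bucket layout as the command above, with the version pinned to a sample value for illustration:

```shell
#!/bin/sh
# Sketch: build the kubectl download URL for the current OS.
os=$(uname | tr '[:upper:]' '[:lower:]')   # "darwin" or "linux"
ver="v1.7.5"   # sample; normally $(curl -s .../stable.txt)
url="https://storage.googleapis.com/kubernetes-release/release/${ver}/bin/${os}/amd64/kubectl"
echo "$url"
# Then: curl -LO "$url" && chmod +x kubectl
```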

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.
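From here, everyday management is ordinary kubectl usage. The commented commands below are standard subcommands applied to resources from the listing above; the awk helper is a small, hypothetical convenience for summarizing a saved copy of that listing.

```shell
#!/bin/sh
# A few typical follow-ups (commented out; they need a live cluster):
# kubectl get nodes
# kubectl describe deployment tectonic-console -n tectonic-system
# kubectl scale deployment tectonic-console -n tectonic-system --replicas=3

# Summarize a saved "kubectl get deployments --all-namespaces" listing:
count_per_namespace() {
    # Skip the header row, tally rows by the first (NAMESPACE) column.
    awk 'NR > 1 { n[$1]++ } END { for (ns in n) print ns, n[ns] }' "$1" | sort
}
```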

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. The clusters can be updated either automatically, or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and so should only be used on clusters that can be easily replaced. It may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, give the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

You may have forgotten to download the assets. There is a copy of the assets in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets.

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer.  For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

AWS Earns Department of Defense Impact Level 5 Provisional Authorization

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-earns-department-of-defense-impact-level-5-provisional-authorization/

AWS GovCloud (US) Region image

The Defense Information Systems Agency (DISA) has granted the AWS GovCloud (US) Region an Impact Level 5 (IL5) Department of Defense (DoD) Cloud Computing Security Requirements Guide (CC SRG) Provisional Authorization (PA) for six core services. This means that AWS’s DoD customers and partners can now deploy workloads for Controlled Unclassified Information (CUI) exceeding IL4 and for unclassified National Security Systems (NSS).

We have supported sensitive Defense community workloads in the cloud for more than four years, and this latest IL5 authorization is complementary to our FedRAMP High Provisional Authorization that covers 18 services in the AWS GovCloud (US) Region. Our customers now have the flexibility to deploy any range of IL 2, 4, or 5 workloads by leveraging AWS’s services, attestations, and certifications. For example, when the US Air Force needed compute scale to support the Next Generation GPS Operational Control System Program, they turned to AWS.

In partnership with a certified Third Party Assessment Organization (3PAO), an independent validation was conducted to assess both our technical and nontechnical security controls to confirm that they meet the DoD’s stringent CC SRG standards for IL5 workloads. Effective immediately, customers can begin leveraging the IL5 authorization for the following six services in the AWS GovCloud (US) Region:

AWS has been a long-standing industry partner with DoD, federal-agency customers, and private-sector customers to enhance cloud security and policy. We continue to collaborate on the DoD CC SRG, the Defense Federal Acquisition Regulation Supplement (DFARS), and other government requirements to ensure that policy makers enact policies to support next-generation security capabilities.

In an effort to reduce the authorization burden of our DoD customers, we’ve worked with DISA to port our assessment results into an easily ingestible format by the Enterprise Mission Assurance Support Service (eMASS) system. Additionally, we undertook a separate effort to empower our industry partners and customers to efficiently solve their compliance, governance, and audit challenges by launching the AWS Customer Compliance Center, a portal providing a breadth of AWS-specific compliance and regulatory information.

We look forward to providing sustained cloud security and compliance support at scale for our DoD customers and adding additional services within the IL5 authorization boundary. See AWS Services in Scope by Compliance Program for updates. To request access to AWS’s DoD security and authorization documentation, contact AWS Sales and Business Development. For a list of frequently asked questions related to AWS DoD SRG compliance, see the AWS DoD SRG page.

To learn more about the announcement in this post, tune in for the AWS Automating DoD SRG Impact Level 5 Compliance in AWS GovCloud (US) webinar on October 11, 2017, at 11:00 A.M. Pacific Time.

– Chris Gile, Senior Manager, AWS Public Sector Risk & Compliance


Strategies for Backing Up Windows Computers

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/strategies-for-backing-up-windows-computers/

Windows 7, Windows 8, Windows 10 logos

There’s a little company called Apple making big announcements this week, but about 45% of you are on Windows machines, so we thought it would be a good idea to devote a blog post today to Windows users and the options they have for backing up Windows computers.

We’ll be talking about the various options for backing up the Windows desktop versions 7, 8, and 10, and Windows servers. We’ve written previously about this topic in How to Back Up Windows and Computer Backup Options, but we’ll be covering some new topics and ways to combine strategies in this post. So, if you’re a Windows user looking for shelter from all the Apple hoopla, welcome to our Apple Announcement Day Windows Backup Day post.

Windows laptop

First, Let’s Talk About What We Mean by Backup

This might seem to our readers like an unneeded appetizer on the way to the main course of our post, but we at Backblaze know that people often mean very different things when they use backup and related terms. Let’s start by defining what we mean when we say backup, cloud storage, sync, and archive.

Backup
A backup is an active copy of the system or files that you are using. It is distinguished from an archive, which is the storing of data that is no longer in active use. Backups fall into two main categories: file and image. File backup software will back up whichever files you designate by either letting you include files you wish backed up or by excluding files you don’t want backed up, or both. An image backup, sometimes called a disaster recovery backup or a system clone, is useful if you need to recreate your system on a new drive or computer.
The first backup generally will be a full backup of all files. After that, the backup will be incremental, meaning that only files that have been changed since the full backup will be added. Often, the software will keep changed versions of the files for some period of time, so you can maintain a number of previous revisions of your files in case you wish to return to something in an earlier version of your file.
The destination for your backup could be another drive on your computer, an attached drive, a network-attached drive (NAS), or the cloud.
Cloud Storage
Cloud storage vendors supply data storage just as a utility company supplies power, gas, or water. Cloud storage can be used for data backups, but it can also be used for data archives, application data, records, or libraries of photos, videos, and other media.
You contract with the service for storing any type of data, and the storage location is available to you via the internet. Cloud storage providers generally charge by some combination of data ingress, egress, and the amount of data stored.
Sync
File sync is useful for files that you wish to have access to from different places or computers, or for files that you wish to share with others. While sync has its uses, it has limitations both for keeping files safe and for what it can cost to store large amounts of data. As opposed to backup, which keeps revisions of files, sync is designed to keep two or more locations exactly the same, so a deletion or corruption in one location propagates to the others. Sync pricing is based on how much data you sync, which can get expensive at scale.
Archive
A data archive is for data that is no longer in active use but needs to be saved, and may or may not ever be retrieved again. In old-style storage parlance, it is called cold storage. An archive could be stored with a cloud storage provider, or put on a hard drive or flash drive that you disconnect and put in the closet, or mail to your brother in Idaho.

What’s the Best Strategy for Backing Up?

Now that we’ve got our terminology clear, let’s talk backup strategies for Windows.

At Backblaze, we advocate the 3-2-1 strategy for safeguarding your data, which means that you should maintain three copies of any valuable data — two copies stored locally and one stored remotely. I follow this strategy at home by working on the active data on my Windows 10 desktop computer (copy one), which is backed up to a Drobo RAID device attached via USB (copy two), and backing up the desktop to Backblaze’s Personal Backup in the cloud (copy three). I also keep an image of my primary disk on a separate drive and frequently update it using Windows 10’s image tool.

I use Dropbox for sharing specific files I am working on that I might wish to have access to when I am traveling or on another computer. Once my subscription with Dropbox expires, I’ll use the latest release of Backblaze that has individual file preview with sharing built-in.

Before you decide which backup strategy will work best for your situation, you’ll need to ask yourself a number of questions. These questions include where you wish to store your backups, whether you wish to supply your own storage media, whether the backups will be manual or automatic, and whether limited or unlimited data storage will work best for you.

Strategy 1 — Back Up to a Local or Attached Drive

The first copy of the data you are working on is often on your desktop or laptop. You can create a second copy of your data on another drive or directory on your computer, or copy the data to a drive directly attached to your computer, such as via USB.

external hard drive and RAID NAS devices

Windows has built-in tools for both file and image level backup. Depending on which version of Windows you use, these tools are called Backup and Restore, File History, or system image backup. These tools enable you to set a schedule for automatic backups, which ensures that backups happen regularly. You also have the choice of using Windows Explorer (aka File Explorer) to manually copy files to another location. Some external disk drives and USB flash drives come with their own backup software, and other backup utilities are available for free or for purchase.

Windows Explorer File History screenshot

This is a supply-your-own media solution, meaning that you need to have a hard disk or other medium available of sufficient size to hold all your backup data. When a disk becomes full, you’ll need to add a disk or swap out the full disk to continue your backups.

We’ve written previously on this strategy at Should I use an external drive for backup?

Strategy 2 — Back Up to a Local Area Network (LAN)

Computers, servers, and network-attached-storage (NAS) on your local network all can be used for backing up data. Microsoft’s built-in backup tools can be used for this job, as can any utility that supports network protocols such as NFS or SMB/CIFS, which are common protocols that allow shared access to files on a network for Windows and other operating systems. There are many third-party applications available as well that provide extensive options for managing and scheduling backups and restoring data when needed.

NAS cloud

Multiple computers can be backed up to a single network-shared computer, server, or NAS, which in turn can be backed up to the cloud. This rounds out a nice backup strategy, because it covers both local and remote copies of your data. System images of multiple computers on the LAN can be included in these backups if desired.

Again, you are managing the backup media on the local network, so you’ll need to be sure you have sufficient room on the destination drives to store all your backup data.

Strategy 3 — Back Up to Detached Drive at Another Location

You may have read our recent blog post, Getting Data Archives Out of Your Closet, in which we discuss the practice of filling hard drives and storing them in a closet. Of course, to satisfy the off-site backup guideline, these drives would need to be stored in a closet that’s in a different geographical location than your main computer. If you’re willing to do all the work of copying the data to drives and transporting them to another location, this is a viable option.

stack of hard drives

The only limitation to the amount of backup data is the number of hard drives you are willing to purchase — and maybe the size of your closet.

Strategy 4 — Back Up to the Cloud

Backing up to the cloud has become a popular option for a number of reasons. Internet speeds have made moving large amounts of data possible, and not having to worry about supplying the storage media simplifies choices for users. Additionally, cloud vendors implement features such as data protection, deduplication, and encryption as part of their services that make cloud storage reliable, secure, and efficient. Unlimited cloud storage for data from a single computer is a popular option.
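Deduplication in particular is easy to illustrate: split data into blocks, hash each block, and store each unique block only once. Here is a toy content-addressed store in Python (a sketch of the general idea, not how any particular vendor implements it):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical blocks are kept once,
    which is the basic idea behind the deduplication cloud vendors
    use to cut storage and upload costs."""

    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block bytes
        self.files = {}   # filename -> ordered list of block digests

    def put(self, name: str, data: bytes, block_size: int = 4096) -> None:
        digests = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # store each unique block once
            digests.append(h)
        self.files[name] = digests

    def get(self, name: str) -> bytes:
        return b"".join(self.blocks[h] for h in self.files[name])
```

Storing two identical files costs the space of one: the second upload only adds a list of digests that already exist in the store.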

A backup vendor will likely provide a software client that runs on your computer and backs up your data to the cloud in the background while you’re doing other things. Backblaze Personal Backup, for example, has clients for Windows and Macintosh computers, plus mobile apps for both iOS and Android. For restores, Backblaze users can download one or all of their files for free from anywhere in the world. Optionally, a 128 GB flash drive or 4 TB drive can be overnighted to the customer, with a refund available if the drive is returned.

Storage Pod in the cloud

Backblaze B2 Cloud Storage is an option for those who need capabilities beyond Backblaze’s Personal Backup. B2 provides cloud storage that is priced based on the amount of data the customer uses, and is suitable for long-term data storage. B2 supports integrations with NAS devices, as well as Windows, Macintosh, and Linux computers and servers.

Services such as Backblaze B2 are often called Cloud Object Storage or IaaS (Infrastructure as a Service), because they provide a complete solution for storing all types of data in partnership with vendors who integrate various solutions for working with B2. B2 has its own API (Application Programming Interface) and CLI (Command-line Interface), but it becomes even more powerful when paired with any of a number of data storage and management solutions provided by third parties who offer both hardware and software.
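As a rough illustration of working with the B2 API directly, here is a sketch that builds the initial authorization request. The endpoint name and the accountId:applicationKey Basic-auth format follow Backblaze’s published B2 API documentation, but treat the details as assumptions and check the current docs before relying on them:

```python
import base64
import urllib.request

# Endpoint per Backblaze's B2 API docs (v1); verify against the current docs.
AUTH_URL = "https://api.backblazeb2.com/b2api/v1/b2_authorize_account"

def authorize_request(account_id: str, application_key: str) -> urllib.request.Request:
    """Build (but do not send) the b2_authorize_account request.
    B2 expects HTTP Basic auth over 'accountId:applicationKey'."""
    creds = f"{account_id}:{application_key}".encode()
    token = base64.b64encode(creds).decode()
    return urllib.request.Request(AUTH_URL, headers={"Authorization": "Basic " + token})

# Sending it would look roughly like:
#   with urllib.request.urlopen(authorize_request(acct_id, app_key)) as resp:
#       account = json.load(resp)  # includes apiUrl and an authorizationToken
```

Every subsequent call (listing buckets, uploading files) then uses the returned authorizationToken instead of the raw credentials.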

Backing Up Windows Servers

Windows Servers are popular workstations for some users, and provide needed network services for others. They also can be used to store backups from other computers on the network. They, in turn, can be backed up to attached drives or the cloud. While our Personal Backup client doesn’t support Windows servers, our B2 Cloud Storage has a number of integrations with vendors who supply software or hardware for storing data both locally and on B2. We’ve written a number of blog posts and articles that address these solutions, including How to Back Up your Windows Server with B2 and CloudBerry.

Sometimes the Best Strategy is to Mix and Match

The great thing about computers, software, and networks is that there is an endless number of ways to combine them. Our users and hardware and software partners are ingenious in configuring solutions that save data locally, copy it to an attached or network drive, and then store it to the cloud.

image of cloud backup

Among our B2 partners, Synology, CloudBerry, Archiware, QNAP, Morro Data, and GoodSync have integrations that allow their applications and NAS devices to store and retrieve data to and from B2 Cloud Storage. For a drag-and-drop experience on the desktop, take a look at CyberDuck, MountainDuck, and Dropshare, which provide users with an easy and interactive way to store and use data in B2.

If you’d like to explore more options for combining software, hardware, and cloud solutions, we invite you to browse the integrations for our many B2 partners.

Have Questions?

Windows versions, tools, and backup terminology all can be confusing, and we know how hard it can be to make sense of all of it. If there’s something we haven’t addressed here, or if you have a question or contribution, please let us know in the comments.

And happy Windows Backup Day! (Just don’t tell Apple.)

The post Strategies for Backing Up Windows Computers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

No, Google Drive is Definitely Not The New Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/no-google-drive-is-definitely-not-the-new-pirate-bay-170910/

Now close to two decades old, the world of true mainstream file-sharing is less of a mystery to the general public than it’s ever been.

Most people now understand the concept of shifting files from one place to another, and a significant majority will be aware of the opportunities to do so with infringing content.

Unsurprisingly, this is a major thorn in the side of rightsholders all over the world, who have been scrambling since the turn of the century in a considerable effort to stem the tide. The results of their work have varied, with some sectors hit harder than others.

One area that has taken a bit of a battering recently involves the dominant peer-to-peer platforms reliant on underlying BitTorrent transfers. Several large-scale sites have shut down of late, not least KickassTorrents, Torrentz, and ExtraTorrent, raising questions about what bad news may arrive next for inhabitants of Torrent Land.

Of course, like any other Internet-related activity, sharing has continued to evolve over the years, with streaming and cloud-hosting now a major hit with consumers. In the main, sites which skirt the borders of legality have been the major hosting and streaming players over the years, but more recently it’s become clear that even the most legitimate companies can become unwittingly involved in the piracy scene.

As reported here on TF back in 2014 and again several times this year (1,2,3), cloud-hosting services operated by Google, including Google Drive, are being used to store and distribute pirate content.

That news was echoed again this week, with a report on Gadgets360 reiterating that Google Drive is still being used for movie piracy. What followed were a string of follow up reports, some of which declared Google’s service to be ‘The New Pirate Bay.’

No. Just no.

While it’s always tempting for publications to squeeze a reference to The Pirate Bay into a piracy article due to the site’s popularity, it’s particularly out of place in this comparison. In no way, shape, or form can a centralized store of data like Google Drive ever replace the underlying technology of sites like The Pirate Bay.

While the casual pirate might love the idea of streaming a movie with a couple of clicks in a browser of his or her choice, the weakness of the cloud system cannot be overstated. To begin with, anything hosted by Google is vulnerable to immediate takedown on demand, usually within a matter of hours.

“Google Drive has a variety of piracy counter-measures in place,” a spokesperson told Mashable this week, “and we are continuously working to improve our protections to prevent piracy across all of our products.”

When will we ever hear anything like that from The Pirate Bay? Answer: When hell freezes over. But it’s not just compliance with takedown requests that make Google Drive-hosted files vulnerable.

At the point Google Drive responds to a takedown request, it takes down the actual file. On the other hand, even if Pirate Bay responded to notices (which it doesn’t), it would be unable to do anything about the sharing going on underneath. Removing a torrent file or magnet link from TPB does nothing to negatively affect the decentralized swarm of people sharing files among themselves. Those files stay intact and sharing continues, no matter what happens to the links above.

Importantly, people sharing using BitTorrent do so without any need for central servers – the whole process is decentralized as long as a user can lay his or her hands on a torrent file or magnet link. Those using Google Drive, however, rely on a totally centralized system, where not only is Google king, but it can and will stop the entire party after receiving a few lines of text from a rightsholder.
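The pointer-like nature of BitTorrent links is easy to see in a magnet link, which carries an infohash and a few hints but no content at all. A short Python sketch (the infohash below is an invented example):

```python
from urllib.parse import urlparse, parse_qs

def parse_magnet(link: str) -> dict:
    """Pull apart a magnet link. It holds only an identifier (the
    infohash), an optional display name, and tracker hints; the files
    themselves live entirely in the swarm, which is why deleting the
    link from an index does nothing to the sharing underneath."""
    params = parse_qs(urlparse(link).query)
    return {
        "infohash": params["xt"][0].split(":")[-1],  # xt=urn:btih:<hash>
        "name": params.get("dn", [None])[0],
        "trackers": params.get("tr", []),
    }
```

A magnet link is a few dozen bytes of text; the swarm it points to can be holding gigabytes.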

There is a very good reason why sites like The Pirate Bay have been around for close to 15 years while the likes of Megaupload, Hotfile, and Rapidshare have all met their makers. File-hosting platforms are expensive-to-run warehouses full of files, each of which brings direct liability for the host once it’s made aware that those files are infringing. These days the choice is clear: take the files down or get brought down, it’s as simple as that.

The Pirate Bay, on the other hand, is nothing more than a treasure map (albeit a valuable one) that points the way to content spread all around the globe in the most decentralized way possible. There are no files to delete, no content to disappear. Comparing a vulnerable Google Drive to this kind of robust system couldn’t be further from the mark.

That being said, this is the way things are going. The cloud, it seems, is here to stay in all its forms. Everyone has access to it and uploading content is easier – much easier – than uploading it to a BitTorrent network. A Google Drive upload is simplicity itself for anyone with a mouse and a file; the same cannot be said about The Pirate Bay.

For this reason alone, platforms like Google Drive and the many dozens of others offering a similar service will continue to become havens for pirated content, until the next big round of legislative change. At the moment, each piece of content has to be removed individually but in the future, it’s possible that pre-emptive filters will kill uploads of pirated content before they see the light of day.

When this comes to pass, millions of people will understand why Google Drive, with its bots checking every file upload for alleged infringement, is not The Pirate Bay. At this point, if people have left it too long, it might be too late to reinvigorate BitTorrent networks to their former glory.

People will try to rebuild them, of course, but realizing why they shouldn’t have been left behind at all is probably the best protection.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

WordPress Reports Surge in ‘Piracy’ Takedown Notices, Rejects 78%

Post Syndicated from Ernesto original https://torrentfreak.com/wordpress-reports-surge-in-piracy-takedown-notices-rejects-78-170909/

Automattic, the company behind the popular WordPress.com blogging platform, receives thousands of takedown requests from rightsholders.

A few days ago the company published its latest transparency report, showing that it had processed 9,273 requests during the first half of 2017.

This is more than double the number it received during the same period last year. Looking more closely at the numbers, we see that this jump is solely due to an increase in incomplete and abusive requests.

Of all the DMCA notices received, only 22% resulted in the takedown of allegedly infringing content. This translates to 2,040 legitimate requests, which is less than the 2,342 Automattic received during the same period last year.

This logically means that the number of abusive and incomplete DMCA notices has skyrocketed. And indeed, in its most recent report, 78% of all requests were rejected due to missing information or plain abuse. That’s much more than the year before when 42% were rejected.
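The percentages above follow directly from the raw counts in the report:

```python
# Reproducing the transparency-report arithmetic from the figures above.
total_notices = 9273   # DMCA notices received, first half of 2017
actioned = 2040        # notices that resulted in a takedown

rejected = total_notices - actioned
actioned_pct = round(100 * actioned / total_notices)
rejected_pct = round(100 * rejected / total_notices)
print(f"{actioned_pct}% actioned, {rejected_pct}% rejected")  # 22% actioned, 78% rejected
```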

Automattic’s transparency report (first half of 2017)

WordPress prides itself on carefully reviewing the content of each and every takedown notice, to protect its users. This means checking whether a takedown request is properly formatted but also reviewing the legitimacy of the claims.

“We also may decline to remove content if a notice is abusive. ‘Abusive’ notices may be formally complete, but are directed at fair use of content, material that isn’t copyrightable, or content the complaining party misrepresents ownership of a copyright,” Automattic notes.

During the first half of 2017, a total of 649 takedown requests were categorized as abuse. Some of the most blatant examples go into the “Hall of Shame,” such as a recent case where the Canadian city of Abbotsford tried to censor a parody of its logo, which replaced a pine tree with a turd.

While some abuse cases sound trivial they can have a real impact on website operators, as examples outside of WordPress show. Most recently the operator of Oro Jackson, a community dedicated to the anime series “One Piece,” was targeted with several dubious DMCA requests.

The takedown notices were sent by the German company Comeso and were forwarded through their hosting company Linode. The notices urged the operator to remove various forum threads because they included words or phrases such as “G’day” and “Reveries of the Moonlight,” not actual infringing content.

G’day

Fearing legal repercussions, the operator saw no other option than to censor these seemingly harmless discussions (starting a thread with “G’day”!!), until there’s a final decision on the counter-notice. They remain offline today.

It’s understandable that hosting companies have to be strict sometimes, as reviewing copyright claims is not their core business. However, incidents like these show how valuable the skeptical review process of Automattic is.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russia Blocks 4,000 Pirate Sites Plus 41,000 Innocent as Collateral Damage

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-4000-pirate-sites-plus-41000-innocent-as-collateral-damage-170905/

After years of criticism from both international and local rightsholders, in 2013 the Russian government decided to get tough on Internet piracy.

Under new legislation, sites engaged in Internet piracy could find themselves blocked by ISPs, rendering them inaccessible to local citizens and solving the piracy problem. Well, that was the theory, at least.

More than four years on, Russia is still grappling with a huge piracy problem that refuses to go away. It has been blocking thousands of sites at a steady rate, including RuTracker, the country’s largest torrent platform, but still the problem persists.

Now, a new report produced by Roskomsvoboda, the Center for the Protection of Digital Rights, and the Pirate Party of Russia, reveals a system that has not only failed to reach its stated aims but is also having a negative effect on the broader Internet.

“It’s already been four years since the creation of this ‘anti-piracy machine’ in Russia. The first amendments related to the fight against ‘piracy’ in the network came into force on August 1, 2013, and since then this mechanism has been twice revised,” Roskomsvoboda said in a statement.

“[These include] the emergence of additional responsibilities to restrict access to network resources and increase the number of subjects who are responsible for removing and blocking content. Since that time, several ‘purely Russian’ trends in ‘anti-piracy’ and trade in rights have also emerged.”

These revisions, which include the permanent blocking of persistently infringing sites and the planned blocking of mirror sites and anonymizers, have been widely documented. However, the researchers say that they want to shine a light on the effects of blocking procedures and subsequent actions that are causing significant issues for third parties.

As part of the study, the authors collected data on the cases presented to the Moscow City Court by the most active plaintiffs in anti-piracy actions (mainly TV show distributors and music outfits including Sony Music Entertainment and Universal Music). They describe the court process and system overall as lacking.

“The court does not conduct a ‘triple test’ and ignores the position, rights and interests of respondents and third parties. It does not check the availability of illegal information on sites and appeals against decisions of the Moscow City Court do not bring any results,” the researchers write.

“Furthermore, the cancellation of the unlimited blocking of a site is simply impossible and in respect of hosting providers and security services, those web services are charged with all the legal costs of the case.”

The main reason behind this situation is that ‘pirate’ site operators rarely (if ever) turn up to defend themselves. If at some point they are found liable for infringement under the Criminal Code, they can be liable for up to six years in prison, hardly an incentive to enter into a copyright process voluntarily. As a result, hosts and other providers act as respondents.

This means that these third-party companies appear as defendants in the majority of cases, a position they find both “unfair and illogical.” They’re also said to be confused about how they are supposed to fulfill the blocking demands placed upon them by the Court.

“About 90% of court cases take place without the involvement of the site owner, since the requirements are imposed on the hosting provider, who is not responsible for the content of the site,” the report says.

Nevertheless, hosts and other providers have been ordered to block huge numbers of pirate sites.

According to the researchers, the total has now gone beyond 4,000 domains, but the knock-on effect is much more expansive. Due to the legal requirement to block sites by both IP address and other means, third-party sites with shared IP addresses get caught up as collateral damage. The report states that more than 41,000 innocent sites have been blocked as the result of supposedly targeted court orders.
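A toy model shows why blocking by IP address produces this kind of collateral damage: many unrelated sites commonly share a single address on shared hosting, so blocking the address of one target takes down every neighbor. (The domains and address below are invented examples.)

```python
# Map of IP address -> domains hosted on it (shared hosting is why one
# address can serve many unrelated sites).
hosting = {
    "203.0.113.7": ["pirate-site.example", "bakery.example", "school.example"],
}

def collateral(blocked_domain: str) -> list[str]:
    """Return the innocent domains knocked offline when the IP hosting
    blocked_domain is blocked wholesale."""
    for ip, domains in hosting.items():
        if blocked_domain in domains:
            return [d for d in domains if d != blocked_domain]
    return []
```

One court order naming one domain, executed as an IP block, silently removes every other site on that address.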

But with collateral damage mounting, the main issue as far as copyright holders are concerned is whether piracy is decreasing as a result. The report draws few conclusions on that front but notes that blocks are a blunt instrument. While they may succeed in stopping some people from accessing ‘pirate’ domains, the underlying infringement carries on regardless.

“Blocks create restrictions only for Internet users who are denied access to sites, but do not lead to the removal of illegal information or prevent intellectual property violations,” the researchers add.

With no sign of the system being overhauled to tackle the issues raised in the study (pdf, Russian), Russia is now set to introduce yet more anti-piracy measures.

As recently reported, new laws requiring search engines to remove listings for ‘pirate’ mirror sites come into effect October 1. Exactly a month later, on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sci-Hub Faces $4.8 Million Piracy Damages and ISP Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-faces-48-million-piracy-damages-and-isp-blocking-170905/

In June, a New York District Court handed down a default judgment against Sci-Hub.

The pirate site, operated by Alexandra Elbakyan, was ordered to pay $15 million in piracy damages to academic publisher Elsevier.

With the ink on this order barely dry, another publisher soon tagged on with a fresh complaint. The American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, also accused Sci-Hub of mass copyright infringement.

Founded more than 140 years ago, the non-profit organization has around 157,000 members and researchers who publish tens of thousands of articles a year in its peer-reviewed journals. Because many of its works are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site, and a few days ago ACS specified its demands, which include $4.8 million in piracy damages.

“Here, ACS seeks a judgment against Sci-Hub in the amount of $4,800,000—which is based on infringement of a representative sample of publications containing the ACS Copyrighted Works multiplied by the maximum statutory damages of $150,000 for each publication,” they write.

The publisher notes that the maximum statutory damages are only requested for 32 of its 9,000 registered works. This still adds up to a significant sum of money, of course, but that is needed as a deterrent, ACS claims.
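The requested figure is straight arithmetic: a representative sample of works multiplied by the statutory maximum for willful infringement.

```python
# ACS's damages request: 32 representative works, each at the $150,000
# statutory maximum for willful infringement (17 U.S.C. § 504(c)).
works_in_sample = 32
max_statutory_per_work = 150_000
total_damages = works_in_sample * max_statutory_per_work
print(f"${total_damages:,}")  # $4,800,000
```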

“Sci-Hub’s unabashed flouting of U.S. Copyright laws merits a strong deterrent. This Court has awarded a copyright holder maximum statutory damages where the defendant’s actions were ‘clearly willful’ and maximum damages were necessary to ‘deter similar actors in the future’,” they write.

Although the deterrent effect may sound plausible in most cases, another $4.8 million in debt is unlikely to worry Sci-Hub’s owner, as she can’t pay it off anyway. However, there’s also a broad injunction on the table that may be more of a concern.

The requested injunction prohibits Sci-Hub’s owner from continuing her work on the site. It also bars a wide range of other service providers from assisting others to access it.

Specifically, it restrains “any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, to cease facilitating access to any or all domain names and websites through which Defendant Sci-Hub engages in unlawful access to [ACS’s works].”

The above suggests that search engines may have to remove the site from their indexes while ISPs could be required to block their users’ access to the site as well, which goes quite far.

Since Sci-Hub is in default, ACS is likely to get what it wants. However, if the organization intends to enforce the order in full, it’s likely that some of these third-party services, including Internet providers, will have to spring into action.

While domain name registries are regularly ordered to suspend domains, search engine removals and ISP blocking are not common in the United States. It would, therefore, be no surprise if this case lingers a little while longer.

A copy of ACS’s proposed default judgment, obtained by TorrentFreak, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Much Does ‘Free’ Premier League Piracy Cost These Days?

Post Syndicated from Andy original https://torrentfreak.com/how-much-does-free-premier-league-piracy-cost-these-days-170902/

Right now, the English Premier League is engaged in perhaps the most aggressively innovative anti-piracy operation the Internet has ever seen. After obtaining a new High Court order, it now has the ability to block ‘pirate’ streams of matches, in real-time, with no immediate legal oversight.

If the Premier League believes a server is streaming one of its matches, it can ask ISPs in the UK to block it, immediately. That’s unprecedented anywhere on the planet.

As previously reported, this campaign caused a lot of problems for people trying to access free and premium streams at the start of the season. Many IPTV services were blocked in the UK within minutes of matches starting, with free streams also dropping like flies. According to information obtained by TF, more than 600 illicit streams were blocked during that weekend.

While some IPTV providers and free streams continued without problems, it seems likely that it’s only a matter of time before the EPL begins to pick off more and more suppliers. To be clear, the EPL isn’t taking services or streams down, it’s only blocking them, which means that people using circumvention technologies like VPNs can get around the problem.

However, this raises the big issue again – that of continuously increasing costs. While piracy is often painted as free, it is not, and as setups get fancier, costs increase too.

Below, we take a very general view of a handful of the many ‘pirate’ configurations currently available, to work out how much ‘free’ piracy costs these days. The list is not comprehensive by any means (and excludes more obscure methods such as streaming torrents, which are always free and rarely blocked), but it gives an idea of costs and how the balance of power might eventually tip.

Basic beginner setup

On a base level, people who pirate online need at least some equipment. That could be an Android smartphone and easily installed free software such as Mobdro or Kodi. An Internet connection is a necessity and if the EPL blocks those all important streams, a VPN provider is required to circumvent the bans.

Assuming people already have a phone and the Internet, a VPN can be bought for less than £5 per month. This basic setup is certainly cheap but overall it’s an entry level experience that provides quality equal to the effort and money expended.

Equipment: Phone, tablet, PC
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low quality, unpredictable, often unreliable
Cost: £5pm approx for VPN, plus Internet costs

Big screen, basic

For those who like their matches on the big screen, stepping up the chain costs more money. People need a TV with an HDMI input and a fast Internet connection as a minimum, alongside some kind of set-top device to run the necessary software.

Android devices are the most popular and are roughly split into two groups – the small standalone box type and the plug-in ‘stick’ variant such as Amazon’s Firestick.

A cheap Android set-top box

These cost upwards of £30 to £40 but the software to install on them is free. Like the phone, Mobdro is an option, but most people look to a Kodi setup with third-party addons. That said, all streams received on these setups are now vulnerable to EPL blocking so in the long-term, users will need to run a paid VPN.

The problem here is that some devices (including the 1st gen Firestick) aren’t ideal for running a VPN on top of a stream, so people will need to dump their old device and buy something more capable. That could cost another £30 to £40 and more, depending on requirements.

Importantly, none of this investment guarantees a decent stream – that’s down to what’s available on the day – but invariably the quality is low and/or intermittent, at best.

Equipment: TV, decent Android set-top box or equivalent
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low to acceptable quality, unpredictable, often unreliable
Cost: £30 to £50 for set-top box, £5pm approx for VPN, plus Internet

Premium IPTV – PC or Android based

At this point, premium IPTV services come into play. People have a choice of spending varying amounts of money, depending on the quality of experience they require.

First of all, a monthly IPTV subscription with an established provider that isn’t going to disappear overnight is required, which can be a challenge to find in itself. We’re not here to review or recommend services but needless to say, like official TV packages they come in different flavors to suit varying wallet sizes. Some stick around, many don’t.

A decent one with a Sky-like EPG costs between £7 and £15 per month, depending on the quality and depth of streams, and how far in front users are prepared to commit.

Fairly typical IPTV with EPG (VOD shown)

Paying for a year in advance tends to yield better prices but with providers regularly disappearing and faltering in their service levels, people are often reluctant to do so. That said, some providers experience few problems so it’s a bit like gambling – research can improve the odds but there’s never a guarantee.

However, even when a provider, price, and payment period are decided upon, the process of paying for an IPTV service can be less than straightforward.

While some providers are happy to accept PayPal, many will only deal in credit cards, bitcoin, or other obscure payment methods. That sets up more barriers to entry that might deter the less determined customer. And, if time is indeed money, fussing around with new payment processors can be pricey, at least to begin with.

Once subscribed though, watching these streams is pretty straightforward. On a base level, people can use a phone, tablet, or set-top device to receive them, using software such as Perfect Player IPTV, for example. Currently available in free (ad supported) and premium (£2) variants, this software can be set up in a few clicks and will provide a decent user experience, complete with EPG.

Perfect Player IPTV

Those wanting to go down the PC route have more options but by far the most popular is receiving IPTV via a Kodi setup. For the complete novice, it’s not always easy to set up but some IPTV providers supply their own free addons, which streamline the process massively. These can also be used on Android-based Kodi setups, of course.

Nevertheless, if the EPL blocks the provider, a VPN is still going to be needed to access the IPTV service.

An Android tablet running Kodi

So, even if we ignore the cost of the PC and Internet connection, users could still find themselves paying between £10 and £20 per month for an IPTV service and a decent VPN. While more channels than simply football will be available from most providers, this is getting dangerously close to the £18 Sky are asking for its latest football package.

Equipment: TV, PC, or decent Android set-top box or equivalent
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, user-friendly (once setup)
Cost: PC or £30/£50 for set-top box, IPTV subscription £7 to £15pm, £5pm approx for VPN, plus Internet, plus time and patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear

Premium IPTV – Deluxe setup

Moving up to the top of the range, things get even more costly. Those looking to give themselves the full home entertainment-like experience will often move away from the PC and into the living room in front of the TV, armed with a dedicated set-top box. Weapon of choice: the Mag254.

Like Amazon’s FireStick, PC or Android tablet, the Mag254 is an entirely legal, content-agnostic device. However, enter the credentials provided by many illicit IPTV suppliers and users are presented with a slick Sky-like experience, far removed from anything available elsewhere. The device is operated by remote control and integrates seamlessly with any HDMI-capable TV.

Mag254 IPTV box

Something like this costs around £70 in the UK, plus the cost of a WiFi adaptor on top, if needed. The cost of the IPTV provider needs to be figured in too, plus a VPN subscription if the provider gets blocked by EPL, which is likely. However, in this respect the Mag254 has a problem – it can’t run a VPN natively. This means that if streams get blocked and people need to use a VPN, they’ll need to find an external solution.

Needless to say, this costs more money. People can either do all the necessary research and buy a VPN-capable router/modem that’s also compatible with their provider (this can stretch to a couple of hundred pounds) or they’ll need to invest in a small ‘travel’ router with VPN client features built in.

‘Travel’ router (with tablet running Mobdro for scale)

These devices are available on Amazon for around £25 and sit in between the Mag254 (or indeed any other wireless device) and the user’s own regular router. Once the details of the VPN subscription are entered into the router, all traffic passing through is encrypted and will tunnel through web blocking measures. They usually solve the problem (ymmv) but of course, this is another cost.

Equipment: Mag254 or similar, with WiFi
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, very user-friendly
Cost: Mag254 around £75 with WiFi, IPTV subscription £7 to £15pm, £5pm for VPN (plus £25 for mini router), plus Internet, plus patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear

Conclusion

On the whole, people who want a reliable and high-quality Premier League streaming experience cannot get one for free, no matter where they source the content. There are many costs involved, some of which cannot be avoided.

If people aren’t screwing around with annoying and unreliable Kodi streams, they’ll be paying for an IPTV provider, VPN and other equipment. Or, if they want an easy life, they’ll be paying Sky, BT or Virgin Media. That might sound harsh to many pirates but it’s the only truly reliable solution.

However, for those looking for something that’s merely adequate, costs drop significantly. Indeed, if people don’t mind the hassle of wondering whether a sub-VHS quality stream will appear before the big match and stay on throughout, it can all be done on a shoestring.

But perhaps the most important thing to note in respect of costs is the recent changes to the pricing of Premier League content in the UK. As mentioned earlier, Sky now delivers a sports package for £18pm, which sounds like the best deal offered to football fans in recent years. It will be tempting for sure and has all the hallmarks of a price point carefully calculated by Sky.

The big question is whether it will be low enough to tip significant numbers of people away from piracy. The reality is that if another couple of thousand streams get hit hard again this weekend – and the next – and the next – many pirating fans will be watching the season drift away for yet another month, unviewed. That’s got to be frustrating.

The bottom line is that high-quality streaming piracy is becoming a little bit pricey just for football so if it becomes unreliable too – and that’s the Premier League’s goal – the balance of power could tip. At this point, the EPL will need to treat its new customers with respect, in order to keep them feeling both entertained and unexploited.

Fail on those counts – especially the latter – and the cycle will start again.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

HDClub, Russia’s Leading HD-Only Torrent Site, Permanently Shuts Down

Post Syndicated from Andy original https://torrentfreak.com/hdclub-russias-leading-hd-torrent-site-permanently-shuts-down-170830/

While millions of users frequent popular public torrent sites such as The Pirate Bay and RARBG every day, there’s a thriving scene that’s hidden from the wider public eye.

Every week, private torrent trackers cater to dozens of millions of BitTorrent users who have taken the time and effort to gain access to these more secretive communities. Often labeled as elitist and running counter to the broad sharing ethos that made file-sharing the beast it is today, private sites pride themselves on quality, order and speed, something public sites typically struggle to match.

In addition to these notable qualities, many private sites choose to focus on a particular niche. There are sites dedicated to obscure electronic music, comedy, and even magic, but HDClub’s focus was given away by its name.

Dubbing itself “The HighDefinition BitTorrent Community”, HDClub specialized in HD productions including Blu-ray and 3D content, covering movies, TV shows, music videos, and animation.

Born in 2007, HDClub celebrated its ninth birthday on March 9 last year, with 2017 heralding a full decade online for the site. Catering mainly to the Russian and Ukrainian markets, the site’s releases often preserved an English audio option, ideal for those looking for high-quality releases from an unorthodox source at decent speeds.

Of course, HDClub releases often leaked out of the site, meaning that thousands are still available on regular public trackers, as a search on any Western torrent engine reveals.

A sample of HDClub releases listed on Torrentz2

Importantly, the site offered thousands of releases completely unavailable in Russia from licensed sources, meaning it filled a niche in which official outlets either wouldn’t or couldn’t compete. This earned the site a place in Russia’s Top 1000 sites list, despite being a closed membership platform.

The site’s attention to detail and focus earned it a considerable following. For the past few years the site capped membership at 190,000 people but in practice, attendance floated around the 170,000 mark. Seeders peaked at approximately 400,000 with considerably fewer leechers, making seeding as difficult as one might expect on a ratio-based tracker.

Now, however, the decade-long run of HDClub has come to an abrupt end. Early this week the tracker went dark, reportedly without advance notice. A Russian language announcement now present on its main page explains the reasons for the site’s demise.

“Recently, we received several dozens of complaints from rightsholders weekly, and our community is subjected to attacks and espionage,” the announcement reads.

While public torrent sites are always bombarded with DMCA-style notices, private sites tend to avoid large numbers of complaints. In this case, however, HDClub were clearly feeling the pressure. The site’s main page was open to the public while featuring popular releases, so this probably didn’t help with the load.

It’s not clear what is meant by “attacks and espionage” but it’s possibly a reference to DDoS assaults and third-parties attempting to monitor the site. Nevertheless, as HDClub points out, the climate for torrent, streaming, and similar sites has become increasingly hostile in the region recently.

“In parallel, there is a tightening of Internet legislation in Russia, Ukraine and EU countries,” the site says.

Interestingly, the site’s operators also suggest that interest from some quarters had waned, noting that “the time of enthusiasts irretrievably goes away.” It’s unclear whether that’s a reference to site users, the site’s operators, or indeed both. But in any event, any significant decline in any area can prove fatal, particularly when other pressures are at play.

“In the circumstances, we can no longer support the work of the club in the originally conceived format. The project is closed, but we ask you to refrain from long farewells. Thank you all and goodbye!” the message concludes.

Interestingly, the site ends with a little teaser, which may indicate some hope for the future.

“There are talks on preserving the heritage of the club,” it reads, without adding further details.

Possibly stay tuned…..


Disabling Intel ME 11 via undocumented mode (Positive Technologies)

Post Syndicated from ris original https://lwn.net/Articles/732291/rss

A team of Positive Technologies researchers describe the discovery of a mechanism that can disable Intel Management Engine (ME) 11 after hardware is initialized and the main processor starts.
Intel Management Engine is a proprietary technology that consists of a microcontroller integrated into the Platform Controller Hub (PCH) chip and a set of built-in peripherals. The PCH carries almost all communication between the processor and external devices; therefore Intel ME has access to almost all data on the computer. The ability to execute third-party code on Intel ME would allow for a complete compromise of the platform. We see increasing interest in Intel ME internals from researchers all over the world. One of the reasons is the transition of this subsystem to new hardware (x86) and software (modified MINIX as an operating system). The x86 platform allows researchers to make use of the full power of binary code analysis tools. Previously, firmware analysis was difficult because earlier versions of ME were based on an ARCompact microcontroller with an unfamiliar set of instructions.

Renowned Kodi Addon Developer MetalKettle Calls it Quits

Post Syndicated from Andy original https://torrentfreak.com/renowned-kodi-addon-developer-metalkettle-calls-it-quits-170829/

After dominating the piracy landscape for more than a decade, BitTorrent now shares the accolades with web streaming. The latter is often easier to understand for novices, which has led to its meteoric rise.

Currently, software like Kodi grabs most of the headlines. The software is both free and legal but when it’s augmented with third-party addons, it’s transformed into a streaming piracy powerhouse. As a result, addon developers and distributors are coming under pressure these days.

Numerous cases are already underway, notably against addon repository TVAddons and the developer behind third-party addon ZemTV. Both are being sued in the United States by Dish Networks but the case filed against TVAddons in Canada is the most controversial. It’s already proven massively costly for its operator, who has been forced to ask the public for donations to keep up the fight.

With this backdrop of legal problems for prominent players, it’s no surprise that other people involved in the addon scene are considering their positions. This morning brings yet more news of retirement, with one of the most respected addon developers and distributors deciding to call it a day.

Over the past three to four years, the name MetalKettle has become synonymous not only with high-quality addons but also the MetalKettle addon repository, which was previously a leading go-to location for millions of Kodi users.

But now, ‘thanks’ to the increased profile of the Kodi addon scene, the entire operation will shrink away to avoid further negative attention. (Statement published verbatim)

“Over the past year or so Kodi has become more mainstream and public we’ve all seen the actions of others become highlighted legally, with authorities determined to target 3rd party addons making traction. This has eventually caused me to consider ‘what if?’ – the result of which never ends well in my mind,” MetalKettle writes.

“With all this said I have decided to actively give up 3rd party addon development (at least for the time being) and concentrate on being a husband and father.”

The news that MetalKettle will now fall silent will be sad news for the Kodi scene, after hosting plenty of addons over the years including UK Turks, UKTV Again, Xmovies8, Cartoons8, Toonmania, TV Mix, Sports Mix, Live Mix, Watch1080p, and MovieHut, to name just a few.

The distribution of these addons and others ultimately placed MetalKettle on the official Kodi repository blacklist, banned for providing access to premium content without authorization.

More recently, however, MetalKettle joined the Colossus Kodi repository but it seems likely that particular alliance will now come to an end. Whether other developers will take on any of the existing MetalKettle addons is unclear.

Signing off to his fans during the past few hours, MetalKettle (MK) thanked everyone for their support over the past several years.

“It’s much appreciated and made this all worthwhile,” MK said.

While plenty of people contribute to the Kodi scene, it can be quite a hostile environment for those who step out of line. The same cannot be said of MK, as evidenced by the outpouring of gratitude from his associates on Twitter.


How to Configure an LDAPS Endpoint for Simple AD

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-configure-an-ldaps-endpoint-for-simple-ad/

Simple AD, which is powered by Samba 4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description illustrate and explain the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Diagram of the Simple AD LDAPS environment

Here is how the solution works, as shown in the preceding numbered diagram:

  1. The LDAP client sends an LDAPS request to ELB on TCP port 636.
  2. ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.
  3. The HAProxy servers forward the LDAP request to the Simple AD servers listening on TCP port 389 in a fixed Auto Scaling group configuration.
  4. The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.
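To make the forwarding in steps 2 and 3 concrete, here is a hypothetical haproxy.cfg fragment; this is a sketch only, and the backend IP addresses are placeholders for the two Simple AD DNS addresses from your directory details page:

```
# Sketch of an haproxy.cfg fragment (backend IPs are placeholders)
defaults
    mode tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend ldap_front
    bind *:389              # decrypted LDAP arriving from ELB
    default_backend simple_ad

backend simple_ad
    balance roundrobin
    server simplead1 10.0.1.10:389 check   # placeholder: primary Simple AD IP
    server simplead2 10.0.2.10:389 check   # placeholder: secondary Simple AD IP
```

TCP mode matters here: HAProxy passes the LDAP protocol through unmodified rather than trying to interpret it as HTTP.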

Prerequisites

  1. Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:
    1. Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.
    2. Create or choose an Amazon VPC in the region you chose. In order to use Route 53 to resolve the LDAPS endpoint, make sure you enable DNS support within your VPC. Create an Internet gateway and attach it to the VPC, which will be used by the NAT gateways to access the internet.
    3. Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.
    4. Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.
    5. The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC. To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).
  2. Make sure you have completed the Simple AD Prerequisites.
  3. We will use a self-signed certificate for ELB to perform SSL/TLS decryption. You can use a certificate issued by your preferred certificate authority or a certificate issued by AWS Certificate Manager (ACM).
    Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other as shown on Simple AD Prerequisites.

Solution deployment

This solution includes five main parts:

  1. Create a Simple AD directory.
  2. Create a certificate.
  3. Create the ELB and HAProxy layers by using the supplied CloudFormation template.
  4. Create a Route 53 record.
  5. Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

  1. In the Directory Service console navigation pane, choose Directories and then choose Set up directory.
  2. Choose Simple AD.
    Screenshot of choosing "Simple AD"
  3. Provide the following information:
    • Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.
    • NetBIOS name – The short name for the directory, such as CORP.
    • Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.
    • Description – An optional description for the directory.
    • Directory Size – The size of the directory.
      Screenshot of the directory details to provide
  4. Provide the following information in the VPC Details section, and then choose Next Step:
    • VPC – Specify the VPC in which to install the directory.
    • Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.
      Screenshot of the VPC details to provide
  5. Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.

Note: Full administration of your Simple AD implementation is out of scope for this blog post. See the documentation to add users, groups, or instances to your directory. Also see the previous blog post, How to Manage Identities in Simple AD Directories.

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM, which is integrated with ELB.

  1. You must have a system with OpenSSL installed to complete this step. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance you can create one with SSH access enabled to proceed with this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed.
    $ openssl version
    OpenSSL 1.0.1k-fips 8 Jan 2015

  2. Create a private key using the openssl genrsa command.
    $ openssl genrsa 2048 > privatekey.pem
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................................................................................................+++
    ..........................+++
    e is 65537 (0x10001)

  3. Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.
    $ openssl req -new -key privatekey.pem -out server.csr
    You are about to be asked to enter information that will be incorporated into your certificate request.

  4. Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.
    $ openssl x509 -req -sha256 -days 365 -in server.csr -signkey privatekey.pem -out server.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com
    Getting Private key

  5. You should see three files: privatekey.pem, server.crt, and server.csr.
    $ ls
    privatekey.pem server.crt server.csr

    Restrict access to the private key.

    $ chmod 600 privatekey.pem

    Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

  6. In the ACM console, choose Import a certificate.
  7. Using your favorite Linux text editor, paste the contents of your server.crt file in the Certificate body box.
  8. Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.
  9. Choose Review and import. Confirm the information and choose Import.
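The key, CSR, and certificate steps above can also be run non-interactively as a single sketch; the -subj string supplies the fields that openssl req would otherwise prompt for, and the Common Name is assumed to be the example FQDN ldap.corp.example.com:

```shell
# Non-interactive version of steps 2-4: key, CSR, and self-signed certificate.
openssl genrsa 2048 > privatekey.pem
chmod 600 privatekey.pem

# Supply the subject on the command line instead of answering prompts.
openssl req -new -key privatekey.pem -out server.csr \
  -subj "/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com"

# Self-sign for 365 days, as in step 4.
openssl x509 -req -sha256 -days 365 -in server.csr \
  -signkey privatekey.pem -out server.crt

# Confirm the subject and validity window before importing into ACM.
openssl x509 -in server.crt -noout -subject -dates
```

Checking the subject before the ACM import catches a mistyped Common Name early; a mismatch with the Route 53 record otherwise surfaces later as certificate errors on the client.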

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

  1. Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.
    • HAProxyInstanceSize – The EC2 instance size for the HAProxy servers. The default size is t2.micro and can scale up for large Simple AD environments.
    • MyKeyPair – The SSH key pair for EC2 instances. If you do not have an existing key pair, you must create one.
    • VPCId – The target VPC for this solution. It must be the VPC where you deployed Simple AD and is available in your Simple AD directory details page.
    • SubnetId1 – The Simple AD primary subnet. This information is available in your Simple AD directory details page.
    • SubnetId2 – The Simple AD secondary subnet. This information is available in your Simple AD directory details page.
    • MyTrustedNetwork – The trusted network Classless Inter-Domain Routing (CIDR) range allowed to connect to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.
    • SimpleADPriIP – The primary Simple AD server IP. This information is available in your Simple AD directory details page.
    • SimpleADSecIP – The secondary Simple AD server IP. This information is available in your Simple AD directory details page.
    • LDAPSCertificateARN – The Amazon Resource Name (ARN) for the SSL/TLS certificate. This information is available in the ACM console.
  2. Enter the input parameters and choose Next.
  3. On the Options page, accept the defaults and choose Next.
  4. On the Review page, confirm the details and choose Create. The stack will be created in approximately 5 minutes.

4. Create a Route 53 record

The next step is to create a Route 53 record in your private hosted zone so that clients can resolve your LDAPS endpoint.

  1. If you do not have an existing DNS domain for use with LDAP, create a private hosted zone and associate it with your VPC. The hosted zone name should be consistent with your Simple AD (for example, corp.example.com).
  2. When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.
  3. On the Route 53 console, choose Hosted Zones and then choose the zone matching the Common Name you used for your self-signed certificate. Choose Create Record Set and enter the following information:
    1. Name – The label of the record (such as ldap).
    2. Type – Leave as A – IPv4 address.
    3. Alias – Choose Yes.
    4. Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.
  4. Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
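The console steps above can also be sketched as a single AWS CLI call. This is an illustrative example with placeholder IDs; note that the HostedZoneId inside AliasTarget must be the ELB's canonical hosted zone ID (which varies by region), not the ID of your Route 53 hosted zone.

```shell
# Hypothetical CLI equivalent of the console steps above (all IDs are placeholders).
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLEZONE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "ldap.corp.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "ZELBEXAMPLEID",
          "DNSName": "internal-my-ldaps-elb-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```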

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

  1. Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.
  2. SSH into the instance and complete the following steps to verify access.
    1. Install the openldap-clients package and any required dependencies:
      sudo yum install -y openldap-clients
    2. Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.
    3. Edit the /etc/openldap/ldap.conf file and define the environment variables BASE, URI, and TLS_CACERT.
      • The value for BASE should match the configuration of the Simple AD directory name.
      • The value for URI should match your DNS alias.
      • The value for TLS_CACERT is the path to your public certificate.

Here is an example of the contents of the file.

BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt
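As a quick sanity check before running ldapsearch (an illustrative sketch, not part of the original walkthrough), you can write the three directives to a file and confirm each one is present. The path /tmp/ldap.conf is used here only so the example is self-contained; on the client the file is /etc/openldap/ldap.conf.

```shell
# Write the example client configuration (illustrative; adjust paths and names).
cat > /tmp/ldap.conf <<'EOF'
BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt
EOF

# Confirm each required directive is present.
for key in BASE URI TLS_CACERT; do
  grep -q "^${key} " /tmp/ldap.conf && echo "${key} OK" || echo "${key} MISSING"
done
```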

To test the solution, query the directory through the LDAPS endpoint, as shown in the following command. Replace corp.example.com with your domain name and use the Administrator password that you configured with the Simple AD directory.

$ ldapsearch -D "Administrator@corp.example.com" -W sAMAccountName=Administrator

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

# extended LDIF
#
# LDAPv3
# base <dc=corp,dc=example,dc=com> (default) with scope subtree
# filter: sAMAccountName=Administrator
# requesting: ALL
#

# Administrator, Users, corp.example.com
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
description: Built-in account for administering the computer/domain
instanceType: 4
whenCreated: 20170721123204.0Z
uSNCreated: 3223
name: Administrator
objectGUID:: l3h0HIiKO0a/ShL4yVK/vw==
userAccountControl: 512
…
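If you script against ldapsearch output, note that LDIF is line-oriented "attribute: value" text, so standard text tools work on it. The following sketch (illustrative only, using a trimmed copy of the sample response above) pulls a single attribute out of a saved response:

```shell
# Save a trimmed copy of the sample LDIF response for demonstration.
cat > /tmp/admin.ldif <<'EOF'
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
name: Administrator
userAccountControl: 512
EOF

# Extract the value of the dn attribute (first match only).
awk -F': ' '$1 == "dn" { print $2; exit }' /tmp/admin.ldif
```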

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, the OpenLDAP client tools (such as ldapsearch, ldapadd, and ldapmodify) are a good starting point.

Troubleshooting

If you receive an error such as the following error when issuing the ldapsearch command, there are a few things you can do to help identify issues.

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
  • You might be able to obtain additional error details by adding the -d1 debug flag to the ldapsearch command in the previous section.
    $ ldapsearch -D "Administrator@corp.example.com" -W sAMAccountName=Administrator -d1

  • Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that all parameters can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.
    $ dig ldap.corp.example.com

  • Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.
  • Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636

  • Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

– Cameron and Jeff