Tag Archives: Facebook

HBO Goes After ‘Online’ Pirates in the Caribbean

Post Syndicated from Ernesto original https://torrentfreak.com/hbo-goes-after-online-pirates-in-the-caribbean-170225/

HBO’s Latin American subsidiary, HBO LA, is not happy with the rampant piracy taking place in the Caribbean.

Earlier this month the company submitted its latest 301 ‘watch list’ submission to the U.S. Government, urging the authorities to take appropriate action.

HBO is steadily expanding its services to the Caribbean and Central American regions. However, its efforts to roll out legitimate services are frustrated by local pirates. And these aren’t just individual pirates; large cable operators are in on it too.

“…a lack of enforcement by Caribbean and Central American governments is allowing local cable operators to build substantial enterprise value by increasing their subscriber base through offering pirated content,” HBO LA writes (pdf).

The same goes for hotels, which treat their visitors to prime HBO programming without paying a proper license.

“In addition to piracy by large cable providers, non-U.S. owned hotel chains on a variety of islands are known to pirate content exclusively licensed to HBO LA by using their own onsite facilities or obtaining service from cable operators who pirate,” HBO LA informs the government.

Piracy by cable operators and hotels is not new. HBO has reported these issues to the authorities before, but thus far little has changed. In the meantime, however, the company has started to notice another worrying trend.

Online piracy has started to become more prevalent, with many stores now selling IPTV boxes and other devices that allow users to access HBO content without permission.

“In the past year, HBO LA continued to see a significant increase in the problem of online piracy of its service throughout all of HBO LA’s territory,” HBO LA writes.

“In the Caribbean, several brick-and-mortar stores customarily sell Roku or generic Android set-top devices (like the Mag250, Avov, and the MXIII) preinstalled with an unlicensed streaming service and offering a few hundred channels of content, including content for which HBO LA holds exclusive license in the territory.”

A Facebook ad highlighted by HBO LA

The company lists various examples of stores that offer these kinds of products, including the Gizmos and Gadgets Electronics store in Guyana. This store sells Roku devices with an unlicensed streaming service called “ROKU TV” pre-installed.

By selling “pirate” subscriptions to thousands of customers, the company is making over a million dollars per year, HBO estimates. And more recently, the same store started to sell a subscription-less service as well.

“Additionally, Gizmos and Gadgets Electronics has recently started offering a second integrated hardware and service device known as the Gizmo TV BOX, which offers over 200 channels with no monthly fee,” HBO LA writes.

This is just one of many examples listed by HBO’s Latin American subsidiary.

HBO LA says it has already taken various steps to stop the different types of infringement, but hopes that U.S. authorities will help out where local governments fail. Toward the end of its submission, the company encourages the United States Trade Representative to apply appropriate pressure and threats to turn the tide.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Facebook and Foxtel Team Up to Crack Down on Live Streaming Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/facebook-and-foxtel-team-up-to-crack-down-on-live-streaming-piracy-170213/

A week ago hundreds of thousands of people watched unauthorized Facebook live streams of a highly anticipated rematch between two Aussie boxers.

Pay TV channel Foxtel, which secured the broadcasting rights for the event, was outraged by the blatant display of piracy and vowed to take the main offenders to court.

This weekend, however, things had calmed down a bit. Foxtel did indeed reach out to the culprits, some of whom had more than 100,000 people watching their unauthorized Facebook streams. The company decided to let them off the hook if they published a formal apology.

Soon after, the two major streaming pirates in this case both admitted their wrongdoing in similarly worded messages.

“Last Friday I streamed Foxtel’s broadcast of the Mundine v Green 2 fight via my Facebook page to thousands of people. I know that this was illegal and the wrong thing to do,” streamer Brett Hevers wrote.

“I unreservedly apologize to Anthony Mundine and Danny Green, to the boxing community, to Foxtel, to the event promoters and to everyone out there who did the right thing and paid to view the fight. It was piracy, and I’m sorry.”

But that doesn’t mean that streaming piracy is no longer an issue. Quite the contrary. Instead of investing time and money in legal cases, Foxtel is putting its efforts into stopping future infringements.

In an op-ed for the Herald Sun, Foxtel CEO Peter Tonagh likens piracy to stealing, a problem that’s particularly common Down Under.

“It is no less of a crime than stealing a loaf of bread from a supermarket or sneaking into a movie theater or a concert without paying. Yet, as a nation, Australians are among the worst offenders in the world,” Tonagh writes.

Foxtel’s CEO sees illegal live streaming as the third wave of piracy, following earlier trends of smart card cracking and file-sharing. The Facebook piracy fest acted as a wake-up call and Tonagh says the company will do everything it can to stop it from becoming as common as the other two.

“Rest assured we will work even harder to address this piracy before it gets out of control. The illegal streaming of the Mundine v Green fight nine days ago was a wake-up call. It was the first time that Foxtel had experienced piracy of a live event on a mass scale,” he notes.

Over the past several days, Foxtel and Facebook have been working on a new technology that should be able to recognize pirated streams automatically and pull them offline soon after they start. This sounds a lot like YouTube’s Content ID system, but for live broadcasts.

“We are working on a new tool with Facebook that will allow us to upload a large stream of our events to Facebook headquarters where it can be tracked,” Tonagh tells The Australian in a paywalled article.

“If that content is matched on users’ accounts where it’s being streamed without our authorisation then Facebook will alert us and pull it down,” he adds.
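Neither company has detailed how the matching actually works, but the description above (upload a reference copy of the event, compare it against users’ live streams, then alert and pull matches) suggests reference-based fingerprint matching. Below is a purely illustrative Python sketch of that idea; a production system would use perceptual audio/video fingerprints that survive re-encoding and camera capture, not the exact hashing used here, and the function names are invented for the example:

```python
# Illustrative sketch only: reference-based stream matching, Content-ID style.
import hashlib

def fingerprint(window: bytes) -> str:
    """Stand-in for a real perceptual audio/video fingerprint."""
    return hashlib.sha256(window).hexdigest()[:16]

def build_reference(feed: list[bytes]) -> set[str]:
    """Fingerprint every window of the licensed broadcast."""
    return {fingerprint(w) for w in feed}

def matches(candidate: list[bytes], reference: set[str], threshold: float = 0.8) -> bool:
    """Flag a user stream whose windows overwhelmingly match the reference."""
    hits = sum(fingerprint(w) in reference for w in candidate)
    return hits / len(candidate) >= threshold

reference = build_reference([b"frame-%d" % i for i in range(100)])
rebroadcast = [b"frame-%d" % i for i in range(10, 60)]   # a pirated chunk
original = [b"cat-video-%d" % i for i in range(50)]       # unrelated content
print(matches(rebroadcast, reference))  # True -> alert and pull the stream
print(matches(original, reference))     # False -> leave it alone
```

The threshold exists because a matching system must tolerate dropped frames and partial overlap without flagging unrelated videos.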

The initiative will be welcomed by other rightsholders, who face the same problem. An option to have Facebook recognize infringing content on the fly is likely to make it much easier to stop these streams from going viral.

That said, live streaming piracy itself is much broader and not particularly new. Dozens of niche pirate sites have been offering unauthorized streams for many years already, and they’re not going anywhere anytime soon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

КЗП (Bulgaria’s Consumer Protection Commission) Asks Why SMS Messages in Cyrillic Cost More Than in Latin Script

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/rUccfhWEzQw/blog-post_9.html

The КЗП is demanding an explanation for why it is illiterate and doesn’t understand how code tables (not encryption tables) work. The operators are finding it hard to answer in printable language, and without recommending a return to school.
For the other illiterates out there, I’ll explain as well.
A single SMS carries 140 bytes of data. Why 140 bytes is a separate question, and not an important one here.
An SMS can transport anything, from text messages to system information and even data. One of the early implementations of the WAP protocol ran over SMS.
What the sender and the recipient exchange over SMS is irrelevant to the operator. It takes no interest in the content of a message, nor does it change how the message is transmitted depending on that content. Doing so would require infrastructure that would multiply the cost of processing an SMS, and with it the price for end users.
Only the end devices (the terminals/phones) choose how a message is encoded and what it contains.
If the end devices use Concatenated SMS/EMS (extra information added to the text of your SMS that says how the text is encoded and whether several short SMS messages are linked together), you can send text containing any characters from the Unicode code tables, as well as messages longer than a single SMS, and even files and pictures. They are simply split into messages of 140 bytes each, into which a header is placed that says how the characters and data inside are encoded and how the several SMS parts are linked.
This is not the operator’s choice; again, it is the configuration and choice of the handset.
The handset decodes the bytes in the SMS into text according to its default code table.
Now, when you send something in Cyrillic, the message is typically encoded in UCS-2 (again, this is the phones’ choice, not the operators’; for compatibility, though, both communicating phones must make the same choice, so if you change the settings the recipient may not understand you). In other words, basic Latin characters can be packed with the GSM 7-bit alphabet, but Cyrillic, Greek, and extended Latin characters, Chinese characters, mathematical symbols, and so on force the whole message into an encoding that uses 2 bytes per character.
So if you write an SMS using only the basic Latin alphabet you can fit 160 characters, but if you write only in Cyrillic letters, you get 70 characters per message.
If Concatenated SMS/EMS kicks in (it is enabled automatically the moment you type a longer text: over 160 Latin characters, or 70-odd Cyrillic ones), it splits your message into several short SMS parts and adds an extra header to each, identifying, for example, that these parts make up one message and in what order they go.
That is, if you write a text of 160 Latin characters, it occupies exactly one SMS in transport.
But if you write a text of 160 Cyrillic characters, it takes 3 (THREE!) SMS messages to transport. The reason is that the Concatenated SMS header needs at least 6 bytes, so each SMS can carry only 67 two-byte characters. You need one SMS for the first 67 characters, one for the next 67, and one for the remaining 26.
So you actually use 3 SMS messages, and accordingly you pay for 3.
As for why Cyrillic is encoded with 2 bytes rather than 1, blame those who invented computers for not knowing and using Cyrillic every day.
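The arithmetic above can be checked with a short script. This is an illustrative sketch, not any operator’s billing code; it applies the standard per-segment limits (160/153 characters for the GSM 7-bit alphabet, 70/67 for UCS-2) and, for simplicity, ignores extended GSM characters that cost two septets:

```python
# Estimate how many SMS segments a text needs: 140-byte payload holds
# 160 GSM-7 or 70 UCS-2 characters; with a 6-byte concatenation header,
# each segment holds 153 or 67 characters respectively.

GSM7_BASIC = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./0123456789:;<=>?"
    "¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def sms_segments(text: str) -> int:
    """Return how many SMS segments are needed to transport `text`."""
    if all(ch in GSM7_BASIC for ch in text):
        single, per_segment = 160, 153   # GSM 7-bit alphabet
    else:
        single, per_segment = 70, 67     # any other character forces UCS-2
    if len(text) <= single:
        return 1
    return -(-len(text) // per_segment)  # ceiling division

print(sms_segments("a" * 160))   # -> 1
print(sms_segments("я" * 70))    # -> 1
print(sms_segments("я" * 160))   # -> 3
```

A single non-GSM character anywhere in the text switches the whole message to UCS-2, which is why one stray emoji or Cyrillic letter can triple the cost of a long message.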

I don’t even want to get into why the КЗП busies itself with nonsense that brings citizens no technical benefit. At present, over 80% of mobile contracts in the country effectively include an unlimited number of SMS messages, and even so, the average number of SMS messages citizens send per month shrinks by the day, now steadily below 30-40, half of which are ads and system messages. For comparison, ten years ago the average per subscriber was 3-4 times higher. SMS is a dead service, because it has been replaced by more efficient data-based services that carry far larger, multimedia-rich content (Skype, Viber, Facebook Messenger, Yahoo Messenger, WhatsApp, Snapchat, Google Hangouts/Duo/Allo, etc.).
Because of its low usage, there are discussions at the European level about taking the service out of the mandatory system services (i.e., it would no longer be required, for example, for receiving your roaming SMS). And again because of its low usage, operators bill it at a flat rate, at which point it no longer matters what one SMS costs (the answer is zero) or how many SMS messages it takes to write a profanity in Bulgarian.
And the КЗП’s effort to raise patriotic noise at election time around a pointless activity that very few people could actually benefit from is extremely interesting.

Steal This Show S02E10: In Surveillance Valley

Post Syndicated from Ernesto original https://torrentfreak.com/steal-show-s02e10-surveillance-valley/

If you enjoy this episode, consider becoming a patron and getting involved with the show. Check out Steal This Show’s Patreon campaign: support us and get all kinds of fantastic benefits!

This episode features journalist and writer Yasha Levine discussing some of the topics covered in his forthcoming book, Surveillance Valley.

Yasha argues that the biggest threat to our privacy comes not directly from the government, but via the ubiquitous corporate platforms we all use every day – including Google, Facebook, eBay and others – and the ‘data brokers’ that buy and sell the most intimate information about our lives.

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary, and analysis.

The guests for our news discussions will vary, and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Yasha Levine

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Livestream ‘Piracy Fest’ on Facebook Shut Down by Foxtel

Post Syndicated from Ernesto original https://torrentfreak.com/livestream-piracy-fest-on-facebook-shut-down-by-foxtel-lawsuits-are-coming-170204/

On Friday evening millions of Australians were tuning in to the long-awaited rematch between Australian boxers Anthony Mundine and Danny Green.

Those who wanted to watch it live couldn’t do so cheaply, as it was streamed exclusively by the pay TV provider Foxtel for AUS$59.95.

However, the Internet wouldn’t be the Internet if people didn’t try to find ways around this expensive ‘roadblock.’ And indeed, as soon as the broadcast started tens of thousands of people tuned into unauthorized live streams, including several homebrew re-broadcasts through Facebook.

While it’s not uncommon for unauthorized sports streams to appear on social media, the boxing match triggered a true piracy fest. At one point more than 150,000 fans streamed a feed that was shown from the account of Facebook user Darren Sharpe, who gained instant fame.

Unfortunately for him, this didn’t go unnoticed by rightsholders. Foxtel was quick to track down Mr. Sharpe and rang him up during the match, a call the Facebook streamer recorded and later shared on YouTube.


“Sorry mate, I just had to chuck that on mute. So you want me to turn off my Foxtel because I can’t stream it?” Darren asked the Foxtel representative.

“No. I want you to stop streaming it on Facebook. Just keep watching the fight at home, there’s no dramas with that. Just don’t stream it on Facebook,” the Foxtel rep replied.

“Mate, I’ve got 78,000 viewers here that aren’t going to be happy with you. I just don’t see why it’s [not] legal. I’m not doing anything wrong, mate. What can you do to me?” Darren said in response.

“It’s a criminal offence against the copyright act, mate. We’ve got technical protection methods inside the box so exactly this thing can’t happen,” the representative replied.


Mr. Sharpe didn’t seem to be very impressed by the allegations, but Foxtel soon showed how serious it was. Since Facebook didn’t turn off the infringing streams right away, the pay TV provider decided to display customers’ account numbers on the video streams so that it could identify and disable the associated feeds.

According to Foxtel CEO Peter Tonagh, the streamers in question will soon face legal action. This means that the “free” streaming bonanza could turn out to be quite expensive after all.

ABC reports that Brett Hevers, another Facebook user whose unauthorized broadcast reached more than 150,000 people at its peak, believes he has done nothing wrong.

“I streamed the Mundine and Green fight mainly just so a few mates could watch it. A few people couldn’t afford the fee or didn’t have Foxtel so I just thought I’d put it up for them,” Hevers said.

“All of a sudden 153,000 people I think at the peak were watching it,” he adds.

Anticipating significant legal bills, fellow Facebook streamer Darren Sharpe has already decided to start a GoFundMe campaign to cover the cost. At the time of writing, the campaign has already reached over a quarter of the $10,000 goal.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

In defense of the doomsday-minded super-rich

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/02/in-defense-of-doomsday-minded-super-rich.html

Several days ago, The New Yorker ran a lengthy story titled
“Doomsday prep for the super-rich”. The article revealed that some of Silicon Valley’s most successful entrepreneurs – including execs from Yahoo, Facebook, and Reddit – are planning ahead for extreme emergencies, “Doomsday Preppers” style.

The article made quite a few waves in the tech world – and invited near-universal ridicule and scorn. People sneered at the comical excess of blast-proof bunkers and bug-out helicopters, as if a cataclysm that kills off most of us would somehow spare the nouveaux riches. Hardcore survivalists gleefully proclaimed that the highly-paid armed guards would turn on their employers to save their own families. Conservatives rolled their eyes at the story of a VC who is preparing for the collapse of the civilized world but finds guns a bit too icky for his taste. Progressives were repulsed by the immorality of the world’s wealthiest people wanting to hide when the angry underclass finally takes to the streets. In short, no matter where you stood, the story had something for you to hate.

My first instinct was to join the fray; if nothing else, it’s cathartic to have fun at the expense of people who are far wealthier and far more powerful than we could ever be. Sure, I have written about the merits of common-sense emergency preparedness, but to me, it meant having a rainy day fund and a fire extinguisher, not holing up in a decommissioned ICBM silo with 10,000 rounds of ammo and a pallet of canned cheese. Now hold my beer and let me throw the first stone!

But then, I realized that the article in The New Yorker is a human interest piece; it is meant to entertain us and has no other reason to exist. The author is trying to show us a dazzling world that is out of ordinary and out of reach. It may be that the profiled execs spend most of their time planning ahead for far more pedestrian risks, but no sane newspaper would publish a multi-page expose about the brand of fire extinguishers or tarp favored by the ultra-rich. The readers want to read about helicopters and ICBM silos instead – and so the author obliges.

It is also a fallacy to look at the cost of purchases outlined in the article in absolute terms. For us, spending $5M on a luxury compound and a helicopter may seem insane – but for a person with hundreds of millions in the bank, such an investment would be just 1% of their wealth – a reasonable price to pay for insurance against unlikely but somewhat plausible risks. In terms of relative financial impact, it is no different than a person with $10k in the bank spending $100 on a fire extinguisher and some energy bars – hardly a controversial thing to do.

What’s more, although we’re living in a period of unprecedented prosperity and calm, there’s no denying that in the history of mankind, revolutions happen with near-clockwork regularity. We had quite a few in the past one hundred years alone – and when the time comes, it’s usually the heads of the variously-defined aristocracy that roll. Angry mobs are unlikely to torch Joe Prepper’s cookie-cutter suburban neighborhood, but being near the top of the social ladder carries some distinct risk. We can have a debate about the complicity of the elites, or the comparative significance of this risk versus the plight of other social classes – but either way, the paranoia of the rich may be more grounded in reality than it seems.

Of course, an argument can be made that it is immoral for the super-rich to prepare for the collapse of society when their wealth could be better spent on trying to bridge the income gap or otherwise make the world a more harmonious place. It is an interesting claim, but it rings a bit hollow to me. We would not deny the rich the right to buy a fire extinguisher or a bug-out bicycle; our outrage is rather conveniently reserved for the purchases we can’t afford. But more importantly, prepping and philanthropy are not mutually exclusive; in fact, I suspect that some of the folks mentioned in the article spend far more on trying to help the less fortunate than they do on canned cheese. Whether this can make a difference, and whether they should be doing more, is a different story.

Some (Un)open Data on Smoking in Bars and Restaurants

Post Syndicated from Боян Юруков original http://yurukov.net/blog/2017/spravki-za-pusheneto-v-zavedenia/

No, this time I’m not going to criticize the institutions’ transparency or use data to show what the situation is. This time it’s exactly the opposite. In January 2016 I sent a request under ЗДОИ (Bulgaria’s Access to Public Information Act) to every РЗИ (Regional Health Inspectorate) in the country with the following questions:

  1. A breakdown of the smoking complaints received in 2013, 2014, and 2015, split by public building or food-service establishment, with a monthly count for each location
  2. A breakdown of all inspections for compliance with the smoking ban in 2013, 2014, and 2015, by public building or food-service establishment, the date of the inspection, and whether a violation was found
  3. A breakdown of the sanctions imposed for violations of the smoking ban in 2013, 2014, and 2015, split by public building or food-service establishment


All the РЗИ offices responded on time and provided, to a greater or lesser extent, the information I asked for. Because the formats, precision, and shape of the data varied, unifying it was a problem. Some sent me spreadsheets broken down by type of business; others, just photos of tables with yearly totals. I had sat down to consolidate them, but never finished the combined table.

I had completely forgotten about this data until a few days ago, when a Facebook status reminded me. It’s silly that I haven’t used it for a whole year. Clearly, if I haven’t consolidated it by now, I won’t do it any time soon. So I’m releasing it freely under a CC0 license, exactly as I received the responses. Download it and use it however you see fit. I’d be glad if you posted links to your articles, charts, or conclusions in the comments. I hope you find it useful.

If you want to send a request about the complaints and inspections in 2016, here is a table with the РЗИ email addresses. Remember that requests under ЗДОИ do not need to be electronically signed, and you have the right to require that all communication happen over email. So far they have answered all my inquiries politely and on time, so I have no criticism of their transparency in that respect.

Hacker House Smartphone-Connected Door Lock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/smartphone-connected-door-lock/

The team at YouTube channel Hacker House always deliver when it comes to clear, detailed tutorials, and their newest project, ‘How to Make a Smartphone-Connected Door Lock’, is no exception.

HackerHouse Raspberry Pi Door Lock

Using a Raspberry Pi-powered deadbolt actuator, multiple users can remotely unlock a door via a smartphone app.

The build can be attached to your existing lock, so there’s no need to start pulling out the inner workings of your door.


The app will also notify you when the door has been unlocked, offering added peace of mind when you’re away from home.

For a full run-through, check out their video below.

How to Make a Smartphone Connected Door Lock

In this video, we show you how to make a smartphone-controlled, internet-connected deadbolt actuator powered by a Raspberry Pi that can be added onto your existing door lock without any modifications to the door. The door lock can be controlled by multiple smartphones, and even notify you whenever someone locks/unlocks the door.
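The flow described above (several authorized smartphones, lock/unlock requests, and a notification on every change) can be sketched as a small state machine. This is not Hacker House’s actual code; the `actuator` and `notify` hooks are hypothetical stand-ins for the Pi’s GPIO motor driver and the app’s push-notification channel:

```python
# Minimal sketch of the server-side logic such a lock might run on the Pi.
class DoorLock:
    def __init__(self, authorized_tokens, actuator, notify):
        self.authorized = set(authorized_tokens)  # one token per smartphone
        self.actuator = actuator                  # callable: drives the deadbolt
        self.notify = notify                      # callable: pushes app alerts
        self.locked = True

    def request(self, token: str, action: str) -> bool:
        """Handle a lock/unlock request from a phone; return success."""
        if token not in self.authorized:
            return False                          # unknown phone: refuse
        self.locked = (action == "lock")
        self.actuator(self.locked)                # move the deadbolt
        self.notify(f"Door {'locked' if self.locked else 'unlocked'} by {token}")
        return True

events = []
lock = DoorLock({"alice-phone", "bob-phone"},
                actuator=lambda locked: None,     # no-op in this sketch
                notify=events.append)
print(lock.request("alice-phone", "unlock"))      # True
print(lock.request("stranger", "unlock"))         # False
print(events)                                     # ['Door unlocked by alice-phone']
```

On real hardware the `actuator` callable would drive the 3D-printed gear assembly via GPIO, and `notify` would go through whatever push service the app uses.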

You’ll need access to a 3D printer for some of the parts and, as a way to support their growing channel, the team provide printed parts for sale on eBay.

You may also wish to check out their other Raspberry Pi projects. They’ve made a lot of cool things, including a Facebook Chatbot, a Portable Arcade Console, a Smart Mirror, and a Motion-Tracking Nerf Turret.

How to Make a Raspberry Pi Motion Tracking Airsoft / Nerf Turret

In this video we show you how to build a DIY motion tracking airsoft (or nerf gun) turret with a raspberry pi 3. The airsoft turret is autonomous so it moves and fires the gun when it detects motion. There is also an interactive mode so that you can control it manually from your keyboard.

And in celebration of hitting 50k subscribers, the team are giving away two Raspberry Pis! Just subscribe to their channel and tell them how you would use one in your own project to be in with a chance of winning.

If you have built your own Raspberry Pi-powered lock or security system, we’d love to see it. So go ahead and share it in the comments below, or post it across social media, remembering to tag us in the process.

The post Hacker House Smartphone-Connected Door Lock appeared first on Raspberry Pi.

BREIN Shuts Down ‘Pirate Cinema’ on Facebook

Post Syndicated from Ernesto original https://torrentfreak.com/brein-shuts-down-pirate-cinema-on-facebook-170130/

In this day and age, online piracy is perhaps more scattered than it’s ever been.

Torrent sites, streaming services, cyberlockers, mobile apps, linking sites and many more are all labeled as infringing sources.

But, the piracy problem is not restricted to ‘shady’ sites and services alone. On many ‘legal’ platforms there’s a wide availability of copyright infringing material as well, Facebook included.

While anyone can casually post an infringing video or song on Facebook, there are some who dedicate entire pages to it. This was also the case for the Dutch page “LiveBioscoop” (LiveCinema) which was started by a 23-year-old man from Rotterdam.

As the name suggests, the page regularly streamed movies online with help from Facebook’s own live streaming service. In a relatively short period, it amassed over 25,000 followers who could regularly vote on which movies the ‘cinema’ should stream next.

The page’s popularity spilled over into the Dutch press last week, with the AD reporting on the unusual activity of LiveBioscoop and a similar page, Livebios. Commenting on the issue, anti-piracy group BREIN said it would investigate, and not without result.

The operator of the Facebook page was quickly confronted by the anti-piracy group. Facing an ex-parte court order from a local court, the man agreed to stop the infringing activities and sign a settlement of €7,500. While the Facebook page itself is still online, infringements have stopped.

BREIN director Tim Kuik says the group decided to go to court straight away due to the gravity of the issue.

“This is just stealing revenue from cinemas and rightsholders. It has to end as soon as possible. That is why we have opted for an ex parte injunction with a penalty, instead of first issuing a summons,” Kuik says.

The other ‘pirate cinema’ on Facebook wasn’t mentioned by BREIN, but is no longer available at the time of writing. It seems likely that the operator of this page decided to stop voluntarily to avoid further problems.

Instead of simply cracking down on all these pages, copyright holders could also learn from them. As it turns out, many LiveBioscoop users sincerely enjoyed and appreciated the social cinema experience, which may prove to be an interesting opportunity.

“LiveBioscoop has to stay. It feels better and is more fun that way. People can talk. Netflix is just like, I watch a movie and that was it. Since I found LiveBioscoop I no longer watched Netflix movies,” one follower commented.

While this is the first time that we have seen a settlement with a Facebook live streamer, movie piracy is relatively common on the social network. There are still dozens, if not hundreds, of popular pages dedicated to pirated movies and TV shows.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BitChute is a BitTorrent-Powered YouTube Alternative

Post Syndicated from Andy original https://torrentfreak.com/bitchute-is-a-bittorrent-powered-youtube-alternative-170129/

YouTube attracts over a billion visitors every month, with many flocking to the platform to view original content uploaded by thousands of contributors. However, those contributors aren’t completely free to upload and make money from whatever they like.

Since it needs to please its advertisers, YouTube has rules in place over what kind of content can be monetized, something which caused a huge backlash last year alongside claims of censorship.

But what if there was an alternative to YouTube, one that doesn’t impose the same kinds of restrictions on uploaders? Enter BitChute, a BitTorrent-powered video platform that seeks to hand freedom back to its users.

“The idea comes from seeing the increased levels of censorship by the large social media platforms in the last couple of years. Bannings, demonetization, and tweaking algorithms to send certain content into obscurity, and wanting to do something about it,” BitChute founder Ray Vahey informs TorrentFreak.

“I knew building a clone wasn’t the answer, many have tried and failed. And it would inevitably grow into an organization with the same problems anyway.”

As seen in the image below, the site has a familiar layout for anyone used to YouTube-like video platforms. It has similar video controls, view counts, and the ability to vote on content. It also has a fully-functioning comment section.

bitchute

Of course, one of the main obstacles for video content hosting platforms is the obscene amounts of bandwidth they consume. Any level of success is usually accompanied by big hosting bills. But along with its people-powered philosophy, BitChute does things a little differently.

Instead of utilizing central servers, BitChute uses WebTorrent, a system which allows people to share videos directly from their browser, without having to configure or install anything. Essentially this means that the site’s users become hosts of the videos they’re watching, which slams BitChute’s hosting costs into the ground.

“Distributed systems and WebTorrent invert the scalability advantage the Googles and Facebooks have. The bigger our user base grows, the more efficiently it can serve while retaining the simplicity of the web browser,” Vahey says.

“Also by the nature of all torrent technology, we are not locking users into a single site, and they have the choice to retain and continue sharing the files they download. That puts more power back in the hands of the consumer where it should be.”
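Vahey’s scaling point can be illustrated with a toy model: if each viewer re-uploads roughly as much as it downloads, the share of traffic the origin server must carry shrinks as the audience grows. The numbers below are an assumption-laden sketch, not a measurement of BitChute:

```python
# Toy model of peer-assisted delivery: each viewer needs one copy of the
# video, and every earlier viewer can re-upload `upload_ratio` copies,
# so the origin only has to cover the shortfall (at least one seed copy).

def origin_share(viewers: int, upload_ratio: float = 1.0) -> float:
    """Fraction of total chunk transfers the origin server must carry."""
    total_demand = viewers
    peer_supply = max(viewers - 1, 0) * upload_ratio
    served_by_origin = max(total_demand - peer_supply, 1)  # origin seeds >= 1 copy
    return served_by_origin / total_demand

for n in (1, 2, 10, 1000):
    print(n, round(origin_share(n), 3))
```

With a lone viewer the origin carries everything; with a thousand viewers sharing symmetrically, it carries a fraction of a percent. A centralized host’s costs instead grow linearly with the audience, which is the inversion Vahey describes.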

The only hints that BitChute is using peer-to-peer technology are the peer counts under each video and a short delay before a selected video begins to play. This is necessary for the system to find peers but thankfully it isn’t too intrusive.

As far as we know, BitChute is the first attempt at a YouTube-like platform that leverages peer-to-peer technology. It’s only been in operation for a short time but according to its founder, things are going well.

“As far as I could tell, no one had yet run with this idea as a service, so that’s what myself and few like-minded people decided. To put it out there and see what people think. So far it’s been an amazingly positive response from people who understand and agree with what we’re doing,” Vahey explains.

“Just over three weeks ago we launched with limited upload access on a first come first served basis. We are flat out busy working on the next version of the site; I have two other co-founders based out of the UK who are supporting me, watch this space,” he concludes.

Certainly, people will be cheering the team on. Last September, popular YouTuber Bluedrake experimented with WebTorrent to distribute his videos after becoming frustrated with YouTube’s policies.

“All I want is a site where people can say what they want,” he said at the time. “I want a site where people can operate their business without having somebody else step in and take away their content when they say something they don’t like.”

For now, BitChute is still under development, but so far it has impressed Feross Aboukhadijeh, the Stanford University graduate who invented WebTorrent.

“BitChute is an exciting new product,” he told TF this week. “This is exactly the kind of ‘people-powered’ website that WebTorrent technology was designed to enable. I’m eager to see where the team takes it.”

BitChute can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Researchers Issue Security Warning Over Android VPN Apps

Post Syndicated from Andy original https://torrentfreak.com/researchers-issue-security-warning-over-android-vpn-apps-170125/

There was a time when the Internet was a fairly straightforward place to navigate, with basic software, basic websites and few major security issues. Over the years, however, things have drastically changed.

Many people now spend their entire lives connected to the web in some way, particularly via mobile devices and apps such as Facebook and the countless thousands of others now freely available online.

For some users, the idea of encrypting their traffic has become attractive, from both a security and anti-censorship standpoint. On the one hand people like the idea of private communications and on the other, encryption can enable people to bypass website blocks, wherever they may occur and for whatever reason.

As a result, millions are now turning to premium VPN packages from reputable companies. Others, however, prefer to use the all-in-one options available on Google’s Play store, but according to a new study, that could be a risky strategy.

A study by researchers at CSIRO’s Data 61, University of New South Wales, and UC Berkeley, has found that hundreds of VPN apps available from Google Play presented significant security issues including malware, spyware, adware and data leaks.

Very often, users look at the number of downloads combined with the ‘star rating’ of apps to work out whether they’re getting a good product. However, the researchers found that among the 283 apps tested, even the highest ranked and most-downloaded apps can carry nasty surprises.

“While 37% of the analyzed VPN apps have more than 500K installs and 25% of them receive at least a 4-star rating, over 38% of them contain some malware presence according to VirusTotal,” the researchers write.

The five types of malware detected can be broken down as follows: Adware (43%), Trojan (29%), Malvertising (17%), Riskware (6%) and Spyware (5%). The researchers ordered the most problematic apps by VirusTotal AV-Rank, which represents the number of anti-virus tools that identified any malware activity.

The worst offenders, according to the report

The researchers found that only a marginal number of VPN users raised any security or privacy concerns in the review sections for each app, despite many of them having serious problems. The high number of downloads seems to suggest that users have confidence in them, despite their issues.

“According to the number of installs of these apps, millions of users appear to trust VPN apps despite their potential maliciousness. In fact, the high presence of malware activity in VPN apps that our analysis has revealed is worrisome given the ability that these apps already have to inspect and analyze all user’s traffic with the VPN permission,” the paper reads.

The growing awareness of VPNs and their association with privacy and security has been a hot topic in recent years, but the researchers found that many of the apps available on Google Play offer neither. Instead, they featured tracking of users by third parties while demanding access to sensitive Android permissions.

“Even though 67% of the identified VPN Android apps offer services to enhance online privacy and security, 75% of them use third-party tracking libraries and 82% request permissions to access sensitive resources including user accounts and text messages,” the researchers note.

Even from this low point, things manage to get worse. Many VPN users associate the product they’re using with encryption and the privacy it brings, but for almost one-fifth of apps tested by the researchers, the concept is alien.

“18% of the VPN apps implement tunneling protocols without encryption despite promising online anonymity and security to their users,” they write, adding that 16% of tested apps routed traffic through other users of the same app rather than utilizing dedicated online servers.

“This forwarding model raises a number of trust, security, and privacy concerns for participating users,” the researchers add, noting that only Hola admits to the practice on its website.

And when it comes to the handling of IPv6 traffic, the majority of the apps featured in the study fell short in a dramatic way. Around 84% of the VPN apps tested had IPv6 leaks while 66% had DNS leaks, something the researchers put down to misconfigurations or developer-induced errors.

“Both the lack of strong encryption and traffic leakages can ease online tracking activities performed by inpath middleboxes (e.g., commercial WiFi [Access Points] harvesting user’s data) and by surveillance agencies,” they warn.
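As a rough sketch (the field names and thresholds are my own, not the paper's), the study's findings translate into a simple red-flag triage that could be run over per-app metadata:

```python
def vpn_red_flags(app):
    """Collect the warning signs the study measured for a VPN app.
    `app` is a dict of illustrative fields; missing fields default
    to the benign value."""
    checks = {
        "malware (VirusTotal hit)":     app.get("virustotal_positives", 0) > 0,
        "no tunnel encryption":         not app.get("encrypts_tunnel", True),
        "third-party trackers":         app.get("tracking_libraries", 0) > 0,
        "peer-forwarding (Hola-style)": app.get("routes_via_peers", False),
        "IPv6 leak":                    app.get("ipv6_leak", False),
        "DNS leak":                     app.get("dns_leak", False),
    }
    return [name for name, flagged in checks.items() if flagged]

suspect = {"virustotal_positives": 3, "encrypts_tunnel": False, "ipv6_leak": True}
print(vpn_red_flags(suspect))
# ['malware (VirusTotal hit)', 'no tunnel encryption', 'IPv6 leak']
```

Any non-empty result would be grounds to avoid the app; per the study, a startling share of the 283 apps tested would trip at least one of these checks.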

While the study (pdf) is detailed, it does not attempt to rank any of the applications tested, other than showing a table of some of the worst offenders. From the perspective of the consumer looking to install a good VPN app, that’s possibly not as helpful as they might like.

Instead, those looking for a VPN will have to carry out their own research online before taking the plunge. Sticking with well-known companies that are transparent about their practices is a great start. And, if an app requests access to sensitive data during the install process for no good reason, get rid of it. Finally, if it’s a free app with a free service included, it’s a fair assumption that strings may be attached.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

China Bans Unauthorized VPN Services in Internet Crackdown

Post Syndicated from Andy original https://torrentfreak.com/china-ban-unauthorized-vpn-services-in-internet-crackdown-170123/

While the Internet is considered by many to be the greatest invention of modern time, to others it presents a disruptive influence that needs to be controlled.

Among developed nations nowhere is this more obvious than in China, where the government seeks to limit what citizens can experience online. Using technology such as filters and an army of personnel, people are routinely barred from visiting certain websites and engaging in activity deemed as undermining the state.

Of course, a cat-and-mouse game is continuously underway, with citizens regularly trying to punch through the country’s so-called ‘Great Firewall’ using various techniques, services, and encryption technologies. Now, however, even that is under threat.

In an announcement yesterday from China’s Ministry of Industry and Information Technology, the government explained that due to Internet technologies and services expanding in a “disorderly” fashion, regulation is needed to restore order.

“In recent years, as advances in information technology networks, cloud computing, big data and other applications have flourished, China’s Internet network access services market is facing many development opportunities. However, signs of disorderly development show the urgent need for regulation norms,” MIIT said.

In order to “standardize” the market and “strengthen network information security management,” the government says it is embarking on a “nationwide Internet network access services clean-up.” It will begin immediately and continue until March 31, 2018, with several aims.

All Internet services such as data centers, ISPs, CDNs and much-valued censorship-busting VPNs, will need to have pre-approval from the government to operate. Operating such a service without a corresponding telecommunications business license will constitute an offense.

“Internet data centers, ISP and CDN enterprises shall not privately build communication transmission facilities, and shall not use the network infrastructure and IP addresses, bandwidth and other network access resources…without the corresponding telecommunications business license,” the notice reads.

It will also be an offense to possess a business license but then operate outside its scope, such as by exceeding its regional boundaries or by operating other Internet services not permitted by the license. Internet entities are also forbidden to sub-lease to other unlicensed entities.

In the notice, VPNs and similar technologies have a section all to themselves and are framed as “cross-border issues.”

“Without the approval of the telecommunications administrations, entities can not create their own or leased line (including a Virtual Private Network) and other channels to carry out cross-border business activities,” it reads.

The notice, published yesterday, renders most VPN providers in China illegal, SCMP reports.

Only time will tell what effect the ban will have in the real world, but in the short-term there is bound to be some disruption as entities seek to license their services or scurry away underground.

As always, however, the Internet will perceive censorship as damage, and it’s inevitable that the most determined of netizens will find a way to access content outside China (such as Google, Facebook, YouTube and Twitter), no matter how strict the rules.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Harry Potter and the Real-life Weasley Clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/harry-potter-real-life-weasley-clock/

Pat Peters (such a wonderful Marvel-sounding name) recently shared his take on the Weasley Clock, a device that hangs on the wall of The Burrow, the rickety home inhabited by the Weasley family in the Harry Potter series.

Mrs. Weasley glanced at the grandfather clock in the corner. Harry liked this clock. It was completely useless if you wanted to know the time, but otherwise very informative. It had nine golden hands, and each of them was engraved with one of the Weasley family’s names. There were no numerals around the face, but descriptions of where each family member might be. “Home,” “school,” and “work” were there, but there was also “traveling,” “lost,” “hospital,” “prison,” and, in the position where the number twelve would be on a normal clock, “mortal peril.”

The clock in the movie has misplaced “mortal peril”, but aside from that it looks a lot like what we’d imagined from the books.

There’s a reason why more and more Harry Potter-themed builds are appearing online. The small size of devices such as the Raspberry Pi and Arduino allows a digital ‘brain’ to live within an ordinary object, giving you control over it that you could easily confuse with magic…if you allow yourself to believe in such things.

So with last week’s Real-life Daily Prophet doing so well, it’s only right to share another Harry Potter-inspired project.

Harry Potter Weasley Clock

The clock serves not to tell the time but, rather, to indicate the location of Molly, Arthur and the horde of Weasley children. And using the OwnTracks GPS app for smartphones, Pat’s clock does exactly the same thing.

Pat Peters Weasley Clock Raspberry Pi

Pat has posted the entire build on Instructables, allowing every budding witch and wizard (and possibly a curious Muggle or two) the chance to build their own Weasley Clock.

This location clock works through a Raspberry Pi that subscribes to an MQTT broker that our phones publish events to. Our phones (running the OwnTracks GPS app) send a message to the broker anytime we cross into or out of one of the waypoints that we have set up in OwnTracks, which then triggers the Raspberry Pi to run a servo that moves the clock hand to show our location.
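The core logic is small enough to sketch. This is a simplification, not Pat's actual code: the waypoint names and servo angles below are made up, and in the real build this would run inside an MQTT client callback (e.g. paho-mqtt subscribed to the OwnTracks event topic) driving the Pi's PWM pins.

```python
# Map each OwnTracks waypoint to a clock-hand angle (degrees).
# Waypoint names and angles are illustrative, not from Pat's build.
WAYPOINT_ANGLES = {
    "home": 0,
    "work": 60,
    "school": 120,
    "traveling": 180,
    "lost": 240,
    "mortal peril": 300,
}

def angle_for_event(event):
    """Translate an OwnTracks transition event into a servo angle.
    Entering a known waypoint points the hand at it; an unknown
    waypoint reads 'lost'; leaving any waypoint means 'traveling'."""
    if event.get("event") == "enter":
        return WAYPOINT_ANGLES.get(event.get("desc"), WAYPOINT_ANGLES["lost"])
    return WAYPOINT_ANGLES["traveling"]

print(angle_for_event({"event": "enter", "desc": "home"}))  # 0
print(angle_for_event({"event": "leave", "desc": "work"}))  # 180
```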

There are no words for how much we love this. Here at Pi Towers we definitely have a soft spot for Harry Potter-themed builds, so make sure to share your own with us in the comments below, or across our social media channels on Facebook, Twitter, Instagram, YouTube and G+.

The post Harry Potter and the Real-life Weasley Clock appeared first on Raspberry Pi.

WhatsApp Security Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/whatsapp_securi.html

Back in March, Rolf Weber wrote about a potential vulnerability in the WhatsApp protocol that would allow Facebook to defeat perfect forward secrecy by forcibly changing users’ keys, allowing it — or more likely, the government — to eavesdrop on encrypted messages.

It seems that this vulnerability is real:

WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users’ messages.

The security loophole was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: “If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys.”

The vulnerability is not inherent to the Signal protocol. Open Whisper Systems’ messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp’s implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
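The behavioral difference between the two clients can be sketched as policy logic. This is a deliberate simplification (real clients manage sessions and delivery receipts, not booleans), written only to make the contrast concrete:

```python
def on_key_change(undelivered_msg, policy):
    """Sketch of how a client might react when a recipient's identity
    key changes while a message is still undelivered. 'signal' blocks
    delivery and notifies the sender; 'whatsapp' re-encrypts and
    resends, notifying only if the user opted into security warnings."""
    if policy == "signal":
        return {"resend": False, "notify_sender": True}
    if policy == "whatsapp":
        return {"resend": True,
                "notify_sender": undelivered_msg.get("security_notifications", False)}
    raise ValueError(policy)

print(on_key_change({"security_notifications": False}, "whatsapp"))
# {'resend': True, 'notify_sender': False}  <- the behavior Boelter flagged
print(on_key_change({}, "signal"))
# {'resend': False, 'notify_sender': True}
```

The silent resend in the WhatsApp branch is the whole issue: the message goes out under the new key before the sender has any chance to object.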

Note that it’s an attack against current and future messages, and not something that would allow the government to reach into the past. In that way, it is no more troubling than the government hacking your mobile phone and reading your WhatsApp conversations that way.

An unnamed “WhatsApp spokesperson” said that they implemented the encryption this way for usability:

In WhatsApp’s implementation of the Signal protocol, we have a “Show Security Notifications” setting (option under Settings > Account > Security) that notifies you when a contact’s security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people’s messages are delivered, not lost in transit.

He’s technically correct. This is not a backdoor. This really isn’t even a flaw. It’s a design decision that put usability ahead of security in this particular instance. Moxie Marlinspike, creator of Signal and the code base underlying WhatsApp’s encryption, said as much:

Under normal circumstances, when communicating with a contact who has recently changed devices or reinstalled WhatsApp, it might be possible to send a message before the sending client discovers that the receiving client has new keys. The recipient’s device immediately responds, and asks the sender to reencrypt the message with the recipient’s new identity key pair. The sender displays the “safety number has changed” notification, reencrypts the message, and delivers it.

The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a “double check mark,” it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption.

The fact that WhatsApp handles key changes is not a “backdoor,” it is how cryptography works. Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system.

The only question it might be reasonable to ask is whether these safety number change notifications should be “blocking” or “non-blocking.” In other words, when a contact’s key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.

Given the size and scope of WhatsApp’s user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user’s communication, along with a simple user experience. The choice to make these notifications “blocking” would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn’t, effectively telling the server who it could MITM transparently and who it couldn’t; something that WhatsApp considered very carefully.

How serious this is depends on your threat model. If you are worried about the US government — or any other government that can pressure Facebook — snooping on your messages, then this is a small vulnerability. If not, then it’s nothing to worry about.

Slashdot thread. Hacker News thread. BoingBoing post. More here.

EDITED TO ADD (1/24): Zeynep Tufekci takes the Guardian to task for their reporting on this vulnerability. (Note: I signed on to her letter.)

Audible Magic Accuses YouTube of Fraud Over Content ID Trademark

Post Syndicated from Andy original https://torrentfreak.com/audible-magic-accuses-youtube-of-fraud-over-content-id-trademark-170111/

Automatic Content Recognition (ACR) technologies have been available to the public for a number of years. Perhaps the most visible is the mobile app Shazam, which allows users to identify the name of a song after listening to just a short clip.

The same kind of technology is famously deployed at YouTube. Its (in)famous Content ID system can spot copyrighted content uploaded by users and make a decision whether to take it down or allow rightsholders to monetize it. However, YouTube now faces a legal challenge over the Content ID trademark.

Audible Magic has been operating in the content recognition and fingerprinting market for more than fifteen years. In fact, during 2006, YouTube and Audible Magic signed an agreement which gave YouTube a license to use the latter’s content recognition technology.

Perhaps surprisingly, Audible Magic’s system was called Content ID, a term YouTube uses to this day, despite its agreement with the content recognition company being terminated in 2009.

It’s clear that Audible Magic still feels it has a claim to the name, and that is the basis of the complaint the company has just filed with the United States Patent and Trademark Office.

Audible Magic’s website

Describing itself as “the leader in automated identification of audio and visual content for web media platforms,” Audible Magic says it has worked with the biggest names in media, including Warner Bros, Sony, Disney and Facebook. As highlighted above, between 2006 and 2009 it also worked with Google.

In its complaint to USPTO, Audible Magic says that during 2006, YouTube was facing accusations that it was a “prime enabler” of copyright infringement and pirating.

“The television and movie industries were complaining that YouTube was allowing third-parties to upload copyright materials from television and movies and was not instituting any controls or checks on third-party content. This negative publicity was particularly damaging to YouTube in 2006 when Google was considering acquiring YouTube for over one billion dollars,” the company writes.

To address this problem, in October 2006 Audible Magic and Google signed an agreement for YouTube to license Audible Magic’s Content ID system. After YouTube had been bought by Google, the license was transferred to the search giant.

The agreement between the companies was terminated three years later in 2009, at which point Audible Magic says that all intellectual property rights in Content ID reverted to its control. However, Google is now attempting to gain ownership of the trademark. As shown in the image below, its application with the USPTO is ongoing and claims first use nearly eight years ago.

Google’s registration for Content ID

“According to the date of first use claimed in Google’s registration, Google asserts that it first used the Content ID mark in connection with its services on August 27, 2008,” Audible Magic writes.

“Audible Magic’s date of first use of Content ID in March 2006 is thus well before the claimed first use of the mark by Google (as it should be since Google sourced the mark and related services from Audible Magic), and Audible Magic’s use of Content ID therefore has priority over Google’s use and registration.”

While Audible Magic feels it has a claim over the name, one of its biggest concerns surrounds the confusion that is set to arise with Google using the term ‘Content ID’ in a marketplace already occupied by Audible Magic.

“Such confusion may cause harm to Audible Magic and the consuming public and jeopardize the valuable goodwill and reputation Audible Magic has built up in connection with Content ID and its services,” the company explains.

In closing, Audible Magic asks the United States Patent and Trademark Office to cancel Google’s trademark registration on the basis the company “committed fraud” in 2013 when it signed a declaration which stated that it knew of no other company entitled to use the Content ID mark in commerce.

The full complaint to the USPTO can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BREIN Reveals Anti-Piracy Tactics and Achievements

Post Syndicated from Ernesto original https://torrentfreak.com/brein-reveals-anti-piracy-tactics-and-achievements-170111/

When it comes to civil anti-piracy enforcement, BREIN is without a doubt one of the best known players in the industry.

The group, which receives support from Hollywood and other content industries, has shuttered hundreds of smaller sites in recent years and took on the likes of Mininova and The Pirate Bay.

In 2016 BREIN continued its enforcement actions in full swing. Besides targeting pirate sites throughout the world, it also increased its focus on individual uploaders of infringing content.

The group just published a detailed overview of what it accomplished over the past 12 months. This provides some clear insights into its anti-piracy priorities and offers a glimpse of what to expect in the near future.

To begin, BREIN stresses that copyright enforcement is needed to make sure that legal offerings can flourish. The main reason people pirate, it says, is that the content is free.

“This means that enforcement is essential. Not only for creation and production, but also for online and offline distribution and further investment in innovation in these areas.”

To ensure a broad impact, BREIN targets a wide range of pirate sources, services, and facilitators. While it’s impossible to make piracy go away completely, it hopes to disrupt the ecosystem enough to lower its prevalence.

“The purpose of enforcement is the disruption of illegal supply and use. BREIN therefore uses a ‘full spectrum’ approach that covers all players,” the group notes.

This strategy includes targeting websites and their hosting providers, search engines, social media, advertisers, payment providers, but also uploaders of infringing content and those who consume it.

Looking at the numbers we see that the anti-piracy group is closing the books on a productive year.

BREIN pulled 231 illegal sites and services offline, for example. This includes 84 linking sites, 63 streaming portals, and 34 torrent sites. Some of these shut down completely and others were forced to leave their hosting providers.

In addition, BREIN also identified and took action against 26 prolific uploaders, removed 18 Facebook groups where infringing content was being shared, removed 2,559,525 search results from Google, and took down 4,159 ads for illegal content.

With regards to uploaders, the anti-piracy group relied on ex-parte court orders in several cases. With these orders in hand it managed to secure several settlements. This includes actions against people who shared content on torrent sites, Usenet and Facebook.

BREIN says it always keeps the personal financial circumstances of its targets in mind when determining a settlement. While these never come cheap, this approach can certainly be seen as more reasonable than the jail sentence that was handed out in the UK for a similar offense.

The targeted uploaders and site operators were identified using a variety of forensic tools and techniques, including IP-address monitoring and requests for personal information to hosting providers or other intermediaries.

Looking ahead, BREIN plans to continue its efforts in the new year. It’s also looking forward to the conclusion of several pending cases, such as the local Pirate Bay blockade, which is currently under review by the European Court of Justice.

In addition, BREIN has started to collect the IP-addresses of pirating users. Ideally, they would like to cooperate with Internet providers to send them warnings, similar to the UK alerts system that will be rolled out soon.

“A warning system can contribute to compliance and prevent naively pirating consumers being faced with settlement claims from individual rightsholders. If compliance increases, only persistent offenders will be targeted,” BREIN writes.

More details on the full review and BREIN’s outlook for the new year are available in Dutch, on their official website.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Stack Overflow plans to survive the next DNS attack

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2017/01/09/surviving-the-next-dns-attack/

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approaches DNS:

  • By benchmarking different DNS providers and how we chose between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.

The Domain Name System

DNS had its moment in the spotlight in October 2016, with a major Distributed Denial of Service (DDoS) attack launched against Dyn, which affected Internet users’ ability to connect to some of their favourite websites, such as Twitter, CNN, imgur, Spotify, and literally thousands of other sites.

But for most systems administrators or website operators, DNS is mostly kept in a little black box, outsourced to a 3rd party, and mostly forgotten about. And, for the most part, this is the way it should be. But as you start to grow to 1.3+ billion pageviews a month with a website where performance is a feature, every little bit matters.

In this post, I’m going to explain some of the decisions we’ve made around DNS in the past, and where we’re going with it in the future. I will eschew deep technical details and gloss over low-level DNS implementation in favour of the broad strokes.

In the beginning

So first, a bit of history: In the beginning, we ran our own DNS on-premises using artisanally crafted zone files with BIND. It was fast enough when we were doing only a few hundred million hits a month, but eventually hand-crafted zonefiles were too much hassle to maintain reliably. When we moved to Cloudflare as our CDN, a service intimately coupled with DNS, we demoted our BIND boxes out of production and handed off DNS to Cloudflare.

The search for a new provider

Fast forward to early 2016 and we moved our CDN to Fastly. Fastly doesn’t provide DNS service, so we were back on our own in that regard and our search for a new DNS provider began. We made a list of every DNS provider we could think of, and ended up with a shortlist of 10:

  • Dyn
  • NS1
  • Amazon Route 53
  • Google Cloud DNS
  • Azure DNS (beta)
  • DNSimple
  • GoDaddy
  • EdgeCast (Verizon)
  • Hurricane Electric
  • DNS Made Easy

From this list of 10 providers, we did our initial investigations into their service offerings, and started eliminating services that were either not suited to our needs, outrageously expensive, had insufficient SLAs, or didn’t offer services that we required (such as a fully featured API). Then we started performance testing. We did this by embedding a hidden iFrame on 5% of the visitors to stackoverflow.com, which forced a request to a different DNS provider. We did this for each provider until we had some pretty solid performance numbers.

Using some basic analytics, we were able to measure the real-world performance, as seen by our real-world users, broken down into geographical area. We built some box plots based on these tests which allowed us to visualise the different impact each provider had.

If you don’t know how to interpret a boxplot, here’s a brief primer for you. For the data nerds, these were generated with R’s standard boxplot functions, which means the upper and lower whiskers are min(max(x), Q_3 + 1.5 * IQR) and max(min(x), Q_1 - 1.5 * IQR), where IQR = Q_3 - Q_1
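In code, the whisker rule quoted above looks like this. The sample latencies are fabricated, and Python's quantile interpolation differs slightly from R's default, but the whisker clamping is the point:

```python
import statistics

def boxplot_stats(samples):
    """Five-number summary with R-style whiskers:
    upper = min(max(x), Q3 + 1.5*IQR), lower = max(min(x), Q1 - 1.5*IQR)."""
    q1, med, q3 = statistics.quantiles(samples, n=4)  # Q1, median, Q3
    iqr = q3 - q1
    upper = min(max(samples), q3 + 1.5 * iqr)
    lower = max(min(samples), q1 - 1.5 * iqr)
    return {"lower": lower, "q1": q1, "median": med, "q3": q3, "upper": upper}

latencies_ms = [12, 15, 16, 20, 25, 32, 40, 45, 60, 300]  # made-up DNS timings
print(boxplot_stats(latencies_ms))
```

Note how the 300ms outlier is excluded from the upper whisker: that is exactly why the whiskers in the charts below sit far under the top of the axis even though slow outliers exist.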

These are the results of our tests as seen by our users in the United States:

DNS Performance in the United States

You can see that Hurricane Electric had a quarter of requests return in < 16ms and a median of 32ms, with the three “cloud” providers (Azure, Google Cloud DNS and Route 53) being slightly slower (around a 24ms first quartile and 45ms median), and DNS Made Easy coming in second place (20ms first quartile, 39ms median).

You might wonder why the scale on that chart goes all the way to 700ms when the whiskers go nowhere near that high. This is because we have a worldwide audience, so just looking at data from the United States is not sufficient. If we look at data from New Zealand, we see a very different story:

DNS Performance in New Zealand

Here you can see that Route 53, DNS Made Easy and Azure all have healthy first quartiles, but Hurricane Electric and Google have very poor first quartiles. Try to remember this, as this becomes important later on.

We also have Stack Overflow in Portuguese, so let’s check the performance from Brazil:

DNS Performance in Brazil

Here we can see Hurricane Electric, Route 53 and Azure being favoured, with Google and DNS Made Easy being slower.

So how do you reach a decision about which DNS provider to choose, when your main goal is performance? It’s difficult, because regardless of which provider you end up with, you are going to be choosing a provider that is sub-optimal for part of your audience.

You know what would be awesome? If we could have two DNS providers, each one servicing the areas that they do best! Thankfully this is something that is possible to implement with DNS. However, time was short, so we had to put our dual-provider design on the back-burner and just go with a single provider for the time being.

Our initial rollout used Amazon Route 53 as our provider: they had acceptable performance figures over a large number of regions and very effective pricing (on that note, Route 53, Azure DNS, and Google Cloud DNS are all priced identically for basic DNS services).

The Dyn attack

Roll forwards to October 2016. Route 53 had proven to be a stable, fast, and cost-effective DNS provider. We still had dual DNS providers on our backlog of projects, but like a lot of good ideas it got put on the back-burner until we had more time.

Then the Internet ground to a halt. The DNS provider Dyn had come under attack, knocking a large number of authoritative DNS servers off the Internet, and causing widespread issues with connecting to major websites. All of a sudden DNS had our attention again. Stack Overflow and Stack Exchange were not affected by the Dyn outage, but this was pure luck.

We knew if a DDoS of this scale happened to our DNS provider, the solution would be to have two completely separate DNS providers. That way, if one provider gets knocked off the Internet, we still have a fully functioning second provider who can pick up the slack. But there were still questions to be answered and assumptions to be validated:

  • What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?
  • What is the performance impact for our users if one of the providers is offline?
  • What is the best number of nameservers to be using?
  • How are we going to keep our DNS providers in sync?

These were pretty serious questions – for some we had hypotheses that needed to be checked, and others were answered in the DNS standards, but we know from experience that DNS providers in the wild do not always obey those standards.

What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?

This one should be fairly easy to test. We’ve already done it once, so let’s just do it again. We fired up our tests, as we did in early 2016, but this time we specified two DNS providers:

  • Route 53 & Google Cloud
  • Route 53 & Azure DNS
  • Route 53 & Our internal DNS

We did this simply by listing name servers from both providers in our domain registration (and, obviously, we set up the same records in the zones of both providers).

Pairing Route 53 with Google or Azure was fairly common sense – Google and Azure had good coverage of the regions in which Route 53 performed poorly. Their pricing is identical to Route 53’s, which would make forecasting for the budget easy. As a third option, we decided to see what would happen if we took our formerly demoted, on-premises BIND servers and put them back into production as one of the providers. Let’s look at the data for the three regions from before: the United States, New Zealand and Brazil.

United States
DNS Performance for dual providers in the United States

New Zealand
DNS Performance for dual providers in New Zealand

Brazil

DNS Performance for dual providers in Brazil

There is probably one thing you’ll notice immediately from these boxplots, but there’s also another, not-so-obvious change:

  1. Azure is not in there (the obvious one)
  2. Our third quartiles are measurably slower (the not-so-obvious one).

Azure

Azure has a fatal flaw in their DNS offering, as of the writing of this blog post. They do not permit the modification of the NS records in the apex of your zone:

You cannot add to, remove, or modify the records in the automatically created NS record set at the zone apex (name = “@”). The only change that’s permitted is to modify the record set TTL.

These NS records are what your DNS provider says are authoritative DNS servers for a given domain. It’s very important that they are accurate and correct, because they will be cached by clients and DNS resolvers and are more authoritative than the records provided by your registrar.

Without going too much into the actual specifics of how DNS caching and NS records work (that would take me another 2,500 words to describe in detail), what happens is this: whichever DNS provider you contact first becomes the only DNS provider you can contact for that domain until your DNS cache expires. If Azure is contacted first, then only Azure’s nameservers will be cached and used. This defeats the purpose of having multiple DNS providers: if the provider you’ve landed on (a roughly 50:50 chance) goes offline, you will have no other DNS provider to fall back to.
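To make that failure mode concrete, here's a toy Python model of the caching behaviour. The name-server names are made up, and a real resolver is far more nuanced about TTLs, retries, and glue records; this only illustrates why the advertised NS set determines your fallback options:

```python
# Toy model of resolver NS caching (not a real resolver): after the first
# authoritative answer, the resolver caches whatever NS set that answer
# advertised, and uses only those servers until the TTL expires.
def resolve(cached_ns_set, outage):
    alive = [ns for ns in cached_ns_set if ns not in outage]
    return alive[0] if alive else None   # None == total resolution failure

# Both providers advertise the union of name servers -> failover works:
both = ["ns1.provider-a", "ns1.provider-b"]
assert resolve(both, outage={"ns1.provider-a"}) == "ns1.provider-b"

# Azure-style apex: the zone can only advertise its own servers. If that
# answer was cached first and that provider then goes down, there is no fallback:
azure_only = ["ns1.provider-a"]
assert resolve(azure_only, outage={"ns1.provider-a"}) is None
print("the cached apex NS set determines the fallback options")
```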

So until Azure adds the ability to modify the NS records in the apex of a zone, they’re off the table for a dual-provider setup.

The third quartile

What the third quartile represents here is the impact of latency on DNS. You’ll notice that in the results for ExDNS (the internal name for our on-premises BIND servers) the box plot is much taller than the others. This is because those servers are located in New Jersey and Colorado – far, far away from where most of our visitors come from. So, as expected, a service with only two points of presence in a single country (as opposed to dozens worldwide) performs very poorly for a lot of users.

Performance conclusions

So our choices were narrowed to Route 53 and Google Cloud, thanks to Azure’s inability to modify critical NS records. Thankfully, we have the data to back up the fact that Route 53 combined with Google is a very acceptable combination.

Remember earlier, when I said that performance in New Zealand was important? That’s because Route 53 performed well there, but Google Cloud performed poorly. But look at the chart again. Don’t scroll up – I’ll show you another chart here:

Comparison for DNS performance data in New Zealand between single and dual providers

See how Google on its own performed very poorly in NZ (its first quartile is 164ms, versus 27ms for Route 53)? However, when you combine Google and Route 53 together, performance stays basically the same as when there was just Route 53.

Why is this? Well, it’s due to a technique called Smoothed Round Trip Time (SRTT). Basically, DNS resolvers (namely certain versions of BIND and PowerDNS) keep track of which DNS servers respond faster, and weight queries towards those servers. This means that the faster provider should be skewed to more often than the slower providers. There’s a nice presentation over here if you want to learn more about this. The short version is that if you have many DNS servers, DNS caches will favour the fastest ones. As a result, if one provider is fast in Auckland but slow in London, and another provider is the reverse, DNS caches in Auckland will favour the first provider and DNS caches in London will favour the other. This is a very little-known feature of modern DNS servers, but our testing shows that enough ISPs support it that we are confident we can rely on it.
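Here's a rough Python simulation of that skew. The smoothing factor, probe rate, and selection rule below are my own simplifications rather than what BIND or PowerDNS actually implement, but the effect is the same: the server with the lower smoothed RTT soaks up almost all of the queries:

```python
import random

# Toy SRTT-style server selection: keep a smoothed RTT estimate per name
# server, send each query to the current best, and occasionally probe the
# others so estimates stay fresh. Real resolvers differ in the details.
random.seed(42)
true_rtt = {"fast-provider": 27.0, "slow-provider": 164.0}   # ms, like our NZ numbers
srtt = {ns: 100.0 for ns in true_rtt}                        # initial guess
counts = {ns: 0 for ns in true_rtt}

for _ in range(1000):
    if random.random() < 0.05:                  # 5% exploration probe
        ns = random.choice(list(true_rtt))
    else:
        ns = min(srtt, key=srtt.get)            # pick lowest smoothed RTT
    counts[ns] += 1
    sample = true_rtt[ns] + random.uniform(-5, 5)   # jittered measurement
    srtt[ns] = 0.7 * srtt[ns] + 0.3 * sample        # exponential smoothing

# The fast provider ends up answering the overwhelming majority of queries.
assert counts["fast-provider"] > counts["slow-provider"] * 5
print(counts)
```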

What is the performance impact for our users if one of the providers is offline?

This is where having some on-premises DNS servers comes in very handy. What we can essentially do is send a sample of our users to our on-premises servers, get a baseline performance measurement, then break one of the servers and run the measurements again. We can also measure in multiple places: we have the measurements reported by our clients (what the end user actually experienced), and we can look at data from within our network to see what actually happened. For network analysis, we turned to our trusted network analysis tool, ExtraHop. This allowed us to look at the data on the wire and get measurements from a broken DNS server (something you can’t do easily with a pcap on that server, because, you know, it’s broken).

Here’s what healthy performance looked like on the wire (as measured by ExtraHop), with two DNS servers, both of them fully operational, over a 24-hour period (this chart is additive for the two series):

DNS performance with two healthy name servers

Blue and brown are the two different, healthy DNS servers. As you can see, there’s a very even 50:50 split in request volume. Because both of the servers are located in the same datacenter, Smoothed Round Trip Time had no effect, and we had a nice even distribution – as we would expect.

Now, what happens when we take one of those DNS servers offline, to simulate a provider outage?

DNS performance with a broken nameserver

In this case, the blue DNS server was offline and the brown DNS server was healthy. What we see here is that the blue, broken DNS server received the same number of requests as it did when it was healthy, but the brown, healthy DNS server saw twice as many requests. This is because the users who hit the broken server eventually retried their requests against the healthy server and started to favour it. So what does this look like in terms of actual client performance?

I’m only going to share one chart with you this time, because they were all essentially the same:

Comparison of healthy vs unhealthy DNS performance

What we see here is that a substantial number of our visitors saw a performance decrease – minor for some, quite major for others. This is because the 50% of visitors who hit the faulty server needed to retry their request, and the time taken to retry that request varies. You can again see a large increase in the long tail, which indicates that there are clients who took over 300 milliseconds to retry their request.
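The wire-level pattern from the earlier charts is easy to reproduce with a toy simulation: clients split their first attempts evenly, and every query that hits the dead server turns into a retry against the healthy one. This sketch is a simplification (real clients have timeouts, multiple retries, and SRTT effects), but it shows where the doubled load comes from:

```python
import random

# Simulate what we saw on the wire: clients split first attempts 50:50
# between two name servers; a query to the broken one times out and is
# retried against the other. Counts are requests *received*, as a packet
# capture would see them.
random.seed(1)
received = {"healthy": 0, "broken": 0}
for _ in range(10_000):
    first = random.choice(["healthy", "broken"])
    received[first] += 1
    if first == "broken":              # timeout -> retry the other server
        received["healthy"] += 1

# Broken server still sees ~half the first attempts;
# the healthy server sees roughly twice that.
assert 1.8 < received["healthy"] / received["broken"] < 2.2
print(received)
```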

What does this tell us?

What this means is that in the event of a DNS provider going offline, we need to pull that provider out of rotation to restore best performance; until we do, our users will still receive service, but a non-trivial number of them will see a large performance impact.

What is the best number of nameservers to be using?

Based on the previous performance testing, we can assume that the worst-case number of requests a client may have to make is N/2 + 1, where N is the number of nameservers listed and one of the two providers is fully offline. So if we list eight nameservers, with four from each provider, the client may potentially have to make five DNS requests before they finally get a successful answer (the four failed requests, plus a final successful one). A statistician better than I would be able to tell you the exact probabilities of each scenario you would face, but the short answer here is:

Four.

We felt that, based on our use case and the performance penalty we were willing to take, we would list a total of four nameservers – two from each provider. This may not be the right decision for those with a web presence orders of magnitude larger than ours, but for comparison: Facebook provides two nameservers on IPv4 and two on IPv6, Twitter provides eight (four from Dyn and four from Route 53), and Google provides four.
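For the curious, the worst case and the expected number of requests can be computed exactly by brute force. The sketch below assumes a client that tries the N listed nameservers in a uniformly random order without repeats, with one whole provider (half the servers) offline – a simplification of real resolver behaviour:

```python
from itertools import permutations
from fractions import Fraction

# Worst-case and expected number of DNS requests when half of the N listed
# name servers are down and the client tries servers in random order,
# skipping ones it has already tried. N = 8 here (four per provider).
N = 8
dead = set(range(N // 2))                    # one whole provider offline

dist = {}
perms = list(permutations(range(N)))
for order in perms:
    # k = position of the first working server in this try-order.
    k = next(i + 1 for i, ns in enumerate(order) if ns not in dead)
    dist[k] = dist.get(k, 0) + 1

total = len(perms)
worst = max(dist)
expected = sum(Fraction(k * c, total) for k, c in dist.items())
assert worst == N // 2 + 1                   # the N/2 + 1 worst case from the text
print(f"worst case: {worst} requests, expected: {float(expected):.2f}")
```

With eight listed servers the worst case is five requests, while the expected number is only 1.8, which matches the intuition that most clients get through after one or two tries.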

How are we going to keep our DNS providers in sync?

DNS has built-in ways of keeping multiple servers in sync: zone transfers (IXFR, AXFR), usually triggered by a NOTIFY packet sent to all the servers listed as NS records in the zone. But these are not used in the wild very often and have limited support from DNS providers. They also come with their own headaches, like maintaining an ACL IP whitelist covering potentially hundreds of servers (all the different points of presence from multiple providers), none of which you control. You also lose the ability to audit who changed which record, as changes could be made on any given server.

So we built a tool to keep our DNS in sync. We actually built this tool years ago, once our artisanally crafted zone files became too troublesome to edit by hand. The details of this tool are out of scope for this blog post though. If you want to learn about it, keep an eye out around March 2017 as we plan to open-source it. The tool lets us describe the DNS zone data in one place and push it to many different DNS providers.
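We won't detail the tool here, but the core idea is simple enough to sketch: hold the zone as data in one place and compute a per-provider diff. Everything below – the record layout, the function name, the sample records – is hypothetical illustration, not the actual tool (real providers such as Route 53 and Google Cloud DNS each have their own SDKs for applying the changes):

```python
# A minimal sketch of the "single source of truth" idea: describe the zone
# once as data, then compute, per provider, which records to add and delete.
desired = {
    ("stackoverflow.com.", "A"): {"198.252.206.16"},
    ("www.stackoverflow.com.", "CNAME"): {"stackoverflow.com."},
}

def plan_changes(desired, current):
    """Return (to_add, to_delete) record sets for one provider."""
    to_add, to_delete = {}, {}
    for key in set(desired) | set(current):
        want, have = desired.get(key, set()), current.get(key, set())
        if want - have:
            to_add[key] = want - have
        if have - want:
            to_delete[key] = have - want
    return to_add, to_delete

# One provider is missing the CNAME and still has a stale A record:
current = {
    ("stackoverflow.com.", "A"): {"198.252.206.16", "10.0.0.1"},
}
add, delete = plan_changes(desired, current)
print(add, delete)
```

A nice side effect of this shape is that every change goes through one audited code path, instead of being editable on any of the providers' consoles.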

So what did we learn?

The biggest takeaway from all of this is that even if you have multiple DNS servers, DNS is still a single point of failure if they are all with the same provider and that provider goes offline. Until the Dyn attack, this was pretty much theoretical if you were using a large DNS provider, because until that first successful attack no large DNS provider had ever had an extended outage across all of its points of presence.

However, implementing multiple DNS providers is not entirely straightforward. There are performance considerations. You need to ensure that both of your zones are serving the same data. There can be such a thing as too many nameservers.

Lastly, we did all of this whilst following DNS best practices. We didn’t have to do any weird DNS trickery, or write our own DNS server to do non-standard things. When DNS was designed in 1987, I wonder if the authors knew the importance of what they were creating. I don’t know, but their design still stands strong and resilient today.

Attributions

  • Thanks to Camelia Nicollet for her work in R to produce the graphs in this blog post

Pioneers: the first challenge is…

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pioneers-challenge-1/

After introducing you all to Pioneers back in November, we’ve seen some amazing responses across social media with teams registering, Code Clubs and Jams retweeting and everyone getting themselves pumped up and ready for action.

Nicholas Tollervey on Twitter

This is the best thing I’ve seen in all my years involved in tech related education: https://t.co/5jerR9770r #MakeYourIdeas

Mass excitement all round – including here at Pi Towers! So, without further ado, here’s the delightful Owen to reveal the first challenge.

Pioneers Theme Launch

The eagerly anticipated Pioneers theme launch is here! If you’re yet to register for Pioneers, make sure you head to raspberrypi.org/pioneers. And if you’ve no idea what we’re talking about, here’s Owen to explain more: https://www.youtube.com/watch?v=nPP3dfTlLOs&t=18s

That’s right: we want you to make us laugh with tech. As well as the great examples that Owen provides, you’ll also find some great starters on the Pioneers website, along with hundreds of projects online.

If you’ve yet to register your team, make sure you do so via this form. And if you’re struggling to find a mentor for your team, or a team to mentor, make sure to use the #MakeYourIdeas tag on social media to keep in the loop. It’s also worth checking organisations such as your local Code Club, CoderDojo, or makerspace for anyone looking to get involved.

This Pioneers challenge is open to anyone in the UK between the ages of 12 and 15. If you’re soon to turn 12 or have just turned 16, head over to the Pioneers FAQ page – you may still be eligible to enter.

So get making, and make sure to share the process on YouTube, Facebook, Twitter, Instagram and Snapchat using #MakeYourIdeas!


The post Pioneers: the first challenge is… appeared first on Raspberry Pi.

2017-01-07 streaming

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3336

A few observations on streaming and the platforms.
(Today I streamed the founding assembly of Da Bulgaria (of which I am now also a member). I'll write about Da Bulgaria itself some other time.)

Facebook has one of the most brain-dead streaming platforms I've ever come across. Besides the various requirements and the cap on quality (720p maximum), their events expire quite quickly (i.e. you can't create one the night before, as you can on YouTube), one or two interruptions kill the event (so if you need to tweak your settings a few times, you have to create it from scratch), and there is also a limit on duration (which is especially annoying). To top it off, without Flash there is no way to put the stream live no matter what you do, so while running my tests I had to use my Windows VM (and my wife's test Facebook account).

YouTube, in turn, has a small disagreement with ffmpeg: they send some kind of keepalive over the RTMP session which ffmpeg completely ignores and never reads, so at some point certain TCP buffers fill up (we're talking about 16 or so bytes every minute or two, so it takes a few hours to manifest) and the connection breaks. Thank goodness they don't remove the event as quickly as Facebook does, and the stream can be restarted.

My own streaming server worked best (an nginx with mod_nginx_rtmp). Since re-encoding everything locally was somewhat problematic, I pushed the hardware encoder's output directly at 10 Mbps to marla, pulled it from there with three ffmpeg processes, and uploaded the stream, squashed down to 1 Mbps, to Facebook, to YouTube, and back to the same nginx so I could watch it myself.

And, so I have it on record, here is my command line for pushing to Facebook (it took me an afternoon to get it right, mostly because of the fight with that Flash player). The two important things are -g 45 (60 also works), because they require a keyframe at least every 2 seconds, and -r 30, because they require 30 fps video. The rest is standard: H.264, AAC, 44100 Hz (and mono audio, since that's what I was being fed). The -af volume=60d addition should be removed by anyone whose audio isn't coming in at terrifyingly low levels.

(To everyone who had remarks about the sound: there is no good automatic gain control I could have added to even out the levels of what was coming in. In the hall the microphones came through at different volumes, many people knew where to speak, and there was no way to do anything more. Personally, I would have looked for a way to fit as many of the participants as possible with headsets, which our team really, really hates.)

ffmpeg -i rtmp://strm.ludost.net/st/XXXXX -r 30 -g 45 -c:v libx264 -b:v 1000k -minrate 1000k -maxrate 1000k -s 1280x720 -preset:v veryfast -threads 6 \
	-c:a libfaac -ar 44100 -ac 1 -b:a 128k -af 'volume=60d' -f flv 'rtmp://rtmp-api.facebook.com:80/rtmp/XXXXXX'

Otherwise, it's a shame that we have to live on proprietary codecs. In some recent tests around FOSDEM it turned out, once again, that VP9 still can't be encoded in anything close to real time without at least twice the CPU power needed for H.264, has little software support, and no hardware at all that can produce it.