Tag Archives: sports

I feel the earth move under my feet (in Michigan)

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/michigan-seismic-activity-raspberry-pi/

The University of Michigan is home to the largest stadium in the USA (the second-largest in the world!). So what better place to test for spectator-induced seismic activity than The Big House?

The Big House stadium in Michigan

The Michigan Shake

University of Michigan geology professor Ben van der Pluijm decided to make waves by measuring the seismic activity produced during games at the university’s 107,601-capacity stadium. Because earthquakes are (thankfully) very rare in the Midwest, and therefore very rarely experienced by van der Pluijm’s introductory geology class, he hoped this approach would make the movement of the Earth more accessible to his students.

“The bottom line was, I wanted something to show people that the Earth just shakes from all kinds of interactions,” explained van der Pluijm in his interview with The Michigan Daily. “All kinds of activity makes the Earth shake.”

The Big House stadium in Michigan

To measure the seismic activity, van der Pluijm used a Raspberry Pi, placing it on a flat concrete surface within the stadium. As The Michigan Daily reports:

Van der Pluijm installed a small machine called a Raspberry Pi computer in the stadium. He said his only requirements were that it needed to be able to plug into the internet and set up on a concrete floor. “Then it sits there and does its thing,” he said. “In fact, it probably does its thing right now.”

He then sent freshman student Sahil Tolia to some games to record the moments of spectator movement and celebration, so that these could be compared with the seismic activity the Pi registered.

We’re not sure whether Professor van der Pluijm plans on releasing his findings to the outside world, or whether he’ll keep them a closely guarded secret among his introductory students, but we hope for the former!

Build your own Raspberry Pi seismic activity reader

We’re not sure what other technology van der Pluijm uses in conjunction with the Raspberry Pi, but it’s fairly easy to create your own seismic activity reader using our board. You can purchase the Raspberry Shake, an add-on board for the Pi that has vertical and horizontal geophones, MEMS accelerometers, and omnidirectional differential pressure transducers. Or you can fashion something at home, for example by taking hints from this project by Carlo Cristini, which uses household items to register movement.
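If you’d like a feel for the general idea before buying hardware, the core loop is simple: poll an accelerometer and log any deviation from its resting reading. Below is a minimal Python sketch assuming an MPU-6050 MEMS accelerometer on the Pi’s I2C bus; the sensor choice, wiring, and threshold are our assumptions for illustration, not details of van der Pluijm’s setup.

```python
# A minimal vibration logger, assuming an MPU-6050 MEMS accelerometer
# on the Pi's I2C bus at its default address. Illustration only; the
# threshold will need tuning for your own floor.
import time
from smbus2 import SMBus

MPU_ADDR = 0x68      # default MPU-6050 I2C address
ACCEL_REG = 0x3B     # first of six accelerometer data registers
SCALE = 16384.0      # LSB per g at the default +/-2 g range

def read_accel(bus):
    """Return (x, y, z) acceleration in g."""
    raw = bus.read_i2c_block_data(MPU_ADDR, ACCEL_REG, 6)
    def signed(hi, lo):
        v = (hi << 8) | lo
        return v - 65536 if v > 32767 else v
    return tuple(signed(raw[i], raw[i + 1]) / SCALE for i in (0, 2, 4))

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, 0x6B, 0)  # wake the sensor from sleep
    baseline = read_accel(bus)              # resting reading
    while True:
        sample = read_accel(bus)
        shake = max(abs(s - b) for s, b in zip(sample, baseline))
        if shake > 0.02:                    # deviation in g worth logging
            print(f"{time.time():.2f} shake={shake:.4f} g")
        time.sleep(0.01)                    # roughly 100 Hz sampling
```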


Cheating in Bird Racing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/08/cheating_in_bir.html

I’ve previously written about people cheating in marathon racing by driving — or otherwise getting near the end of the race by faster means than running. In China, two people were convicted of cheating in a pigeon race:

The essence of the plan involved training the pigeons to believe they had two homes. The birds had been secretly raised not just in Shanghai but also in Shangqiu.

When the race was held in the spring of last year, the Shanghai Pigeon Association took all the entrants from Shanghai to Shangqiu and released them. Most of the pigeons started flying back to Shanghai.

But the four specially raised pigeons flew instead to their second home in Shangqiu. According to the court, the two men caught the birds there and then carried them on a bullet train back to Shanghai, concealed in milk cartons. (China prohibits live animals on bullet trains.)

When the men arrived in Shanghai, they released the pigeons, which quickly fluttered to their Shanghai loft, seemingly winning the race.

When Joe Public Becomes a Commercial Pirate, a Little Knowledge is Dangerous

Post Syndicated from Andy original https://torrentfreak.com/joe-public-becomes-commercial-pirate-little-knowledge-dangerous-180603/

Back in March and just a few hours before the Anthony Joshua v Joseph Parker fight, I got chatting with some fellow fans in the local pub. While some were intending to pay for the fight, others were going down the Kodi route.

Soon after, the conversation switched to IPTV. One of the guys had a subscription, and he said that his supplier would be along shortly if anyone wanted a package to watch the fight at home. Of course, I was curious to hear what he had to say, since it’s not often this kind of thing is offered ‘offline’.

The guy revealed that he sold more or less exclusively on eBay and called up the page on his phone to show me. The listing made interesting reading.

In common with hundreds of similar IPTV subscription offers easily findable on eBay, the listing offered “All the sports and films you need plus VOD and main UK channels” for the sum of just under £60 per year, which is fairly cheap in the current market. With a non-committal “hmmm” I asked a bit more about the guy’s business and surprisingly he was happy to provide some details.

Like many people offering such packages, the guy was a reseller of someone else’s product. He also insisted that selling access to copyrighted content is OK because it sits in a “gray area”. It’s also easy to keep listings up on eBay, he assured me, as long as a few simple rules are adhered to. Right, this should be interesting.

First of all, sellers shouldn’t be “too obvious” he advised, noting that individual channels or channel lists shouldn’t be listed on the site. Fair enough, but then he said the most important thing of all is to have a disclaimer like his in any listing, written as follows:

“PLEASE NOTE EBAY: THIS IS NOT A DE SCRAMBLER SERVICE, I AM NOT SELLING ANY ILLEGAL CHANNELS OR CHANNEL LISTS NOR DO I REPRESENT ANY MEDIA COMPANY NOR HAVE ACCESS TO ANY OF THEIR CONTENTS. NO TRADEMARK HAS BEEN INFRINGED. DO NOT REMOVE LISTING AS IT IS IN ACCORDANCE WITH EBAY POLICIES.”

Apparently, this paragraph is crucial to keeping listings up on eBay and is the equivalent of kryptonite when it comes to deflecting copyright holders, police, and Trading Standards. Sure enough, a few seconds with Google reveals the same wording on dozens of eBay listings and those offering IPTV subscriptions on external platforms.

It is, of course, absolutely worthless but the IPTV seller insisted otherwise, noting he’d sold “thousands” of subscriptions through eBay without any problems. While a similar logic can be applied to garlic and vampires, a second disclaimer found on many other illicit IPTV subscription listings treads an even more bizarre path.

“THE PRODUCTS OFFERED CAN NOT BE USED TO DESCRAMBLE OR OTHERWISE ENABLE ACCESS TO CABLE OR SATELLITE TELEVISION PROGRAMS THAT BYPASSES PAYMENT TO THE SERVICE PROVIDER. RECEIVING SUBSCRIPTION/BASED TV AIRTIME IS ILLEGAL WITHOUT PAYING FOR IT.”

This disclaimer (which apparently no sellers displaying it have ever read) seems to have been culled from the Zgemma site, which advertises a receiving device that can technically receive pirate IPTV services but wasn’t designed for the purpose. In that context, the disclaimer makes sense, but when applied to dedicated pirate IPTV subscriptions, it’s absolutely ridiculous.

It’s unclear why so many sellers on eBay, Gumtree, Craigslist and other platforms think that these disclaimers are useful. It leads one to the likely conclusion that these aren’t hardcore pirates at all but regular people simply out to make a bit of extra cash who have received bad advice.

What is clear, however, is that selling access to thousands of otherwise subscription-only channels without permission from copyright owners is definitely illegal in the EU. The European Court of Justice says so (1,2) and it’s been backed up by subsequent cases in the Netherlands.

While the odds of getting criminally prosecuted or sued for reselling such a service are relatively slim, it’s worrying that in 2018 people still believe that doing so is made legal by the inclusion of a paragraph of text. It’s even more worrying that these individuals apparently have no idea of the serious consequences should they be singled out for legal action.

Even more surprisingly, TorrentFreak spoke with a handful of IPTV suppliers higher up the chain who also told us that what they are doing is legal. A couple claimed to be protected by communication intermediary laws, others didn’t want to go into details. Most stopped responding to emails on the topic. Perhaps most tellingly, none wanted to go on the record.

The big take-home here is that following some important EU rulings, knowingly linking to copyrighted content for profit is nearly always illegal in Europe and leaves people open to targeting by copyright holders and the authorities. People really should be aware of that, especially the little guy making a little extra pocket money on eBay.

Of course, people are perfectly entitled to carry on regardless and test the limits of the law if things go wrong. At this point, however, it’s probably worth noting that IPTV provider Ace Hosting recently handed over £600,000 rather than fight the Premier League (1,2), despite clearly having the money to put up a defense.

Given their effectiveness, perhaps they should’ve put up a disclaimer instead?


Majority of Canadians Consume Online Content Legally, Survey Finds

Post Syndicated from Andy original https://torrentfreak.com/majority-of-canadians-consume-online-content-legally-survey-finds-180531/

Back in January, a coalition of companies and organizations with ties to the entertainment industries called on local telecoms regulator CRTC to implement a national website blocking regime.

Under the banner of Fairplay Canada, members including Bell, Cineplex, Directors Guild of Canada, Maple Leaf Sports and Entertainment, Movie Theatre Association of Canada, and Rogers Media, spoke of an industry under threat from marauding pirates. But just how serious is this threat?

A new survey commissioned by Innovation, Science and Economic Development Canada (ISED) in collaboration with the Department of Canadian Heritage (PCH) aims to shine light on the problem by revealing the online content consumption habits of citizens in the Great White North.

While there are interesting findings for those on both sides of the site-blocking debate, the situation seems somewhat removed from the Armageddon scenario predicted by the entertainment industries.

Carried out among 3,301 Canadians aged 12 years and over, the Kantar TNS study aims to cover copyright infringement in six key content areas – music, movies, TV shows, video games, computer software, and eBooks. Attitudes and behaviors are also touched upon while measuring the effectiveness of Canada’s copyright measures.

General Digital Content Consumption

In its introduction, the report notes that 28 million Canadians used the Internet in the three-month study period to November 27, 2017. Of those, 22 million (80%) consumed digital content. Around 20 million (73%) streamed or accessed content, 16 million (59%) downloaded content, while 8 million (28%) shared content.

Music, TV shows and movies all battled for first place in the consumption ranks, with 48%, 48%, and 46% respectively.

Copyright Infringement

According to the study, the majority of Canadians do things completely by the book. An impressive 74% of media-consuming respondents said that they’d only accessed material from legal sources in the preceding three months.

The remaining 26% admitted to accessing at least one illegal file in the same period. Of those, just 5% said that all of their consumption was from illegal sources, with movies (36%), software (36%), TV shows (34%) and video games (33%) the most likely content to be consumed illegally.

Interestingly, the study found that few demographic factors – such as gender, region, rural and urban, income, employment status and language – play a role in illegal content consumption.

“We found that only age and income varied significantly between consumers who infringed by downloading or streaming/accessing content online illegally and consumers who did not consume infringing content online,” the report reads.

“More specifically, the profile of consumers who downloaded or streamed/accessed infringing content skewed slightly younger and towards individuals with household incomes of $100K+.”

Licensed services much more popular than pirate haunts

It will come as no surprise that Netflix was the most popular service with consumers, with 64% having used it in the past three months. Sites like YouTube and Facebook were a big hit too, visited by 36% and 28% of content consumers respectively.

Overall, 74% of online content consumers use licensed services for content while 42% use social networks. Under a third (31%) use a combination of peer-to-peer (BitTorrent), cyberlocker platforms, or linking sites. Stream-ripping services are used by 9% of content consumers.

“Consumers who reported downloading or streaming/accessing infringing content only are less likely to use licensed services and more likely to use peer-to-peer/cyberlocker/linking sites than other consumers of online content,” the report notes.

Attitudes towards legal consumption & infringing content

In common with similar surveys over the years, the Kantar research looked at the reasons why people consume content from various sources, both legal and otherwise.

Convenience (48%), speed (36%) and quality (34%) were the most-cited reasons for using legal sources. An interesting 33% of respondents said they use legal sites to avoid using illegal sources.

On the illicit front, 54% of those who obtained unauthorized content in the previous three months said they did so due to it being free, with 40% citing convenience and 34% mentioning speed.

Almost six out of ten (58%) said lower costs would encourage them to switch to official sources, with 47% saying they’d move if legal availability was improved.

Canada’s ‘Notice-and-Notice’ warning system

People in Canada who share content on peer-to-peer systems like BitTorrent without permission run the risk of receiving an infringement notice warning them to stop. These are sent by copyright holders via users’ ISPs and the hope is that the shock of receiving a warning will turn consumers back to the straight and narrow.

The study reveals that 10% of online content consumers over the age of 12 have received one of these notices but what kind of effect have they had?

“Respondents reported that receiving such a notice resulted in the following: increased awareness of copyright infringement (38%), taking steps to ensure password protected home networks (27%), a household discussion about copyright infringement (27%), and discontinuing illegal downloading or streaming (24%),” the report notes.

While these are all positives for the entertainment industries, Kantar reports that almost a quarter (24%) of people who receive a notice simply ignore it.

Stream-ripping

Once upon a time, music obtained via P2P networks was cited as the music industry’s greatest threat but, with the advent of sites like YouTube, so-called stream-ripping is the latest bogeyman.

According to the study, 11% of Internet users say they’ve used a stream-ripping service. They are most likely to be male (62%) and predominantly 18 to 34 (52%) years of age.

“Among Canadians who have used a service to stream-rip music or entertainment, nearly half (48%) have used stream-ripping sites, one-third have used downloader apps (38%), one-in-seven (14%) have used a stream-ripping plug-in, and one-in-ten (10%) have used stream-ripping software,” the report adds.

Set-Top Boxes and VPNs

Few general piracy studies would be complete in 2018 without touching on set-top devices and Virtual Private Networks and this report doesn’t disappoint.

More than one in five (21%) respondents aged 12+ reported using a VPN, with the main purpose of securing communications and Internet browsing (57%).

A relatively modest 36% said they use a VPN to access free content while 32% said the aim was to access geo-blocked content unavailable in Canada. Just over a quarter (27%) said that accessing content from overseas at a reasonable price was the main motivator.

One in ten respondents (10%) reported using a set-top box, with 78% stating they use them to access paid-for content. Interestingly, only a small number say they use the devices to infringe.

“A minority use set-top boxes to access other content that is not legal or they are unsure if it is legal (16%), or to access live sports that are not legal or they are unsure if it is legal (11%),” the report notes.

“Individuals who consumed a mix of legal and illegal content online are more likely to use VPN services (42%) or TV set-top boxes (21%) than consumers who only downloaded or streamed/accessed legal content.”

Kantar says that the findings of the report will be used to help policymakers evaluate how Canada’s Copyright Act is coping with a changing market and technological developments.

“This research will provide the necessary information required to further develop copyright policy in Canada, as well as to provide a foundation to assess the effectiveness of the measures to address copyright infringement, should future analysis be undertaken,” it concludes.

The full report can be found here (pdf)


ExtraTorrent Replacement Displays Warning On Predecessor’s Shutdown Anniversary

Post Syndicated from Andy original https://torrentfreak.com/extratorrent-replacement-displays-warning-on-predecessors-shutdown-anniversary-180518/

Exactly one year ago, millions of users in the BitTorrent community went into mourning over the shock departure of one of its major players.

ExtraTorrent was founded back in November 2006, at a time when classic platforms such as TorrentSpy and Mininova were dominating the torrent site landscape. But with dedication and determination, the site amassed millions of daily visitors, outperforming every other torrent site apart from the mighty Pirate Bay.

Then, on May 17, 2017, everything came crashing down.

“ExtraTorrent has shut down permanently,” a note in the site read. “ExtraTorrent with all mirrors goes offline. We permanently erase all data. Stay away from fake ExtraTorrent websites and clones. Thx to all ET supporters and torrent community. ET was a place to be….”

While ExtraTorrent staff couldn’t be more clear in advising people to stay away from clones, few people listened to their warnings. Within hours, new sites appeared claiming to be official replacements for the much-loved torrent site and people flocked to them in their millions.

One of those was ExtraTorrent.ag, a torrent site connected to the operators of EZTV.ag, which appeared as a replacement in the wake of the official EZTV’s demise. Graphically very similar to the original ExtraTorrent, the .ag ‘replacement’ had none of its namesake’s community or unique content. But that didn’t dent its popularity.

ExtraTorrent.ag

At the start of this week, ExtraTorrent.ag was one of the most popular torrent sites on the Internet. With an Alexa rank of around 2,200, it would’ve clinched ninth position in our Top 10 Torrent Sites report earlier this year. However, a year after the site’s domain was registered, something seems to have gone wrong.

Yesterday, on the anniversary of ExtraTorrent’s shutdown and exactly a year after the ExtraTorrent.ag domain was registered, ExtraTorrent.ag disappeared only to be replaced by a generic landing page, as shown below.

ExtraTorrent.ag landing page

This morning, however, there appear to be additional complications. Accessing the site with Firefox produces the page above, but attempting to do so with Chrome produces an ominous security warning.

Chrome warning

Indeed, those protected by Malwarebytes won’t be able to access the page at all, since ExtraTorrent.ag redirects to the domain FindBetterResults.com, which the anti-malware app flags as malicious.

The change was reported to TF by the operator of domain unblocking site Unblocked.lol, which offers torrent site proxies as well as access to live TV and sports.

“I noticed when I started receiving emails saying ExtraTorrent was redirecting to some parked domain. When I jumped on the PC and checked myself it was just redirecting to a blank page,” he informs us.

“First I thought they’d blocked our IP address so I used some different ones. But I soon discovered the domain was in fact parked.”

So what has happened to this previously-functioning domain?

Whois records show that ExtraTorrent.ag was created on May 17, 2017 and appears to have been registered for a year. Yesterday, on May 17, 2018, the domain was updated to list what could potentially be a new owner, with an expiry date of May 17, 2019.

Once domains have expired, they usually enter an ‘Auto-Renew Grace Period’ for up to 45 days. This is followed by a 30-day ‘Redemption Grace Period’. At the end of this second period, domains cannot be renewed and are released for third-parties to register. That doesn’t appear to have been the case here.

So, to find out more about the sudden changes, we reached out to the email address listed in the WHOIS report but received no response. Should we hear more, we’ll update this report, but in the meantime the Internet has lost one of its largest torrent sites and gained a rather pointless landing page with potential security risks.


Roku Displays FBI Anti-Piracy Warning to Legitimate YouTube & Netflix Users

Post Syndicated from Andy original https://torrentfreak.com/roku-displays-fbi-anti-piracy-warning-to-legitimate-youtube-netflix-users-180516/

In 2018, dealing with copyright infringement claims is a daily issue for many content platforms. The law in many regions demands swift attention and in order to appease copyright holders, most platforms are happy to oblige.

While it’s not unusual for ‘pirate’ content and services to suddenly disappear in response to a DMCA or similar notice, the same is rarely true for entire legitimate services.

But that’s what appeared to happen on the Roku platform during the night, when YouTube, Netflix and other channels disappeared only to be replaced with an ominous anti-piracy warning.

As tweets from affected users showed, the message caused confusion among Roku owners who were only using their devices to access legal content. Messages replacing Netflix and YouTube seemed to have caused the greatest number of complaints, but many other services were affected.

FoxSportsGo, FandangoNow, and India-focused YuppTV and Hotstar were also blacked out. As were the yoga and transformational videos specialists over at Gaia, the horror buffs at ChillerFlix, and UK TV service BritBox.

But while users scratched their heads, with some misguidedly blaming Roku for not being diligent enough against piracy, Roku took to Twitter to reveal that rather than anti-piracy complaints against the channels in question, a technical hitch was to blame.

However, a subsequent statement to CNET suggested that while blacking out Netflix and YouTube might have been accidental, Roku appears to have been taking anti-piracy action against another channel or channels at the time, with the measures inadvertently spilling over to innocent parties.

“We use that warning when we detect content that has violated copyright,” Roku said in a statement.

“Some channels in our Channel Store displayed that message and became inaccessible after Roku implemented a targeted anti-piracy measure on the platform.”

The precise nature of the action taken by Roku is unknown but it’s clear that copyright infringement is currently a hot topic for the platform.

Roku is currently fighting legal action in Mexico, where a court ordered its products off the shelves following complaints that its platform is used by pirates. That led to an FBI warning being shown against the XTV and other channels last year, for what was believed to be the first time.

This March, Roku took action against the popular USTVNow channel following what was described as a “third party” copyright infringement complaint. Just a couple of weeks later, Roku followed up by removing the controversial cCloud channel.

With the company currently fighting to have sales reinstated in Mexico against a backdrop of claims that up to 40% of its users are pirates, it’s unlikely that Roku is suddenly going to go soft on piracy, so more channel outages can be expected in the future.

In the meantime, the scary FBI warnings of last evening are beginning to fade away (for legitimate channels at least) after the company issued advice on how to fix the problem.

“The recent outage which affected some channels has been resolved. Go to Settings > System > System update > Check now for a software update. Some channels may require you to log in again. Thank you for your patience,” the company wrote in an update.


Court Orders Pirate IPTV Linker to Shut Down or Face Penalties Up to €1.25m

Post Syndicated from Andy original https://torrentfreak.com/court-orders-pirate-iptv-linker-to-shut-down-or-face-penalties-up-to-e1-25m-180911/

There are few things guaranteed in life. Death, taxes, and lawsuits filed regularly by Dutch anti-piracy outfit BREIN.

One of its most recent targets was Netherlands-based company Leaper Beheer BV, which also traded under the names Flickstore, Dump Die Deal and Live TV Store. BREIN filed a complaint at the Limburg District Court in Maastricht, claiming that Leaper provides access to unlicensed live TV streams and on-demand movies.

The anti-piracy outfit claimed that around 4,000 live channels were on offer, including Fox Sports, movie channels, and commercial and public channels. These could be accessed after the customer made a payment, which granted access to a unique activation code that could be entered into a set-top box.

BREIN told the court that the code returned an .M3U playlist, which was effectively a hyperlink to IPTV channels and more than 1,000 movies being made available without permission from their respective copyright holders. As such, this amounted to a communication to the public in contravention of the EU Copyright Directive, BREIN argued.
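By way of illustration, an .M3U playlist is just a plain text file pairing channel names with stream URLs, which is why it can be characterized as little more than a bundle of hyperlinks. The tiny Python sketch below uses invented placeholder entries, not anything from Leaper’s service.

```python
# Parse a miniature .M3U playlist to show that each entry is simply
# a name plus a link to a stream. These entries are placeholders.
playlist = """\
#EXTM3U
#EXTINF:-1,Example Sports Channel
http://example.com/streams/sports.m3u8
#EXTINF:-1,Example Movie Channel
http://example.com/streams/movies.m3u8
"""

lines = playlist.splitlines()
for info, url in zip(lines[1::2], lines[2::2]):
    name = info.split(",", 1)[1]
    print(f"{name} -> {url}")  # each entry just points at a stream URL
```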

In its defense, Leaper said that it effectively provided a convenient link-shortening service for content that could already be found online in other ways. The company argued that it is not a distributor of content itself and did not make available anything that wasn’t already public. The company added that it was completely down to the consumer whether illegal content was viewed or not.

The key question for the Court was whether Leaper did indeed make a new “communication to the public” under the EU Copyright Directive, a standard the Court of Justice of the European Union (CJEU) says should be interpreted in a manner that provides a high level of protection for rightsholders.

The Court took a three-point approach in arriving at its decision.

  • Did Leaper act in a deliberate manner when providing access to copyright content, especially when its intervention provided access to consumers who would not ordinarily have access to that content?
  • Did Leaper communicate the works via a new method to a new audience?
  • Did Leaper have a profit motive when it communicated works to the public?
The Court found that Leaper did communicate works to the public and intervened “with full knowledge of the consequences of its conduct” when it gave its customers access to protected works.

    “Access to [the content] in a different way would be difficult for those customers, if Leaper were not to provide its services in question,” the Court’s decision reads.

    “Leaper reaches an indeterminate number of potential recipients who can take cognizance of the protected works and form a new audience. The purchasers who register with Leaper are to be regarded as recipients who were not taken into account by the rightful claimants when they gave permission for the original communication of their work to the public.”

With that, the Court ordered Leaper to cease and desist facilitating access to unlicensed streams within 48 hours of the judgment, with non-compliance penalties of 5,000 euros per IPTV subscription sold, link offered, or day exceeded, to a maximum of one million euros.

    But the Court didn’t stop there.

    “Leaper must submit a statement audited by an accountant, supported by (clear, readable copies of) all relevant documents, within 12 days of notification of this judgment of all the relevant (contact) details of the (person or legal persons) with whom the company has had contact regarding the provision of IPTV subscriptions and/or the provision of hyperlinks to sources where films and (live) broadcasts are evidently offered without the permission of the entitled parties,” the Court ruled.

    Failure to comply with this aspect of the ruling will lead to more penalties of 5,000 euros per day up to a maximum of 250,000 euros. Leaper was also ordered to pay BREIN’s costs of 20,700 euros.

Describing the people behind Leaper as “crooks” who previously sold media boxes with infringing addons (as determined to be illegal in the Filmspeler case), BREIN chief Tim Kuik says that a switch of strategy didn’t help them evade the law.

    “[Leaper] sold a link to consumers that gave access to unauthorized content, i.e. pay-TV channels as well as video-on-demand films and series,” BREIN chief Tim Kuik informs TorrentFreak.

    “They did it for profit and should have checked whether the content was authorized. They did not and in fact were aware the content was unauthorized. Which means they are clearly infringing copyright.

    “This is evident from the CJEU case law in GS Media as well as Filmspeler and The Pirate Bay, aka the Dutch trilogy because the three cases came from the Netherlands, but these rulings are applicable throughout the EU.

    “They just keep at it knowing they’re cheating and we’ll take them to the cleaners,” Kuik concludes.


    Bell/TSN Letter to University Connects Site-Blocking Support to Students’ Futures

    Post Syndicated from Andy original https://torrentfreak.com/bell-tsn-letter-to-university-connects-site-blocking-support-to-students-futures-180510/

    In January, a coalition of Canadian companies called on local telecoms regulator CRTC to implement a website-blocking regime in Canada.

    The coalition, Fairplay Canada, is a collection of organizations and companies with ties to the entertainment industries and includes Bell, Cineplex, Directors Guild of Canada, Maple Leaf Sports and Entertainment, Movie Theatre Association of Canada, and Rogers Media. Its stated aim is to address Canada’s online piracy problems.

While the CRTC reviews Fairplay Canada’s plans, the coalition has been seeking to drum up support for the blocking regime, encouraging a diverse range of supporters to send submissions endorsing the project. Of course, building a united front among like-minded groups is nothing out of the ordinary, but a situation just uncovered by Canadian law professor Michael Geist, one of the most vocal opponents of the proposed scheme, is bound to raise eyebrows.

    Geist discovered a submission by Brian Hutchings, who works as Vice-President, Administration at Brock University in Ontario. Dated March 22, 2018, it notes that one of the university’s most sought-after programs is Sports Management, which helps Brock’s students to become “the lifeblood” of Canada’s sport and entertainment industries.

    “Our University is deeply alarmed at how piracy is eroding an industry that employs so many of our co-op students and graduates. Piracy is a serious, pervasive threat that steals creativity, undermines investment in content development and threatens the survival of an industry that is also part of our national identity,” the submission reads.

    “Brock ardently supports the FairPlay Canada coalition of more than 25 organizations involved in every aspect of Canada’s film, TV, radio, sports entertainment and music industries. Specifically, we support the coalition’s request that the CRTC introduce rules that would disable access in Canada to the most egregious piracy sites, similar to measures that have been taken in the UK, France and Australia. We are committed to assist the members of the coalition and the CRTC in eliminating the theft of digital content.”

    The letter leaves no doubt that Brock University as a whole stands side-by-side with Fairplay Canada but according to a subsequent submission signed by Michelle Webber, President, Brock University Faculty Association (BUFA), nothing could be further from the truth.

    Noting that BUFA unanimously supports the position of the Canadian Association of University Teachers which opposes the FairPlay proposal, Webber adds that BUFA stands in opposition to the submission by Brian Hutchings on behalf of Brock University.

    “Vice President Hutching’s intervention was undertaken without consultation with the wider Brock University community, including faculty, librarians, and Senate; therefore, his submission should not be seen as indicative of the views of Brock University as a whole.”

    BUFA goes on to stress the importance of an open Internet to researchers and educators while raising concerns that the blocking proposals could threaten the principles of net neutrality in Canada.

While the undermining of Hutchings’ position is embarrassing enough, via access to information laws Geist has also been able to reveal the chain of events that prompted the Vice-President to write a letter of support on behalf of the whole university.

    It began with an email sent by former Brock professor Cheri Bradish to Mark Milliere, TSN’s Senior Vice President and General Manager, with Hutchings copied in. The idea was to connect the pair, with the suggestion that supporting the site-blocking plan would help to mitigate the threat to “future work options” for students.

    What followed was a direct email from Mark Milliere to Brian Hutchings, in which the former laid out the contributions his company makes to the university, while again suggesting that support for site-blocking would be in the long-term interests of students seeking employment in the industry.

    On March 23, Milliere wrote to Hutchings again, thanking him for “a terrific letter” and stating that “If you need anything from TSN, just ask.”

    This isn’t the first time that Bell has asked those beholden to the company to support its site-blocking plans.

    Back in February it was revealed that the company had asked its own employees to participate in the site-blocking submission process, without necessarily revealing their affiliations with the company.


    Danish Traffic to Pirate Sites Increases 67% in Just a Year

    Post Syndicated from Andy original https://torrentfreak.com/danish-traffic-to-pirate-sites-increases-67-in-just-a-year-180501/

    For close to 20 years, rightsholders have tried to stem the tide of mainstream Internet piracy. Yet despite increasingly powerful enforcement tools, infringement continues on a grand scale.

    While the problem is global, rightsholder groups often zoom in on their home turf, to see how the fight is progressing locally. Covering Denmark, the Rights Alliance Data Report 2017 paints a fairly pessimistic picture.

    Published this week, the industry study – which uses SimilarWeb and MarkMonitor data – finds that Danes visited 2,000 leading pirate sites 596 million times in 2017. That represents a 67% increase over the 356 million visits to unlicensed platforms made by citizens during 2016.

    The report notes that, at least in part, this explosive growth can be attributed to mobile-compatible sites and services, which make it easier than ever to consume illicit content on the move, as well as at home.

    In a sea of unauthorized streaming sites, Rights Alliance highlights one platform above all the others as a particularly bad influence in 2017 – 123movies (also known as GoMovies and GoStream, among others).

    “The popularity of this service rose sharply in 2017 from 40 million visits in 2016 to 175 million visits in 2017 – an increase of 337 percent, of which most of the traffic originates from mobile devices,” the report notes.

    123movies recently announced its closure but before that the platform was subjected to web-blocking in several jurisdictions.

    Rights Alliance says that Denmark has one of the most effective blocking systems in the world but that still doesn’t stop huge numbers of people from consuming pirate content from sites that aren’t yet blocked.

    “Traffic to infringing sites is overwhelming, and therefore blocking a few sites merely takes the top of the illegal activities,” Rights Alliance chief Maria Fredenslund informs TorrentFreak.

    “Blocking is effective by stopping 75% of traffic to blocked sites but certainly, an upscaled effort is necessary.”

    Rights Alliance also views the promotion of legal services as crucial to its anti-piracy strategy so when people visit a blocked site, they’re also directed towards legitimate platforms.

    “That is why we are working at the moment with Denmark’s Ministry of Culture and ISPs on a campaign ‘Share With Care 2′ which promotes legal services e.g. by offering a search function for legal services which will be placed in combination with the signs that are put on blocked websites,” the anti-piracy group notes.

But even with such measures in place, the thirst for unlicensed content is great. In 2017 alone, 500 of the most popular films and TV shows were downloaded from P2P networks like BitTorrent more than 15 million times from Danish IP addresses, up from 11.9 million in 2016.

Given the dramatic rise in visits to pirate sites overall, the suggestion is that plenty of consumers are still getting through. Rights Alliance says that blocking efforts are also hampered by people who don’t use their ISP’s DNS service, which is the method used to block sites in Denmark.

    Additionally, interest in VPNs and similar anonymization and bypass-capable technologies is on the increase. Between 3.5% and 5% of Danish Internet users currently use a VPN, a number that’s expected to go up. Furthermore, Rights Alliance reports greater interest in “closed” pirate communities.

    “The data is based on closed [BitTorrent] networks. We also address the challenges with private communities on Facebook and other [social media] platforms,” Fredenslund explains.

    “Due to the closed doors of these platforms it is not possible for us to say anything precisely about the amount of infringing activities there. However, we receive an increasing number of notices from our members who discover that their products are distributed illegally and also we do an increased monitoring of these platforms.”

    But while more established technologies such as torrents and regular web-streaming continue in considerable volumes, newer IPTV-style services accessible via apps and dedicated platforms are also gaining traction.

    “The volume of visitors to these services’ websites has been sharply rising in 2017 – an increase of 84 percent from January to December,” Rights Alliance notes.

    “Even though the number of visitors does not say anything about actual consumption, as users usually only visit pages one time to download the program, the number gives an indication that the interest in IPTV is increasing.”

    To combat this growth market, Rights Alliance says it wants to establish web-blockades against sites hosting the software applications.

    Also on the up are visits to platforms offering live sports illegally. In 2017, Danish IP addresses made 2.96 million visits to these services, corresponding to almost 250,000 visits per month and representing an annual increase of 28%.

    Rights Alliance informs TF that in future a ‘live’ blocking mechanism similar to the one used by the Premier League in the UK could be deployed in Denmark.

    “We already have a dynamic blocking system, and we see an increasing demand for illegal TV products, so this could be a natural next step,” Fredenslund explains.

    Another small but perhaps significant detail is how users are accessing pirate sites. According to the report, large volumes of people are now visiting platforms directly, with more than 50% doing so in preference to referrals from search engines such as Google.

    In terms of deterrence, the Rights Alliance report sticks to the tried-and-tested approaches seen so often in the anti-piracy arena.

Firstly, the group notes that it’s increasingly encountering people who pay for legal services such as Netflix and Spotify and believe that this entitles them to grab something extra from a pirate site. However, in common with similar organizations globally, the group counters that pirate sites can serve malware or have other nefarious business interests behind the scenes, so people should stay away.

Whether significant volumes will heed this advice remains to be seen, but if a 67% increase last year is any predictor of the future, piracy is here to stay – and then some. Rights Alliance says it is ready for the challenge but will need some assistance to achieve its goals.

    “As it is evident from the traffic data, criminal activities are not something that we, private companies (right holders in cooperation with ISPs), can handle alone,” Fredenslund says.

    “Therefore, we are very pleased that DK Government recently announced that the IP taskforce which was set down as a trial period has now been made permanent. In that regard it is important and necessary that the police will also obtain the authority to handle blocking of massively infringing websites. Police do not have the authority to carry out blocking as it is today.”

    The full report is available here (Danish, pdf)


    Stream to Twitch with the push of a button

    Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-twitch-streaming/

    Stream your video gaming exploits to the internet at the touch of a button with the Twitch-O-Matic. Everyone else is doing it, so you should too.

    Twitch-O-Matic: Raspberry Pi Twitch Streaming Device – Weekend Hacker #1804

    Some gaming consoles make it easy to stream to Twitch, some gaming consoles don’t (come on, Nintendo). So for those that don’t, I’ve made this beta version of the “Twitch-O-Matic”. No it doesn’t chop onions or fold your laundry, but what it DOES do is stream anything with HDMI output to your Twitch channel with the simple push of a button!

    eSports and online game streaming

    Interest in eSports has skyrocketed over the last few years, with viewership numbers in the hundreds of millions, sponsorship deals increasing in value and prestige, and tournament prize funds reaching millions of dollars. So it’s no wonder that more and more gamers are starting to stream live to online platforms in order to boost their fanbase and try to cash in on this growing industry.

    Streaming to Twitch

Launched in 2011, Twitch.tv is an online live-streaming platform with a primary focus on video gaming. Users can create accounts to contribute their comments and content to the site, as well as watch live-streamed gaming competitions and broadcasts. With a staggering fifteen million daily users, Twitch is accessible via smartphone and gaming console apps, smart TVs, computers, and tablets. But if you want to stream to Twitch, you may find yourself using third-party software in order to do so. And with more buttons to click and more wires to plug in for older, app-less consoles, streaming can get confusing.

    Enter Tinkernut.

    Side note: we ❤ Tinkernut

    We’ve featured Tinkernut a few times on the Raspberry Pi blog – his tutorials are clear, his projects are interesting and useful, and his live-streamed comment videos for every build are a nice touch to sharing homebrew builds on the internet.

    Tinkernut Raspberry Pi Zero W Twitch-O-Matic

    So, yes, we love him. [This is true. Alex never shuts up about him. – Ed.] And since he has over 500K subscribers on YouTube, we’re obviously not the only ones. We wave our Tinkernut flags with pride.

    Twitch-O-Matic

    With a Raspberry Pi Zero W, an HDMI to CSI adapter, and a case to fit it all in, Tinkernut’s Twitch-O-Matic allows easy connection to the Twitch streaming service. You’ll also need a button – the bigger, the better in our opinion, though Tinkernut has opted for the Adafruit 16mm Illuminated Pushbutton for his build, and not the 100mm Massive Arcade Button that, sadly, we still haven’t found a reason to use yet.

    Adafruit massive button

    “I’m sorry, Dave…”

For added frills and pizzazz, Tinkernut has also incorporated Adafruit’s White LED Backlight Module into the case, though you don’t have to do so unless you’re feeling super fancy.

    The setup

The Raspberry Pi Zero W is connected to the HDMI to CSI adapter via the camera connector, in the same way you’d attach the camera ribbon. Tinkernut uses a standard Raspbian image on an 8GB SD card, with SSH enabled for remote access from his laptop. He uses the raspivid command to test the HDMI connection by recording ten seconds of video footage from his console.
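That test boils down to a single raspivid invocation; for reference, here it is wrapped in a short Python script. The ten-second duration mirrors the description above, and the output filename is arbitrary.

```python
# raspivid is the Pi's built-in camera capture tool; with the HDMI-to-CSI
# adapter attached, it records the console's HDMI feed instead.
# -t takes milliseconds, so 10000 gives the ten-second test clip.
import subprocess

subprocess.run(["raspivid", "-t", "10000", "-o", "test.h264"], check=True)
```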

    Tinkernut Raspberry Pi Zero W Twitch-O-Matic

    One lead is all you need

    Once you have the Pi receiving video from your console, you can connect to Twitch using your Twitch stream key, which you can find by logging in to your account at Twitch.tv. Tinkernut’s tutorial gives you all the commands you need to stream from your Pi.
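Tinkernut’s tutorial has the exact commands; as a hedged sketch, a typical pipeline pipes raspivid’s H.264 output into ffmpeg, which wraps it in FLV and pushes it to Twitch’s RTMP ingest. In the Python version below, the stream key is a placeholder, and the ingest URL, resolution, and bitrate are illustrative assumptions rather than Tinkernut’s settings.

```python
# A rough push-to-Twitch pipeline: raspivid writes raw H.264 to stdout,
# ffmpeg adds a silent audio track, wraps everything in FLV, and sends
# it to the RTMP ingest. Simplified for illustration.
import subprocess

STREAM_KEY = "live_xxxxxxxxxxxx"  # placeholder; find yours on Twitch.tv
INGEST_URL = f"rtmp://live.twitch.tv/app/{STREAM_KEY}"

capture = subprocess.Popen(
    ["raspivid", "-o", "-", "-t", "0",         # capture forever to stdout
     "-w", "1280", "-h", "720", "-fps", "30",  # 720p at 30 frames/s
     "-b", "3000000"],                         # ~3 Mbps video bitrate
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ffmpeg", "-i", "-",                      # H.264 video from the pipe
     "-f", "lavfi", "-i", "anullsrc",          # generated silent audio
     "-c:v", "copy", "-c:a", "aac",            # pass the video through as-is
     "-f", "flv", INGEST_URL],
    stdin=capture.stdout,
)
```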

    The frills

    To up the aesthetic impact of your project, adding buttons and backlights is fairly straightforward.

    Tinkernut Raspberry Pi Zero W Twitch-O-Matic

    Pretty LED frills

To run the stream command, Tinkernut uses a button: press once to start the stream, press again to stop. Pressing the button also turns on the LED backlight, so it’s obvious when streaming is in progress.
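Here’s one way that press-to-toggle behaviour might look, sketched with the gpiozero library. The GPIO pin numbers and the helper script path are hypothetical; Tinkernut’s actual code is on his project page, linked below.

```python
# One-button stream control: the first press starts the stream and
# lights the LED, the next press stops both. Pins and the script path
# are assumed for illustration.
import subprocess
from signal import pause
from gpiozero import Button, LED

button = Button(17)  # illuminated pushbutton on GPIO17 (assumed wiring)
led = LED(27)        # backlight LED on GPIO27 (assumed wiring)
stream = None        # handle to the running streaming process

def toggle_stream():
    global stream
    if stream is None:
        # start_stream.sh would wrap the raspivid | ffmpeg pipeline
        stream = subprocess.Popen(["/home/pi/start_stream.sh"])
        led.on()
    else:
        stream.terminate()
        stream = None
        led.off()

button.when_pressed = toggle_stream
pause()  # wait indefinitely for button presses
```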

    The tutorial

For the full code and 3D-printable case STL file, head to Tinkernut’s hackster.io project page. And if you’re already using a Raspberry Pi for Twitch streaming, share your build setup with us. Cheers!


    Aussie Federal Court Orders ISPs to Block Pirate IPTV Service

    Post Syndicated from Andy original https://torrentfreak.com/aussie-federal-court-orders-isps-to-block-pirate-iptv-service-180427/

After successfully applying for ISP blocks against dozens of traditional torrent and streaming portals, Village Roadshow and a coalition of movie studios switched tack last year.

    With the threat of pirate subscription IPTV services looming large, Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount targeted HDSubs+ (also known as PressPlayPlus), a fairly well-known service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee.

    The injunction, which was filed last October, targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus subsidiaries.

    Unlike blocking injunctions targeting regular sites, the studios sought to have several elements of HD Subs+ infrastructure rendered inaccessible, so that its sales platform, EPG (electronic program guide), software (such as an Android and set-top box app), updates, and sundry other services would fail to operate in Australia.

    After a six month wait, the Federal Court granted the application earlier today, compelling Australia’s ISPs to block “16 online locations” associated with the HD Subs+ service, rendering its TV services inaccessible Down Under.

    “Each respondent must, within 15 business days of service of these orders, take reasonable steps to disable access to the target online locations,” said Justice Nicholas, as quoted by ZDNet.

    A small selection of channels in the HDSubs+ package

    The ISPs were given flexibility in how to implement the ban, with the Judge noting that DNS blocking, IP address blocking or rerouting, URL blocking, or “any alternative technical means for disabling access”, would be acceptable.

The rightsholders are required to pay a fee of AU$50 for each domain they want to block, but Village Roadshow says it doesn’t mind doing so, since blocking is in the “public interest”. Continuing a pattern established last year, none of the ISPs showed up to the judgment.

    A similar IPTV blocking application was filed by Hong Kong-based broadcaster Television Broadcasts Limited (TVB) last year.

    TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

    The application was previously heard alongside the HD Subs+ case but will now be handled separately following complications. In April it was revealed that TVB not only wants to block Internet locations related to the technical operation of the service, but also hosting sites that fulfill a role similar to that of Google Play or Apple’s App Store.

    TVB wants to have these app marketplaces blocked by Australian ISPs, which would not only render the illicit apps inaccessible to the public but all of the non-infringing ones too.

    Justice Nicholas will now have to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, there is also a question of whether China-focused live programming has copyright status in Australia. An additional hearing is scheduled for May 2 for these matters to be addressed.

    Also on Friday, Foxtel filed yet another blocking application targeting “15 online locations” involving 27 domain names connected to traditional BitTorrent and streaming services.

    According to ComputerWorld the injunction targets the same set of ISPs but this time around, Foxtel is trying to save on costs.

    The company doesn’t want to have expert witnesses present in court, doesn’t want to stage live demos of websites, and would like to rely on videos and screenshots instead. Foxtel also says that if the ISPs agree, it won’t serve its evidence on them as it has done previously.

    The company asked Justice Nicholas to deal with the injunction application “on paper” but he declined, setting a hearing for June 18 but accepting screenshots and videos as evidence.


    Safety first: a Raspberry Pi safety helmet

    Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/safety-helmet/

Jennifer Fox is back, this time with a Raspberry Pi Zero–controlled impact force monitor that will notify you if your collision is worth a trip to the doctor.

    Make an Impact Force Monitor!

    Check out my latest Hacker in Residence project for SparkFun Electronics: the Helmet Guardian! It’s a Pi Zero powered impact force monitor that turns on an LED if your head/body experiences a potentially dangerous impact. Install in your sports helmets, bicycle, or car to keep track of impact and inform you when it’s time to visit the doctor.

    Concussion

    We’ve all knocked our heads at least once in our lives, maybe due to tripping over a loose paving slab, or to falling off a bike, or to walking into the corner of the overhead cupboard door for the third time this week — will I ever learn?! More often than not, even when we’re seeing stars, we brush off the accident and continue with our day, oblivious to the long-term damage we may be doing.

    Force of impact

After some thorough research, Jennifer Fox, founder of FoxBot Industries, concluded that forces of 4 to 6 G sustained for more than a few seconds are dangerous to the human body. With this in mind, she decided to use a Raspberry Pi Zero W and an accelerometer to create a helmet with an impact force monitor that notifies its wearer if this level of G-force has been met.

    Jennifer Fox Raspberry Pi Impact Force Monitor

    Obviously, if you do have a serious fall, you should always seek medical advice. This project is an example of how affordable technology can be used to create medical and citizen science builds, and not a replacement for professional medical services.

    Setting up the impact monitor

    Jennifer’s monitor requires only a few pieces of tech: a Zero W, an accelerometer and breakout board, a rechargeable USB battery, and an LED, plus the standard wires and resistors for these components.

    After installing Raspbian, Jennifer enabled SSH and I2C on the Zero W to make it run headlessly, and then accessed it from a laptop. This allows her to control the Pi without physically connecting to it, and it makes for a wireless finished project.

    Jen wired the Pi to the accelerometer breakout board and LED as shown in the schematic below.

    Jennifer Fox Raspberry Pi Impact Force Monitor

    The LED acts as a signal of significant impacts, turning on when the G-force threshold is reached, and not turning off again until the program is reset.

    Jennifer Fox Raspberry Pi Impact Force Monitor
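To give a flavour of the logic, here’s an illustrative Python sketch: sample the accelerometer, and if the G-force stays above a threshold for long enough, latch the LED on until the program is restarted. The MPU-6050 sensor, pin number, and timings here are our assumptions, not Jennifer’s build — her full code is on GitHub, as linked below.

```python
# Illustrative impact monitor: light the LED if total acceleration
# exceeds G_THRESHOLD for HOLD_SECONDS, then latch until restart.
import time
from signal import pause
from smbus2 import SMBus
from gpiozero import LED

MPU_ADDR = 0x68      # MPU-6050 I2C address (our assumed sensor)
G_THRESHOLD = 4.0    # lower end of the danger band discussed above
HOLD_SECONDS = 0.5   # how long the force must persist; tune to taste

led = LED(17)        # warning LED on GPIO17 (hypothetical wiring)

def magnitude(bus):
    """Total acceleration in g, read at the +/-16 g range (2048 LSB/g)."""
    raw = bus.read_i2c_block_data(MPU_ADDR, 0x3B, 6)
    axes = []
    for i in (0, 2, 4):
        v = (raw[i] << 8) | raw[i + 1]
        axes.append((v - 65536 if v > 32767 else v) / 2048.0)
    return sum(a * a for a in axes) ** 0.5

with SMBus(1) as bus:
    bus.write_byte_data(MPU_ADDR, 0x6B, 0)     # wake the sensor from sleep
    bus.write_byte_data(MPU_ADDR, 0x1C, 0x18)  # select the +/-16 g range
    over_since = None
    while True:
        if magnitude(bus) >= G_THRESHOLD:
            over_since = over_since or time.time()
            if time.time() - over_since >= HOLD_SECONDS:
                led.on()   # latch the warning LED...
                pause()    # ...until the program is reset
        else:
            over_since = None
        time.sleep(0.005)
```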

    Make your own and more

    Jennifer’s full code for the impact monitor is on GitHub, and she’s put together a complete tutorial on SparkFun’s website.

    For more tutorials from Jennifer Fox, such as her ‘Bark Back’ IoT Pet Monitor, be sure to follow her on YouTube. And for similar projects, check out Matt’s smart bike light and Amelia Day’s physical therapy soccer ball.


    Russians Hacked the Olympics

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/russians_hacked.html

    Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

    Of course, the evidence is classified, so there’s no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn’t ring true to me. If they tried to blame North Korea, it’s more likely that they’re trying to disrupt something between North Korea, South Korea, and the US. But I don’t know.

    Internet Security Threats at the Olympics

    Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/internet_securi.html

    There are a lot:

    The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 — according to Trend Micro and ThreatConnect, two private cybersecurity firms — eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter’s positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

    A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

    One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

    EDITED TO ADD: There was already one attack.

    Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift

    Post Syndicated from Thiyagarajan Arumugam original https://aws.amazon.com/blogs/big-data/top-8-best-practices-for-high-performance-etl-processing-using-amazon-redshift/

    An ETL (Extract, Transform, Load) process enables you to load data from source systems into your data warehouse. This is typically executed as a batch or near-real-time ingest process to keep the data warehouse current and provide up-to-date analytical data to end users.

Amazon Redshift is a fast, petabyte-scale data warehouse that enables you to make data-driven decisions easily. With Amazon Redshift, you can get insights into your big data in a cost-effective fashion using standard SQL. You can set up any type of data model, from star and snowflake schemas to simple denormalized tables, for running any analytical queries.

    To operate a robust ETL platform and deliver data to Amazon Redshift in a timely manner, design your ETL processes to take account of Amazon Redshift’s architecture. When migrating from a legacy data warehouse to Amazon Redshift, it is tempting to adopt a lift-and-shift approach, but this can result in performance and scale issues long term. This post guides you through the following best practices for ensuring optimal, consistent runtimes for your ETL processes:

    • COPY data from multiple, evenly sized files.
    • Use workload management to improve ETL runtimes.
    • Perform table maintenance regularly.
    • Perform multiple steps in a single transaction.
• Load data in bulk.
    • Use UNLOAD to extract large result sets.
    • Use Amazon Redshift Spectrum for ad hoc ETL processing.
    • Monitor daily ETL health using diagnostic queries.

    1. COPY data from multiple, evenly sized files

    Amazon Redshift is an MPP (massively parallel processing) database, where all the compute nodes divide and parallelize the work of ingesting data. Each node is further subdivided into slices, with each slice having one or more dedicated cores, equally dividing the processing capacity. The number of slices per node depends on the node type of the cluster. For example, each DS2.XLARGE compute node has two slices, whereas each DS2.8XLARGE compute node has 16 slices.

    When you load data into Amazon Redshift, you should aim to have each slice do an equal amount of work. When you load the data from a single large file or from files split into uneven sizes, some slices do more work than others. As a result, the process runs only as fast as the slowest, or most heavily loaded, slice. In the example shown below, a single large file is loaded into a two-node cluster, resulting in only one of the nodes, “Compute-0”, performing all the data ingestion:

    When splitting your data files, ensure that they are of approximately equal size – between 1 MB and 1 GB after compression. The number of files should be a multiple of the number of slices in your cluster. Also, I strongly recommend that you individually compress the load files using gzip, lzop, or bzip2 to efficiently load large datasets.

    When loading multiple files into a single table, use a single COPY command for the table, rather than multiple COPY commands. Amazon Redshift automatically parallelizes the data ingestion. Using a single COPY command to bulk load data into a table ensures optimal use of cluster resources, and quickest possible throughput.
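
As a minimal sketch of this practice, assuming gzipped, evenly sized part files staged under a common prefix and a hypothetical table named orders, a single COPY lets every slice in the cluster ingest files in parallel:

-- Hypothetical example: one COPY for the whole load. All part files
-- under the prefix are divided across slices and loaded in parallel.
COPY orders
FROM 's3://<<S3 Bucket>>/orders/part_'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
GZIP;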

    2. Use workload management to improve ETL runtimes

    Use Amazon Redshift’s workload management (WLM) to define multiple queues dedicated to different workloads (for example, ETL versus reporting) and to manage the runtimes of queries. As you migrate more workloads into Amazon Redshift, your ETL runtimes can become inconsistent if WLM is not appropriately set up.

    I recommend limiting the overall concurrency of WLM across all queues to around 15 or less. This WLM guide helps you organize and monitor the different queues for your Amazon Redshift cluster.

    When managing different workloads on your Amazon Redshift cluster, consider the following for the queue setup:

    • Create a queue dedicated to your ETL processes. Configure this queue with a small number of slots (5 or fewer). Amazon Redshift is designed for analytics queries, rather than transaction processing. The cost of COMMIT is relatively high, and excessive use of COMMIT can result in queries waiting for access to the commit queue. Because ETL is a commit-intensive process, having a separate queue with a small number of slots helps mitigate this issue.
• Claim extra memory available in a queue. When executing an ETL query, you can take advantage of the wlm_query_slot_count to claim the extra memory available in a particular queue (see the sketch after this list). For example, a typical ETL process might involve COPYing raw data into a staging table so that downstream ETL jobs can run transformations that calculate daily, weekly, and monthly aggregates. To speed up the COPY process (so that the downstream tasks can start in parallel sooner), the wlm_query_slot_count can be increased for this step.
    • Create a separate queue for reporting queries. Configure query monitoring rules on this queue to further manage long-running and expensive queries.
    • Take advantage of the dynamic memory parameters. They swap the memory from your ETL to your reporting queue after the ETL job has completed.
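
As a sketch of the slot-count technique from the second bullet, assuming a hypothetical ETL queue configured with 5 slots and the staging table stage_tbl:

-- Claim 4 of the ETL queue's 5 slots so this COPY gets most of the
-- queue's memory; return to the default single slot afterwards.
SET wlm_query_slot_count TO 4;
COPY stage_tbl FROM 's3://<<S3 Bucket>>/batch/' IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole' GZIP;
SET wlm_query_slot_count TO 1;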

    3. Perform table maintenance regularly

Amazon Redshift is a columnar database, which enables fast transformations for aggregating data. Performing regular table maintenance ensures that transformation ETLs are predictable and performant. To get the best performance from your Amazon Redshift database, ensure that database tables are regularly VACUUMed and ANALYZEd. The Analyze & Vacuum schema utility helps you automate the table maintenance task and run VACUUM and ANALYZE on a regular schedule.

    • Use VACUUM to sort tables and remove deleted blocks

    During a typical ETL refresh process, tables receive new incoming records using COPY, and unneeded data (cold data) is removed using DELETE. New rows are added to the unsorted region in a table. Deleted rows are simply marked for deletion.

    DELETE does not automatically reclaim the space occupied by the deleted rows. Adding and removing large numbers of rows can therefore cause the unsorted region and the number of deleted blocks to grow. This can degrade the performance of queries executed against these tables.

After an ETL process completes, perform VACUUM to ensure that user queries execute in a consistent manner. The complete list of tables that need VACUUMing can be found using the Amazon Redshift Utils table_info script.
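
As a hedged quick check that doesn't require the packaged script, the SVV_TABLE_INFO system view surfaces tables with large unsorted regions; the 10 percent threshold below is an illustrative choice:

-- Tables with a large unsorted fraction are good VACUUM candidates.
SELECT "table", unsorted, stats_off
FROM svv_table_info
WHERE unsorted > 10
ORDER BY unsorted DESC;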

Use the following approaches to ensure that VACUUM is completed in a timely manner:

    • Use wlm_query_slot_count to claim all the memory allocated in the ETL WLM queue during the VACUUM process.
    • DROP or TRUNCATE intermediate or staging tables, thereby eliminating the need to VACUUM them.
    • If your table has a compound sort key with only one sort column, try to load your data in sort key order. This helps reduce or eliminate the need to VACUUM the table.
• Consider using time series tables. This helps reduce the amount of data you need to VACUUM.
    • Use ANALYZE to update database statistics

Amazon Redshift uses a cost-based query planner and optimizer that relies on table statistics to choose good query plans for SQL statements. Regular statistics collection after ETL completion ensures that user queries run fast, and that daily ETL processes are performant. The Amazon Redshift Utils table_info script provides insights into the freshness of the statistics. Keeping the percentage of stale statistics (pct_stats_off) below 20% ensures effective query plans for your SQL queries.
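
As a short, hedged illustration of both maintenance steps, a post-ETL job for a hypothetical daily_table might run:

-- Reclaim deleted blocks and re-sort the table, then refresh the
-- planner statistics so subsequent queries get good plans.
VACUUM FULL daily_table TO 99 PERCENT;
ANALYZE daily_table;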

    4. Perform multiple steps in a single transaction

    ETL transformation logic often spans multiple steps. Because commits in Amazon Redshift are expensive, if each ETL step performs a commit, multiple concurrent ETL processes can take a long time to execute.

To minimize the number of commits in a process, the steps in an ETL script should be wrapped in a BEGIN…COMMIT block so that a single commit is performed only after all the transformation logic has been executed. Here is an example of a multi-step ETL script that performs one commit at the end:

BEGIN;
CREATE TEMPORARY TABLE staging_table (..);
INSERT INTO staging_table SELECT .. FROM source;       -- transformation logic
DELETE FROM daily_table WHERE dataset_date = ?;
INSERT INTO daily_table SELECT .. FROM staging_table;  -- daily aggregate
DELETE FROM weekly_table WHERE weekending_date = ?;
INSERT INTO weekly_table SELECT .. FROM staging_table; -- weekly aggregate
COMMIT;

5. Load data in bulk

    Amazon Redshift is designed to store and query petabyte-scale datasets. Using Amazon S3 you can stage and accumulate data from multiple source systems before executing a bulk COPY operation. The following methods allow efficient and fast transfer of these bulk datasets into Amazon Redshift:

    • Use a manifest file to ingest large datasets that span multiple files. The manifest file is a JSON file that lists all the files to be loaded into Amazon Redshift. Using a manifest file ensures that Amazon Redshift has a consistent view of the data to be loaded from S3, while also ensuring that duplicate files do not result in the same data being loaded more than one time.
    • Use temporary staging tables to hold the data for transformation. These tables are automatically dropped after the ETL session is complete. Temporary tables can be created using the CREATE TEMPORARY TABLE syntax, or by issuing a SELECT … INTO #TEMP_TABLE query. Explicitly specifying the CREATE TEMPORARY TABLE statement allows you to control the DISTRIBUTION KEY, SORT KEY, and compression settings to further improve performance.
• Use ALTER TABLE APPEND to swap data from the staging tables to the target table (see the sketch after this list). Data in the source table is moved to matching columns in the target table. Column order doesn’t matter. After data is successfully appended to the target table, the source table is empty. ALTER TABLE APPEND is much faster than a similar CREATE TABLE AS or INSERT INTO operation because it doesn’t involve copying or moving data.
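
As a minimal sketch of the append step, using hypothetical staging and target tables:

-- Move all rows from the staging table into the target table. Rows are
-- relocated rather than copied, and staging_sales is left empty.
ALTER TABLE sales APPEND FROM staging_sales;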

    6. Use UNLOAD to extract large result sets

    Fetching a large number of rows using SELECT is expensive and takes a long time. When a large amount of data is fetched from the Amazon Redshift cluster, the leader node has to hold the data temporarily until the fetches are complete. Further, data is streamed out sequentially, which results in longer elapsed time. As a result, the leader node can become hot, which not only affects the SELECT that is being executed, but also throttles resources for creating execution plans and managing the overall cluster resources. Here is an example of a large SELECT statement. Notice that the leader node is doing most of the work to stream out the rows:

Use UNLOAD to extract large result sets directly to S3. After it’s in S3, the data can be shared with multiple downstream systems. By default, UNLOAD writes data in parallel to multiple files according to the number of slices in the cluster. All the compute nodes participate to quickly offload the data into S3.

If you are extracting data for use with Amazon Redshift Spectrum, make use of the MAXFILESIZE parameter to keep files to around 150 MB. Similar to item 1 above, having many evenly sized files ensures that Redshift Spectrum can do the maximum amount of work in parallel.
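
A hedged example of such an extract, reusing the weekly_tbl table from the sample ETL process later in this post:

-- Write the result set in parallel to gzipped files of at most 150 MB
-- each, sized for efficient scanning by Redshift Spectrum.
UNLOAD ('SELECT * FROM weekly_tbl')
TO 's3://<<S3 Bucket>>/extracts/weekly_'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
GZIP
MAXFILESIZE 150 MB;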

    7. Use Redshift Spectrum for ad hoc ETL processing

    Events such as data backfill, promotional activity, and special calendar days can trigger additional data volumes that affect the data refresh times in your Amazon Redshift cluster. To help address these spikes in data volumes and throughput, I recommend staging data in S3. After data is organized in S3, Redshift Spectrum enables you to query it directly using standard SQL. In this way, you gain the benefits of additional capacity without having to resize your cluster.
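
As a minimal sketch, assuming the staged data is registered in the AWS Glue Data Catalog under a hypothetical database etl_staging containing a clickstream_events table with an event_date column:

-- Expose the S3-staged data through an external schema, then query it
-- with standard SQL without loading it into the cluster.
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG DATABASE 'etl_staging'
IAM_ROLE 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

SELECT COUNT(*) FROM spectrum_schema.clickstream_events WHERE event_date = '2017-07-02';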

    For tips on getting started with and optimizing the use of Redshift Spectrum, see the previous post, 10 Best Practices for Amazon Redshift Spectrum.

    8. Monitor daily ETL health using diagnostic queries

    Monitoring the health of your ETL processes on a regular basis helps identify the early onset of performance issues before they have a significant impact on your cluster. The following monitoring scripts can be used to provide insights into the health of your ETL processes:

commit_stats.sql – Commit queue statistics from past days, showing largest queue length and queue time first.
Use when: DML statements such as INSERT/UPDATE/COPY/DELETE operations take several times longer to execute when multiple of these operations are in progress.
Solution: Set up separate WLM queues for the ETL process and limit the concurrency to < 5.

copy_performance.sql – COPY command statistics for the past days.
Use when: Daily COPY operations take longer to execute.
Solution: Follow the best practices for the COPY command. Analyze data growth with the incoming datasets and consider a cluster resize to meet the expected SLA.

table_info.sql – Table skew and unsorted statistics, along with storage and key information.
Use when: Transformation steps take longer to execute.
Solution: Set up regular VACUUM jobs to address unsorted rows and reclaim the deleted blocks so that transformation SQL executes optimally. Consider a table redesign to avoid data skewness.

v_check_transaction_locks.sql – Monitor transaction locks.
Use when: INSERT/UPDATE/COPY/DELETE operations on particular tables do not respond in a timely manner, compared to when run after the ETL.
Solution: Multiple DML statements are operating on the same target table at the same moment from different transactions. Set up ETL job dependencies so that they execute serially for the same target table.

v_get_schema_priv_by_user.sql – Get the schema that the user has access to.
Use when: Reporting users can view intermediate tables.
Solution: Set up separate database groups for reporting and ETL users, and grant access to objects using GRANT.

v_generate_tbl_ddl.sql – Get the table DDL.
Use when: You need to create an empty table with the same structure as the target table for data backfill.
Solution: Generate DDL using this script for data backfill.

v_space_used_per_tbl.sql – Monitor space used by individual tables.
Use when: Amazon Redshift data warehouse space growth is trending upwards more than normal.
Solution: Analyze the individual tables that are growing at a higher rate than normal. Consider data archival using UNLOAD to S3 and Redshift Spectrum for later analysis. Use unscanned_table_summary.sql to find unused tables and archive or drop them.

top_queries.sql – Return the top 50 time-consuming statements aggregated by their text.
Use when: ETL transformations are taking longer to execute.
Solution: Analyze the top transformation SQL and use EXPLAIN to find opportunities for tuning the query plan.

There are several other useful scripts available in the amazon-redshift-utils repository. The AWS Lambda Utility Runner runs a subset of these scripts on a scheduled basis, allowing you to automate much of the monitoring of your ETL processes.

    Example ETL process

    The following ETL process reinforces some of the best practices discussed in this post. Consider the following four-step daily ETL workflow where data from an RDBMS source system is staged in S3 and then loaded into Amazon Redshift. Amazon Redshift is used to calculate daily, weekly, and monthly aggregations, which are then unloaded to S3, where they can be further processed and made available for end-user reporting using a number of different tools, including Redshift Spectrum and Amazon Athena.

Step 1: Extract from the RDBMS source to an S3 bucket

In this ETL process, the data extract job fetches change data every hour and stages it into multiple hourly files. For example, the staged S3 folder looks like the following:

$ aws s3 ls s3://<<S3 Bucket>>/batch/2017/07/02/
2017-07-02 01:59:58   81900220 20170702T01.export.gz
2017-07-02 02:59:56   84926844 20170702T02.export.gz
2017-07-02 03:59:54   78990356 20170702T03.export.gz
…
2017-07-02 22:00:03   75966745 20170702T21.export.gz
2017-07-02 23:00:02   89199874 20170702T22.export.gz
2017-07-03 00:59:59   71161715 20170702T23.export.gz

Organizing the data into multiple, evenly sized files enables the COPY command to ingest this data using all available resources in the Amazon Redshift cluster. In addition, the files are compressed (gzipped) to further reduce COPY times.

    Step 2: Stage data to the Amazon Redshift table for cleansing

    Ingesting the data can be accomplished using a JSON-based manifest file. Using the manifest file ensures that S3 eventual consistency issues can be eliminated and also provides an opportunity to dedupe any files if needed. A sample manifest20170702.json file looks like the following:

{
  "entries": [
    {"url":"s3://<<S3 Bucket>>/batch/2017/07/02/20170702T01.export.gz", "mandatory":true},
    {"url":"s3://<<S3 Bucket>>/batch/2017/07/02/20170702T02.export.gz", "mandatory":true},
    …
    {"url":"s3://<<S3 Bucket>>/batch/2017/07/02/20170702T23.export.gz", "mandatory":true}
  ]
}

    The data can be ingested using the following command:

SET wlm_query_slot_count TO <<max available concurrency in the ETL queue>>;
COPY stage_tbl FROM 's3://<<S3 Bucket>>/batch/manifest20170702.json' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole' manifest;

    Because the downstream ETL processes depend on this COPY command to complete, the wlm_query_slot_count is used to claim all the memory available to the queue. This helps the COPY command complete as quickly as possible.

    Step 3: Transform data to create daily, weekly, and monthly datasets and load into target tables

Data is staged in the “stage_tbl”, from which it can be transformed into the daily, weekly, and monthly aggregates and loaded into target tables. The following job illustrates a typical weekly process:

BEGIN;
INSERT INTO ETL_LOG (..) VALUES (..);
DELETE FROM weekly_tbl WHERE dataset_week = <<current week>>;
INSERT INTO weekly_tbl (..)
  SELECT date_trunc('week', dataset_day) AS week_begin_dataset_date, SUM(C1) AS C1, SUM(C2) AS C2
  FROM stage_tbl
  GROUP BY date_trunc('week', dataset_day);
INSERT INTO AUDIT_LOG VALUES (..);
COMMIT;

    As shown above, multiple steps are combined into one transaction to perform a single commit, reducing contention on the commit queue.

    Step 4: Unload the daily dataset to populate the S3 data lake bucket

    The transformed results are now unloaded into another S3 bucket, where they can be further processed and made available for end-user reporting using a number of different tools, including Redshift Spectrum and Amazon Athena.

unload ('SELECT * FROM weekly_tbl WHERE dataset_week = <<current week>>') TO 's3://<<S3 Bucket>>/datalake/weekly/20170526/' iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole';

    Summary

Amazon Redshift lets you easily operate petabyte-scale data warehouses in the cloud. This post summarized the best practices for operating scalable ETL natively within Amazon Redshift. I demonstrated efficient ways to ingest and transform data, along with close monitoring, and walked through a typical sample ETL workload that applies these best practices.

    If you have questions or suggestions, please comment below.



    About the Author

    Thiyagarajan Arumugam is a Big Data Solutions Architect at Amazon Web Services and designs customer architectures to process data at scale. Prior to AWS, he built data warehouse solutions at Amazon.com. In his free time, he enjoys all outdoor sports and practices the Indian classical drum mridangam.


    Optimize Delivery of Trending, Personalized News Using Amazon Kinesis and Related Services

    Post Syndicated from Yukinori Koide original https://aws.amazon.com/blogs/big-data/optimize-delivery-of-trending-personalized-news-using-amazon-kinesis-and-related-services/

This is a guest post by Yukinori Koide, the head of development for the Newspass department at Gunosy.

    Gunosy is a news curation application that covers a wide range of topics, such as entertainment, sports, politics, and gourmet news. The application has been installed more than 20 million times.

    Gunosy aims to provide people with the content they want without the stress of dealing with a large influx of information. We analyze user attributes, such as gender and age, and past activity logs like click-through rate (CTR). We combine this information with article attributes to provide trending, personalized news articles to users.

    In this post, I show you how to process user activity logs in real time using Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and related AWS services.

    Why does Gunosy need real-time processing?

    Users need fresh and personalized news. There are two constraints to consider when delivering appropriate articles:

    • Time: Articles have freshness—that is, they lose value over time. New articles need to reach users as soon as possible.
    • Frequency (volume): Only a limited number of articles can be shown. It’s unreasonable to display all articles in the application, and users can’t read all of them anyway.

    To deliver fresh articles with a high probability that the user is interested in them, it’s necessary to include not only past user activity logs and some feature values of articles, but also the most recent (real-time) user activity logs.

    We optimize the delivery of articles with these two steps.

    1. Personalization: Deliver articles based on each user’s attributes, past activity logs, and feature values of each article—to account for each user’s interests.
    2. Trends analysis/identification: Optimize delivering articles using recent (real-time) user activity logs—to incorporate the latest trends from all users.

Optimizing article delivery always starts cold: initially, we deliver articles based on past logs, and then use real-time data to optimize as quickly as possible. In addition, news has a short shelf life. Day-old news is old news, and even news that is three hours old is already stale. Therefore, shortening the time between step 1 and step 2 is important.

    To tackle this issue, we chose AWS for processing streaming data because of its fully managed services, cost-effectiveness, and so on.

    Solution

The following diagrams depict the architecture for optimizing article delivery by processing real-time user activity logs.

    There are three processing flows:

    1. Process real-time user activity logs.
    2. Store and process all user-based and article-based logs.
    3. Execute ad hoc or heavy queries.

    In this post, I focus on the first processing flow and explain how it works.

    Process real-time user activity logs

    The following are the steps for processing user activity logs in real time using Kinesis Data Streams and Kinesis Data Analytics.

    1. The Fluentd server sends the following user activity logs to Kinesis Data Streams:
    {"article_id": 12345, "user_id": 12345, "action": "click"}
    {"article_id": 12345, "user_id": 12345, "action": "impression"}
    ...
2. Map rows of logs to columns in Kinesis Data Analytics.

3. Set the reference data for Kinesis Data Analytics from Amazon S3.

    a. Gunosy has user attributes such as gender, age, and segment. Prepare the following CSV file (user_id, gender, segment_id) and put it in Amazon S3:

    101,female,1
    102,male,2
    103,female,3
    ...

    b. Add the application reference data source to Kinesis Data Analytics using the AWS CLI:

    $ aws kinesisanalytics add-application-reference-data-source \
      --application-name <my-application-name> \
      --current-application-version-id <version-id> \
      --reference-data-source '{
      "TableName": "REFERENCE_DATA_SOURCE",
      "S3ReferenceDataSource": {
        "BucketARN": "arn:aws:s3:::<my-bucket-name>",
        "FileKey": "mydata.csv",
        "ReferenceRoleARN": "arn:aws:iam::<account-id>:role/..."
      },
      "ReferenceSchema": {
        "RecordFormat": {
          "RecordFormatType": "CSV",
          "MappingParameters": {
            "CSVMappingParameters": {"RecordRowDelimiter": "\n", "RecordColumnDelimiter": ","}
          }
        },
        "RecordEncoding": "UTF-8",
        "RecordColumns": [
          {"Name": "USER_ID", "Mapping": "0", "SqlType": "INTEGER"},
          {"Name": "GENDER",  "Mapping": "1", "SqlType": "VARCHAR(32)"},
          {"Name": "SEGMENT_ID", "Mapping": "2", "SqlType": "INTEGER"}
        ]
      }
    }'

This application reference data source can then be referenced in queries on Kinesis Data Analytics.

4. Run a query against the source data stream on Kinesis Data Analytics with the application reference data source.

    a. Define the temporary stream named TMP_SQL_STREAM.

CREATE OR REPLACE STREAM "TMP_SQL_STREAM" (
  -- ACTION is included so the downstream aggregation can read it
  GENDER VARCHAR(32), SEGMENT_ID INTEGER, ARTICLE_ID INTEGER, ACTION VARCHAR(16)
);

    b. Insert the joined source stream and application reference data source into the temporary stream.

    CREATE OR REPLACE PUMP "TMP_PUMP" AS
    INSERT INTO "TMP_SQL_STREAM"
    SELECT STREAM
      R.GENDER, R.SEGMENT_ID, S.ARTICLE_ID, S.ACTION
    FROM      "SOURCE_SQL_STREAM_001" S
    LEFT JOIN "REFERENCE_DATA_SOURCE" R
      ON S.USER_ID = R.USER_ID;

    c. Define the destination stream named DESTINATION_SQL_STREAM.

    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
      TIME TIMESTAMP, GENDER VARCHAR(32), SEGMENT_ID INTEGER, ARTICLE_ID INTEGER, 
      IMPRESSION INTEGER, CLICK INTEGER
    );

    d. Insert the processed temporary stream, using a tumbling window, into the destination stream per minute.

CREATE OR REPLACE PUMP "STREAM_PUMP" AS
INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM
  FLOOR("TMP_SQL_STREAM".ROWTIME TO MINUTE) AS TIME,
  GENDER, SEGMENT_ID, ARTICLE_ID,
  SUM(CASE ACTION WHEN 'impression' THEN 1 ELSE 0 END) AS IMPRESSION,
  SUM(CASE ACTION WHEN 'click' THEN 1 ELSE 0 END) AS CLICK
FROM "TMP_SQL_STREAM"
GROUP BY
  GENDER, SEGMENT_ID, ARTICLE_ID,
  FLOOR("TMP_SQL_STREAM".ROWTIME TO MINUTE);

    The results look like the following:

5. Insert the results into Amazon Elasticsearch Service (Amazon ES).
6. Batch servers get results from Amazon ES every minute. They then optimize article delivery, combining the results with other data sources, using a proprietary optimization algorithm.

    How to connect a stream to another stream in another AWS Region

    When we built the solution, Kinesis Data Analytics was not available in the Asia Pacific (Tokyo) Region, so we used the US West (Oregon) Region. The following shows how we connected a data stream to another data stream in the other Region.

You don’t need to keep all components in a single AWS Region unless a response difference at the millisecond level is critical to the service.

    Benefits

The solution provides benefits for both our company and our users. Benefits for the company are cost savings (development, operational, and infrastructure costs) and reduced delivery time. Users can now find articles of interest more quickly. The solution can process more than 500,000 records per minute, and it enables fast, personalized news curation for our users.

    Conclusion

In this post, I showed you how Gunosy optimizes the delivery of trending, personalized news by processing user activity logs in real time with Amazon Kinesis Data Firehose, Amazon Kinesis Data Analytics, and related AWS services.

    AWS gives us a quick and economical solution and a good experience.

    If you have questions or suggestions, please comment below.


    Additional Reading

    If you found this post useful, be sure to check out Implement Serverless Log Analytics Using Amazon Kinesis Analytics and Joining and Enriching Streaming Data on Amazon Kinesis.


    About the Authors

    Yukinori Koide is the head of development for the Newspass department at Gunosy. He is working on standardization of provisioning and deployment flow, promoting the utilization of serverless and containers for machine learning and AI services. His favorite AWS services are DynamoDB, Lambda, Kinesis, and ECS.


Akihiro Tsukada is a start-up solutions architect with AWS. He provides technical support to start-up companies in Japan at all stages, from seed to later stage.


    Yuta Ishii is a solutions architect with AWS. He works with our customers to provide architectural guidance for building media & entertainment services, helping them improve the value of their services when using AWS.