Tag Archives: rest

Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These provisions are renewed every three years after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously included game preservation exemptions to keep these games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year several gaming fans including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this, which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that their members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. The wishes of game preservation advocates such as MADE are hard to reconcile with the interests of the game companies, so one side will inevitably be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Major US Sports Leagues Report Top Piracy Nations to Government

Post Syndicated from Ernesto original https://torrentfreak.com/major-us-sports-leagues-report-top-piracy-nations-to-government-180216/

While pirated Hollywood blockbusters often score the big headlines, there are several other industries that have been battling with piracy over the years. This includes sports organizations.

Many of the major US leagues, including the NBA, NFL, NHL, MLB and the Tennis Association, have joined forces in the Sports Coalition to try to curb the availability of pirated streams and videos.

A few days ago the Sports Coalition put the piracy problem on the agenda of the United States Trade Representative (USTR).

“Sports organizations, including Sports Coalition members, are heavily affected by live sports telecast piracy, including the unauthorized live retransmission of sports telecasts over the Internet,” the Sports Coalition wrote.

“The Internet piracy of live sports telecasts is not only a persistent problem, but also a global one, often involving bad actors in more than one nation.”

The USTR asked the public for comments on which countries play a central role in copyright infringement issues. In its response, the Sports Coalition stresses that piracy is a global issue but singles out several nations as particularly problematic.

The coalition recommends that the USTR should put the Netherlands and Switzerland on the “Priority Watch List” of its 2018 Special 301 Report, followed by Russia, Saudi Arabia, Seychelles and Sweden, which get a regular “Watch List” recommendation.

The main problem with these countries is that hosting providers and content distribution networks don’t do enough to curb piracy.

In the Netherlands, sawlive.tv, strikezone.me, wizl.net, AltusHost, Host Palace, Quasi Networks and SNEL pirated or provided services contributing to sports piracy, the coalition writes. In Switzerland, mlbstream.me, robinwidget.org, strikeout.mobi, BlackHOST, Private Layer and Solar Communications are doing the same.

According to the major sports leagues, the US Government should encourage these countries to step up their anti-piracy game. This is not only important for US copyright holders, but also for licensees in other countries.

“Clearly, there is common ground – both in terms of shared economic interests and legal obligations to protect and enforce intellectual property and related rights – for the United States and the nations with which it engages in international trade to work cooperatively to stop Internet piracy of sports programming.”

Whether any of these countries will make it onto the USTR’s final list remains to be seen. For Switzerland it wouldn’t be the first time, but for the Netherlands it would be new, although the country has been considered before.

A document we received through a FOIA request earlier this year revealed that the US Embassy reached out to the Dutch Government in the past, to discuss similar complaints from the Sports Coalition.

The same document also revealed that local anti-piracy group BREIN consistently urged the entertainment industries it represents not to advocate placing the Netherlands on the 301 Watch List but to solve the problems behind the scenes instead.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Embedding a Tweet Can be Copyright Infringement, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/embedding-a-tweet-can-be-copyright-infringement-court-rules-180216/

Nowadays it’s fairly common for blogs and news sites to embed content posted by third parties, ranging from YouTube videos to tweets.

Although these publications don’t host the content themselves, they can be held liable for copyright infringement, a New York federal court has ruled.

The case in question was filed by Justin Goldman whose photo of Tom Brady went viral after he posted it on Snapchat. After being reposted on Reddit, it also made its way onto Twitter from where various news organizations picked it up.

Several of these news sites reported on the photo by embedding tweets from others. However, since Goldman never gave permission to display his photo, he went on to sue the likes of Breitbart, Time, Vox and Yahoo, for copyright infringement.

In their defense, the news organizations argued that they did nothing wrong as no content was hosted on their servers. They referred to the so-called “server test” that was applied in several related cases in the past, which determined that liability rests on the party that hosts the infringing content.

In an order that was just issued, US District Court Judge Katherine Forrest disagrees. She rejects the “server test” argument and rules that the news organizations are liable.

“[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result,” Judge Forrest writes.

Judge Forrest argues that the server test was established in the ‘Perfect 10 v. Amazon’ case, which dealt with the ‘distribution’ of content. This case is about ‘displaying’ an infringing work instead, an area where the jurisprudence is not as clear.

“The Court agrees with plaintiff. The plain language of the Copyright Act, the legislative history undergirding its enactment, and subsequent Supreme Court jurisprudence provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have “displayed” a work within the meaning of the Copyright Act.”

As a result, summary judgment was granted in favor of Goldman.

Rightsholders, including Getty Images which supported Goldman, are happy with the result. However, not everyone is pleased. The Electronic Frontier Foundation (EFF) says that if the current verdict stands it will put millions of regular Internet users at risk.

“Rejecting years of settled precedent, a federal court in New York has ruled that you could infringe copyright simply by embedding a tweet in a web page,” EFF comments.

“Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.”

Given what’s at stake, it’s likely that the news organizations will appeal this week’s order.

Interestingly, earlier this week a California district court dismissed Playboy’s copyright infringement complaint against Boing Boing, which embedded a YouTube video that contained infringing content.

A copy of Judge Forrest’s opinion can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Court Orders Spanish ISPs to Block Pirate Sites For Hollywood

Post Syndicated from Andy original https://torrentfreak.com/court-orders-spanish-isps-to-block-pirate-sites-for-hollywood-180216/

Determined to reduce levels of piracy globally, Hollywood has become one of the main proponents of site-blocking on the planet. To date there have been multiple lawsuits in far-flung jurisdictions, with Europe one of the primary targets.

Following complaints from Disney, 20th Century Fox, Paramount, Sony, Universal and Warner, Spain has become one of the latest targets. According to the studios a pair of sites – HDFull.tv and Repelis.tv – infringe their copyrights on a grand scale and need to be slowed down by preventing users from accessing them.

HDFull is a platform that provides movies and TV shows in both Spanish and English. Almost 60% of its traffic comes from Spain and, after a huge surge in visitors last July, it’s now the 337th most popular site in the country according to Alexa. Visitors from Mexico, Argentina, the United States and Chile make up the rest of its audience.

Repelis.tv is a similar streaming portal specializing in movies, mainly in Spanish. A third of the site’s visitors hail from Mexico, with the remainder coming from Argentina, Colombia, Spain and Chile. In common with HDFull, Repelis has been building its visitor numbers quickly since 2017.

The studios demanding more blocks

With a ruling in hand from the European Court of Justice which determined that sites can be blocked on copyright infringement grounds, the studios asked the courts to issue an injunction against several local ISPs including Telefónica, Vodafone, Orange and Xfera. In an order handed down this week, Barcelona Commercial Court No. 6 sided with the studios and ordered the ISPs to begin blocking the sites.

“They damage the legitimate rights of those who own the films and series, which these pages illegally display and with which they profit illegally through the advertising revenues they generate,” a statement from the Spanish Federation of Cinematographic Distributors (FEDECINE) reads.

FEDECINE General director Estela Artacho said that changes in local law have helped to provide the studios with a new way to protect audiovisual content released in Spain.

“Thanks to the latest reform of the Civil Procedure Law, we have in this jurisdiction a new way to exercise different possibilities to protect our commercial film offering,” Artacho said.

“Those of us who are part of this industry work to make culture accessible and offer the best cinematographic experience in the best possible conditions, guaranteeing the continuity of the sector.”

The development was also welcomed by Stan McCoy, president of the Motion Picture Association’s EMEA division, which represents the plaintiffs in the case.

“We have just taken a welcome step which we consider crucial to face the problem of piracy in Spain,” McCoy said.

“These actions are necessary to maintain the sustainability of the creative community both in Spain and throughout Europe. We want to ensure that consumers enjoy the entertainment offer in a safe and secure environment.”

After gaining experience from blockades and subsequent circumvention in other regions, the studios seem better prepared to tackle fallout in Spain. In addition to blocking primary domains, the ruling handed down by the court this week also obliges ISPs to block any other domain, subdomain or IP address whose purpose is to facilitate access to the blocked platforms.

News of Spain’s ‘pirate’ blocks comes on the heels of fresh developments in Germany, where this week a court ordered ISP Vodafone to block KinoX, one of the country’s most popular streaming portals.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

timeShift(GrafanaBuzz, 1w) Issue 34

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/02/16/timeshiftgrafanabuzz-1w-issue-34/

Welcome to TimeShift. The big news this week is that we’ve sold out of General Admission tickets for GrafanaCon EU! We would like to thank the Grafana community for all your support and enthusiasm around GrafanaCon and are looking forward to delivering a highly valuable, interesting and, most of all, fun conference. A small number of Angel Tickets are still available – don’t miss your last chance to join us in Amsterdam!

New AWS Certified Solutions Architect – Associate Exam: Now in General Availability

Post Syndicated from Janna Pellegrino original https://aws.amazon.com/blogs/architecture/new-aws-certified-solutions-architect-associate-exam-now-in-general-availability/

We’ve updated our AWS Certified Solutions Architect – Associate exam to include new services and architectural best practices, including the pillars of the Well-Architected Framework.

About The Exam

The new AWS Certified Solutions Architect – Associate (Released February 2018) exam validates knowledge of how to architect and deploy secure and robust applications on AWS technologies. We recommend candidates have at least one year of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS before taking the exam. This exam covers:

  • Designing resilient architectures
  • Defining performant architectures
  • Specifying secure applications and architectures
  • Designing cost-optimized architectures
  • Defining operationally excellent architectures

How To Prepare

We also refreshed our exam preparation resources. If you are looking to expand your Architecting knowledge, we recommend the following resources:

AWS Training (aws.amazon.com/training)

AWS Materials

AWS Whitepapers (aws.amazon.com/whitepapers), Kindle and .pdf, and Other Materials:

  • Architecting for the Cloud: AWS Best Practices whitepaper, February 2016
  • AWS Well-Architected webpage (various whitepapers linked)

Note that if you’ve already started preparing, you also have the option to take the previous version of the exam through August 12, 2018.

Next Steps

If you’re interested in taking this new exam, learn more at the AWS Certified Solutions Architect – Associate webpage, or register for the exam today.

 

[$] A report from the Enigma conference

Post Syndicated from jake original https://lwn.net/Articles/747005/rss

The 2018 USENIX Enigma conference was held for the third time in January. Among many interesting talks, three presentations dealing with human security behaviors stood out. This article covers the key messages of these talks, namely the finding that humans are social in their security behaviors: their decision to adopt a good security practice is hardly ever an isolated decision.

Subscribers can read on for the report by guest author Christian Folini.

Early Challenges: Making Critical Hires

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/early-challenges-making-critical-hires/

In 2009, Google disclosed that they had 400 recruiters on staff working to hire nearly 10,000 people. Someday, that might be your challenge, but most companies in their early days are looking to hire a handful of people — the right people — each year. Assuming you are closer to startup stage than Google stage, let’s look at who you need to hire, when to hire them, where to find them (and how to help them find you), and how to get them to join your company.

Who Should Be Your First Hires

In later stage companies, the roles in the company have been well fleshed out, don’t change often, and each role can be segmented to focus on a specific area. A large company may have an entire department focused on just cubicle layout; at a smaller company you may not have a single person whose actual job encompasses all of facilities. At Backblaze, our CTO has a passion and knack for facilities and mostly led that charge. Also, the needs of a smaller company are quick to change. One of our first hires was a QA person, Sean, who ended up being 100% focused on data center infrastructure. In the early stage, things can shift quite a bit and you need people that are broadly capable, flexible, and most of all willing to pitch in where needed.

That said, there are times you may need an expert. At a previous company we hired Jon, a PhD in Bayesian statistics, because we needed algorithmic analysis for spam fighting. However, even that person was not only able and willing to do the math but also to code, and not only to focus on Bayesian statistics but to explore a plethora of spam-fighting options.

When To Hire

If you’ve raised a lot of cash and are willing to burn it with mistakes, you can guess at all the roles you might need and start hiring for them. No judgement: that’s a reasonable strategy if you’re cash-rich and time-poor.

If your cash is limited, try to see what you and your team are already doing and then hire people to take those jobs. It may sound counterintuitive, but if you’re already doing it, presumably it needs to be done, you have a good sense of the type of skills required to do it, and you can bring someone on-board and get them up to speed quickly. That then frees you up to focus on tasks that can’t be done by someone else. At Backblaze, I ran marketing internally for years before hiring a VP of Marketing, making it easier for me to know what we needed. Once I was hiring, my primary goal was to find someone I could trust to take that role completely off of me so I could focus solely on my CEO duties.

Where To Find the Right People

Finding great people is always difficult, particularly when the skillsets you’re looking for are highly in-demand by larger companies with lots of cash and cachet. You, however, have one massive advantage: you need to hire 5 people, not 5,000.

People You Worked With

The absolute best people to hire are ones you’ve worked with before and already know to be good in a work situation. Consider your last job, the one before, and the one before that. A significant number of the people we recruited at Backblaze came from our previous startup MailFrontier. We knew what they could do and how they would fit into the culture, and they knew us and thus could quickly meld into the environment. If you didn’t have a previous job, consider people you went to school with or perhaps individuals with whom you’ve done projects previously.

People You Know

Hiring friends, family, and others can be risky, but should be considered. Sometimes a friend can be a “great buddy,” but is not able to do the job or isn’t a good fit for the organization. Having to let go of someone who is a friend or family member can be rough. Have the conversation up front with them about that possibility, so you have the ability to stay friends if the position doesn’t work out. Having said that, if you get along with someone as a friend, that’s one critical component of succeeding together at work. At Backblaze we’ve hired a number of people successfully that were friends of someone in the organization.

Friends Of People You Know

Your network is likely larger than you imagine. Your employees, investors, advisors, spouses, friends, and other folks all know people who might be a great fit for you. Make sure they know the roles you’re hiring for and ask them if they know anyone that would fit. Search LinkedIn for the titles you’re looking for and see who comes up; if they’re a 2nd degree connection, ask your connection for an introduction.

People You Know About

Sometimes the person you want isn’t someone anyone knows, but you may have read something they wrote, used a product they’ve built, or seen a video of a presentation they gave. Reach out. You may get a great hire: worst case, you’ll let them know they were appreciated, and make them aware of your organization.

Other Places to Find People

There are a million other places to find people, including job sites, community groups, Facebook/Twitter, GitHub, and more. Consider where the people you’re looking for are likely to congregate online and in person.

A Comment on Diversity

Hiring “People You Know” can often result in “Hiring People Like You”: people with the same workplace experiences, culture, background, and perceptions. Some studies have shown [1, 2, 3, 4] that homogeneous groups deliver faster, while heterogeneous groups are more creative. Also, “Hiring People Like You” often propagates the lack of women and minorities in tech and leadership positions in general. When looking for people you know, take care not to discount those who don’t share your cultural background.

Helping People To Find You

Reaching out proactively to people is the most direct way to find someone, but you want potential hires coming to you as well. To do this, they have to a) be aware of you, b) know you have a role they’re interested in, and c) think they would want to work there. Let’s tackle a) and b) first below.

Your Blog

I started writing our blog before we launched the product and talked about anything I found interesting related to our space. For several years now our team has owned the content on the blog and in 2017 over 1.5 million people read it. Each time we have a position open it’s published to the blog. If someone finds reading about backup and storage interesting, perhaps they’d want to dig in deeper from the inside. Many of the people we’ve recruited have mentioned reading the blog as either how they found us or as a factor in why they wanted to work here.
[BTW, this is Gleb’s 200th post on Backblaze’s blog. The first was in 2008. — Editor]

Your Email List

In addition to the emails our blog subscribers receive, we send regular emails to our customers, partners, and prospects. These are largely focused on content we think is directly useful or interesting for them. However, once every few months we include a small mention that we’re hiring, and the positions we’re looking for. Often a small blurb is all you need to capture people’s imagination, whether they find the jobs interesting themselves or can think of someone who might fit the bill.

Your Social Involvement

Whether it’s Twitter or Facebook, Hacker News or Slashdot, your potential hires are engaging in various communities. Being socially involved helps make people aware of you, reminds them of you when they’re considering a job, and paints a picture of what working with you and your company would be like. Adam was in a Reddit thread where we were discussing our Storage Pods, and that interaction was ultimately part of the reason he left Apple to come to Backblaze.

Convincing People To Join

Once you’ve found someone or they’ve found you, how do you convince them to join? They may be currently employed, have other offers, or have to relocate. Again, while the biggest companies have a number of advantages, you might have more unique advantages than you realize.

Why Should They Join You

Here are a set of items that you may be able to offer which larger organizations might not:

Role: Consider the strengths of the role. Perhaps it will have broader scope? More visibility at the executive level? No micromanagement? Ability to take risks? Option to create their own role?

Compensation: In addition to salary, will their options potentially be worth more since they’re getting in early? Can they trade-off salary for more options? Do they get option refreshes?

Benefits: In addition to healthcare, food, and 401(k) plans, are there unique benefits of your company? One company I knew took the entire team for a one-month working retreat abroad each year.

Location: Most people prefer to work close to home. If you’re located outside of the San Francisco Bay Area, you might be at a disadvantage for not being in the heart of tech. But if you find employees close to you, you’ve got a huge advantage. Sometimes it’s micro; even in the Bay Area the difference of 5 miles can save 20 minutes each way every day. We located the Backblaze headquarters in San Mateo, a middle-ground that made it accessible to those coming from San Jose and San Francisco. We also chose a downtown location near a train, restaurants, and cafes: all to make it easier and more pleasant. Also, are you flexible in letting your employees work remotely? Our systems administrator Elliott is about to embark on a long-term cross-country journey working from an RV.

Environment: Open office, cubicle, cafe, work-from-home? Loud/quiet? Social or focused? 24×7 or work-life balance? Different environments appeal to different people.

Team: Who will they be working with? A company with 100,000 people might have 100 brilliant ones you’d want to work with, but ultimately we work with our core team. Who will your prospective hires be working with?

Market: Some people are passionate about gaming, others biotech, still others food. The market you’re targeting will get different people excited.

Product: Have an amazing product people love? Highlight that. If you’re lucky, your potential hire is already a fan.

Mission: Curing cancer, making people happy, and other company missions inspire people to strive to be part of the journey. Our mission is to make storing data astonishingly easy and low-cost. If you care about data, information, knowledge, and progress, our mission helps drive all of them.

Culture: I left this for last, but believe it’s the most important. What is the culture of your company? Finding people who want to work in the culture of your organization is critical. If they like the culture, they’ll fit and continue it. We’ve worked hard to build a culture that’s collaborative, friendly, supportive, and open; one in which people like coming to work. For example, the five founders started with (and still have) the same compensation and equity. That started a culture of “we’re all in this together.” Build a culture that will attract the people you want, and convey what the culture is.

Writing The Job Description

Most job descriptions focus on all the requirements the candidate must meet. While important to communicate, the job description should first sell the job. Why would the appropriate candidate want the job? Then share some of the requirements you think are critical. Remember that people read not just what you say but how you say it. Try to write in a way that conveys what it is like to actually be at the company. Ahin, our VP of Marketing, said the job description itself was one of the things that attracted him to the company.

Orchestrating Interviews

Much can be said about interviewing well. I’m just going to say this: make sure that everyone who is interviewing knows that their job is not only to evaluate the candidate, but also to give them a sense of the culture and sell them on the company. At Backblaze, we often have one person interview core prospects solely for company/culture fit.

Onboarding

Hiring success shouldn’t be defined by finding and hiring the right person, but instead by the right person being successful and happy within the organization. Ensure someone (usually their manager) provides them guidance on what they should be concentrating on doing during their first day, first week, and thereafter. Giving new employees opportunities and guidance so that they can achieve early wins and feel socially integrated into the company does wonders for bringing people on board smoothly.

In Closing

Our Director of Production Systems, Chris, said to me the other day that he looks for companies where he can work on “interesting problems with nice people.” I’m hoping you’ll find your own version of that and find this post useful in looking for your early and critical hires.

Of course, I’d be remiss if I didn’t say, if you know of anyone looking for a place with “interesting problems with nice people,” Backblaze is hiring. 😉

The post Early Challenges: Making Critical Hires appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Pirate Streaming Search Engine Exploits Crunchyroll Vulnerability

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-streaming-search-engine-exploits-crunchyroll-vulnerability-180213/

With 20 million members around the world, Crunchyroll is one of the largest on-demand streaming platforms for anime and manga content.

Much like Hollywood, the site has competition from pirate streaming sites which offer their content without permission. These usually stream pirated videos which are hosted on external sites.

However, this week Crunchyroll is facing a more direct attack. The people behind the new streaming meta-search engine StreamCR say they’ve found a way to stream the site’s content from its own servers, without paying.

“This works due to a vulnerability in the Crunchyroll system,” StreamCR’s operators tell TorrentFreak.

Simply put, StreamCR uses an active Crunchyroll account to locate the video streams and embeds this on its own website. This allows people to access Crunchyroll videos in the best quality without paying.

“This gives access to the full library in the region of our server, retrieving it as long as we’re not bound by the regular regional restriction. For this, we pick a US server as American Crunchyroll has the most library of content.”

Stream in various qualities

The exploit was developed in-house, the StreamCR team informs us. While it works fine at the moment, the team realizes that this may not last forever, as Crunchyroll might eventually patch the vulnerability.

However, the meta-search engine will have made its point by then.

“We expect them to fix this, Why wouldn’t they? In the meantime, this can demonstrate how vulnerable Crunchyroll is at the moment,” they tell us.

The site’s ultimate plan is to become the go-to search engine for people looking to stream all kinds of pirated videos. In addition to Crunchyroll, StreamCR also indexes various pirate sites, including YesMovies, Gomovies, and 9anime.

“StreamCR’s goal is to let people access streams with ease from a universal site, we’re trying to have a Google-like experience for finding online streams,” they say.

TorrentFreak reached out to Crunchyroll asking for a comment on the issue, but at the time of publication, we have yet to hear back.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Amazon Relational Database Service – Looking Back at 2017

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-relational-database-service-looking-back-at-2017/

The Amazon RDS team launched nearly 80 features in 2017. Some of them were covered in this blog, others on the AWS Database Blog, and the rest in What’s New or Forum posts. To wrap up my week, I thought it would be worthwhile to give you an organized recap. So here we go!

Certification & Security

Features

Engine Versions & Features

Regional Support

Instance Support

Price Reductions

And That’s a Wrap

I’m pretty sure that’s everything. As you can see, 2017 was quite the year! I can’t wait to see what the team delivers in 2018.

Jeff;

 

[$] The rest of the 4.16 merge window

Post Syndicated from corbet original https://lwn.net/Articles/746791/rss

At the close of the 4.16 merge window, 11,746 non-merge changesets had been merged; that is 5,000 since last week’s summary. This merge window is thus a busy one, though not out of line with its predecessors — 4.14 had 11,500 changesets during its merge window, while 4.15 had 12,599. Quite a bit of that work is of the boring internal variety; over 600 of those changesets were device-tree updates, for example. But there was still a fair amount of interesting work merged in the second half of the 4.16 merge window; read on for the highlights.

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have gone away entirely. There is no longer any excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real-time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. Amazon Kinesis Data Firehose then routes it to where I need it stored on Amazon S3. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.
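To make that concrete, here is a rough sketch of what such a preprocessing function could look like in Node.js. This is illustrative only, not Cerberus’ actual code; the delivery stream name and event fields are made up:

const AWS = require('aws-sdk');
const firehose = new AWS.Firehose();

// Triggered by the Kinesis Data Stream; cleanses each record and forwards it to
// a Firehose delivery stream, which batches and compresses the data to Amazon S3.
exports.handler = async (event) => {
  const cleaned = event.Records.map((record) => {
    // Kinesis record payloads arrive base64-encoded
    const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf8'));
    // Example "cleansing": keep only the fields downstream jobs care about
    return {
      eventType: payload.eventType,
      userId: payload.userId,
      occurredAt: payload.timestamp,
    };
  });

  await firehose.putRecordBatch({
    DeliveryStreamName: 'igaming-events',  // hypothetical delivery stream name
    Records: cleaned.map((item) => ({ Data: JSON.stringify(item) + '\n' })),
  }).promise();

  return { processed: cleaned.length };
};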

Setting up a Kinesis stream can be done with a few clicks, and the same for Kinesis Firehose. Firehose can be configured to automatically consume data from a Kinesis Data Stream, and then write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received. I can just set a trigger from the AWS Lambda Management Console, as shown following.

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received, but compressed, encrypted, and optimized for reading.
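On the producer side, pushing such a JSON-formatted event into Kinesis is a single SDK call. The sketch below is hypothetical – the stream name and payload fields are invented for illustration:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();

// Send one partner event to the stream as JSON (a real formatter would
// normalize the partner-specific fields first).
async function sendEvent(event) {
  await kinesis.putRecord({
    StreamName: 'igaming-events',                 // hypothetical stream name
    Data: JSON.stringify(event),
    PartitionKey: String(event.userId || 'anonymous'),
  }).promise();
}

sendEvent({ userId: 42, eventType: 'bet_placed', timestamp: Date.now() })
  .catch(console.error);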

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service or a matter of a simple API call, and crucially doesn’t require me to change a single line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).
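As a rough illustration of those “simple API calls” (the stream and function names below are invented, not the author’s actual resources), scaling the ingestion layer with the AWS SDK for JavaScript looks roughly like this:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();
const lambda = new AWS.Lambda();

async function scaleOut() {
  // Double the shard count of the ingestion stream
  await kinesis.updateShardCount({
    StreamName: 'igaming-events',        // hypothetical stream name
    TargetShardCount: 4,
    ScalingType: 'UNIFORM_SCALING',
  }).promise();

  // Give the preprocessing Lambda more memory (CPU scales along with it)
  await lambda.updateFunctionConfiguration({
    FunctionName: 'cleanse-events',      // hypothetical function name
    MemorySize: 1024,
  }).promise();
}

scaleOut().catch(console.error);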

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products. He is now the Head of Big Data and BI at Cerberus Technologies.

 

 

 

BootStomp – Find Android Bootloader Vulnerabilities

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/02/bootstomp-find-android-bootloader-vulnerabilities/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

BootStomp is a Python-based tool with Docker support that helps you find two different classes of Android bootloader vulnerabilities and bugs. It looks for memory corruption and state storage vulnerabilities.

Note that BootStomp works with boot-loaders compiled for ARM architectures (both 32- and 64-bit) and that results might vary slightly depending on the versions of angr and Z3. This is because of the time angr takes to analyze basic blocks and because of Z3’s expression concretization results.

Kim Dotcom Begins New Fight to Avoid Extradition to United States

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-begins-new-fight-to-avoid-extradition-to-united-states-180212/

More than six years ago in January 2012, file-hosting site Megaupload was shut down by the United States government and founder Kim Dotcom and his associates were arrested in New Zealand.

What followed was an epic legal battle to extradite Dotcom, Mathias Ortmann, Finn Batato, and Bram van der Kolk to the United States to face several counts including copyright infringement, racketeering, and money laundering. Dotcom has battled the US government every inch of the way.

The most significant matters include the validity of the search warrants used to raid Dotcom’s Coatesville home on January 20, 2012. Despite a prolonged trip through the legal system, in 2014 the Supreme Court dismissed Dotcom’s appeals claiming that the search warrants weren’t valid.

In 2015, the District Court ruled that Dotcom and his associates are eligible for extradition. A subsequent appeal to the High Court failed when, in February 2017 – and despite a finding that communicating copyright-protected works to the public is not a criminal offense in New Zealand – a judge also ruled in favor of extradition.

Of course, Dotcom and his associates immediately filed appeals and today in the Court of Appeal in Wellington, their hearing got underway.

Lawyer Grant Illingworth, representing Van der Kolk and Ortmann, told the Court that the case had “gone off the rails” during the initial 10-week extradition hearing in 2015, arguing that the case had merited “meaningful” consideration by a judge, something which failed to happen.

“It all went wrong. It went absolutely, totally wrong,” Mr. Illingworth said. “We were not heard.”

As expected, Illingworth underlined the belief that under New Zealand law, a person may only be extradited for an offense that could be tried in a criminal court locally. His clients’ cases do not meet that standard, the lawyer argued.

Turning back the clocks more than six years, Illingworth again raised the thorny issue of the warrants used to authorize the raids on the Megaupload defendants.

It had previously been established that New Zealand’s GCSB intelligence service had illegally spied on Dotcom and his associates in the lead up to their arrests. However, that fact was not disclosed to the District Court judge who authorized the raids.

“We say that there was misleading conduct at this stage because there was no reference to the fact that information had been gathered illegally by the GCSB,” he said.

But according to Justice Forrest Miller, even if this defense argument holds up, the High Court had already found there was a prima facie case to answer “with bells on”.

“The difficulty that you face here ultimately is whether the judicial process that has been followed in both of the courts below was meaningful, to use the Canadian standard,” Justice Miller said.

“You’re going to have to persuade us that what Justice Gilbert [in the High Court] ended up with, even assuming your interpretation of the legislation is correct, was wrong.”

Although the US seeks to extradite Dotcom and his associates on 13 charges, including racketeering, copyright infringement, money laundering and wire fraud, the Court of Appeal previously confirmed that extradition could be granted based on just some of the charges.

The stakes couldn’t be much higher. The FBI says that the “Megaupload Conspiracy” earned the quartet $175m and if extradited to the US, they could face decades in jail.

While Dotcom was not in court today, he has been active on Twitter.

“The court process went ‘off the rails’ when the only copyright expert Judge in NZ was >removed< from my case and replaced by a non-tech Judge who asked if Mega was ‘cow storage’. He then simply copy/pasted 85% of the US submissions into his judgment," Dotcom wrote.

Dotcom also appeared to question the suitability of judges at both the High Court and Court of Appeal for the task in hand.

“Justice Miller and Justice Gilbert (he wrote that High Court judgment) were business partners at the law firm Chapman Tripp which represents the Hollywood Studios in my case. Both Judges are now at the Court of Appeal. Gilbert was promoted shortly after ruling against me,” Dotcom added.

Dotcom is currently suing the New Zealand government for billions of dollars in damages over the warrant which triggered his arrest and the demise of Megaupload.

The hearing is expected to last up to two-and-a-half weeks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Integration With Zapier

Post Syndicated from Bozho original https://techblog.bozho.net/integration-with-zapier/

Integration is boring. And also inevitable. But I won’t be writing about enterprise integration patterns. Instead, I’ll explain how to create an app for integration with Zapier.

What is Zapier? It is a service that allows you to connect two (or more) otherwise unconnected services via their APIs (or protocols). You can do stuff like “Create a Trello task from an Evernote note”, “publish new RSS items to Facebook”, “append new emails to a spreadsheet”, “post approaching calendar meeting to Slack”, “Save big email attachments to Dropbox”, “tweet all instagrams above a certain likes threshold”, and so on. In fact, it looks to cover mostly the same use cases as another famous service that I really like – IFTTT (if this then that), with my favourite use case being “Get a notification when the international space station passes over your house”. And all of those interactions can be configured via a UI.

Now that’s good for end users, but what does it have to do with software development and integration? Zapier (unlike IFTTT, unfortunately) allows custom 3rd party services to be included. So if you have a service of your own, you can create an “app” and allow users to integrate your service with all the other 3rd party services. IFTTT offers a way to invoke web endpoints (including RESTful services), but it doesn’t allow setting headers, so that makes it quite limited for actual APIs.

In this post I’ll briefly explain how to write a custom Zapier app and then will discuss where services like Zapier stand from an architecture perspective.

The thing that I needed it for – to be able to integrate LogSentinel with any of the third parties available through Zapier, i.e. to store audit logs for events that happen in all those 3rd party systems. So how do I do that? There’s a tutorial that makes it look simple. And it is, with a few catches.

First, there are two tutorials – one in GitHub and one on Zapier’s website. And they differ slightly, which becomes tricky in some cases.

I initially followed the GitHub tutorial and had my build fail. It claimed the zapier platform dependency is missing. After I compared it with the example apps, I found out there’s a caret in front of the zapier platform dependency. Removing it just yielded another error – that my node version should be exactly 6.10.2. Why?

The Zapier CLI requires you have exactly version 6.10.2 installed. You’ll see errors and will be unable to proceed otherwise.

It appears that they are using AWS Lambda which is stuck on Node 6.10.2 (actually – it’s 6.10.3 when you check). The current major release is 8, so minus points for choosing … javascript for a command-line tool and for building sandboxed apps. Maybe other decisions had their downsides as well, I won’t be speculating. Maybe it’s just my dislike for dynamic languages.

So, after you make sure you have the correct old version of Node, you call zapier init and make sure there are no carets, run npm install and then zapier test. So far so good, you have a dummy app. Now how do you make a RESTful call to your service?

Zapier splits the programmable entities in two – “triggers” and “creates”. A trigger is the event that triggers the whole app, and a “create” is what happens as a result. In my case, my app doesn’t publish any triggers, it only accepts input, so I won’t be mentioning triggers (though they seem easy). You configure all of the elements in index.js (e.g. this one):

const log = require('./creates/log');
....
creates: {
    [log.key]: log,
}

The log.js file itself is the interesting bit – there you specify all the parameters that should be passed to your API call, as well as making the API call itself:

const log = (z, bundle) => {
  const responsePromise = z.request({
    method: 'POST',
    url: `https://api.logsentinel.com/api/log/${bundle.inputData.actorId}/${bundle.inputData.action}`,
    body: bundle.inputData.details,
	headers: {
		'Accept': 'application/json'
	}
  });
  return responsePromise
    .then(response => JSON.parse(response.content));
};

module.exports = {
  key: 'log-entry',
  noun: 'Log entry',

  display: {
    label: 'Log',
    description: 'Log an audit trail entry'
  },

  operation: {
    inputFields: [
      {key: 'actorId', label:'ActorID', required: true},
      {key: 'action', label:'Action', required: true},
      {key: 'details', label:'Details', required: false}
    ],
    perform: log
  }
};

You can pass the input parameters to your API call, and it’s as simple as that. The user can then specify which parameters from the source (“trigger”) should be mapped to each of your parameters. In an example zap, I used an email trigger and passed the sender as actorId, the subject as “action” and the body of the email as details.

There’s one more thing – authentication. Authentication can be done in many ways. Some services offer OAuth, others – HTTP Basic or other custom forms of authentication. There is a section in the documentation about all the options. In my case it was (almost) an HTTP Basic auth. My initial thought was to just supply the credentials as parameters (which you just hardcode rather than map to trigger parameters). That may work, but it’s not the canonical way. You should configure “authentication”, as it triggers a friendly UI for the user.
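For reference, a minimal authentication.js for such a custom scheme might look something like the sketch below. The field keys mirror the ones used by the beforeRequest hook further down; the test endpoint is a placeholder, not LogSentinel’s actual API:

// authentication.js -- minimal sketch of a custom authentication scheme
module.exports = {
  type: 'custom',
  fields: [
    {key: 'organizationId', label: 'Organization ID', required: true},
    {key: 'applicationId', label: 'Application ID', required: true},
    {key: 'apiSecret', label: 'API Secret', required: true, type: 'password'}
  ],
  // Zapier calls this to verify the credentials the user entered;
  // the beforeRequest hook in index.js adds the actual headers.
  test: (z, bundle) =>
    z.request({url: 'https://api.logsentinel.com/api/auth/check'}) // placeholder endpoint
      .then(response => {
        if (response.status !== 200) {
          throw new Error('The credentials you supplied were not valid');
        }
        return response;
      }),
  connectionLabel: '{{bundle.authData.organizationId}}'
};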

You include authentication.js (which has the fields your authentication requires) and then pre-process requests by adding a header (in index.js):

const authentication = require('./authentication');

const includeAuthHeaders = (request, z, bundle) => {
  if (bundle.authData.organizationId) {
	request.headers = request.headers || {};
	request.headers['Application-Id'] = bundle.authData.applicationId
	const basicHash = Buffer(`${bundle.authData.organizationId}:${bundle.authData.apiSecret}`).toString('base64');
	request.headers['Authorization'] = `Basic ${basicHash}`;
  }
  return request;
};

const App = {
  // This is just shorthand to reference the installed dependencies you have. Zapier will
  // need to know these before we can upload
  version: require('./package.json').version,
  platformVersion: require('zapier-platform-core').version,
  authentication: authentication,
  
  // beforeRequest & afterResponse are optional hooks into the provided HTTP client
  beforeRequest: [
    includeAuthHeaders
  ]
...
}

And then you zapier push your app and you can test it. It doesn’t automatically go live, as you have to invite people to try it and use it first, but in many cases that’s sufficient (e.g. when using Zapier for an integration with a particular client).

Can Zapier be used for any integration problem? Unlikely – it’s pretty limited and simple, but that’s also a strength. You can, in half a day, make your service integrate with thousands of others for the most typical use-cases. And note that although it’s meant for integrating public services rather than for enterprise integration (where you make multiple internal systems talk to each other), as an increasing number of systems rely on 3rd party services, it could find a home in an enterprise system, replacing some functions of an ESB.

Effectively, such services (Zapier, IFTTT) are “Simple ESB-as-a-service”. You go to a UI, fill a bunch of fields, and you get systems talking to each other without touching the systems themselves. I’m not a big fan of ESBs, mostly because they become harder to support with time. But minimalist, external ones might be applicable in certain situations. And while such services are primarily aimed at end users, they could be a useful bit in an enterprise architecture that relies on 3rd party services.

Whether it could process the required load, and whether an organization is willing to let its data flow through a 3rd party provider (which may store the intermediate parameters), are questions that should be answered on a case-by-case basis. I wouldn’t recommend it as a general solution, but it’s certainly an option to consider.

The post Integration With Zapier appeared first on Bozho's tech blog.

Comcast Explains How It Deals With Persistent Pirates

Post Syndicated from Ernesto original https://torrentfreak.com/comcast-explains-how-it-deals-with-persistent-pirates-180210/

Dating back to the turn of the last century, copyright holders have alerted Internet providers about alleged copyright infringers on their network.

While many ISPs forwarded these notices to their subscribers, most were not very forthcoming about what would happen after multiple accusations.

This vagueness was in part shaped by law. While it’s clear that the DMCA requires Internet providers to implement a meaningful “repeat infringer” policy, the DMCA doesn’t set any clear boundaries on what constitutes a repeat infringer and when one should be punished.

With the recent Fourth Circuit Court of Appeals ruling against Cox, it is now clear that “infringers” doesn’t mean only people who have been adjudicated as such; valid accusations from copyright holders are enough. However, an ISP still has some flexibility when it comes to the rest of its “repeat infringer” policy.

In this light, it’s interesting to see that Comcast recently published details of its repeat infringer policy online. While the ISP has previously confirmed that persistent pirates could be terminated, it has never publicly spelled out its policy in such detail.

First up, Comcast clarifies that subscribers to its Xfinity service can be flagged based on reports from rightsholders alone, which is in line with the Fourth Circuit ruling.

“Any infringement of third party copyright rights violates the law. We reserve the right to treat any customer account for whom we receive multiple DMCA notifications from content owners as a repeat infringer,” the company notes.

If Comcast receives multiple notices in a calendar month, the associated subscriber moves from one policy step to the next one. This means that the ISP will issue warnings with increased visibility.

These alerts can come in the form of emails, letters to a home address, text messages, phone calls, and also alerts sent to the subscriber’s web browser. The alerts then have to be acknowledged by the user, so it is clear that he or she understands what’s at stake.

From Comcast’s repeat infringer policy

Comcast doesn’t state specifically how many alerts will trigger tougher action, but it stresses that repeat infringers risk having their accounts suspended. As a result, all devices that rely on Internet access will be interrupted or stop working.

“If your XFINITY Internet account is suspended, you will have no Internet access or service during suspension. This means any services and devices that use the Internet will not properly work or will not work at all,” Comcast states.

The suspension is applied as a last warning before the lights go out completely. Subscribers who reach this stage can still reinstate their Internet connectivity by calling Comcast. It’s unclear whether they have to take any additional action, but it could be that these subscribers have to ‘promise’ to behave.

After this last warning, the subscriber risks the most severe penalty, account termination. This is not limited to regular access to the web, but also affects XFINITY TV, XFINITY Voice, and XFINITY Home, including smart thermostats and home security equipment.

“If you reach the point of service termination, we will terminate your XFINITY Internet service and related add-ons. Unreturned equipment charges will still apply. If you also have XFINITY TV and/or XFINITY Voice services, they will also be terminated,” Comcast warns.

Comcast doesn’t specify how long the Internet termination lasts but the company states that it’s typically no less than 180 days. This means that terminated subscribers will need to find an Internet subscription elsewhere if one’s available.

The good news is that other XFINITY services can be restored after termination, without Internet access. Subscribers will have to contact Comcast to request a quote for an Internet-less package.

While this policy may sound harsh to some, Comcast has few other options if it wants to avoid liability. The good news is that the company requires users to acknowledge the warnings, which means that any measures shouldn’t come as a surprise.

There is no mention of any option to contest any copyright holder notices, which may become an issue in the future. After all, when copyright holders have the power to have people’s Internet connections terminated, their accusations have to be spot on.



Comcast’s repeat infringer policy is available here and was, according to the information we have available, quietly published around December last year.


Cloudflare Hit With Piracy Lawsuit After Abuse Form ‘Fails’

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-hit-with-piracy-lawsuit-after-abuse-form-fails-180210/

Seattle-based artist Christopher Boffoli is no stranger when it comes to suing tech companies for aiding copyright infringement of his work.

Boffoli has filed lawsuits against Imgur, Twitter, Pinterest, Google, and others, which were dismissed and/or settled out of court under undisclosed terms.

This month he filed a new case against another intermediary, Cloudflare, which has had its fair share of piracy allegations in recent years.

In common with other companies, Cloudflare is accused of contributing to copyright infringements of Boffoli’s “Big Appetites” miniatures series. In this case, several Cloudflare customers allegedly posted these photos on their sites which were then reproduced on the servers of the CDN provider.

The lawsuit mentions that the infringing copies were posted on unique-landscape.com and baklol.com. This was also pointed out to Cloudflare by Boffoli, who sent the company DMCA takedown notices in October and November of last year.

While the photographer received an automated response, the photos in question remained online. Through the lawsuit, Boffoli hopes this will change.

“CloudFlare induced, caused, or materially contributed to the Infringing Websites’ publication,” the complaint reads. “CloudFlare had actual knowledge of the Infringing Content. Boffoli provided notice to CloudFlare in compliance with the DMCA, and CloudFlare failed to disable access to or remove the Infringing Websites.”

The photographer is asking the court to order an injunction preventing Cloudflare from making his work available. In addition, the complaint asks for actual and statutory damages for willful copyright infringement. With at least four photos in the lawsuit, the potential damages are more than half a million dollars.

While it’s not mentioned in the complaint, the email communication between Boffoli and Cloudflare goes further than just an automated response. Court records show that the photographer initially didn’t ask Cloudflare to remove the infringing photos. Instead, he asked the CDN provider to forward them to the ISP or site owner.

“I would be grateful if you would forward this DMCA takedown request to the website owner and ISP so these infringing links can immediately be removed,” it read.

Part of the email communication

From then on things escalated a bit. The emails reveal that Boffoli had trouble reporting the infringing photos through the required form.

When the photographer pointed this out in a direct email, Cloudflare urged him to try the form again as that was the only way to send the DMCA request to the designated copyright agent.

“The DMCA doesn’t require us to process reports not sent to our registered agent as per our registration with the US Copyright Office. Our registered copyright agent is the form located at cloudflare.com/abuse/form and you may proceed via that avenue,” Cloudflare wrote.

If the case moves forward, Cloudflare may use this to argue that it never received a proper DMCA takedown notice. However, Boffoli wasn’t planning on trying again and instead threatened a lawsuit, unless Cloudflare took immediate action.

“As I have said, your form did not work for me despite repeated attempts to use it. And it is insulting for you to suggest that it’s working fine when it is not. So again, this is absolutely my last attempt to get you to respond to this infringement for which you are impeding the removal,” Boffoli wrote.

“If you take no action now I will forward this to my legal team this week. It is more than enough of a burden to have to waste countless hours policing my own copyrights without organizations like Cloudflare running interference for copyright infringers. I am not averse to asking a federal judge to compel you to deal with these copyright infringements. And I will seek statutory damages for contributory infringement at that time.”

As it turns out, that was not an idle threat.

—-

A copy of the complaint is available here (pdf) and the email exhibits can be found here (pdf).


Voksi Releases Detailed Denuvo-Cracking Video Tutorial

Post Syndicated from Andy original https://torrentfreak.com/voksi-releases-detailed-denuvo-cracking-video-tutorial-180210/

Earlier this week, version 4.9 of the Denuvo anti-tamper system, which had protected Assassin’s Creed Origins for the past several months, was defeated by Italian cracking group CPY.

While Denuvo would probably paint four months of protection as a success, the company would certainly have preferred for things to have gone on a bit longer, not least following publisher Ubisoft’s decision to use VMProtect technology on top.

But while CPY do their thing in Italy there’s another rival whittling away at whatever the giants at Denuvo (and new owner Irdeto) can come up with. The cracker – known only as Voksi – hails from Bulgaria and this week he took the unusual step of releasing a 90-minute video (embedded below) in which he details how to defeat Denuvo’s V4 anti-tamper technology.

The video is not for the faint-hearted so those with an aversion to issues of a highly technical nature might feel the urge to look away. However, it may surprise readers to learn that not so long ago, Voksi knew absolutely nothing about coding.

“You will find this very funny and unbelievable,” Voksi says, recalling the events of 2012.

“There was one game called Sanctum and on one free [play] weekend [on Steam], I and my best friend played through it and saw how great the cooperative action was. When the free weekend was over, we wanted to keep playing, but we didn’t have any money to buy the game.

“So, I started to look for alternative ways, LAN emulators, anything! Then I decided I need to crack it. That’s how I got into reverse engineering. I started watching some shitty YouTube videos with bad quality and doing some tutorials. Then I found about Steam exploits and that’s how I got into making Steamworks fixes, allowing cracked multiplayer between players.”

Voksi says his entire cracking career began with this one indie game and his desire to play it with his best friend. Prior to that, he had absolutely no experience at all. He says he’s taken no university courses or any course at all for that matter. Everything he knows has come from material he’s found online. But the intrigue doesn’t stop there.

“I don’t even know how to code properly in high-level language like C#, C++, etc. But I understand assembly [language] perfectly fine,” he explains.

For those who code, that’s generally a little bit back to front, with low-level languages usually posing the most difficulties. But Voksi says that with assembly, everything “just clicked.”

Of course, it’s been six years since the 21-year-old was first motivated to crack a game due to lack of funds. In the more than half decade since, have his motivations changed at all? Is it the thrill of solving the puzzle or are there other factors at play?

“I just developed an urge to provide paid stuff for free for people who can’t afford it and specifically, co-op and multiplayer cracks. Of course, I’m not saying don’t support the developers if you have the money and like the game. You should do that,” he says.

“The challenge of cracking also motivates me, especially with an abomination like Denuvo. It is pure cancer for the gaming industry, it doesn’t help and it only causes issues for the paying customers.”

Those who follow Voksi online will know that as well as being known in his own right, he’s part of the REVOLT group, a collective that has Voksi’s core interests and goals as their own.

“REVOLT started as a group with one and only goal – to provide multiplayer support for cracked games. No other group was doing it until that day. It was founded by several members, from which I’m currently the only one active, still releasing cracks.

“Our great achievements are in first place, of course, cracking Denuvo V4, making us one of the four groups/people who were able to break the protection. In second place are our online fixes for several AAA games, allowing you to play on legit servers with legit players. In third place, our ordinary Steamworks fixes allowing you to play multiplayer between cracked users.”

In communities like /r/crackwatch on Reddit and those less accessible, Voksi and others doing similar work are often held up as Internet heroes, cracking games in order to give the masses access to something that might’ve been otherwise inaccessible. But how does this fame sit with him?

“Well, I don’t see myself as a hero, just another ordinary person doing what he loves. I love seeing people happy because of my work, that’s also a big motivation, but nothing more than that,” he says.

Finally, what’s up next for Voksi and what are his hopes for the rest of the year?

“In an ideal world, Denuvo would die. As for me, I don’t know, time will tell,” he concludes.


Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer – AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.

Lambda function

ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.

Parameter Store parameter

SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.

KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
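
As an illustration of the expiry approach, here is a minimal sketch (my own addition, not part of the sample above) that reuses the load_config, MyApp, app, and full_config_path names from the walkthrough and reloads the configuration once a hypothetical CONFIG_TTL_SECONDS window has passed:

import time

# Hypothetical TTL-based expiry on top of the walkthrough code: reload the
# configuration from Parameter Store when the cached copy is older than the TTL.
CONFIG_TTL_SECONDS = 300  # assumed five-minute freshness window
config_loaded_at = None

def get_app():
    """Return the cached MyApp, reloading its config when the TTL has expired."""
    global app, config_loaded_at
    now = time.time()
    if app is None or config_loaded_at is None or now - config_loaded_at > CONFIG_TTL_SECONDS:
        app = MyApp(load_config(full_config_path))
        config_loaded_at = now
    return app

The handler would then call get_app() instead of checking app is None itself, keeping the fast path for warm invocations while bounding how stale the configuration can get.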

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
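
If you prefer to create the encrypted parameter from code rather than through the console, a boto3 sketch along the same lines (using the parameter name and key alias from this walkthrough, and run as the IAM user that administers the key) could look like this:

import boto3

ssm = boto3.client('ssm')

# Sketch: create the SecureString parameter programmatically instead of via the console.
ssm.put_parameter(
    Name='/dev/parameterStoreBlog/appSecrets',
    Description='Sample dev secrets for my app',
    Value='{"secretKey": "secretValue"}',
    Type='SecureString',
    KeyId='alias/ParameterStoreBlogKeyDev'
)

Either way, only the Lambda function role can decrypt the stored value afterwards, because the key policy grants kms:Decrypt to that role alone.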

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.

In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand, compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Google Chrome Marking ALL Non-HTTPS Sites Insecure July 2018

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/02/google-chrome-marking-non-https-sites-insecure-july-2018/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


Google is ramping up its campaign against HTTP-only sites and is going to mark ALL non-HTTPS sites insecure in July 2018 with the release of Chrome 68. It’s a pretty strong move, but Google, and the Internet in general, have been moving in this direction for a while.

It started with suggestions, then forced SSL on all sites behind logins, then mixed-content warnings, then showing HTTP sites as not secure, and now they’re going to be outright marked as insecure.

Read the rest of Google Chrome Marking ALL Non-HTTPS Sites Insecure July 2018 now! Only available at Darknet.