Embedding a Tweet Can Be Copyright Infringement, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/embedding-a-tweet-can-be-copyright-infringement-court-rules-180216/

Nowadays it’s fairly common for blogs and news sites to embed content posted by third parties, ranging from YouTube videos to tweets.

Although these publications don’t host the content themselves, they can be held liable for copyright infringement, a New York federal court has ruled.

The case in question was filed by Justin Goldman, whose photo of Tom Brady went viral after he posted it on Snapchat. After being reposted on Reddit, it also made its way onto Twitter, from where various news organizations picked it up.

Several of these news sites reported on the photo by embedding tweets from others. However, since Goldman never gave permission to display his photo, he went on to sue the likes of Breitbart, Time, Vox and Yahoo for copyright infringement.

In their defense, the news organizations argued that they did nothing wrong as no content was hosted on their servers. They referred to the so-called “server test” that was applied in several related cases in the past, which determined that liability rests on the party that hosts the infringing content.

In an order that was just issued, US District Court Judge Katherine Forrest disagrees. She rejects the “server test” argument and rules that the news organizations are liable.

“[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result,” Judge Forrest writes.

Judge Forrest argues that the server test was established in the ‘Perfect 10 v. Amazon’ case, which dealt with the ‘distribution’ of content. This case is about ‘displaying’ an infringing work instead, an area where the jurisprudence is not as clear.

“The Court agrees with plaintiff. The plain language of the Copyright Act, the legislative history undergirding its enactment, and subsequent Supreme Court jurisprudence provide no basis for a rule that allows the physical location or possession of an image to determine who may or may not have “displayed” a work within the meaning of the Copyright Act.”

As a result, summary judgment was granted in favor of Goldman.

Rightsholders, including Getty Images which supported Goldman, are happy with the result. However, not everyone is pleased. The Electronic Frontier Foundation (EFF) says that if the current verdict stands it will put millions of regular Internet users at risk.

“Rejecting years of settled precedent, a federal court in New York has ruled that you could infringe copyright simply by embedding a tweet in a web page,” EFF comments.

“Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.”

Given what’s at stake, it’s likely that the news organizations will appeal this week’s order.

Interestingly, earlier this week a California district court dismissed Playboy’s copyright infringement complaint against Boing Boing, which embedded a YouTube video that contained infringing content.

A copy of Judge Forrest’s opinion can be found here (pdf).

‘Pirate’ Kodi Addon Devs & Distributors Told to Cease-and-Desist

Post Syndicated from Andy original https://torrentfreak.com/pirate-kodi-addon-devs-distributors-told-to-cease-and-desist-180214/

Last November, following a year of upheaval for third-party addon creators and distributors, yet more turmoil hit the community in the form of threats from the world’s most powerful anti-piracy coalition – the Alliance for Creativity and Entertainment (ACE).

Comprised of 30 companies including the studios of the MPAA, Amazon, Netflix, CBS, HBO, BBC, Sky, Bell Canada, Hulu, Lionsgate, Foxtel, Village Roadshow, and many more, ACE warned several developers to shut down – or else.

The letter: shut down – or else

Now it appears that ACE is on the warpath again, this time targeting a broader range of individuals involved in the Kodi addon scene, from developers and distributors to those involved in the production of how-to videos on YouTube.

The first report of action came from TVAddons, who noted that the lead developer at the Noobs and Nerds repository had been targeted with a cease-and-desist notice, adding that people from the site had been “visited at their homes.”

As seen in the image below, the Noobs and Nerds website is currently down. The site’s Twitter account has also been disabled.

Noobs and Nerds – gone

While TVAddons couldn’t precisely confirm the source of the threat, information gathered from individuals involved in the addon scene points to the involvement of ACE.

In particular, a man known online as Teverz, who develops his own builds, runs a repo, and creates Kodi-themed YouTube videos, confirmed that ACE had been in touch.

An apparently unconcerned Teverz….

“I am not a dev so they really don’t scare me lmao,” he added.

Teverz claims to be from Canada and it appears that others in the country are also facing cease-and-desist notices. An individual known as Doggmatic, who also identifies as Canadian and has Kodi builds under his belt, says he too was targeted.

Another target in Canada

Doggmatic, who appears to be part of the Illuminati repo, says he had someone call the people who sent the cease-and-desist but, like Teverz, he doesn’t seem overly concerned, at least for now.

“I have a legal representative calling them. The letters they sent aren’t legal documents. No lawyer signed them and no law firm mentioned,” Doggmatic said.

But the threats don’t stop there. Blamo, the developer of the Neptune Rising addon accessible from the Blamo repo, also claims to have been threatened.

SpinzTV, who offers unofficial Kodi builds and an associated repository, is also under the spotlight. Unlike his Canadian counterparts, he has already thrown in the towel, according to a short announcement on Twitter.

For SpinzTV it’s all over…

TorrentFreak contacted the Alliance for Creativity and Entertainment, asking them if they could confirm the actions and provide any additional details. At the time of publication they had no information for us but we’ll update if and when that comes in.

EFF Urges US Copyright Office To Reject Proactive ‘Piracy’ Filters

Post Syndicated from Andy original https://torrentfreak.com/eff-urges-us-copyright-office-to-reject-proactive-piracy-filters-180213/

Faced with millions of individuals consuming unlicensed audiovisual content from a variety of sources, entertainment industry groups have been seeking solutions closer to the roots of the problem.

As widespread site-blocking attempts to tackle ‘pirate’ sites in the background, greater attention has turned to legal platforms that host both licensed and unlicensed content.

Under current legislation, these sites and services can do business relatively comfortably due to the so-called safe harbor provisions of the US Digital Millennium Copyright Act (DMCA) and the European Union Copyright Directive (EUCD).

Both sets of legislation ensure that Internet platforms can avoid being held liable for the actions of others provided they themselves address infringement when they are made aware of specific problems. If a video hosting site has a copy of an unlicensed movie uploaded by a user, for example, it must be removed within a reasonable timeframe upon request from the copyright holder.

However, in both the US and EU there is mounting pressure to make it more difficult for online services to achieve ‘safe harbor’ protections.

Entertainment industry groups believe that platforms use the law to turn a blind eye to infringing content uploaded by users, content that is often monetized before being taken down. With this in mind, copyright holders on both sides of the Atlantic are pressing for more proactive regimes, ones that will see Internet platforms install filtering mechanisms to spot and discard infringing content before it can reach the public.

While such a system would be welcomed by rightsholders, Internet companies are fearful of a future in which they could be held more liable for the infringements of others. They’re supported by the EFF, who yesterday presented a petition to the US Copyright Office urging caution over potential changes to the DMCA.

“As Internet users, website owners, and online entrepreneurs, we urge you to preserve and strengthen the Digital Millennium Copyright Act safe harbors for Internet service providers,” the EFF writes.

“The DMCA safe harbors are key to keeping the Internet open to all. They allow anyone to launch a website, app, or other service without fear of crippling liability for copyright infringement by users.”

It is clear that pressure to introduce mandatory filtering is a concern to the EFF. Filters are blunt instruments that cannot fathom the intricacies of fair use and are liable to stifle free speech and stymie innovation, they argue.

“Major media and entertainment companies and their surrogates want Congress to replace today’s DMCA with a new law that would require websites and Internet services to use automated filtering to enforce copyrights.

“Systems like these, no matter how sophisticated, cannot accurately determine the copyright status of a work, nor whether a use is licensed, a fair use, or otherwise non-infringing. Simply put, automated filters censor lawful and important speech,” the EFF warns.

While its introduction was voluntary and doesn’t affect the company’s safe harbor protections, YouTube already has its own content filtering system in place.

ContentID is able to detect the nature of some content uploaded by users and give copyright holders a chance to remove or monetize it. The company says that the majority of copyright disputes are now handled by ContentID but the system is not perfect and mistakes are regularly flagged by users and mentioned in the media.

However, ContentID was also very expensive to implement so expecting smaller companies to deploy something similar on much more limited budgets could be a burden too far, the EFF warns.

“What’s more, even deeply flawed filters are prohibitively expensive for all but the largest Internet services. Requiring all websites to implement filtering would reinforce the market power wielded by today’s large Internet services and allow them to stifle competition. We urge you to preserve effective, usable DMCA safe harbors, and encourage Congress to do the same,” the EFF notes.

The same arguments, for and against, are currently raging in Europe where the EU Commission proposed mandatory upload filtering in 2016. Since then, opposition to the proposals has been fierce, with warnings of potential human rights breaches and conflicts with existing copyright law.

Back in the US, there are additional requirements for a provider to qualify for safe harbor, including having a named designated agent tasked with receiving copyright infringement notifications. This person’s name must be listed on a platform’s website and submitted to the US Copyright Office, which maintains a centralized online directory of designated agents’ contact information.

Under new rules, agents must be re-registered with the Copyright Office every three years, despite that not being a requirement under the DMCA. The EFF is concerned that by simply failing to re-register an agent, an otherwise responsible website could lose its safe harbor protections, even if the agent’s details have remained the same.

“We’re concerned that the new requirement will particularly disadvantage small and nonprofit websites. We ask you to reconsider this rule,” the EFF concludes.

The EFF’s letter to the Copyright Office can be found here.

Early Challenges: Making Critical Hires

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/early-challenges-making-critical-hires/

In 2009, Google disclosed that they had 400 recruiters on staff working to hire nearly 10,000 people. Someday, that might be your challenge, but most companies in their early days are looking to hire a handful of people — the right people — each year. Assuming you are closer to startup stage than Google stage, let’s look at who you need to hire, when to hire them, where to find them (and how to help them find you), and how to get them to join your company.

Who Should Be Your First Hires

In later stage companies, the roles in the company have been well fleshed out, don’t change often, and each role can be segmented to focus on a specific area. A large company may have an entire department focused on just cubicle layout; at a smaller company you may not have a single person whose actual job encompasses all of facilities. At Backblaze, our CTO has a passion and knack for facilities and mostly led that charge. Also, the needs of a smaller company are quick to change. One of our first hires was a QA person, Sean, who ended up being 100% focused on data center infrastructure. In the early stage, things can shift quite a bit and you need people that are broadly capable, flexible, and most of all willing to pitch in where needed.

That said, there are times you may need an expert. At a previous company we hired Jon, a PhD in Bayesian statistics, because we needed algorithmic analysis for spam fighting. However, even that person was not only able and willing to do the math but also to code, and not only focused on Bayesian statistics but explored a plethora of spam-fighting options.

When To Hire

If you’ve raised a lot of cash and are willing to burn it with mistakes, you can guess at all the roles you might need and start hiring for them. No judgement: that’s a reasonable strategy if you’re cash-rich and time-poor.

If your cash is limited, try to see what you and your team are already doing and then hire people to take those jobs. It may sound counterintuitive, but if you’re already doing it, presumably it needs to be done, you have a good sense of the type of skills required to do it, and you can bring someone on board and get them up to speed quickly. That then frees you up to focus on tasks that can’t be done by someone else. At Backblaze, I ran marketing internally for years before hiring a VP of Marketing, making it easier for me to know what we needed. Once I was hiring, my primary goal was to find someone I could trust to take that role completely off of me so I could focus solely on my CEO duties.

Where To Find the Right People

Finding great people is always difficult, particularly when the skillsets you’re looking for are highly in-demand by larger companies with lots of cash and cachet. You, however, have one massive advantage: you need to hire 5 people, not 5,000.

People You Worked With

The absolute best people to hire are ones you’ve worked with before and already know are good in a work situation. Consider your last job, the one before, and the one before that. A significant number of the people we recruited at Backblaze came from our previous startup MailFrontier. We knew what they could do and how they would fit into the culture, and they knew us and thus could quickly meld into the environment. If you didn’t have a previous job, consider people you went to school with or perhaps individuals with whom you’ve done projects previously.

People You Know

Hiring friends, family, and others can be risky, but should be considered. Sometimes a friend can be a “great buddy,” but is not able to do the job or isn’t a good fit for the organization. Having to let go of someone who is a friend or family member can be rough. Have the conversation up front with them about that possibility, so you have the ability to stay friends if the position doesn’t work out. Having said that, if you get along with someone as a friend, that’s one critical component of succeeding together at work. At Backblaze we’ve successfully hired a number of people who were friends of someone in the organization.

Friends Of People You Know

Your network is likely larger than you imagine. Your employees, investors, advisors, spouses, friends, and other folks all know people who might be a great fit for you. Make sure they know the roles you’re hiring for and ask them if they know anyone that would fit. Search LinkedIn for the titles you’re looking for and see who comes up; if they’re a 2nd degree connection, ask your connection for an introduction.

People You Know About

Sometimes the person you want isn’t someone anyone knows, but you may have read something they wrote, used a product they’ve built, or seen a video of a presentation they gave. Reach out. You may get a great hire: worst case, you’ll let them know they were appreciated, and make them aware of your organization.

Other Places to Find People

There are a million other places to find people, including job sites, community groups, Facebook/Twitter, GitHub, and more. Consider where the people you’re looking for are likely to congregate online and in person.

A Comment on Diversity

Hiring “People You Know” can often result in “Hiring People Like You” with the same workplace experiences, culture, background, and perceptions. Some studies have shown [1, 2, 3, 4] that homogeneous groups deliver faster, while heterogeneous groups are more creative. Also, “Hiring People Like You” often propagates the lack of women and minorities in tech and leadership positions in general. When looking for people you know, be careful not to discount those who don’t share your cultural background.

Helping People To Find You

Reaching out proactively to people is the most direct way to find someone, but you want potential hires coming to you as well. To do this, they have to a) be aware of you, b) know you have a role they’re interested in, and c) think they would want to work there. Let’s tackle a) and b) first below.

Your Blog

I started writing our blog before we launched the product and talked about anything I found interesting related to our space. For several years now our team has owned the content on the blog and in 2017 over 1.5 million people read it. Each time we have a position open it’s published to the blog. If someone finds reading about backup and storage interesting, perhaps they’d want to dig in deeper from the inside. Many of the people we’ve recruited have mentioned reading the blog as either how they found us or as a factor in why they wanted to work here.
[BTW, this is Gleb’s 200th post on Backblaze’s blog. The first was in 2008. — Editor]

Your Email List

In addition to the emails our blog subscribers receive, we send regular emails to our customers, partners, and prospects. These are largely focused on content we think is directly useful or interesting for them. However, once every few months we include a small mention that we’re hiring, and the positions we’re looking for. Often a small blurb is all you need to capture the imagination of people who might find the job interesting or who can think of someone that fits the bill.

Your Social Involvement

Whether it’s Twitter or Facebook, Hacker News or Slashdot, your potential hires are engaging in various communities. Being socially involved helps make people aware of you, reminds them of you when they’re considering a job, and paints a picture of what working with you and your company would be like. Adam was in a Reddit thread where we were discussing our Storage Pods, and that interaction was ultimately part of the reason he left Apple to come to Backblaze.

Convincing People To Join

Once you’ve found someone or they’ve found you, how do you convince them to join? They may be currently employed, have other offers, or have to relocate. Again, while the biggest companies have a number of advantages, you might have more unique advantages than you realize.

Why Should They Join You

Here is a set of items that you may be able to offer which larger organizations might not:

Role: Consider the strengths of the role. Perhaps it will have broader scope? More visibility at the executive level? No micromanagement? Ability to take risks? Option to create their own role?

Compensation: In addition to salary, will their options potentially be worth more since they’re getting in early? Can they trade-off salary for more options? Do they get option refreshes?

Benefits: In addition to healthcare, food, and 401(k) plans, are there unique benefits of your company? One company I knew took the entire team for a one-month working retreat abroad each year.

Location: Most people prefer to work close to home. If you’re located outside of the San Francisco Bay Area, you might be at a disadvantage for not being in the heart of tech. But if you find employees close to you, you’ve got a huge advantage. Sometimes it’s micro; even in the Bay Area the difference of 5 miles can save 20 minutes each way every day. We located the Backblaze headquarters in San Mateo, a middle-ground that made it accessible to those coming from San Jose and San Francisco. We also chose a downtown location near a train, restaurants, and cafes: all to make it easier and more pleasant. Also, are you flexible in letting your employees work remotely? Our systems administrator Elliott is about to embark on a long-term cross-country journey working from an RV.

Environment: Open office, cubicle, cafe, work-from-home? Loud/quiet? Social or focused? 24×7 or work-life balance? Different environments appeal to different people.

Team: Who will they be working with? A company with 100,000 people might have 100 brilliant ones you’d want to work with, but ultimately we work with our core team. Who will your prospective hires be working with?

Market: Some people are passionate about gaming, others biotech, still others food. The market you’re targeting will get different people excited.

Product: Have an amazing product people love? Highlight that. If you’re lucky, your potential hire is already a fan.

Mission: Curing cancer, making people happy, and other company missions inspire people to strive to be part of the journey. Our mission is to make storing data astonishingly easy and low-cost. If you care about data, information, knowledge, and progress, our mission helps drive all of them.

Culture: I left this for last, but believe it’s the most important. What is the culture of your company? Finding people who want to work in the culture of your organization is critical. If they like the culture, they’ll fit and continue it. We’ve worked hard to build a culture that’s collaborative, friendly, supportive, and open; one in which people like coming to work. For example, the five founders started with (and still have) the same compensation and equity. That started a culture of “we’re all in this together.” Build a culture that will attract the people you want, and convey what the culture is.

Writing The Job Description

Most job descriptions focus on all the requirements the candidate must meet. While those are important to communicate, the job description should first sell the job. Why would the appropriate candidate want the job? Then share some of the requirements you think are critical. Remember that people read not just what you say but how you say it. Try to write in a way that conveys what it is like to actually be at the company. Ahin, our VP of Marketing, said the job description itself was one of the things that attracted him to the company.

Orchestrating Interviews

Much can be said about interviewing well. I’m just going to say this: make sure that everyone who is interviewing knows that their job is not only to evaluate the candidate, but also to give them a sense of the culture and sell them on the company. At Backblaze, we often have one person interview core prospects solely for company/culture fit.

Onboarding

Hiring success shouldn’t be defined by finding and hiring the right person, but instead by the right person being successful and happy within the organization. Ensure someone (usually their manager) provides them guidance on what they should be concentrating on during their first day, first week, and thereafter. Giving new employees opportunities and guidance so that they can achieve early wins and feel socially integrated into the company does wonders for bringing people on board smoothly.

In Closing

Our Director of Production Systems, Chris, said to me the other day that he looks for companies where he can work on “interesting problems with nice people.” I’m hoping you’ll find your own version of that and find this post useful in looking for your early and critical hires.

Of course, I’d be remiss if I didn’t say, if you know of anyone looking for a place with “interesting problems with nice people,” Backblaze is hiring. 😉

Preining: In memoriam Staszek Wawrykiewicz

Post Syndicated from ris original https://lwn.net/Articles/747123/rss

Norbert Preining reports the sad news that Staszek Wawrykiewicz has died. “Staszek was an active member of the Polish TeX community, and an incredibly valuable TeX Live Team member. His insistence and perseverance have saved TeX Live from many disasters and bugs. Although I have been in contact with Staszek over the TeX Live mailing lists since some years, I met him in person for the first time on my first ever BachoTeX, the EuroBachoTeX 2007. His friendliness, openness to all new things, his inquisitiveness, all took a great place in my heart.” (Thanks to Paul Wise)

The Early Days of Mass Internet Piracy Were Awesome Yet Awful

Post Syndicated from Andy original https://torrentfreak.com/the-early-days-of-mass-internet-piracy-were-awesome-yet-awful-180211/

While Napster certainly put the digital cats among the pigeons in 1999, the organized chaos of mass Internet file-sharing couldn’t be truly appreciated until the advent of decentralized P2P networks a year or so later.

In the blink of an eye, everyone with a “shared folder” client became both a consumer and publisher, sucking in files from strangers and sharing them with like-minded individuals all around the planet. While today’s piracy narrative is all about theft and danger, in the early 2000s the sharing community felt more like distant friends who hadn’t met, quietly trading cards together.

File-sharing satisfied millions, but for those who really engaged, shared-folder sharing was a real adrenaline buzz, as English comedian Seann Walsh noted on Conan this week.

“Click. 20th Century Fox comes up. No pixels. No shaky cam. No silhouettes of heads at the bottom of the screen, people coming in five minutes late. None of that,” Walsh said, recalling his experience of downloading X-Men 2 (X2) from LimeWire.

“We thought: ‘We’ve done it!!’ This was incredible! We weren’t going to have to go to the cinema. We weren’t going to have to wait for the film to come out on video. We weren’t going to have to WALK to Blockbuster!”

But while the nostalgia has an air of magic about it, Walsh’s take on the piracy experience is bittersweet. While obtaining X2 without having to trudge to a video store was a revelation, there were plenty of drawbacks too.

Downloading the pirate copy took a week, which pre-BitTorrent wasn’t a completely bad result but still a considerable commitment. There were also serious problems with quality control.

“20th Century fades, X-Men 2 comes up. We’ve done it! We’re not taking it for granted – we’re actually hugging. Yes! Yes! We’ve done it! This is the future! We look at the screen, Wolverine turns round…” and Walsh launches into a broadside of pseudo-German babble, mimicking the unexpectedly-dubbed superhero.

After a week of downloading, and with a quality picture on launch, that is a punch in the gut, to say the least. No less than a pirate deserves, some will argue, but a fat lip nonetheless, and one many a pirate has suffered over the years. Nevertheless, as Walsh notes, it’s a pain that kids in 2018 simply cannot comprehend.

“Children today are living the childhood I dreamed of. If they want to hear a song – touch – they stream it. They’ve got it now. Bang. Instantly. They don’t know the pain of LimeWire.

“Start downloading a song, go to school, come back. HOPE that it’d finished! That download bar messing with you. Four minutes left…..nine HOURS and 28 minutes left? Thirty seconds left…..52 hours and 38 minutes left? JUST TELL ME THE TRUTH!!!!!” Walsh pleaded.

While this might sound comical now, this was the reality of people downloading from clients such as LimeWire and Kazaa. While X2 in German would’ve been torture for a non-German speaker, the misery of watching an English language copy of 28 Days Later somehow crammed into a 30Mb file is right up there too.

Mislabeled music with microscopic bitrates? That was pretty much standard.

But against the odds, these frankly second-rate experiences still managed to capture the hearts and minds of the digitally minded. People were prepared to put up with nonsense and regular disappointment in order to consume content in a way fit for the 21st century. Yet somehow the combined might of the entertainment industries couldn’t come up with anything substantially better for a number of years.

Of course, broadband availability and penetration played their part but looking back, something could have been done. The Internet’s popularity came as no surprise, and people’s expectations were dramatically lower than they are today. In any event, beating the pirates should have been child’s play. After all, it was just regular people sharing files in a Windows folder.

Any fool could do it – and millions did. Surprisingly, they have proven unstoppable.

Cloudflare Hit With Piracy Lawsuit After Abuse Form ‘Fails’

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-hit-with-piracy-lawsuit-after-abuse-form-fails-180210/

Seattle-based artist Christopher Boffoli is no stranger when it comes to suing tech companies for aiding copyright infringement of his work.

Boffoli has filed lawsuits against Imgur, Twitter, Pinterest, Google, and others, which were dismissed and/or settled out of court under undisclosed terms.

This month he filed a new case against another intermediary, Cloudflare, which has had its fair share of piracy allegations in recent years.

In common with other companies, Cloudflare is accused of contributing to copyright infringements of Boffoli’s “Big Appetites” miniatures series. In this case, several Cloudflare customers allegedly posted these photos on their sites which were then reproduced on the servers of the CDN provider.

The lawsuit mentions that the infringing copies were posted on unique-landscape.com and baklol.com. This was also pointed out to Cloudflare by Boffoli, who sent the company DMCA takedown notices in October and November of last year.

While the photographer received an automated response, the photos in question remained online. Through the lawsuit, Boffoli hopes this will change.

“CloudFlare induced, caused, or materially contributed to the Infringing Websites’ publication,” the complaint reads. “CloudFlare had actual knowledge of the Infringing Content. Boffoli provided notice to CloudFlare in compliance with the DMCA, and CloudFlare failed to disable access to or remove the Infringing Websites.”

The photographer is asking the court to order an injunction preventing Cloudflare from making his work available. In addition, the complaint asks for actual and statutory damages for willful copyright infringement. With at least four photos in the lawsuit, the potential damages are more than half a million dollars.

While it’s not mentioned in the complaint, the email communication between Boffoli and Cloudflare goes further than just an automated response. Court records show that the photographer initially didn’t ask Cloudflare to remove the infringing photos. Instead, he asked the CDN provider to forward them to the ISP or site owner.

“I would be grateful if you would forward this DMCA takedown request to the website owner and ISP so these infringing links can immediately be removed,” it read.

Part of the email communication

From then on things escalated a bit. The emails reveal that Boffoli had trouble reporting the infringing photos through the required form.

When the photographer pointed this out in a direct email, Cloudflare urged him to try the form again as that was the only way to send the DMCA request to the designated copyright agent.

“The DMCA doesn’t require us to process reports not sent to our registered agent as per our registration with the US Copyright Office. Our registered copyright agent is the form located at cloudflare.com/abuse/form and you may proceed via that avenue,” Cloudflare wrote.

If the case moves forward, Cloudflare may use this to argue that it never received a proper DMCA takedown notice. However, Boffoli wasn’t planning on trying again and instead threatened a lawsuit, unless Cloudflare took immediate action.

“As I have said, your form did not work for me despite repeated attempts to use it. And it is insulting for you to suggest that it’s working fine when it is not. So again, this is absolutely my last attempt to get you to respond to this infringement for which you are impeding the removal,” Boffoli wrote.

“If you take no action now I will forward this to my legal team this week. It is more than enough of a burden to have to waste countless hours policing my own copyrights without organizations like Cloudflare running interference for copyright infringers. I am not averse to asking a federal judge to compel you to deal with these copyright infringements. And I will seek statutory damages for contributory infringement at that time.”

As it turns out, that was not an idle threat.

A copy of the complaint is available here (pdf) and the email exhibits can be found here (pdf).

Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer, AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

  ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

  SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

  ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.
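
For example, once load_config returns, values can be read with standard ConfigParser accessors. A minimal sketch, assuming the module above and the sample parameter that the template created:

# Assuming the module above and the sample /dev/parameterStoreBlog/appConfig parameter
config = load_config('/dev/parameterStoreBlog')
print(config.get('appConfig', 'key1'))  # prints 'value1'
print(config['appConfig']['key2'])      # mapping-style access also works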

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
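
If you did want cached configuration to expire, a minimal sketch of the TTL approach follows, meant as a drop-in replacement for lambda_handler in the module above. The five-minute TTL constant and the timestamp bookkeeping are illustrative assumptions, not part of the sample application:

import time

CONFIG_TTL_SECONDS = 300  # illustrative assumption: treat config as stale after 5 minutes
config_loaded_at = 0.0

def lambda_handler(event, context):
    global app, config_loaded_at
    # Reload when the app has never been constructed or the cached config has expired
    if app is None or (time.time() - config_loaded_at) > CONFIG_TTL_SECONDS:
        print("Loading config and creating new MyApp...")
        app = MyApp(load_config(full_config_path))
        config_loaded_at = time.time()
    return "MyApp config is " + str(app.get_config()._sections)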

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.
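
You can also invoke the function from a script instead of the console. A minimal boto3 sketch, run with credentials that are permitted to invoke the function:

import boto3

lambda_client = boto3.client('lambda')
# The function ignores its input, so an empty JSON object works as a payload
response = lambda_client.invoke(
    FunctionName='ParameterStoreBlogFunctionDev',
    Payload=b'{}'
)
print(response['Payload'].read().decode('utf-8'))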

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to decrypt. This limits read access for this parameter to only this Lambda function; a boto3 sketch of the same operation follows the console steps below.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.
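
As referenced above, the same SecureString parameter can also be created programmatically. A minimal boto3 sketch, run with the key admin’s credentials (which the key policy allows to encrypt, but not decrypt):

import boto3

ssm = boto3.client('ssm')
# Create an encrypted parameter under the path the Lambda function is allowed to read
ssm.put_parameter(
    Name='/dev/parameterStoreBlog/appSecrets',
    Value='{"secretKey": "secretValue"}',
    Type='SecureString',
    KeyId='alias/ParameterStoreBlogKeyDev'
)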

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand, compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Chrome and Firefox Block 123movies Over “Harmful Programs”

Post Syndicated from Ernesto original https://torrentfreak.com/chrome-and-firefox-block-123movies-over-harmful-programs-180209/

With millions of visitors per day, 123movies(hub), also known as Gomovies, is one of the largest pirate streaming sites on the web.

Today, however, many visitors were welcomed by a dangerous-looking red banner instead of the usual homepage.

“The site ahead contains harmful programs,” Chrome warns its users. “Attackers on 123movieshub.to might attempt to trick you into installing programs that harm your browsing experience.”

It is not clear what the problem is in this particular case, but these types of notifications are often triggered by malicious or deceptive third-party advertising that has appeared on a site.

Warning

These warning messages are triggered by Google’s Safebrowsing algorithm which flags websites that pose a potential danger to visitors. Chrome, Firefox, and others use this service to prevent users from running into unwanted software.

In addition to the browser block, Google generally informs the site’s owners that their domain will be demoted in search results until the issue is resolved.

Google previously informed us that these kinds of warnings automatically disappear when the flagged sites no longer violate Google’s policy. This can take a day or two, but sometimes longer.

This isn’t the first time that Google has flagged such a large website. Many pirate sites, including The Pirate Bay, have been affected by this issue in the past.

Chrome and Firefox users should be familiar with these intermittent warning notices by now. If users believe that an affected site is harmless they can always take steps (Chrome, FF) to bypass the blocks, but that’s completely at their own risk.

Pirate ‘Kodi’ Boxes & Infringing Streams Cost eBay Sellers Dearly

Post Syndicated from Andy original https://torrentfreak.com/pirate-kodi-boxes-infringing-streams-cost-ebay-sellers-dearly-180209/

Those on the lookout for ready-configured pirate set-top boxes can drift around the web looking at hundreds of options or head off to the places most people know best – eBay and Facebook.

Known for its ease of use and broad range of content, eBay is often the go-to place for sellers looking to offload less than legitimate stock. Along with Facebook, it’s become one of the easiest places online to find so-called Kodi boxes.

While the Kodi software itself is entirely legal, millions of people have their boxes configured for piracy purposes and eBay and Facebook provide a buying platform for those who don’t want to do the work themselves.

Sellers generally operate with impunity but according to news from the Premier League and anti-piracy partners Federation Against Copyright Theft (FACT), that’s not always the case.

FACT reports that a supplier of ISDs (Illicit Streaming Devices) that came pre-loaded for viewing top-tier football without permission has agreed to pay the Premier League thousands of pounds.

Nayanesh Patel from Harrow, Middlesex, is said to have sold Kodi-type boxes on eBay and Facebook but got caught in the act. As a result he’s agreed to cough up £18,000, disable his website, remove all advertising, and cease future sales.

A second individual, who isn’t named, allegedly sold subscriptions to illegal streams of Premier League football via eBay. He too was tracked down and eventually agreed to pay £8,000 and cease all future streams sales.

“This case shows there are serious consequences for sellers of pre-loaded boxes and is a warning for anyone who thinks they might get away with this type of activity,” says Premier League Director of Legal Services, Kevin Plumb.

“The Premier League is currently engaged in a comprehensive copyright protection programme that includes targeting and taking action against sellers of pre-loaded devices, and any ISPs or hosts that facilitate the broadcast of pirated Premier League content.”

The number of individuals selling pirate set-top devices and IPTV-style subscription packages on eBay and social media has grown to epidemic proportions, so perhaps the biggest surprise is that there aren’t more cases like these. Importantly, however, these apparent settlement agreements are a step back from the criminal prosecutions we’ve seen in the past.

Previously, individuals under FACT’s spotlight have tended to be targeted by the police, with all the drawn-out misery that entails. While these cash settlements are fairly hefty, they appear to be in lieu of law enforcement involvement, not-inconsiderable solicitors’ bills, and potential jail sentences. For a few unlucky sellers, this could prove the more attractive option.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

RIAA: Cox Ruling Shows that Grande Can Be Liable for Piracy Too

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-cox-ruling-shows-that-grande-can-be-liable-for-piracy-too-180207/

Regular Internet providers are being put under increasing pressure for not doing enough to curb copyright infringement.

Last year several major record labels, represented by the RIAA, filed a lawsuit in a Texas District Court, accusing ISP Grande Communications of turning a blind eye to its pirating subscribers.

“Despite their knowledge of repeat infringements, Defendants have permitted repeat infringers to use the Grande service to continue to infringe Plaintiffs’ copyrights without consequence,” the RIAA’s complaint read.

Grande disagreed with this assertion and filed a motion to dismiss the case. The ISP argued that it doesn’t encourage any of its customers to download copyrighted works, and that it has no control over the content subscribers access.

The Internet provider didn’t deny that it received millions of takedown notices through the piracy tracking company Rightscorp. However, it argued that these notices are flawed and not worth acting upon.

The case bears many similarities to the legal battle between BMG and Cox Communications, in which the Fourth Circuit Court of Appeals issued an important verdict last week.

The appeals court overturned the $25 million piracy damages verdict against Cox due to an erroneous jury instruction but held that the ISP lost its safe harbor protection because it failed to implement a meaningful repeat infringer policy.

This week, the RIAA used the Fourth Circuit ruling as further evidence that Grande’s motion to dismiss should be denied.

The RIAA points out that both Cox and Grande used similar arguments in their defense, some of which were rejected by the appeals court. The Fourth Circuit held, for example, that an ISP’s substantial non-infringing uses do not immunize it from liability for contributory copyright infringement.

In addition, the appeals court also clarified that if an ISP wilfully blinds itself to copyright infringements, that is sufficient to satisfy the knowledge requirement for contributory copyright infringement.

According to the RIAA’s filing at a Texas District Court this week, Grande has already admitted that it willingly ‘ignored’ takedown notices that were submitted on behalf of third-party copyright holders.

“Grande has already admitted that it received notices from Rightscorp and, to use Grande’s own phrase, did not ‘meaningfully investigate’ them,” the RIAA writes.

“Thus, even if this Court were to apply the Fourth Circuit’s ‘willful blindness’ standard, the level of knowledge that Grande has effectively admitted exceeds the level of knowledge that the Fourth Circuit held was ‘powerful evidence’ sufficient to establish liability for contributory infringement.”

As such, the motion to dismiss the case should be denied, the RIAA argues.

What’s not mentioned in the RIAA’s filing, however, is why Grande chose not to act upon these takedown notices. In its defense, the ISP previously explained that Rightscorp’s notices lacked specificity and were incapable of detecting actual infringements.

Grande argued that if it acted on these notices without additional proof, its subscribers could lose their Internet access even while using it for entirely legal purposes. The ISP may, therefore, counter that it wasn’t willfully blind, as it saw no solid proof of the alleged infringements to begin with.

“To merely treat these allegations as true without investigation would be a disservice to Grande’s subscribers, who would run the risk of having their Internet service permanently terminated despite using Grande’s services for completely legitimate purposes,” Grande previously wrote.

This brings up a tricky issue. The Fourth Circuit made it clear last week that ISPs need a meaningful policy against repeat infringers in response to takedown notices from copyright holders. But what are the requirements for a proper takedown notice? Do any and all notices count?

Grande clearly has no faith in the accuracy of Rightscorp’s technology, but if its case goes in the same direction as Cox’s, that might not make much of a difference.

A copy of the RIAA’s summary of supplemental authority is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

All-In on Unlimited Backup

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/all-in-on-unlimited-backup/

chips on computer with cloud backup

The cloud backup industry has seen its share of turmoil. Bitcasa, Dell DataSafe, Xdrive, and a dozen others have closed up shop. Mozy, Amazon, and Microsoft offered, but later canceled, their unlimited offerings. Recently, CrashPlan for Home customers were notified that their service was being end-of-lifed. And today we’ve heard from Carbonite customers who are frustrated by this morning’s announcement of a price increase.

We believe that the fundamental goal of a cloud backup is having peace-of-mind: knowing your data — all of it — is safe. For over 10 years Backblaze has been providing that peace-of-mind by offering completely unlimited cloud backup to our customers. And we continue to be committed to that. Knowing that your cloud backup vendor is not going to disappear or fundamentally change their service is an essential element in achieving that peace-of-mind.

Committed to Unlimited Backup

When Mozy discontinued their unlimited backup on Jan 31, 2011, a lot of people asked, “Does this mean Backblaze will discontinue theirs as well?” At that time I wrote the blog post Backblaze is committed to unlimited backup. That was seven years ago. Since then we’ve continued to make Backblaze cloud backup better: dramatically speeding up backups and restores, offering the unique and very popular Restore Return Refund program, enabling direct access and sharing of any file in your backup, and more. We also introduced Backblaze Groups to enable businesses and families to manage backups — all at no additional cost.

How That’s Possible

I’d like to answer the question of “How have you been able to do this when others haven’t?”

First, commitment. It’s not impossible to offer unlimited cloud backup, but it’s not easy. The Backblaze team has been committed to unlimited as a core tenet.

Second, we have pursued the technical, business, and cultural steps required to make it happen. We’ve designed our own servers, written our cloud storage software, run our own operations, and been continually focused on every place we could optimize a penny out of the cost of storage. We’ve built a culture at Backblaze that cares deeply about that.

Ensuring Peace-of-Mind

Price increases and plan changes happen in our industry, but Backblaze has consistently been the low price leader, and continues to stand by the foundational element of our service — truly unlimited backup storage. Carbonite just announced a price increase from $60 to $72/year, and while that’s not an astronomical increase, it’s important to keep in mind the service that they are providing at that rate. The basic Carbonite plan provides a service that doesn’t back up videos or external hard drives by default. We think that’s dangerous. No one wants to discover that their videos weren’t backed up after their computer dies, or have to worry about the safety and durability of their data. That is why we have continued to build on our foundation of unlimited, as well as making our service faster and more accessible. All of these serve the goal of ensuring peace-of-mind for our customers.

3 Months Free For You & A Friend

As part of our commitment to unlimited, you and a friend can each receive three months of Backblaze service through March 15, 2018. When you Refer-a-Friend with your personal referral link and they subscribe, both of you will receive three months of service added to your accounts. See promotion details on our Refer-a-Friend page.

Want A Reminder When Your Carbonite Subscription Runs Out?

If you’re considering switching from Carbonite, we’d love to be your new backup provider. Enter your email address and the date you’d like to be reminded, and you’ll get a friendly reminder email from us to start a new backup plan with Backblaze. Or, you could start a free trial today.

We think you’ll be glad you switched, and you’ll have a chance to experience some of that Backblaze peace-of-mind for your data.


The post All-In on Unlimited Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloudflare Terminates Service to Sci-Hub Domain Names

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-terminates-service-to-sci-hub-domain-names-180205/

While Sci-Hub is praised by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe the site from the web.

Following a $15 million defeat against Elsevier last June, the American Chemical Society (ACS) won a default judgment of $4.8 million in copyright damages a few months later.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, hosting companies and search engines.

Soon after the order was signed, several of Sci-Hub’s domain names became unreachable as domain registries complied with the court order. This resulted in a domain name whack-a-mole, but all this time Sci-Hub remained available.

Last weekend another problem appeared for Sci-Hub. This time ACS went after CDN provider Cloudflare, which informed the site that a court order requires the company to disconnect several domain names.

“Cloudflare has received the attached court order, Case 1:17-cv-00726-LMB-JFA,” the company writes. “Cloudflare will terminate your service for the following domains sci-hub.la, sci-hub.tv, and sci-hub.tw by disabling our authoritative DNS in 24 hours.”

According to Sci-Hub’s operator, losing access to Cloudflare is not “critical,” but it may “cause a short pause in website operation.”

Sci-Hub’s Cloudflare tweet
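
Losing authoritative DNS means Cloudflare’s nameservers simply stop answering queries for the domain, which is why the change can be observed from the outside by inspecting a domain’s NS records. Here’s a minimal sketch using the dnspython library (the resolve() call assumes dnspython 2.x; the queried domain is just one of those named in the order):

```python
import dns.resolver  # pip install dnspython (version 2.x assumed)

def nameservers(domain: str) -> list[str]:
    """Return the authoritative nameservers currently published for `domain`."""
    answer = dns.resolver.resolve(domain, "NS")
    return sorted(str(record.target) for record in answer)

# Domains served by Cloudflare typically list *.ns.cloudflare.com here;
# after termination, different nameservers (or none at all) will show up.
for ns in nameservers("sci-hub.tw"):
    print(ns)
```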

Cloudflare’s actions are significant because the company previously protested a similar order. When the RIAA used the permanent injunction in the MP3Skull case to compel Cloudflare to disconnect the site, the CDN provider refused.

The RIAA argued that Cloudflare was operating “in active concert or participation” with the pirates. The CDN provider objected, but the court eventually ordered Cloudflare to take action, although it did not rule on the “active concert or participation” part.

In the Sci-Hub case “active concert or participation” is also a requirement for the injunction to apply. While it specifically mentions ISPs and search engines, ACS Director Glenn Ruskin previously stressed that companies won’t be targeted for simply linking users to Sci-Hub.

“The court’s affirmative ruling does not apply to search engines writ large, but only to those entities who have been in active concert or participation with Sci-Hub, such as websites that host ACS content stolen by Sci-Hub,” Ruskin told us at the time.

Cloudflare does more than link, of course, but the company doesn’t see itself as a web hosting service either. While it still may not agree with the “active concert” classification, there’s no evidence that Cloudflare objected in court this time.

As for Sci-Hub, it will have to look elsewhere for a new CDN provider. For now, however, the site remains widely available.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is to take some badly-delivered but valid feedback and have a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

Appeals Court Throws Out $25 Million Piracy Verdict Against Cox, Doesn’t Reinstate “Safe Harbor”

Post Syndicated from Ernesto original https://torrentfreak.com/appeals-court-throws-out-25-million-piracy-verdict-against-cox-doesnt-reinstate-safe-harbor-180201/

In December 2015, a Virginia federal jury ruled that Internet provider Cox Communications was responsible for the copyright infringements of its subscribers.

The ISP was found guilty of willful contributory copyright infringement and ordered to pay music publisher BMG Rights Management $25 million in damages.

Cox swiftly filed its appeal arguing that the District Court made several errors in the jury instructions. In addition, it asked for a clarification of the term “repeat infringer” in its favor.

Today the Court of Appeals for the Fourth Circuit ruled on the matter in a mixed decision which could have great consequences.

The Court ruled that the District Court indeed made a mistake in its jury instruction. Specifically, it said that the ISP could be found liable for contributory infringement if it “knew or should have known of such infringing activity.” The Court of Appeals agrees that based on the law, the “should have known” standard is too low.

When this is the case the appeals court can call for a new trial, and that is exactly what it did. This means that the $25 million verdict is off the table, and the same is true for the millions in attorney’s fees and costs BMG was previously granted.

It’s not all good news for Cox though. The most crucial matter in the case is whether Cox has safe harbor protection under the DMCA. In order to qualify, the company is required to terminate accounts of repeat infringers, when appropriate.

Cox argued that subscribers can only be seen as repeat infringers if they’ve been previously adjudicated in court, not if they merely received several takedown notices. This was still an open question, as the term repeat infringer is not clearly defined in the DMCA.

Today, however, the appeals court is pretty clear on the matter. According to Judge Motz’s opinion, shared by HWR, the language of the DMCA suggests that the term “infringer” is not limited to adjudicated infringers.

This is supported by legislative history as the House Commerce and Senate Judiciary Committee Reports both explained that “those who repeatedly or flagrantly abuse their access to the Internet through disrespect for the intellectual property rights of others should know that there is a realistic threat of losing that access.”

“The passage does not suggest that they should risk losing Internet access only once they have been sued in court and found liable for multiple instances of infringement,” Judge Motz writes in her opinion.

Losing Internet access would hardly be a “realistic threat” that would stop someone from pirating if he or she has already been punished several times in court, the argument goes.

This leads the Court of Appeals to conclude that the District Court was right: Cox is not entitled to safe harbor protection because it failed to implement a meaningful repeat infringer policy.

“Cox failed to qualify for the DMCA safe harbor because it failed to implement its policy in any consistent or meaningful way — leaving it essentially with no policy,” Judge Motz writes.

This means that, while Cox gets a new trial, it is still at a severe disadvantage. Not only that, the Court of Appeals’ interpretation of the repeat infringer question is also a clear signal to other Internet service providers to disconnect pirates based on repeated copyright holder complaints.

Judge Motz’s full opinion is available here (pdf).

T-Mobile Blocks Pirate Sites Then Reports Itself For Possible Net Neutrality Violation

Post Syndicated from Andy original https://torrentfreak.com/t-mobile-blocks-pirate-sites-then-reports-itself-for-possible-net-neutrality-violation-180130/

For the past eight years, Austria has been struggling with the thorny issue of pirate site blocking. Local ISPs have put up quite a fight but site blocking is now a reality, albeit with a certain amount of confusion.

After a dizzying route through the legal system, last November the Supreme Court finally ruled that The Pirate Bay and other “structurally-infringing” sites including 1337x.to and isohunt.to can be blocked, if rightsholders have exhausted all other options.

The Court based its decision on the now-familiar BREIN v Filmspeler and BREIN v Ziggo and XS4All cases that received European Court of Justice rulings last year. However, there is now an additional complication, this time on the net neutrality front.

After being passed in October 2015 and coming into force in April 2016, the Telecom Single Market (TSM) Regulation established the principle of non-discriminatory traffic management in the EU. The regulation still allows for the blocking of copyright-infringing websites but only where supported by a clear administrative or judicial decision. This is where T-Mobile sees a problem.

In addition to blocking sites named specifically by the court, copyright holders also expect the ISP to block related platforms, such as clones and mirrors, that aren’t specified in the same manner.

So, last week, after blocking several obscure Pirate Bay clones such as proxydl.cf, the ISP reported itself to the Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR) for a potential net neutrality breach.

“It sounds paradoxical, but this should finally bring legal certainty in a long-standing dispute over pirate sites. T-Mobile Austria has filed with regulatory authority RTR a kind of self-report, after blocking several sites on the basis of a warning by rights holders,” T-Mobile said in a statement.

“The background to the communication to the RTR, through which T-Mobile intends to obtain an assessment by the regulator, is a very unsatisfactory legal situation in which operators have no opportunity to behave in conformity with the law.

“The service provider is forced upon notification by the copyright owner to even judge about possible copyright infringements. At the same time, the provider is violating the principle of net neutrality by setting up a ban.”

T-Mobile says the problem is complicated by rightsholders who, after obtaining a blocking order forcing named ISPs to block named pirate sites (as required under EU law), send similar demands to other ISPs that were not party to court proceedings. The rightsholders also send blocking demands when blocked sites disappear and reappear under a new name, despite those new names not being part of the original order.

According to industry body Internet Service Providers Austria (ISPA), there is a real need for clarification. It’s hoped that T-Mobile reporting itself for a potential net neutrality breach will have the desired effect.

“For more than two years, we have been trying to find a solution with the involved interest groups and the responsible ministry, which on the one hand protects the rights of the artists and on the other hand does not force the providers into the role of a judge,” complains Maximilian Schubert, Secretary General of the ISPA.

“The willingness of the rights holders to compromise had remained within manageable limits. Now they are massively increasing the pressure and demanding costly measures, which the service providers see as punishment for them providing legal security for their customers for many years.”

ISPA hopes that the telecoms regulator will now help to clear up this uncertainty.

“We now hope that the regulator will give a clear answer here. Because from our point of view, the assessment of legality cannot and should not be outsourced to companies,” Schubert concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Yaghmour: Ten Days in Shenzhen

Post Syndicated from jake original https://lwn.net/Articles/745708/rss

On his blog, embedded developer Karim Yaghmour has written about his ten-day trip to Shenzhen, China, which is known as the “Silicon Valley of hardware”. His lengthy trip report covers much that would be of use to others who are thinking of making the trip, but also serves as an interesting travelogue even for those who are likely to never go. “The map didn’t disappoint and I was able to find a large number of kiosks selling some of the items I was interested in. Obviously many kiosks also had items that I had seen on Amazon or elsewhere as well. I was mostly focusing on things I hadn’t seen before. After a few hours of walking floors upon floors of shops, I was ready to start focusing on other aspects of my research: hard to source and/or evaluate components, tools and expanding my knowledge of what was available in the hardware space. Hint: TEGES’ [The Essential Guide to Electronics in Shenzhen] advice about having comfortable shoes and comfortable clothing is completely warranted.

Finding tools was relatively easy. TEGES indicates the building and floor to go to, and you’ll find most anything you can think of from rework stations, to pick-and-place machines, and including things like oscilloscopes, stereo microscopes, multimeters, screwdrivers, etc. In the process I saw some tools which I couldn’t immediately figure out the purpose for, but later found out their uses on some other visits. Satisfied with a first glance at the tools, I set out to look for one specific component I was having a hard time with. That proved a lot more difficult than anticipated. Actually I should qualify that. It was trivial to find tons of it, just not something that matched exactly what I needed. I used TEGES to identify one part of the market that seemed most likely to have what I was looking for, but again, I could find lots of it, just not what I needed.”

Udemy Targets ‘Pirate’ Site Giving Away its Paid Courses For Free

Post Syndicated from Andy original https://torrentfreak.com/udemy-targets-pirate-site-giving-away-its-paid-courses-for-free-180129/

While there’s no shortage of people who advocate free sharing of movies and music, passions are often raised when it comes to the availability of educational information.

Significant numbers of people believe that learning should be open to all and that texts and associated materials shouldn’t be locked away by copyright holders trying to monetize knowledge. Of course, people who make a living creating learning materials see the position rather differently.

A clash of these ideals is brewing in the United States where online learning platform Udemy has been trying to have some of its courses taken down from FreeTutorials.us, a site that makes available premium tutorials and other learning materials for free.

In early December 2017, counsel acting for Udemy and a number of its individual and corporate instructors (Maximilian Schwarzmüller, Academind GmbH, Peter Dalmaris, Futureshock Enterprises, Jose Marcial Portilla, and Pierian Data) wrote to FreeTutorials.us with a DMCA takedown notice.

“Pursuant to 17 U.S.C. § 512(c)(3)(A) of the Digital Millennium Copyright Act (‘DMCA’), this communication serves as a notice of infringement and request for removal of certain web content available on freetutorials.us,” the letter reads.

“I hereby request that you remove or disable access to the material listed in Exhibit A in as expedient a fashion as possible. This communication does not constitute a waiver of any right to recover damages incurred by virtue of any such unauthorized activities, and such rights as well as claims for other relief are expressly retained.”

A small sample of Exhibit A
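
For context, a valid takedown notice under 17 U.S.C. § 512(c)(3)(A) must substantially include six statutory elements: a signature, identification of the copyrighted work, identification of the infringing material, the sender’s contact information, a good-faith-belief statement, and an accuracy statement made under penalty of perjury. A minimal sketch of a notice generator in Python follows; the names, works, and URLs are purely illustrative, not drawn from the actual filing:

```python
from textwrap import dedent

def dmca_notice(work: str, infringing_url: str, name: str, email: str) -> str:
    """Assemble the six elements that 17 U.S.C. § 512(c)(3)(A) requires."""
    return dedent(f"""\
        DMCA Takedown Notice

        1. Copyrighted work: {work}
        2. Infringing material to be removed: {infringing_url}
        3. Contact: {name} <{email}>
        4. I have a good faith belief that the use described above is not
           authorized by the copyright owner, its agent, or the law.
        5. The information in this notice is accurate, and under penalty of
           perjury, I am authorized to act on behalf of the copyright owner.
        6. Signature: /s/ {name}
        """)

# Illustrative placeholders only -- not a real notice.
print(dmca_notice("Example Online Course", "https://freetutorials.us/example",
                  "Jane Doe", "jane@example.com"))
```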

On January 10, 2018, the same law firm wrote to Cloudflare, which provides services to FreeTutorials. The DMCA notice asked Cloudflare to disable access to the same set of infringing content listed above.

It seems likely that whatever happened next wasn’t to Udemy’s satisfaction. On January 16, an attorney from the same law firm filed a DMCA subpoena at a district court in California. A DMCA subpoena can enable a copyright holder to obtain the identity of an alleged infringer without having to file a lawsuit and without needing a signature from a judge.

The subpoena ordered Cloudflare to hand over “all identifying information identifying the owner, operator and/or contact person(s) associated with the domain www.freetutorials.us, including but not limited to name(s), address(es), telephone number(s), email address(es), Internet protocol connection records, administrative records and billing records from the time the account was established to the present.”

On January 26, the date by which Cloudflare was ordered to hand over the information, Cloudflare wrote to FreeTutorials with a somewhat late-in-the-day notification.

“We received the attached subpoena regarding freetutorials.us, a domain managed through your Cloudflare account. The subpoena requires us to provide information in our systems related to this website,” the company wrote.

“We have determined that this is a valid subpoena, and we are required to provide the requested information. In accordance with our Privacy Policy, we are informing you before we provide any of the requested subscriber information. We plan to turn over documents in response to the subpoena on January 26th, 2018, unless you intervene in the case.”

With that deadline passing last Friday, it’s safe to say that Cloudflare has complied with the subpoena as the law requires. However, TorrentFreak spoke with FreeTutorials who told us that the company doesn’t hold anything useful on them.

“No, they have nothing,” the team explained.

Noting that they’ll soon dispense with the services of Cloudflare, the team confirmed that they had received emails from Udemy and its instructors but hadn’t done a lot in response.

“How about a ‘NO’? was our answer to all the DMCA takedown requests from Udemy and its Instructors,” they added.

FreeTutorials (FTU) are affiliated with FreeCoursesOnline (FCO) and seem passionate about what they do. In common with others who distribute learning materials online, they express a belief in free education for all, irrespective of financial resources.

“We, FTU and FCO, are a group of seven members assorted as a team from different countries and cities. We are JN, SRZ aka SunRiseZone, Letap, Lihua Google Drive, Kaya, Zinnia, Faiz MeemBazooka,” a spokesperson revealed.

“We’re all members and colleagues and we also have our own daily work and business stuff to do. We have been through that phase of life when we didn’t have enough money to buy books and get tuition or even apply for a good course that we always wanted to have, so FTU & FCO are just our vision to provide Free Education For Everyone.

“We would love to change our priorities towards our current and future projects, only if we manage to get some faithful FTU’ers to join in and help us to grow together and make FTU a place it should be.”

TorrentFreak requested comment from Udemy but at the time of publication, we had yet to hear back. However, we did manage to get in touch with Jonathan Levi, an Udemy instructor who sent this takedown notice to the site in October 2017:

“I’m writing to you on behalf of SuperHuman Enterprises, LLC. You are in violation of our copyright, using our images, and linking to pirated copies of our courses. Remove them IMMEDIATELY or face severe legal action….You have 48 hours to comply,” he wrote, adding:

“And in case you’re going to say I don’t have evidence that I own the files, it’s my fucking face in the videos.”

Levi says that the site had been non-responsive so now things are being taken to the next level.

“They don’t reply to takedowns, so we’ve joined a class action lawsuit against FTU lead by Udemy and a law firm specializing in this type of thing,” Levi concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

The problematic Wannacry North Korea attribution

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/01/the-problematic-wannacry-north-korea.html

Last month, the US government officially “attributed” the Wannacry ransomware worm to North Korea. This attribution has three flaws, which are a good lesson for attribution in general.

It was an accident

The most important fact about Wannacry is that it was an accident. We’ve had 30 years of experience with Internet worms teaching us that worms are always accidents. While launching worms may be intentional, their effects cannot be predicted. While they appear to have targets, like Slammer against South Korea, or Witty against the Pentagon, further analysis shows this was just a random effect that was impossible to predict ahead of time. Only in hindsight are these effects explainable.

We should hold those causing accidents accountable, too, but it’s a different accountability. The U.S. has caused more civilian deaths in its War on Terror than the terrorists caused in triggering that war. But we hold these to be morally different: the terrorists targeted the innocent, whereas the U.S. takes great pains to avoid civilian casualties.

Since we are talking about blaming those responsible for accidents, we also must include the NSA in that mix. The NSA created, then allowed the release of, weaponized exploits. That’s like accidentally dropping a load of unexploded bombs near a village. When those bombs are then used, those having lost the weapons are held guilty along with those using them. Yes, while we should blame the hacker who added ETERNALBLUE to their ransomware, we should also blame the NSA for losing control of ETERNALBLUE.

A country and its assets are different

Was it North Korea, or hackers affiliated with North Korea? These aren’t the same.

It’s hard for North Korea to have hackers of its own. It doesn’t have citizens who grow up with computers to pick from. Moreover, an internal hacking corps would create tainted citizens exposed to dangerous outside ideas. Update: Some people have pointed out that Kim Il-sung University in the capital does have some contact with the outside world, with academics granted limited Internet access, so I guess some tainting is allowed. Still, what we know of North Korea’s hacking efforts largely comes from hackers they employ outside North Korea. It was the Lazarus Group, outside North Korea, that did Wannacry.

Instead, North Korea develops external hacking “assets”, supporting several external hacking groups in China, Japan, and South Korea. This is similar to how intelligence agencies develop human “assets” in foreign countries. While these assets do things for their handlers, they also have normal day jobs, and do many things that are wholly independent and even sometimes against their handlers’ interests.

For example, this Muckrock FOIA dump shows how “CIA assets” independently worked for Castro and assassinated a Panamanian president. That they also worked for the CIA does not make the CIA responsible for the Panamanian assassination.

That CIA/intelligence assets work this way is well-known and uncontroversial. The fact that countries use hacker assets like this is the controversial part. These hackers do act independently, yet we refuse to consider this when we want to “attribute” attacks.

Attribution is political

We have far better attribution for the nPetya attacks. It was less accidental (they clearly desired to disrupt Ukraine), and the hackers were much closer to the Russian government (Russian citizens). Yet, the Trump administration isn’t fighting Russia, it is fighting North Korea, so it doesn’t officially attribute nPetya to Russia, but does attribute Wannacry to North Korea.

Trump is in conflict with North Korea. He is looking for ways to escalate the conflict. Attributing Wannacry helps achieve his political objectives.

That it was blatantly political is demonstrated by the way it was released to the press. It wasn’t released in the normal way, where the administration can stand behind it and get challenged on the particulars. Instead, it was pre-released through the usual channel of “anonymous government officials” to the NYTimes, and then backed up with an op-ed in the Wall Street Journal. The government leaks information like this when it’s weak, not when it’s strong.

The proper way is to release the evidence upon which the decision was made, so that the public can challenge it. Among the questions the public would ask is whether it was North Korea’s intention to cause precisely this effect, such as disabling the British NHS; or whether it was merely hackers “affiliated” with North Korea, or hackers carrying out North Korea’s orders. We cannot challenge the government this way because the government intentionally holds itself above such accountability.

Conclusion

We believe hacking groups tied to North Korea are responsible for Wannacry. Yet, even if that’s true, we still have three attribution problems. We still don’t know if that was intentional, in pursuit of some political goal, or an accident. We still don’t know if it was at the direction of North Korea, or whether their hacker assets acted independently. We still don’t know if the government has answers to these questions, or whether it’s exploiting this doubt to achieve political support for actions against North Korea.