
Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These provisions are renewed every three years after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously granted game preservation exemptions to keep older games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year several gaming fans including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this, which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that their members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. The wishes of game preservation advocates such as MADE are hard to reconcile with the interests of the game companies, so one side is bound to be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).


Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/

GDPR General Data Protection Regulation

Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect the personal information of EU (European Union) citizens by governing how it is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information, or “PII.” PII is defined as information that can be used on its own, or in combination with other information, to identify a specific person. This includes obvious data like name, address, and phone number; less obvious data like email address and IP address; and other data such as credit card numbers and unique identifiers that can be decoded back to the person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions,” “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR-compliant vendors and are meant to be easily understood, so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer: True, but: the regulation allows for companies to be fined up to €20 million or 4% of global annual revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely pay “lip service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: There is currently no official GDPR certification program. Companies that comply with GDPR are expected to follow the articles in the regulation, and it should be clear from the outside looking in that they have done so. For example, their “Terms and Conditions” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue: people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and not fragment the document into multiple country-specific versions. We believe that having different GDPR versions for different EU countries would lead to less protection overall for EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.

The post Backblaze and GDPR appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Early Challenges: Making Critical Hires

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/early-challenges-making-critical-hires/

row of potential employee hires sitting waiting for an interview

In 2009, Google disclosed that they had 400 recruiters on staff working to hire nearly 10,000 people. Someday, that might be your challenge, but most companies in their early days are looking to hire a handful of people — the right people — each year. Assuming you are closer to startup stage than Google stage, let’s look at who you need to hire, when to hire them, where to find them (and how to help them find you), and how to get them to join your company.

Who Should Be Your First Hires

In later stage companies, the roles in the company have been well fleshed out, don’t change often, and each role can be segmented to focus on a specific area. A large company may have an entire department focused on just cubicle layout; at a smaller company, facilities may not even be one person’s full-time job. At Backblaze, our CTO has a passion and knack for facilities and mostly led that charge. Also, the needs of a smaller company are quick to change. One of our first hires was a QA person, Sean, who ended up being 100% focused on data center infrastructure. In the early stage, things can shift quite a bit and you need people who are broadly capable, flexible, and most of all willing to pitch in where needed.

That said, there are times you may need an expert. At a previous company we hired Jon, a PhD in Bayesian statistics, because we needed algorithmic analysis for spam fighting. However, even that hire was able and willing not only to do the math but also to code, and not only to focus on Bayesian statistics but to explore a plethora of spam-fighting options.

When To Hire

If you’ve raised a lot of cash and are willing to burn it with mistakes, you can guess at all the roles you might need and start hiring for them. No judgement: that’s a reasonable strategy if you’re cash-rich and time-poor.

If your cash is limited, try to see what you and your team are already doing and then hire people to take those jobs. It may sound counterintuitive, but if you’re already doing it presumably it needs to be done, you have a good sense of the type of skills required to do it, and you can bring someone on-board and get them up to speed quickly. That then frees you up to focus on tasks that can’t be done by someone else. At Backblaze, I ran marketing internally for years before hiring a VP of Marketing, making it easier for me to know what we needed. Once I was hiring, my primary goal was to find someone I could trust to take that role completely off of me so I could focus solely on my CEO duties.

Where To Find the Right People

Finding great people is always difficult, particularly when the skillsets you’re looking for are in high demand at larger companies with lots of cash and cachet. You, however, have one massive advantage: you need to hire 5 people, not 5,000.

People You Worked With

The absolute best people to hire are those you’ve worked with before and already know are good in a work situation. Consider your last job, the one before, and the one before that. A significant number of the people we recruited at Backblaze came from our previous startup MailFrontier. We knew what they could do and how they would fit into the culture, and they knew us and thus could quickly meld into the environment. If you didn’t have a previous job, consider people you went to school with or perhaps individuals with whom you’ve done projects previously.

People You Know

Hiring friends, family, and others can be risky, but should be considered. Sometimes a friend can be a “great buddy,” but is not able to do the job or isn’t a good fit for the organization. Having to let go of someone who is a friend or family member can be rough. Have the conversation up front with them about that possibility, so you have the ability to stay friends if the position doesn’t work out. Having said that, if you get along with someone as a friend, that’s one critical component of succeeding together at work. At Backblaze we’ve successfully hired a number of people who were friends of someone in the organization.

Friends Of People You Know

Your network is likely larger than you imagine. Your employees, investors, advisors, spouses, friends, and other folks all know people who might be a great fit for you. Make sure they know the roles you’re hiring for and ask them if they know anyone that would fit. Search LinkedIn for the titles you’re looking for and see who comes up; if they’re a 2nd degree connection, ask your connection for an introduction.

People You Know About

Sometimes the person you want isn’t someone anyone knows, but you may have read something they wrote, used a product they’ve built, or seen a video of a presentation they gave. Reach out. You may get a great hire: worst case, you’ll let them know they were appreciated, and make them aware of your organization.

Other Places to Find People

There are a million other places to find people, including job sites, community groups, Facebook/Twitter, GitHub, and more. Consider where the people you’re looking for are likely to congregate online and in person.

A Comment on Diversity

Hiring “People You Know” can often result in “Hiring People Like You” with the same workplace experiences, culture, background, and perceptions. Some studies have shown [1, 2, 3, 4] that homogeneous groups deliver faster, while heterogeneous groups are more creative. Also, “Hiring People Like You” often propagates the lack of women and minorities in tech and leadership positions in general. When looking for people you know, take care not to discount those who don’t share your cultural background.

Helping People To Find You

Reaching out proactively to people is the most direct way to find someone, but you want potential hires coming to you as well. To do this, they have to a) be aware of you, b) know you have a role they’re interested in, and c) think they would want to work there. Let’s tackle a) and b) first below.

Your Blog

I started writing our blog before we launched the product and talked about anything I found interesting related to our space. For several years now our team has owned the content on the blog and in 2017 over 1.5 million people read it. Each time we have a position open it’s published to the blog. If someone finds reading about backup and storage interesting, perhaps they’d want to dig in deeper from the inside. Many of the people we’ve recruited have mentioned reading the blog as either how they found us or as a factor in why they wanted to work here.
[BTW, this is Gleb’s 200th post on Backblaze’s blog. The first was in 2008. — Editor]

Your Email List

In addition to the emails our blog subscribers receive, we send regular emails to our customers, partners, and prospects. These are largely focused on content we think is directly useful or interesting for them. However, once every few months we include a small mention that we’re hiring, along with the positions we’re looking for. Often a small blurb is all you need to capture the imagination of people who might find the jobs interesting or who can think of someone that fits the bill.

Your Social Involvement

Whether it’s Twitter or Facebook, Hacker News or Slashdot, your potential hires are engaging in various communities. Being socially involved helps make people aware of you, reminds them of you when they’re considering a job, and paints a picture of what working with you and your company would be like. Adam was in a Reddit thread where we were discussing our Storage Pods, and that interaction was ultimately part of the reason he left Apple to come to Backblaze.

Convincing People To Join

Once you’ve found someone or they’ve found you, how do you convince them to join? They may be currently employed, have other offers, or have to relocate. Again, while the biggest companies have a number of advantages, you might have more unique advantages than you realize.

Why Should They Join You

Here are a set of items that you may be able to offer which larger organizations might not:

Role: Consider the strengths of the role. Perhaps it will have broader scope? More visibility at the executive level? No micromanagement? Ability to take risks? Option to create their own role?

Compensation: In addition to salary, will their options potentially be worth more since they’re getting in early? Can they trade-off salary for more options? Do they get option refreshes?

Benefits: In addition to healthcare, food, and 401(k) plans, are there unique benefits of your company? One company I knew took the entire team for a one-month working retreat abroad each year.

Location: Most people prefer to work close to home. If you’re located outside of the San Francisco Bay Area, you might be at a disadvantage for not being in the heart of tech. But if you find employees close to you, you’ve got a huge advantage. Sometimes it’s micro; even in the Bay Area the difference of 5 miles can save 20 minutes each way every day. We located the Backblaze headquarters in San Mateo, a middle-ground that made it accessible to those coming from San Jose and San Francisco. We also chose a downtown location near a train, restaurants, and cafes: all to make it easier and more pleasant. Also, are you flexible in letting your employees work remotely? Our systems administrator Elliott is about to embark on a long-term cross-country journey working from an RV.

Environment: Open office, cubicle, cafe, work-from-home? Loud/quiet? Social or focused? 24×7 or work-life balance? Different environments appeal to different people.

Team: Who will they be working with? A company with 100,000 people might have 100 brilliant ones you’d want to work with, but ultimately we work with our core team. Who will your prospective hires be working with?

Market: Some people are passionate about gaming, others biotech, still others food. The market you’re targeting will get different people excited.

Product: Have an amazing product people love? Highlight that. If you’re lucky, your potential hire is already a fan.

Mission: Curing cancer, making people happy, and other company missions inspire people to strive to be part of the journey. Our mission is to make storing data astonishingly easy and low-cost. If you care about data, information, knowledge, and progress, our mission helps drive all of them.

Culture: I left this for last, but believe it’s the most important. What is the culture of your company? Finding people who want to work in the culture of your organization is critical. If they like the culture, they’ll fit and continue it. We’ve worked hard to build a culture that’s collaborative, friendly, supportive, and open; one in which people like coming to work. For example, the five founders started with (and still have) the same compensation and equity. That started a culture of “we’re all in this together.” Build a culture that will attract the people you want, and convey what the culture is.

Writing The Job Description

Most job descriptions focus on all the requirements the candidate must meet. While important to communicate, the job description should first sell the job. Why would the appropriate candidate want the job? Then share some of the requirements you think are critical. Remember that people read not just what you say but how you say it. Try to write in a way that conveys what it is like to actually be at the company. Ahin, our VP of Marketing, said the job description itself was one of the things that attracted him to the company.

Orchestrating Interviews

Much can be said about interviewing well. I’m just going to say this: make sure that everyone who is interviewing knows that their job is not only to evaluate the candidate, but also to give them a sense of the culture and to sell them on the company. At Backblaze, we often have one person interview core prospects solely for company/culture fit.

Onboarding

Hiring success shouldn’t be defined by finding and hiring the right person, but instead by the right person being successful and happy within the organization. Ensure someone (usually their manager) provides them with guidance on what they should be concentrating on during their first day, first week, and thereafter. Giving new employees opportunities and guidance so that they can achieve early wins and feel socially integrated into the company does wonders for bringing people on board smoothly.

In Closing

Our Director of Production Systems, Chris, said to me the other day that he looks for companies where he can work on “interesting problems with nice people.” I’m hoping you’ll find your own version of that and find this post useful in looking for your early and critical hires.

Of course, I’d be remiss if I didn’t say, if you know of anyone looking for a place with “interesting problems with nice people,” Backblaze is hiring. 😉

The post Early Challenges: Making Critical Hires appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Cloudflare Hit With Piracy Lawsuit After Abuse Form ‘Fails’

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-hit-with-piracy-lawsuit-after-abuse-form-fails-180210/

Seattle-based artist Christopher Boffoli is no stranger when it comes to suing tech companies for aiding copyright infringement of his work.

Boffoli has filed lawsuits against Imgur, Twitter, Pinterest, Google, and others, which were dismissed and/or settled out of court under undisclosed terms.

This month he filed a new case against another intermediary, Cloudflare, which has had its fair share of piracy allegations in recent years.

In common with other companies, Cloudflare is accused of contributing to copyright infringements of Boffoli’s “Big Appetites” miniatures series. In this case, several Cloudflare customers allegedly posted these photos on their sites which were then reproduced on the servers of the CDN provider.

The lawsuit mentions that the infringing copies were posted on unique-landscape.com and baklol.com. This was also pointed out to Cloudflare by Boffoli, who sent the company DMCA takedown notices in October and November of last year.

While the photographer received an automated response, the photos in question remained online. Through the lawsuit, Boffoli hopes this will change.

“CloudFlare induced, caused, or materially contributed to the Infringing Websites’ publication,” the complaint reads. “CloudFlare had actual knowledge of the Infringing Content. Boffoli provided notice to CloudFlare in compliance with the DMCA, and CloudFlare failed to disable access to or remove the Infringing Websites.”

The photographer is asking the court to order an injunction preventing Cloudflare from making his work available. In addition, the complaint asks for actual and statutory damages for willful copyright infringement. With at least four photos in the lawsuit, the potential damages are more than half a million dollars.

While it’s not mentioned in the complaint, the email communication between Boffoli and Cloudflare goes further than just an automated response. Court records show that the photographer initially didn’t ask Cloudflare to remove the infringing photos. Instead, he asked the CDN provider to forward them to the ISP or site owner.

“I would be grateful if you would forward this DMCA takedown request to the website owner and ISP so these infringing links can immediately be removed,” it read.

Part of the email communication

From then on things escalated a bit. The emails reveal that Boffoli had trouble reporting the infringing photos through the required form.

When the photographer pointed this out in a direct email, Cloudflare urged him to try the form again as that was the only way to send the DMCA request to the designated copyright agent.

“The DMCA doesn’t require us to process reports not sent to our registered agent as per our registration with the US Copyright Office. Our registered copyright agent is the form located at cloudflare.com/abuse/form and you may proceed via that avenue,” Cloudflare wrote.

If the case moves forward, Cloudflare may use this to argue that it never received a proper DMCA takedown notice. However, Boffoli wasn’t planning on trying again and instead threatened a lawsuit, unless Cloudflare took immediate action.

“As I have said, your form did not work for me despite repeated attempts to use it. And it is insulting for you to suggest that it’s working fine when it is not. So again, this is absolutely my last attempt to get you to respond to this infringement for which you are impeding the removal,” Boffoli wrote.

“If you take no action now I will forward this to my legal team this week. It is more than enough of a burden to have to waste countless hours policing my own copyrights without organizations like Cloudflare running interference for copyright infringers. I am not averse to asking a federal judge to compel you to deal with these copyright infringements. And I will seek statutory damages for contributory infringement at that time.”

As it turns out, that was not an idle threat.


A copy of the complaint is available here (pdf) and the email exhibits can be found here (pdf).


Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.
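To make the access model concrete, here is a minimal sketch of reading a single parameter with boto3. It uses the /dev/parameterStoreBlog/appConfig parameter created later in this post; the region is an assumption for the sketch, and it is the IAM policy scoping ssm:GetParameter* to that path that restricts who can make this call.

import boto3

# Client for SSM Parameter Store; the region is an assumption for this sketch
ssm = boto3.client('ssm', region_name='us-east-1')

# WithDecryption is a no-op for String parameters and decrypts SecureString ones,
# provided the caller also has access to the backing KMS key
response = ssm.get_parameter(
    Name='/dev/parameterStoreBlog/appConfig',
    WithDecryption=True
)
print(response['Parameter']['Value'])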

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
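As a minimal sketch of that expiry-policy idea (the TTL constant and helper function below are assumptions, not part of the sample application), the handler could reload configuration once the cached copy grows stale. This reuses app, MyApp, load_config, and full_config_path from the listing above:

import time

CONFIG_TTL_SECONDS = 300  # assumed refresh interval; tune to your tolerance for stale config
config_loaded_at = 0.0

def get_app():
    """Return the cached MyApp instance, reloading config when the TTL expires."""
    global app, config_loaded_at
    now = time.time()
    if app is None or (now - config_loaded_at) > CONFIG_TTL_SECONDS:
        app = MyApp(load_config(full_config_path))
        config_loaded_at = now
    return app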

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
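If you prefer to script the console steps above, a boto3 call along the following lines creates the same SecureString parameter. This is a sketch that assumes your credentials have kms:Encrypt on the key; the key alias comes from the SAM template.

import json
import boto3

ssm = boto3.client('ssm', region_name='us-east-1')

# Create the SecureString parameter, encrypted with the key from the SAM template
ssm.put_parameter(
    Name='/dev/parameterStoreBlog/appSecrets',
    Description='Sample secret config values for my app',
    Type='SecureString',
    KeyId='alias/ParameterStoreBlogKeyDev',
    Value=json.dumps({'secretKey': 'secretValue'})
)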

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets are key components of any mature architecture. Serverless architectures designed using event-driven, on-demand compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Migrating Your Amazon ECS Containers to AWS Fargate

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/

AWS Fargate is a new technology that works with Amazon Elastic Container Service (ECS) to run containers without having to manage servers or clusters. What does this mean? With Fargate, you no longer need to provision or manage a single virtual machine; you can just create tasks and run them directly!

Fargate uses the same API actions as ECS, so you can use the ECS console, the AWS CLI, or the ECS CLI. I recommend running through the first-run experience for Fargate even if you’re familiar with ECS. It creates all of the one-time setup requirements, such as the necessary IAM roles. If you’re using a CLI, make sure to upgrade to the latest version.

In this blog, you will see how to migrate ECS containers from running on Amazon EC2 to Fargate.

Getting started

Note: Anything with code blocks is a change in the task definition file. Screen captures are from the console. Additionally, Fargate is currently available in the us-east-1 (N. Virginia) region.

Launch type

When you create tasks (grouping of containers) and clusters (grouping of tasks), you now have two launch type options: EC2 and Fargate. The default launch type, EC2, is ECS as you knew it before the announcement of Fargate. You need to specify Fargate as the launch type when running a Fargate task.

Even though Fargate abstracts away virtual machines, tasks still must be launched into a cluster. With Fargate, clusters are a logical infrastructure and permissions boundary that allow you to isolate and manage groups of tasks. ECS also supports heterogeneous clusters that are made up of tasks running on both EC2 and Fargate launch types.

The new, optional requiresCompatibilities parameter, with FARGATE in the list, ensures that your task definition only passes validation if you include Fargate-compatible parameters. Tasks can be flagged as compatible with EC2, Fargate, or both.

"requiresCompatibilities": [
    "FARGATE"
]

Networking

"networkMode": "awsvpc"

In November, we announced the addition of task networking with the network mode awsvpc. By default, ECS uses the bridge network mode. Fargate requires using the awsvpc network mode.

In bridge mode, all of your tasks running on the same instance share the instance’s elastic network interface, a virtual network interface with its own IP address and security groups.

The awsvpc mode provides this networking support to your tasks natively. You now get the same VPC networking and security controls at the task level that were previously only available with EC2 instances. Each task gets its own elastic networking interface and IP address so that multiple applications or copies of a single application can run on the same port number without any conflicts.

The awsvpc mode also provides a separation of responsibility for tasks. You can get complete control of task placement within your own VPCs, subnets, and the security policies associated with them, even though the underlying infrastructure is managed by Fargate. Also, you can assign different security groups to each task, which gives you more fine-grained security. You can give an application only the permissions it needs.

"portMappings": [
    {
        "containerPort": "3000"
    }
 ]

What else has to change? First, you only specify a containerPort value, not a hostPort value, as there is no host to manage. Your container port is the port that you access on your elastic network interface IP address. Therefore, your container ports in a single task definition file need to be unique.

"environment": [
    {
        "name": "WORDPRESS_DB_HOST",
        "value": "127.0.0.1:3306"
    }
 ]

Additionally, links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:

localhost/127.0.0.1:<some_port_number>

CPU and memory

"memory": "1024",
 "cpu": "256"

"memory": "1gb",
 "cpu": ".25vcpu"

When launching a task with the EC2 launch type, task performance is influenced by the instance types that you select for your cluster combined with your task definition. If you pick larger instances, your applications make use of the extra resources if there is no contention.

In Fargate, there are no instances to size against, so you need another way to specify resources; for that, we created task-level resources. Task-level resources define the maximum amount of memory and cpu that your task can consume.

  • memory can be defined in MB with just the number, or in GB, for example, “1024” or “1gb”.
  • cpu can be defined as the number or in vCPUs, for example, “256” or “.25vcpu”.
    • vCPUs are virtual CPUs. You can look at the memory and vCPUs for instance types to get an idea of what you may have used before.

The memory and CPU options available with Fargate are:

CPU             Memory
256 (.25 vCPU)  0.5GB, 1GB, 2GB
512 (.5 vCPU)   1GB, 2GB, 3GB, 4GB
1024 (1 vCPU)   2GB, 3GB, 4GB, 5GB, 6GB, 7GB, 8GB
2048 (2 vCPU)   Between 4GB and 16GB in 1GB increments
4096 (4 vCPU)   Between 8GB and 30GB in 1GB increments

IAM roles

Because Fargate uses awsvpc mode, you need an Amazon ECS service-linked IAM role named AWSServiceRoleForECS. It provides Fargate with the needed permissions, such as the permission to attach an elastic network interface to your task. After you create your service-linked IAM role, you can delete the remaining roles in your services.
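In most accounts, ECS creates this service-linked role automatically the first time you create a cluster or service. If it doesn’t exist yet, a short boto3 sketch like the following creates it explicitly:

import boto3

iam = boto3.client('iam')

# Creates the AWSServiceRoleForECS service-linked role;
# the call raises an error if the role already exists in the account
iam.create_service_linked_role(AWSServiceName='ecs.amazonaws.com')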

"executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole"

With the EC2 launch type, an instance role gives the agent the ability to pull, publish, talk to ECS, and so on. With Fargate, the task execution IAM role is only needed if you’re pulling from Amazon ECR or publishing data to Amazon CloudWatch Logs.

The Fargate first-run experience tutorial in the console automatically creates these roles for you.

Volumes

Fargate currently supports non-persistent, empty data volumes for containers. When you define your container, you no longer use the host field and only specify a name.

Load balancers

For awsvpc mode, and therefore for Fargate, use the IP target type instead of the instance target type. You define this in the Amazon EC2 service when creating a load balancer.

If you’re using a Classic Load Balancer, change it to an Application Load Balancer or a Network Load Balancer.

Tip: If you are using an Application Load Balancer, make sure that your tasks are launched in the same VPC and Availability Zones as your load balancer.

Let’s migrate a task definition!

Here is an example NGINX task definition. This type of task definition is what you’re used to if you created one before Fargate was announced. It’s what you would run now with the EC2 launch type.

{
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": "80",
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "family": "nginx-ec2"
}

OK, so now what do you need to do to change it to run with the Fargate launch type?

  • Add FARGATE for requiresCompatibilities (not required, but a good safety check for your task definition).
  • Use awsvpc as the network mode.
  • Specify only the containerPort (the hostPort value would be the same).
  • Add a task executionRoleARN value to allow logging to CloudWatch.
  • Provide cpu and memory limits for the task.
{
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole",
    "family": "nginx-fargate",
    "memory": "512",
    "cpu": "256"
}
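
To try it out, you could save the task definition above as nginx-fargate.json, register it with aws ecs register-task-definition --cli-input-json file://nginx-fargate.json, and launch it with aws ecs run-task --cli-input-json file://run-task.json. A sketch of run-task.json follows; the cluster name, subnet ID, and security group ID are placeholders to replace with your own (and, per the tip above, the subnets should be in the same VPC as your load balancer if you use one):

{
    "cluster": "default",
    "launchType": "FARGATE",
    "taskDefinition": "nginx-fargate",
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED"
        }
    }
}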

Are there more examples?

Yep! Head to the AWS Samples GitHub repo. We have several sample task definitions you can try for both the EC2 and Fargate launch types. Contributions are very welcome too :).

tiffany jernigan
@tiffanyfayj

All-In on Unlimited Backup

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/all-in-on-unlimited-backup/


The cloud backup industry has seen its share of turbulence. Bitcasa, Dell DataSafe, Xdrive, and a dozen others have closed up shop. Mozy, Amazon, and Microsoft offered, but later canceled, their unlimited offerings. Recently, CrashPlan for Home customers were notified that their service was being end-of-lifed. Then today we’ve heard from Carbonite customers who are frustrated by this morning’s announcement of a price increase.

We believe that the fundamental goal of a cloud backup is having peace-of-mind: knowing your data — all of it — is safe. For over 10 years Backblaze has been providing that peace-of-mind by offering completely unlimited cloud backup to our customers. And we continue to be committed to that. Knowing that your cloud backup vendor is not going to disappear or fundamentally change their service is an essential element in achieving that peace-of-mind.

Committed to Unlimited Backup

When Mozy discontinued their unlimited backup on Jan 31, 2011, a lot of people asked, “Does this mean Backblaze will discontinue theirs as well?” At that time I wrote the blog post Backblaze is committed to unlimited backup. That was seven years ago. Since then we’ve continued to make Backblaze cloud backup better: dramatically speeding up backups and restores, offering the unique and very popular Restore Return Refund program, enabling direct access and sharing of any file in your backup, and more. We also introduced Backblaze Groups to enable businesses and families to manage backups — all at no additional cost.

How That’s Possible

I’d like to answer the question of “How have you been able to do this when others haven’t?”

First, commitment. It’s not impossible to offer unlimited cloud backup, but it’s not easy. The Backblaze team has been committed to unlimited as a core tenet.

Second, we have pursued the technical, business, and cultural steps required to make it happen. We’ve designed our own servers, written our cloud storage software, run our own operations, and been continually focused on every place we could optimize a penny out of the cost of storage. We’ve built a culture at Backblaze that cares deeply about that.

Ensuring Peace-of-Mind

Price increases and plan changes happen in our industry, but Backblaze has consistently been the low price leader, and continues to stand by the foundational element of our service — truly unlimited backup storage. Carbonite just announced a price increase from $60 to $72/year, and while that’s not an astronomical increase, it’s important to keep in mind the service that they are providing at that rate. The basic Carbonite plan provides a service that doesn’t back up videos or external hard drives by default. We think that’s dangerous. No one wants to discover that their videos weren’t backed up after their computer dies, or have to worry about the safety and durability of their data. That is why we have continued to build on our foundation of unlimited, as well as making our service faster and more accessible. All of these serve the goal of ensuring peace-of-mind for our customers.

3 Months Free For You & A Friend

As part of our commitment to unlimited, refer your friends through March 15, 2018, and you can both receive three months of Backblaze service. When you Refer-a-Friend with your personal referral link, and they subscribe, both of you will receive three months of service added to your account. See promotion details on our Refer-a-Friend page.

Want A Reminder When Your Carbonite Subscription Runs Out?

If you’re considering switching from Carbonite, we’d love to be your new backup provider. Enter your email and the date you’d like to be reminded in the form below and you’ll get a friendly reminder email from us to start a new backup plan with Backblaze. Or, you could start a free trial today.

We think you’ll be glad you switched, and you’ll have a chance to experience some of that Backblaze peace-of-mind for your data.

Please Send Me a Reminder When I Need a New Backup Provider

The post All-In on Unlimited Backup appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Barbot 4: the bartending Grandfather clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/barbot-4/

Meet Barbot 4, the drink-dispensing Grandfather clock who knows when it’s time to party.

Barbot 4. Grandfather Time (first video of cocktail robot)

The first introduction to my latest barbot – this time made inside a grandfather clock. There is another video where I explain a bit about how it works, and am happy to give more explanations. https://youtu.be/hdxV_KKH5MA This can make cocktails with up to 4 spirits, and 4 mixers, and is controlled by voice, keyboard input, or a gui, depending which is easiest.


Robert Prest’s Barbot 4 is a beverage dispenser loaded into an old Grandfather clock. There’s space in the back for your favourite spirits and mixers, and a Raspberry Pi controls servo motors that release the required measures of your favourite cocktail ingredients, according to preset recipes.


The clock can hold four mixers and four spirits, and a human supervisor records these using Drinkydoodad, a friendly touchscreen interface. With information about its available ingredients and a library of recipes, Barbot 4 can create your chosen drink. Patrons control the system either with voice commands or with the touchscreen UI.


Robert has experimented with various components as this project has progressed. He has switched out peristaltic pumps in order to increase the flow of liquid, and adjusted the motors so that they can handle carbonated beverages. In the video, he highlights other quirks he hopes to address, like the fact that drinks tend to splash during pouring.


As well as a Raspberry Pi, the build uses Arduinos. These control the light show, which can be adjusted according to your party-time lighting preferences.

An explanation of the build accompanies Robert’s second video. We’re hoping he’ll also release more details of Barbot 3, his suitcase-sized, portable Barbot, and of Doom Shot Bot, a bottle topper that pours a shot every time you die in the game DoomZ.

Automated bartending

Barbot 4 isn’t the first cocktail-dispensing Raspberry Pi bartender we’ve seen, though we have to admit that fitting it into a grandfather clock definitely makes it one of the quirkiest.

If you’ve built a similar project using a Raspberry Pi, we’d love to see it. Share your project in the comments, or tell us what drinks you’d ask Barbot to mix if you had your own at home.

The post Barbot 4: the bartending Grandfather clock appeared first on Raspberry Pi.

Community Profile: Dr. Lucy Rogers

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-lucy-rogers/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition through your letterbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve our charitable goals.

Dr Lucy Rogers calls herself a Transformer. “I transform simple electronics into cool gadgets, I transform science into plain English, I transform problems into opportunities. I am also a catalyst. I am interested in everything around me, and can often see ways of putting two ideas from very different fields together into one package. If I cannot do this myself, I connect the people who can.”


Among many other projects, Dr Lucy Rogers currently focuses much of her attention on reducing the damage from space debris

It’s a pretty wide range of interests and skills for sure. But it only takes a brief look at Lucy’s résumé to realise that she means it. When she says she’s interested in everything around her, this interest reaches from electronics to engineering, wearable tech, space, robotics, and robotic dinosaurs. And she can be seen talking about all of these things across various companies’ social media, such as IBM, websites including the Women’s Engineering Society, and books, including her own.


With her bright LED boots, Lucy was one of the wonderful Pi community members invited to join us and HRH The Duke of York at St James’s Palace just over a year ago

When not attending conferences as guest speaker, tinkering with electronics, or creating engaging IoT tutorials, she can be found retrofitting Raspberry Pis into the aforementioned robotic dinosaurs at Blackgang Chine Land of Imagination, writing, and judging battling bots for the BBC’s Robot Wars.


First broadcast in the UK between 1998 and 2004, Robot Wars was revived in 2016 with a new look and new judges, including Dr Lucy Rogers. Competitors battle their home-brew robots, and Lucy, together with the other two judges, awards victories among the carnage of robotic remains

Lucy graduated from Lancaster University with a degree in Mechanical Engineering. After that, she spent seven years at Rolls-Royce Industrial Power Group as a graduate trainee before becoming a chartered engineer and earning her PhD in bubbles.

Bubbles?

“Foam formation in low‑expansion fire-fighting equipment. I investigated the equipment to determine how the bubbles were formed,” she explains. Obviously. Bubbles!


Lucy graduated from the Singularity University Graduate Studies Program in 2011, focusing on how robotics, nanotech, medicine, and various technologies can tackle the challenges facing the world

She then went on to become a fellow of the Royal Astronomical Society (RAS) in 2005 and, later, a fellow of both the Institution of Mechanical Engineers (IMechE) and British Interplanetary Society. As a member of the Association of British Science Writers, Lucy wrote It’s ONLY Rocket Science: an Introduction in Plain English.


In It’s Only Rocket Science: An Introduction in Plain English Lucy explains that ‘hard to understand’ isn’t the same as ‘impossible to understand’, and takes her readers through the journey of building a rocket, leaving Earth, and travelling the cosmos

As a standout member of the industry, and all-round fun person to be around, Lucy has quickly established herself as a valued member of the Pi community.

In 2014, with the help of Neil Ford and Andy Stanford-Clark, Lucy worked with the UK’s oldest amusement park, Blackgang Chine Land of Imagination, on the Isle of Wight, with the aim of updating its animatronic dinosaurs. The original Blackgang Chine dinosaurs had a limited range of behaviour: able to roar, move their heads, and stomp a foot in a somewhat repetitive action.

When she contacted Raspberry Pi back in the November of that same year, the team were working on more creative, varied behaviours, giving each dinosaur a new Raspberry Pi-sized brain. This later evolved into a very successful dino-hacking Raspberry Jam.


Lucy, Neil Ford, and Andy Stanford-Clark used several Raspberry Pis and Node-RED to visualise flows of events when updating the robotic dinosaurs at Blackgang Chine. They went on to create the successful WightPi Raspberry Jam event, where visitors could join in with the unique hacking opportunity.

Given her love for tinkering with tech, and a love for stand-up comedy that can be uncovered via a quick YouTube search, it’s no wonder that Lucy was asked to help judge the first round of the ‘Make us laugh’ Pioneers challenge for Raspberry Pi. Alongside comedian Bec Hill, Code Club UK director Maria Quevedo, and the face of the first challenge, Owen Daughtery, Lucy lent her expertise to help name winners in the various categories of the teens event, and offered her support to future Pioneers.

The post Community Profile: Dr. Lucy Rogers appeared first on Raspberry Pi.

Raspberry Crusoe: how a Pi got lost at sea

Post Syndicated from James Robinson original https://www.raspberrypi.org/blog/lost-high-altitude-balloon/

The tale of the little HAB that could and its three-month journey from Portslade Aldridge Community Academy in the UK to the coast of Denmark.

PACA Computing on Twitter

Where did it land ???? #skypaca #skycademy @pacauk #RaspberryPi

High-altitude ballooning

Some of you may be familiar with Raspberry Pi being used as the flight computer, or tracker, of high-altitude balloon (HAB) payloads. For those who aren’t, high-altitude ballooning is a relatively simple activity (at least in principle) where a tracker is attached to a large weather balloon which is then released into the atmosphere. While the HAB ascends, the tracker takes pictures and data readings the whole time. Eventually (around 30km up) the balloon bursts, leaving the payload free to descend and be recovered. For a better explanation, I’m handing over to the students of UTC Oxfordshire:

Pi in the Sky | UTC Oxfordshire

On Tuesday 2nd May, students launched a Raspberry Pi computer 35,000 metres into the stratosphere as part of an Employer-Led project at UTC Oxfordshire, set by the Raspberry Pi Foundation. The project involved engineering, scientific and communication/publicity skills being developed to create the payload and code to interpret experiments set by the science team.

Skycademy

Over the past few years, we’ve seen schools and their students explore the possibilities that high-altitude ballooning offers, and back in 2015 and 2016 we ran Skycademy. The programme was simple enough: get a bunch of educators together in the same space, show them how to launch a balloon flight, and then send them back to their students to try and repeat what they’ve learned. Since the first Skycademy event, a number of participants have carried out launches, and we are extremely proud of each and every one of them.

The case of the vanishing PACA HAB

Not every launch has been a 100% success though. There are many things that can and do go wrong during HAB flights, and watching each launch from the comfort of our office can be a nerve-wracking experience. We had such an experience back in July 2017, during the launch performed by Skycademy graduate and Raspberry Pi Certified Educator Dave Hartley and his students from Portslade Aldridge Community Academy (PACA).

Dave and his team had been working on their payload for some time, and were awaiting suitable weather conditions. Early one Wednesday in July, everything aligned: they had a narrow window of good weather and so set their launch plan in motion. Soon they had assembled the payload in the school grounds and all was ready for the launch.

Dave Hartley on Twitter

Launch day! @pacauk #skycademy #skypaca #raspberrypi

Just before 11:00, they’d completed their final checks and released their payload into the atmosphere. Over the course of 64 minutes, the HAB steadily rose to an altitude of 25,647 m, where it captured some amazing pictures before the balloon burst and a rapid descent began.


Soon after the payload began to descend, the team noticed something worrying: their predicted descent path took the payload dangerously far south — it was threatening to land in the sea. As the payload continued to lose altitude, their calculated results kept shifting, alternately predicting a landing on the ground or out to sea. Eventually it became clear that the payload would narrowly overshoot the land, and it finally landed about 2 km out to sea.


The path of the balloon

It’s not uncommon for a HAB payload to get lost. There are many ways this can happen, particularly in a narrow country with a prevailing easterly wind like the UK. Payloads can get lost at sea, land somewhere inaccessible, or simply run out of power before they are located and retrieved. So normally, this would be the end of the story for the PACA students — even if the team had had a speedboat to hand, their payload was surely lost for good.

A message from Denmark

However, this is not the end of our story! A couple of months later, I arrived at work and saw this tweet from a colleague:

Raspberry Pi on Twitter

Anyone lost a Raspberry Pi HAB? Someone found this one on a beach in south western Denmark yesterday #UKHAS https://t.co/7lBzFiemgr

Good Samaritan Henning Hansen had found a Raspberry Pi washed up on a remote beach in Denmark! While walking a stretch of coast to collect plastic debris for an environmental monitoring project, he came across something unusual near the shore at 55°04’53.0″N and 8°38’46.9″E.

This of course piqued my interest, and we began to investigate the image he had shared on Facebook.


Inspecting the photo closely, we noticed a small asset label — the kind of label that, over a year earlier, we’d stuck to each and every bit of Skycademy field kit. We excitedly claimed the kit on behalf of Dave and his students, and contacted Henning to arrange the recovery of the payload. He told us it must have been carried ashore with the tide some time between 21 and 27 September, and probably on 21 September, since that day had the highest tide over the period. This meant the payload must have spent over two months at sea!

From the photo we could tell that the Raspberry Pi had suffered significant corrosion, having been exposed to salt water for so long, and so we felt pessimistic about the chances that there would be any recoverable data on it. However, Henning said that he’d been able to read some files from the FAT partition of the SD card, so all hope was not lost.

After a few weeks and a number of complications around dispatch and delivery (thank you, Henning, for your infinite patience!), Helen collected the HAB from a local Post Office.


SUCCESS!

We set about trying to read the data from the SD card, and eventually became disheartened: despite several attempts, we were unable to read its contents.

In a last-ditch effort, we gave the SD card to Jonathan, one of our engineers, who initially laughed at the prospect of recovering any data from it. But ten minutes later, he returned with news of success!


Since then, we’ve been able to reunite the payload with the PACA launch team, and the students sent us the perfect message to end this story:

Portslade Aldridge Community Academy Skycademy Raspberry Pi High Altitude Ballooning

The post Raspberry Crusoe: how a Pi got lost at sea appeared first on Raspberry Pi.

New Anti-Piracy Coalition Calls For Canadian Website Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/new-anti-piracy-coalition-calls-for-canadian-website-blocking-180130/

In recent years pirate sites have been blocked around the world, from Europe, through Asia, and even Down Under.

While many of the large corporations backing these blockades have their roots in North America, blocking efforts have been noticeably absent there. This should change, according to a new anti-piracy coalition that was launched in Canada this week.

Fairplay Canada, which consists of a broad range of organizations with ties to the entertainment industry, calls on the local telecom regulator CRTC to institute a national website blocking program.

The coalition’s members include Bell, Cineplex, Directors Guild of Canada, Maple Leaf Sports and Entertainment, Movie Theatre Association of Canada, and Rogers Media, which all share the goal of addressing the country’s rampant piracy problem.

The Canadian blocklist should be maintained by a yet to be established non-profit organization called “Independent Piracy Review Agency” (IPRA) and both IPRA and the CRTC would be overseen by the Federal Court of Appeal, the organizations propose.

“What we are proposing has been effective in countries like the UK, France, and Australia,” says Dr. Shan Chandrasekar, President and CEO of Asian Television Network International Limited (ATN), who is filing Fairplay Canada’s application.

“We are ardent supporters of this incredible coalition that has been formed to propose a new tool to empower the CRTC to address online piracy in Canada. We have great faith in Canadian regulators to modernize the tools available to help creators protect the content they make for Canadians’ enjoyment.”

The proposal is unique in the sense that it’s the first of its kind in North America and also has support from major players in the telco industry. Since most large ISPs also have ties to media companies of their own, the latter is less surprising than it may seem at first glance.

Bell, for example, is not only the largest Internet provider in Canada but also owns the television broadcasting and production company Bell Media, which applauds the new plan.

“Bell is pleased to work with our partners across the industry and the CRTC on this important step in ensuring the long-term viability of the Canadian creative sector,” says Randy Lennox, President of Bell Media.

“Digital rights holders need up-to-date tools to combat piracy where it’s happening, on the Internet, and the process proposed by the coalition will provide just that, fairly, openly and effectively,” he adds.

Thus far the Government’s response to the plan has been rather reserved. When an early version of the plans leaked last month, Canadaland quoted a spokesperson who said that the Government is committed to opening doors instead of building walls.

Digital rights group OpenMedia goes a step further and brands the proposal a censorship plan which will violate net neutrality and limit people’s right to freedom of expression.

“Everybody agrees that content creators deserved to be paid for their work. But the proposal from this censorship coalition goes too far,” Executive Director Laura Tribe says.

“FairPlay Canada’s proposal is like using a machine gun to kill a mosquito. It will undoubtedly lead to legitimate content and speech being censored online violating our right to free expression and the principles of net neutrality, which the federal government has consistently pledged support for.”

While the CRTC is reviewing FairPlay Canada’s plans, OpenMedia has launched a petition to stop the effort in its tracks, which has been signed by more than 45,000 Canadians to date.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

[$] QUIC as a solution to protocol ossification

Post Syndicated from corbet original https://lwn.net/Articles/745590/rss

The TCP protocol has become so ubiquitous that, to many people, the terms “TCP/IP” and “networking” are nearly synonymous. The fact that introducing new protocols (or even modifying existing protocols) has become nearly impossible tends to reinforce that situation. That is not stopping people from trying, though. At linux.conf.au 2018, Jana Iyengar, a developer at Google, discussed the current state of the QUIC protocol which, he said, is now used for about 7% of the traffic on the Internet as a whole.

Researchers Use a Blockchain to Boost Anonymous Torrent Sharing

Post Syndicated from Ernesto original https://torrentfreak.com/researchers-use-a-blockchain-to-boost-anonymous-torrent-sharing-180129/

The Tribler client has been around for over a decade. We first covered it in 2006 and since then it’s developed into a truly decentralized BitTorrent client.

Even if all torrent sites were shut down today, Tribler users would still be able to find and add new content.

The project is not run by regular software developers but by a team of quality researchers at Delft University of Technology. There are currently more than 45 masters students, various thesis students, five dedicated scientific developers, and several professors involved.

Simply put, Tribler aims to make the torrent ecosystem truly decentralized and anonymous: a social network of peers that can survive even if all torrent sites cease to exist.

“Search and download torrents with less worries or censorship,” Tribler’s tagline reads.

Like many other BitTorrent clients, Tribler has a search box at the top of the application. However, the search results that appear when users type in a keyword don’t come from a central index. Instead, they come directly from other peers.

Tribler’s search results

With the latest release, Tribler 7.0, the project adds another element to the mix: its very own blockchain. This blockchain keeps track of how much people are sharing and rewards them accordingly.

“Tribler is a torrent client for social people, who help each other. You can now earn tokens by helping others. It is specifically designed to prevent freeriding and detect hit-and-run peers.” Tribler leader Dr. Johan Pouwelse tells TF.

“You help other Tribler users by seeding and by enhancing their privacy. In return, you get faster downloads, as your tokens show you contribute to the community.”

Pouwelse, who aims to transform BitTorrent into an ethical Darknet, just presented the latest release at Stanford University. The Internet Engineering Task Force is also considering the blockchain implementation as an official Internet standard.

This recognition from academics and technology experts is welcome, of course, but Tribler’s true power comes from its users. The client has gathered a decent userbase over the years, but there is still plenty of room for improvement on this front.

The anonymity aspect is perhaps one of the biggest selling points and Pouwelse believes that this will greatly benefit from the blockchain implementation.

Tribler provides users with pseudo-anonymity by routing transfers through other users. However, this means the application’s bandwidth use increases as well. Thus far, this hasn’t worked very well, resulting in slow anonymous downloads.

“With the integrated blockchain release today we think we can start fixing the problem of both underseeded swarms and fast proxies,” Dr. Pouwelse says.

“Our solution is basically very simple, only social people get decent performance on Tribler. This means in a few years we will end up with only users that act nice. Others leave.”

Tribler’s trust stats

Tribler provides users with quite a bit of flexibility on the anonymity side. The feature can be turned off completely, or people can choose a protection layer ranging from one to four hops.

What’s also important to note is that users don’t operate as exit nodes by default. The IP addresses of the exit nodes are visible outside the network and can be monitored, so acting as one would only increase liability.

So who are the exit nodes in this process then? According to Pouwelse’s rather colorful description, these appear to be volunteers who run the project’s code behind a VPN or on a VPS server.

“The past years we have created an army of bots we call ‘Self-replicating Autonomous Entities’. These are Terminator-style self-replicating pieces of code which have their own Bitcoin wallet to go out there and buy servers to run more copies of themselves,” he explains.

“They utilize very primitive genetic evolution to improve survival, buy a VPN for protection, earn credits using our experimental credit mining preview release, and sell our bandwidth tokens on our integrated decentral market for cold hard Bitcoin cash to renew the cycle of life for the next month billing cycle of their VPS provider.”

Some might question why there’s such a massive research project dedicated to building an anonymous BitTorrent network. What are the benefits to society?

The answer is clear, according to Pouwelse. The ethical darknet they envision will be a unique micro-economy where sharing is rewarded, without having to expose one’s identity.

“We are building the Internet of Trust. The Internet can do amazing things, it even created honesty among drugs dealers,” he says, referring to the infamous Silk Road.

“Reliability rating of drugs lords gets you life imprisonment. That’s not something we want. We are creating our own trustworthy micro-economy for bandwidth tokens and real Bitcoins,” he adds.

People who are interested in taking Tribler for a spin can download the latest version from the official website.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Piracy Can Help Music Sales of Many Artists, Research Shows

Post Syndicated from Ernesto original https://torrentfreak.com/piracy-can-help-music-sales-of-many-artists-research-shows-180128/

The debate over whether online piracy helps or hurts music sales has been dragging on for several decades now.

The issue has been researched extensively with both positive and negative effects being reported, often varying based on the type of artist, music genre and media, among other variables.

One of the more extensive studies was published this month in the peer-reviewed Information Economics and Policy journal, by Queen’s University economics researcher Jonathan Lee.

In a paper titled ‘Purchase, pirate, publicize: Private-network music sharing and market album sales’ he examined the effect of BitTorrent-based piracy on both digital and physical music sales.

We covered an earlier version of the study two years ago when it was still a work in progress. With updates to the research methods and a data sample, the results are now more clear.

The file-sharing data was obtained from an unnamed private BitTorrent tracker and covers a data set of 250,000 albums and more than five million downloads. These were matched to US sales data for thousands of albums provided by Nielsen SoundScan.

By refining the estimation approach and updating the matching technique, the final version of the paper shows some interesting results.

Based on the torrent tracker data, Lee finds that piracy can boost sales of mid-tier artists, both for physical CDs and digital downloads. For the most popular artists, this effect is reversed. In both cases, the impact is the largest for digital sales.

“I now find that top artists are harmed and mid-tier artists may be helped in both markets, but that these effects are larger for digital sales,” Lee tells TorrentFreak. “This is consistent with the idea that people are more willing to switch between digital piracy and digital sales than between digital piracy and physical CDs.”

The findings lead to the conclusion that there is no ideal ‘one-size-fits-all’ response to piracy. In fact, some unauthorized sharing may be a good thing.

This is in line with observations from musicians themselves over the past years. Several top artists have acknowledged the positive effects of piracy, including Ed Sheeran, who recently said that he owes his career to it.

“I know that’s a bad thing to say, because I’m part of a music industry that doesn’t like illegal file sharing,” Sheeran said in an interview with CBS. “Illegal file sharing was what made me. It was students in England going to university, sharing my songs with each other.”

Sheeran sharing on TPB

Today, Sheeran is in a totally different position of course. As one of the top artists, he would now be hurt by piracy. However, the new stars of tomorrow may still reap the benefits.

According to the researcher, the music industry should realize that shutting down pirate sites may not always be the best option. On the contrary, file-sharing sites may be useful as promotional platforms in some cases.

“Following above, a policy of total shutdown of private file sharing networks seems excessively costly (compared with their relatively small impact on sales) and unwise (as a one-size-fits-all policy). It would be better to make legal consumption more convenient, reducing the demand for piracy as an alternative to purchasing,” Lee tells us.

“It would also be smart to experiment with releasing music onto piracy networks themselves, especially for up-and-coming artists, similar to the free promotion afforded by commercial radio.”

The researcher makes another interesting extrapolation from the findings. In recent years, some labels and artists have signed exclusive deals with some streaming platforms. This means that content is not available everywhere, and this fragmentation may make piracy look more appealing.

“Here you can view piracy as a non-fragmented alternative platform to Spotify et al. Thus consumers will have a strong incentive to use a single non-fragmented platform (piracy) over having multiple subscriptions to fragmented platforms,” Lee says.

It would be better for the labels to publish their music on all platforms, and to make these more appealing and convenient than the pirate alternative.

The data used for the research was collected several years ago before the big streaming boom, so it might be that the results are different today. However, it is clear that the effect of piracy on sales is not as uniform as the music industry often portrays it.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

GDPR for Developers [presentation]

Post Syndicated from Bozho original https://techblog.bozho.net/gdpr-developers-presentation/

At a recent meetup in Amsterdam I talked about GDPR from a technical point of view, effectively turning my “GDPR – a practical guide for developers” article into a talk.

You can see the slides here:

If you’re interested, you can also join a webinar on the same topic, organized by AxonIQ, where I will join Frans Vanbuul. You can find more information about the webinar here.

The interesting thing I can share after the meetup, and after meeting with potential clients, is that everyone (maybe unsurprisingly) has a very specific question that doesn’t get an immediate answer even after following the general guidelines. That is arguably a problem on the Regulation’s side, as it has not brought sufficient clarity to businesses.

As I said during the presentation – in technology we’re used to binary questions. In law and legal compliance, an answer sits somewhere on a scale between 1 and 10. “Do I have to encrypt my data at rest?” Well, I guess yes, but in terms of compliance I’d say “6 out of 10”, as the requirement is not strict; it depends on multiple people’s interpretation of the sensitivity of the data and on other factors like access control.

So the communication between legal and technical people is key to understand what exactly implementation changes are needed.

The post GDPR for Developers [presentation] appeared first on Bozho's tech blog.

Court Orders Hosting Provider to Stop Pirate Premier League Streams

Post Syndicated from Ernesto original https://torrentfreak.com/court-orders-hosting-provider-to-stop-pirate-premier-league-streams-180126/

In many parts of the world football, or soccer as some would call it, is the number one spectator sport.

The English Premier League, widely regarded as one of the top competitions, draws hundreds of millions of viewers per year. Many of these pay for access to the matches, but there’s also a massive circuit of unauthorized streams.

The Football Association Premier League (FAPL) has been clamping down on these pirate sources for years. In the UK, for example, it obtained a unique High Court injunction last year, which requires local Internet providers to block streams as they go live.

In addition, the organization has also filed legal action against a hosting provider through which several live sports streaming sites are operating. The case in question was filed in the Netherlands where Ecatel LTD, a UK company, operated several servers.

According to the complaint, Ecatel hosted sites such as cast247.tv, streamlive.to and iguide.to, which allowed visitors to watch live Premier League streams without paying.

As the streaming platforms themselves were not responsive to takedown requests, the Premier League demanded action from their hosting provider. Specifically, they wanted the company to disconnect live streams on their end, by null-routing the servers of the offending customer.

This week the Court of The Hague issued its judgment, which is a clear win for the football association.

The Court ruled that, after the hosting company receives a takedown notice from FAPL or one of its agents, Ecatel must disconnect pirate Premier League streams within 30 minutes.

“[The Court] recommends that, after 24 hours of service of this judgment, Ecatel cease and discontinue any service used by third parties to infringe the copyright to FAPL by promptly but no later than 30 minutes after receipt of a request to that end,” the verdict reads.

The ban can be lifted after the game has ended, making it a temporary measure similar to the UK Internet provider blockades. If Ecatel fails to comply, it faces a penalty of €5,000 per illegal stream, up to a maximum of €1,500,000.

While the order is good news for the Premier League, it will be hard to enforce, since Ecatel LTD was dissolved last year. Another hosting company called Novogara was previously linked with Ecatel and is still active, but that is not mentioned in the court order.

This means that the order will mostly be valuable as a precedent, especially since, as Emerce pointed out, it goes against an earlier order from 2015. That makes it worth a closer look at how the Court reached its decision.

In its defense, Ecatel had argued that an obligation to disconnect customers based on a takedown notice would be disproportionate and violate its entrepreneurial freedoms. The latter is protected by the EU Charter of Fundamental Rights.

The Court, however, highlights that there is a clash between the entrepreneurial rights of Ecatel and the copyrights of FAPL in this case. This requires the Court to weigh these rights to see which prevails over the other.

According to the verdict, the measures Ecatel would have to take to comply are not overly costly. The company already null-routed customers who failed to pay, so the technical capabilities are there.

Ecatel also argued that disconnecting a server could affect legal content that’s provided by its customers. However, according to the Court, Ecatel is partly to blame for this, as it does business with customers who seemingly don’t have a proper takedown process themselves. This is something the company could have included in their contracts.

As a result, the Court put the copyrights of FAPL above the entrepreneurial freedom rights of the hosting provider.

The second right that has to be weighed is the public’s right to freedom of expression and information. While the Court rules that this right is limited by the measures, it argues that the rights of copyright holders weigh stronger.

“Admittedly, this freedom [of expression and information] is restricted, but according to the order, this will only apply for the duration of the offending streams. Furthermore, as said, this will only take place if the stream has not already been blocked in another way,” the Court writes.

If any legal content is affected by the measures then the offending streaming platform itself will experience more pressure from users to deal with the problem, and offer a suitable takedown procedure to prevent similar problems in the future, the Court notes.

TorrentFreak reached out to FAPL and Ecatel’s lawyers for a comment on the verdict but at the time of writing we haven’t heard back.

The verdict appears to be a powerful precedent for copyright holders. Kim Kuik, director of local anti-piracy group BREIN, is pleased with the outcome. While BREIN was not involved in this lawsuit, it previously sued Ecatel in another case.

“It is a good precedent. An intermediary like Ecatel has its accountability and must have an effective notice and take down procedure,” Kuik tells TorrentFreak.

“Too bad it wasn’t also against the people behind Ecatel, who now can continue using another vehicle. The judge thinks this verdict serves a warning to them. Time will tell if that is so.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons