Tag Archives: Make

Game Companies Oppose DMCA Exemption for ‘Abandoned’ Online Games

Post Syndicated from Ernesto original https://torrentfreak.com/game-companies-oppose-dmca-exemption-for-abandoned-online-games-180217/

There are a lot of things people are not allowed to do under US copyright law, but perhaps just as importantly there are exemptions.

The U.S. Copyright Office is currently considering whether or not to loosen the DMCA’s anti-circumvention provisions, which prevent the public from ‘tinkering’ with DRM-protected content and devices.

These provisions are renewed every three years after the Office hears various arguments from the public. One of the major topics on the agenda this year is the preservation of abandoned games.

The Copyright Office previously included game preservation exemptions to keep these games accessible. This means that libraries, archives, and museums can use emulators and other circumvention tools to make old classics playable.

Late last year several gaming fans including the Museum of Art and Digital Entertainment (the MADE), a nonprofit organization operating in California, argued for an expansion of this exemption to also cover online games. This includes games in the widely popular multiplayer genre, which require a connection to an online server.

“Although the Current Exemption does not cover it, preservation of online video games is now critical,” MADE wrote in its comment to the Copyright Office.

“Online games have become ubiquitous and are only growing in popularity. For example, an estimated fifty-three percent of gamers play multiplayer games at least once a week, and spend, on average, six hours a week playing with others online.”

This week, the Entertainment Software Association (ESA), which acts on behalf of prominent members including Electronic Arts, Nintendo and Ubisoft, opposed the request.

While they are fine with the current game-preservation exemption, expanding it to online games goes too far, they say. This would allow outsiders to recreate online game environments using server code that was never published in public.

It would also allow a broad category of “affiliates” to help with this, which, according to the ESA, could include members of the public.

“The proponents characterize these as ‘slight modifications’ to the existing exemption. However, they are nothing of the sort. The proponents request permission to engage in forms of circumvention that will enable the complete recreation of a hosted video game-service environment and make the video game available for play by a public audience.”

“Worse yet, proponents seek permission to deputize a legion of ‘affiliates’ to assist in their activities,” ESA adds.

The proposed changes would enable and facilitate infringing use, the game companies warn. They fear that outsiders such as MADE will replicate the game servers and allow the public to play these abandoned games, something games companies would generally charge for. This could be seen as direct competition.

MADE, for example, already charges the public to access its museum so they can play games. This can be seen as commercial use under the DMCA, ESA points out.

“Public performance and display of online games within a museum likewise is a commercial use within the meaning of Section 107. MADE charges an admission fee – ‘$10 to play games all day’.

“Under the authority summarized above, public performance and display of copyrighted works to generate entrance fee revenue is a commercial use, even if undertaken by a nonprofit museum,” the ESA adds.

The ESA also stresses that its members already make efforts to revive older games themselves. There is a vibrant and growing market for “retro” games, which games companies are motivated to serve, they say.

The games companies, therefore, urge the Copyright Office to keep the status quo and reject any exemptions for online games.

“In sum, expansion of the video game preservation exemption as contemplated by Class 8 is not a ‘modest’ proposal. Eliminating the important limitations that the Register provided when adopting the current exemption risks the possibility of wide-scale infringement and substantial market harm,” they write.

The Copyright Office will take all arguments into consideration before it makes a final decision. It’s clear that the wishes of game preservation advocates, such as MADE, are hard to unite with the interests of the game companies, so one side will clearly be disappointed with the outcome.

A copy of ESA’s submission is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Subtitle Heroes: Fansubbing Movie Criticized For Piracy Promotion

Post Syndicated from Andy original https://torrentfreak.com/subtitle-heroes-fansubbing-movie-criticized-for-piracy-promotion-180217/

With many thousands of movies and TV shows being made available illegally online every year, a significant number will be enjoyed by speakers of languages other than that presented in the original production.

When Hollywood blockbusters appear online, small armies of individuals around the world spring into action, translating the dialog into Chinese and Czech, Dutch and Danish, French and Farsi, Russian and Romanian, plus a dozen languages in between. TV shows, particularly those produced in the US, get the same immediate treatment.

For many years, subtitling (‘fansubbing’) communities have provided an incredible service to citizens around the globe, from those seeking to experience new culture and languages to the hard of hearing and profoundly deaf. Now, following in the footsteps of movies like TPB:AFK and Kim Dotcom: Caught in the Web, a new movie has premiered in Italy which celebrates this extraordinary movement.

Subs Heroes from writer and director Franco Dipietro hit cinemas at the end of January. It documents the contribution fansubbing has made to Italian culture in a country that under fascism in 1934 banned the use of foreign languages in films, books, newspapers and everyday speech.

The movie centers on the large subtitle site ItalianSubs.net. Founded by a group of teenagers in 2006, it is now run by a team of men and women who maintain their identities as regular citizens during the day but transform into “superheroes of fansubbing” at night.

Needless to say, not everyone is pleased with this depiction of the people behind the now-infamous 500,000-member site.

For many years, fansubbing attracted very little heat but over time anti-piracy groups have been turning up the pressure, accusing subtitling teams of fueling piracy. This notion is shared by local anti-piracy outfit FAPAV (Federation for the Protection of Audiovisual and Multimedia Content), which has accused Dipietro’s movie of glamorizing criminal activity.

In a statement following the release of Subs Heroes, FAPAV made its position crystal clear: sites like ItalianSubs do not contribute to the development of the audiovisual market in Italy.

“It is necessary to clarify: when a protected work is subtitled and there is no right to do so, a crime is committed,” the anti-piracy group says.

“[Italiansubs] translates and makes available subtitles of audiovisual works (films and television series) in many cases not yet distributed on the Italian market. All this without having requested the consent of the rights holders. Ergo the Italiansubs community is illegal.”

Italiansubs (note ad for movie, top right)

FAPAV General Secretary Federico Bagnoli Rossi says that the impact that fansubbers have on the market is significant, causing damage not only to companies distributing the content but also to those who invest in official translations.

The fact that fansubbers often translate content that is not yet available in the region only compounds matters, Rossi says, noting that unofficial translations can also have “direct consequences” on those who have language dubbing as an occupation.

“The audiovisual market today needs to be supported and the protection and fight against illicit behaviors are as fundamental as investments and creative ideas,” Rossi notes.

“Everyone must do their part, respecting the rules and with a competitive and global cultural vision. There are no ‘superheroes’ or noble goals behind piracy, but only great damage to the audiovisual sector and all its workers.”

Also piling on the criticism is the chief of the National Cinema Exhibitors’ Association, who wrote to all of the companies involved to remind them that unauthorized subtitling is a crime. According to local reports, there seems to be an underlying tone that people should avoid becoming associated with the movie.

This did not please director Franco Dipietro who is defending his right to document the fansubbing movement, whether the industry likes it or not.

“We invite those who perhaps think differently to deepen the discussion and maybe organize an event to talk about it together. The film is made to confront and talk about a phenomenon that, whether we like it or not, exists, and we cannot pretend that it is not there,” Dipietro concludes.



Subs Heroes Trailer 1 from Duel: on Vimeo.



Subs Heroes Trailer 2 from Duel: on Vimeo.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Major US Sports Leagues Report Top Piracy Nations to Government

Post Syndicated from Ernesto original https://torrentfreak.com/major-us-sports-leagues-report-top-piracy-nations-to-government-180216/

While pirated Hollywood blockbusters often score the big headlines, there are several other industries that have been battling with piracy over the years. This includes sports organizations.

Many of the major US leagues, including the NBA, NFL, NHL, MLB and the Tennis Association, have bundled their powers in the Sports Coalition to try to curb the availability of pirated streams and videos.

A few days ago the Sports Coalition put the piracy problem on the agenda of the United States Trade Representative (USTR).

“Sports organizations, including Sports Coalition members, are heavily affected by live sports telecast piracy, including the unauthorized live retransmission of sports telecasts over the Internet,” the Sports Coalition wrote.

“The Internet piracy of live sports telecasts is not only a persistent problem, but also a global one, often involving bad actors in more than one nation.”

The USTR asked the public for comments on which countries play a central role in copyright infringement issues. In its response, the Sports Coalition stresses that piracy is a global issue but singles out several nations as particularly problematic.

The coalition recommends that the USTR should put the Netherlands and Switzerland on the “Priority Watch List” of its 2018 Special 301 Report, followed by Russia, Saudi Arabia, Seychelles and Sweden, which get a regular “Watch List” recommendation.

The main problem with these countries is that hosting providers and content distribution networks don’t do enough to curb piracy.

In the Netherlands, sawlive.tv, strikezone.me, wizl.net, AltusHost, Host Palace, Quasi Networks and SNEL either engaged in or provided services contributing to sports piracy, the coalition writes. In Switzerland, mlbstream.me, robinwidget.org, strikeout.mobi, BlackHOST, Private Layer and Solar Communications are doing the same.

According to the major sports leagues, the US Government should encourage these countries to step up their anti-piracy game. This is not only important for US copyright holders, but also for licensees in other countries.

“Clearly, there is common ground – both in terms of shared economic interests and legal obligations to protect and enforce intellectual property and related rights – for the United States and the nations with which it engages in international trade to work cooperatively to stop Internet piracy of sports programming.”

Whether any of these countries will make it into the USTR’s final list remains to be seen. For Switzerland it wouldn’t be the first time, but for the Netherlands it would be new, although it has been considered before.

A document we received through a FOIA request earlier this year revealed that the US Embassy reached out to the Dutch Government in the past, to discuss similar complaints from the Sports Coalition.

The same document also revealed that local anti-piracy group BREIN consistently urged the entertainment industries it represents not to advocate placing the Netherlands on the 301 Watch List but to solve the problems behind the scenes instead.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

New National Academies Report on Crypto Policy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/new_national_ac.html

The National Academies has just published “Decrypting the Encryption Debate: A Framework for Decision Makers.” It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Court Orders Spanish ISPs to Block Pirate Sites For Hollywood

Post Syndicated from Andy original https://torrentfreak.com/court-orders-spanish-isps-to-block-pirate-sites-for-hollywood-180216/

Determined to reduce levels of piracy globally, Hollywood has become one of the main proponents of site-blocking on the planet. To date there have been multiple lawsuits in far-flung jurisdictions, with Europe one of the primary targets.

Following complaints from Disney, 20th Century Fox, Paramount, Sony, Universal and Warner, Spain has become one of the latest targets. According to the studios a pair of sites – HDFull.tv and Repelis.tv – infringe their copyrights on a grand scale and need to be slowed down by preventing users from accessing them.

HDFull is a platform that provides movies and TV shows in both Spanish and English. Almost 60% of its traffic comes from Spain and, after a huge surge in visitors last July, it’s now the 337th most popular site in the country according to Alexa. Visitors from Mexico, Argentina, the United States and Chile make up the rest of its audience.

Repelis.tv is a similar streaming portal specializing in movies, mainly in Spanish. A third of the site’s visitors hail from Mexico with the remainder coming from Argentina, Colombia, Spain and Chile. In common with HDFull, Repelis has been building its visitor numbers quickly since 2017.

The studios demanding more blocks

With a ruling in hand from the European Court of Justice which determined that sites can be blocked on copyright infringement grounds, the studios asked the courts to issue an injunction against several local ISPs including Telefónica, Vodafone, Orange and Xfera. In an order handed down this week, Barcelona Commercial Court No. 6 sided with the studios and ordered the ISPs to begin blocking the sites.

“They damage the legitimate rights of those who own the films and series, which these pages illegally display and with which they profit illegally through the advertising revenues they generate,” a statement from the Spanish Federation of Cinematographic Distributors (FEDECINE) reads.

FEDECINE General director Estela Artacho said that changes in local law have helped to provide the studios with a new way to protect audiovisual content released in Spain.

“Thanks to the latest reform of the Civil Procedure Law, we have in this jurisdiction a new way to exercise different possibilities to protect our commercial film offering,” Artacho said.

“Those of us who are part of this industry work to make culture accessible and offer the best cinematographic experience in the best possible conditions, guaranteeing the continuity of the sector.”

The development was also welcomed by Stan McCoy, president of the Motion Picture Association’s EMEA division, which represents the plaintiffs in the case.

“We have just taken a welcome step which we consider crucial to face the problem of piracy in Spain,” McCoy said.

“These actions are necessary to maintain the sustainability of the creative community both in Spain and throughout Europe. We want to ensure that consumers enjoy the entertainment offer in a safe and secure environment.”

After gaining experience from blockades and subsequent circumvention in other regions, the studios seem better prepared to tackle fallout in Spain. In addition to blocking primary domains, the ruling handed down by the court this week also obliges ISPs to block any other domain, subdomain or IP address whose purpose is to facilitate access to the blocked platforms.

News of Spain’s ‘pirate’ blocks come on the heels of fresh developments in Germany, where this week a court ordered ISP Vodafone to block KinoX, one of the country’s most popular streaming portals.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pirates Crack Microsoft’s UWP Protection, Five Layers of DRM Defeated

Post Syndicated from Andy original https://torrentfreak.com/pirates-crack-microsofts-uwp-protection-five-layers-of-drm-defeated-180215/

Microsoft’s Universal Windows Platform (UWP) is a system that enables software developers to create applications that can run across many devices.

“The Universal Windows Platform (UWP) is the app platform for Windows 10. You can develop apps for UWP with just one API set, one app package, and one store to reach all Windows 10 devices – PC, tablet, phone, Xbox, HoloLens, Surface Hub and more,” Microsoft explains.

While the benefits of such a system are immediately apparent, critics say that UWP gives Microsoft an awful lot of control, not least since UWP software must be distributed via the Windows Store with Microsoft taking a cut.

Or that was the plan, at least.

Last evening it became clear that the UWP system, previously believed to be uncrackable, had fallen to pirates. After being released on October 31, 2017, the somewhat underwhelming Zoo Tycoon Ultimate Animal Collection became the first victim at the hands of popular scene group, CODEX.

“This is the first scene release of a UWP (Universal Windows Platform) game. Therefore we would like to point out that it will of course only work on Windows 10. This particular game requires Windows 10 version 1607 or newer,” the group said in its release notes.

CODEX release notes

CODEX says it’s important that the game isn’t allowed to communicate with the Internet so the group advises users to block the game’s executable in their firewall.

While that’s not a particularly unusual instruction, CODEX did reveal that various layers of protection had to be bypassed to make the game work. They’re listed by the group as MSStore, UWP, EAppX, XBLive, and Arxan, the latter being an anti-tamper system.

“It’s the equivalent of Denuvo (without the DRM License part),” cracker Voksi previously explained. “It still bloats the executable with useless virtual machines that only slow down your game.”

Arxan features

Arxan’s marketing comes off as extremely confident but may need amending in light of yesterday’s developments.

“Arxan uses code protection against reverse-engineering, key and data protection to secure servers and fortification of game logic to stop the bad guys from tampering. Sorry hackers, game over,” the company’s marketing reads.

What is unclear at this stage is whether Zoo Tycoon Ultimate Animal Collection represents a typical UWP release or if some particular flaw allowed CODEX to take it apart. The possibility of additional releases is certainly a tantalizing one for pirates but how long they will have to wait is unknown.

Whatever the outcome, Arxan calling “game over” is perhaps a little premature under the circumstances, but in this continuing arms race, they probably have another version of their anti-tamper tech up their sleeves…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As best practices to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon Linux instance launched from one of the official Amazon Machine Images (AMIs).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post. Our documentation shows you how to get started with the AWS CLI. Make sure you have the AWS CLI installed and configured with an AWS access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.
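To make this concrete, here is a minimal sketch of an identity policy statement that grants iam:PassRole. The account ID 123456789012 is a placeholder, and the role name matches the MaintenanceWindowRole you will create later in this post:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::123456789012:role/MaintenanceWindowRole"
  }
}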

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured policy that you can use for the new role and it is called AmazonEC2RoleforSSM.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM, using the trust policy you just created. (You will attach the AmazonEC2RoleforSSM managed policy in the next step.) If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new instance or using an existing one, take one of the following steps:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.
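As a quick sketch of that check, the following commands are one way to confirm the agent is present and healthy. This assumes an Amazon Linux AMI from the 2017.09 era, which uses the Upstart init system:

$ sudo status amazon-ssm-agent
$ tail -n 20 /var/log/amazon/ssm/amazon-ssm-agent.log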

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole, using the trust policy you just created. (You will attach the AmazonSSMMaintenanceWindowRole managed policy in the next step.) If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonSSMMaintenanceWindowRole) to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you can determine the approved patches that will be included in your patch baseline. In this example, you add all Critical severity patches to the patch baseline as soon as they are released, by setting the Auto approval delay to 0 days. By setting the Auto approval delay to 2 days, you add to this patch baseline the Important, Medium, and Low severity patches two days after they are released.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }

C. Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.
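As a hedged reading of the example schedule above, here is how its fields break down, assuming the seconds-first field layout the example appears to use:

# cron(Seconds Minutes Hours Day-of-month Month Day-of-week Year)
cron(0 0 22 ? * SAT *)    # 0 sec, 0 min, hour 22 -> 10:00 P.M. UTC every Saturday, any year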

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command (shown after this list) includes the following options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. If your maintenance window has expired, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}

D. Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns, in JSON format, the number of instances that are compliant and the number that are noncompliant in each compliance category.
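For reference, the output is shaped roughly like the following abridged sketch; the counts here are illustrative placeholders, not real results:

{
    "ComplianceSummaryItems": [
        {
            "ComplianceType": "Patch",
            "CompliantSummary": { "CompliantCount": 1 },
            "NonCompliantSummary": { "NonCompliantCount": 0 }
        }
    ]
}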

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

HackSpace magazine 4: the wearables issue

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-4-wearables/

Big things are afoot in the world of HackSpace magazine! This month we’re running our first special issue, with wearables projects throughout the magazine. Moreover, we’re giving away our first subscription gift free to all 12-month print subscribers. Lastly, and most importantly, we’ve made the cover EXTRA SHINY!

HackSpace magazine issue 4 cover

Prepare your eyeballs — it’s HackSpace magazine issue 4!

Wearables

In this issue, we’re taking an in-depth look at wearable tech. Not Fitbits or Apple Watches — we’re talking stuff you can make yourself, from projects that take a couple of hours to put together, to the huge, inspiring builds that are bringing technology to the runway. If you like wearing clothes and you like using your brain to make things better, then you’ll love this feature.

We’re continuing our obsession with Nixie tubes, with the brilliant Time-To-Go-Clock – Trump edition. This ingenious bit of kit uses obsolete Russian electronics to count down the time until the end of the 45th president’s term in office. However, you can also program it to tell the time left to any predictable event, such as the deadline for your tax return or essay submission, or the date England gets knocked out of the World Cup.

HackSpace magazine page 08
HackSpace magazine page 70
HackSpace magazine issue 4 page 98

We’re also talking to Dr Lucy Rogers — NASA alumna, Robot Wars judge, and fellow of the Institution of Mechanical Engineers — about the difference between making as a hobby and as a job, and about why we need the Guild of Makers. Plus, issue 4 has a teeny boat, the most beautiful Raspberry Pi cases you’ve ever seen, and it explores the results of what happens when you put a bunch of hardware hackers together in a French chateau — sacré bleu!

Tutorials

As always, we’ve got more how-tos than you can shake a soldering iron at. Fittingly for the current climate here in the UK, there’s a hot water monitor, which shows you how long you have before your morning shower turns cold, and an Internet of Tea project to summon a cuppa from your kettle via the web. Perhaps not so fittingly, there’s also an ESP8266 project for monitoring a solar power station online. Readers in the southern hemisphere, we’ll leave that one for you — we haven’t seen the sun here for months!

And there’s more!

We’re super happy to say that all our 12-month print subscribers have been sent an Adafruit Circuit Playground Express with this new issue:

Adafruit Circuit Playground Express HackSpace

This gadget was developed primarily with wearables in mind and comes with all sorts of in-built functionality, so subscribers can get cracking with their latest wearable project today! If you’re not a 12-month print subscriber, you’ll miss out, so subscribe here to get your magazine and your device, and let us know what you’ll make.

The post HackSpace magazine 4: the wearables issue appeared first on Raspberry Pi.

Backblaze and GDPR

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/gdpr-compliance/

GDPR General Data Protection Regulation

Over the next few months the noise over GDPR will finally reach a crescendo. For the uninitiated, “GDPR” stands for “General Data Protection Regulation” and it goes into effect on May 25th of this year. GDPR is designed to protect how personal information of EU (European Union) citizens is collected, stored, and shared. The regulation should also improve transparency as to how personal information is managed by a business or organization.

Backblaze fully expects to be GDPR compliant when May 25th rolls around and we thought we’d share our experience along the way. We’ll start with this post as an introduction to GDPR. In future posts, we’ll dive into some of the details of the process we went through in meeting the GDPR objectives.

GDPR: A Two Way Street

To ensure we are GDPR compliant, Backblaze has assembled a dedicated internal team, engaged outside counsel in the United Kingdom, and consulted with other tech companies on best practices. While it is a sizable effort on our part, we view this as a waypoint in our ongoing effort to secure and protect our customers’ data and to be transparent in how we work as a company.

In addition to the effort we are putting into complying with the regulation, we think it is important to underscore and promote the idea that data privacy and security is a two-way street. We can spend millions of dollars on protecting the security of our systems, but we can’t stop a bad actor from finding and using your account credentials left on a note stuck to your monitor. We can give our customers tools like two factor authentication and private encryption keys, but it is the partnership with our customers that is the most powerful protection. The same thing goes for your digital privacy — we’ll do our best to protect your information, but we will need your help to do so.

Why GDPR is Important

At the center of GDPR is the protection of Personally Identifiable Information or “PII.” PII is information that can be used on its own or in concert with other information to identify a specific person. This includes obvious data like name, address, and phone number; less obvious data like email address and IP address; and other data such as a credit card number or unique identifiers that can be decoded back to the person.

How Will GDPR Affect You as an Individual

If you are a citizen in the EU, GDPR is designed to protect your private information from being used or shared without your permission. Technically, this only applies when your data is collected, processed, stored or shared outside of the EU, but it’s a good practice to hold all of your service providers to the same standard. For example, when you are deciding to sign up with a service, you should be able to quickly access and understand what personal information is being collected, why it is being collected, and what the business can do with that information. These terms are typically found in “Terms and Conditions” and “Privacy Policy” documents, or perhaps in a written contract you signed before starting to use a given service or product.

Even if you are not a citizen of the EU, GDPR will still affect you. Why? Because nearly every company you deal with, especially online, will have customers that live in the EU. It makes little sense for Backblaze, or any other service provider or vendor, to create a separate set of rules for just EU citizens. In practice, protection of private information should be more accountable and transparent with GDPR.

How Will GDPR Affect You as a Backblaze Customer

Over the coming months Backblaze customers will see changes to our current “Terms and Conditions,” “Privacy Policy,” and to our Backblaze services. While the changes to the Backblaze services are expected to be minimal, the “terms and privacy” documents will change significantly. The changes will include, among other things, the addition of a group of model clauses and related materials. These clauses will be generally consistent across all GDPR-compliant vendors and are meant to be easily understood so that a customer can easily determine how their PII is being collected and used.

Common GDPR Questions:

Here are a few of the more common questions we have heard regarding GDPR.

  1. GDPR will only affect citizens in the EU.
    Answer: The changes that are being made by companies such as Backblaze to comply with GDPR will almost certainly apply to customers from all countries. And that’s a good thing. The protections afforded to EU citizens by GDPR are something all users of our service should benefit from.
  2. After May 25, 2018, a citizen of the EU will not be allowed to use any applications or services that store data outside of the EU.
    Answer: False, no one will stop you as an EU citizen from using the internet-based service you choose. But, you should make sure you know where your data is being collected, processed, and stored. If any of those activities occur outside the EU, make sure the company is following the GDPR guidelines.
  3. My business only has a few EU citizens as customers, so I don’t need to care about GDPR?
    Answer: False, even if you have just one EU citizen as a customer, and you capture, process, or store their PII outside of the EU, you need to comply with GDPR.
  4. Companies can be fined millions of dollars for not complying with GDPR.
    Answer: True, but the regulation allows for companies to be fined up to €20 million or 4% of global annual revenue (whichever is greater) if they don’t comply with GDPR. In practice, the feeling is that such fines will be reserved (at least initially) for egregious violators that ignore or merely give “lip-service” to GDPR.
  5. You’ll be able to tell a company is GDPR compliant because they have a “GDPR Certified” badge on their website.
    Answer: False. There is no official GDPR certification or certification program. Companies that comply with GDPR are expected to follow the articles in the regulation, and it should be clear from the outside looking in that they have done so. For example, their “Terms and Conditions” and “Privacy Policy” should clearly spell out how and why they collect, use, and share your information. At some point a real GDPR certification program may be adopted, but not yet.

For all the hoopla about GDPR, the regulation is reasonably well thought out and addresses a very important issue — people’s privacy online. Creating a best practices document, or in this case a regulation, that companies such as Backblaze can follow is a good idea. The document isn’t perfect, and over the coming years we expect there to be changes. One thing we hope for is that the countries within the EU continue to stand behind one regulation and not fragment the document into multiple versions, each country applying its own. We believe that having multiple different GDPR versions for different EU countries would lead to less protection overall for EU citizens.

In summary, GDPR changes are coming over the next few months. Backblaze has our internal staff and our EU-based legal counsel working diligently to ensure that we will be GDPR compliant by May 25th. We believe that GDPR will have a positive effect in enhancing the protection of personally identifiable information for not only EU citizens, but all of our Backblaze customers.

The post Backblaze and GDPR appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Can Consumers’ Online Data Be Protected?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/can_consumers_o.html

Everything online is hackable. This is true for Equifax’s data and the federal Office of Personnel Management’s data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn’t mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details (and doesn’t care where he gets them from) or the Chinese military looking for specific data from a specific place.

The proper question isn’t whether it’s possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it’s impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we’re also surprised when a lone individual publishes personal data hacked from the infidelity site AshleyMadison.com, or when the North Korean government does the same with personal information in Sony’s network.

Think about all the companies collecting personal data about you (the websites you visit, your smartphone and its apps, your Internet-connected car) and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don’t protect our data online is that it’s cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled “Privacy and the Internet.”

EFF Urges US Copyright Office To Reject Proactive ‘Piracy’ Filters

Post Syndicated from Andy original https://torrentfreak.com/eff-urges-us-copyright-office-to-reject-proactive-piracy-filters-180213/

Faced with millions of individuals consuming unlicensed audiovisual content from a variety of sources, entertainment industry groups have been seeking solutions closer to the roots of the problem.

As widespread site-blocking attempts to tackle ‘pirate’ sites in the background, greater attention has turned to legal platforms that host both licensed and unlicensed content.

Under current legislation, these sites and services can do business relatively comfortably due to the so-called safe harbor provisions of the US Digital Millennium Copyright Act (DMCA) and the European Union Copyright Directive (EUCD).

Both sets of legislation ensure that Internet platforms can avoid being held liable for the actions of others provided they themselves address infringement when they are made aware of specific problems. If a video hosting site has a copy of an unlicensed movie uploaded by a user, for example, it must be removed within a reasonable timeframe upon request from the copyright holder.

However, in both the US and EU there is mounting pressure to make it more difficult for online services to achieve ‘safe harbor’ protections.

Entertainment industry groups believe that platforms use the law to turn a blind eye to infringing content uploaded by users, content that is often monetized before being taken down. With this in mind, copyright holders on both sides of the Atlantic are pressing for more proactive regimes, ones that will see Internet platforms install filtering mechanisms to spot and discard infringing content before it can reach the public.

While such a system would be welcomed by rightsholders, Internet companies are fearful of a future in which they could be held more liable for the infringements of others. They’re supported by the EFF, who yesterday presented a petition to the US Copyright Office urging caution over potential changes to the DMCA.

“As Internet users, website owners, and online entrepreneurs, we urge you to preserve and strengthen the Digital Millennium Copyright Act safe harbors for Internet service providers,” the EFF writes.

“The DMCA safe harbors are key to keeping the Internet open to all. They allow anyone to launch a website, app, or other service without fear of crippling liability for copyright infringement by users.”

It is clear that pressure to introduce mandatory filtering is a concern to the EFF. Filters are blunt instruments that cannot fathom the intricacies of fair use and are liable to stifle free speech and stymie innovation, they argue.

“Major media and entertainment companies and their surrogates want Congress to replace today’s DMCA with a new law that would require websites and Internet services to use automated filtering to enforce copyrights.

“Systems like these, no matter how sophisticated, cannot accurately determine the copyright status of a work, nor whether a use is licensed, a fair use, or otherwise non-infringing. Simply put, automated filters censor lawful and important speech,” the EFF warns.

YouTube already has its own content filtering system in place, although its introduction was voluntary and doesn’t affect the company’s safe harbor protections.

ContentID is able to detect the nature of some content uploaded by users and give copyright holders a chance to remove or monetize it. The company says that the majority of copyright disputes are now handled by ContentID, but the system is not perfect and mistakes are regularly flagged by users and mentioned in the media.

However, ContentID was also very expensive to implement, so expecting smaller companies to deploy something similar on much more limited budgets could be a burden too far, the EFF warns.

“What’s more, even deeply flawed filters are prohibitively expensive for all but the largest Internet services. Requiring all websites to implement filtering would reinforce the market power wielded by today’s large Internet services and allow them to stifle competition. We urge you to preserve effective, usable DMCA safe harbors, and encourage Congress to do the same,” the EFF notes.

The same arguments, for and against, are currently raging in Europe where the EU Commission proposed mandatory upload filtering in 2016. Since then, opposition to the proposals has been fierce, with warnings of potential human rights breaches and conflicts with existing copyright law.

Back in the US, there are additional requirements for a provider to qualify for safe harbor, including having a named designated agent tasked with receiving copyright infringement notifications. This person’s name must be listed on a platform’s website and submitted to the US Copyright Office, which maintains a centralized online directory of designated agents’ contact information.

Under new rules, agents must be re-registered with the Copyright Office every three years, despite that not being a requirement under the DMCA. The EFF is concerned that by simply failing to re-register an agent, an otherwise responsible website could lose its safe harbor protections, even if the agent’s details have remained the same.

“We’re concerned that the new requirement will particularly disadvantage small and nonprofit websites. We ask you to reconsider this rule,” the EFF concludes.

The EFF’s letter to the Copyright Office can be found here.


Early Challenges: Making Critical Hires

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/early-challenges-making-critical-hires/


In 2009, Google disclosed that they had 400 recruiters on staff working to hire nearly 10,000 people. Someday, that might be your challenge, but most companies in their early days are looking to hire a handful of people — the right people — each year. Assuming you are closer to startup stage than Google stage, let’s look at who you need to hire, when to hire them, where to find them (and how to help them find you), and how to get them to join your company.

Who Should Be Your First Hires

In later stage companies, the roles in the company have been well fleshed out, don’t change often, and each role can be segmented to focus on a specific area. A large company may have an entire department focused on just cubicle layout; at a smaller company you may not have a single person whose actual job encompasses all of facilities. At Backblaze, our CTO has a passion and knack for facilities and mostly led that charge. Also, the needs of a smaller company are quick to change. One of our first hires was a QA person, Sean, who ended up being 100% focused on data center infrastructure. In the early stage, things can shift quite a bit and you need people that are broadly capable, flexible, and most of all willing to pitch in where needed.

That said, there are times you may need an expert. At a previous company we hired Jon, a PhD in Bayesian statistics, because we needed algorithmic analysis for spam fighting. However, even that hire was not only able and willing to do the math but also to code, and didn’t just focus on Bayesian statistics but explored a plethora of spam-fighting options.

When To Hire

If you’ve raised a lot of cash and are willing to burn it with mistakes, you can guess at all the roles you might need and start hiring for them. No judgement: that’s a reasonable strategy if you’re cash-rich and time-poor.

If your cash is limited, try to see what you and your team are already doing and then hire people to take those jobs. It may sound counterintuitive, but if you’re already doing it, presumably it needs to be done, you have a good sense of the type of skills required to do it, and you can bring someone on board and get them up to speed quickly. That then frees you up to focus on tasks that can’t be done by someone else. At Backblaze, I ran marketing internally for years before hiring a VP of Marketing, making it easier for me to know what we needed. Once I was hiring, my primary goal was to find someone I could trust to take that role completely off of me so I could focus solely on my CEO duties.

Where To Find the Right People

Finding great people is always difficult, particularly when the skillsets you’re looking for are in high demand at larger companies with lots of cash and cachet. You, however, have one massive advantage: you need to hire 5 people, not 5,000.

People You Worked With

The absolute best people to hire are those you’ve worked with before and already know are good in a work situation. Consider your last job, the one before, and the one before that. A significant number of the people we recruited at Backblaze came from our previous startup MailFrontier. We knew what they could do and how they would fit into the culture, and they knew us and thus could quickly meld into the environment. If you didn’t have a previous job, consider people you went to school with or perhaps individuals with whom you’ve done projects previously.

People You Know

Hiring friends, family, and others can be risky, but should be considered. Sometimes a friend can be a “great buddy,” but is not able to do the job or isn’t a good fit for the organization. Having to let go of someone who is a friend or family member can be rough. Have the conversation up front with them about that possibility, so you have the ability to stay friends if the position doesn’t work out. Having said that, if you get along with someone as a friend, that’s one critical component of succeeding together at work. At Backblaze we’ve successfully hired a number of people who were friends of someone in the organization.

Friends Of People You Know

Your network is likely larger than you imagine. Your employees, investors, advisors, spouses, friends, and other folks all know people who might be a great fit for you. Make sure they know the roles you’re hiring for and ask them if they know anyone that would fit. Search LinkedIn for the titles you’re looking for and see who comes up; if they’re a 2nd degree connection, ask your connection for an introduction.

People You Know About

Sometimes the person you want isn’t someone anyone knows, but you may have read something they wrote, used a product they’ve built, or seen a video of a presentation they gave. Reach out. You may get a great hire: worst case, you’ll let them know they were appreciated, and make them aware of your organization.

Other Places to Find People

There are a million other places to find people, including job sites, community groups, Facebook/Twitter, GitHub, and more. Consider where the people you’re looking for are likely to congregate online and in person.

A Comment on Diversity

Hiring “People You Know” can often result in “Hiring People Like You” with the same workplace experiences, culture, background, and perceptions. Some studies have shown [1, 2, 3, 4] that homogeneous groups deliver faster, while heterogeneous groups are more creative. Also, “Hiring People Like You” often propagates the lack of women and minorities in tech and leadership positions in general. When looking for people you know, take care not to discount those who don’t share your cultural background.

Helping People To Find You

Reaching out proactively to people is the most direct way to find someone, but you want potential hires coming to you as well. To do this, they have to a) be aware of you, b) know you have a role they’re interested in, and c) think they would want to work there. Let’s tackle a) and b) first.

Your Blog

I started writing our blog before we launched the product and talked about anything I found interesting related to our space. For several years now our team has owned the content on the blog and in 2017 over 1.5 million people read it. Each time we have a position open it’s published to the blog. If someone finds reading about backup and storage interesting, perhaps they’d want to dig in deeper from the inside. Many of the people we’ve recruited have mentioned reading the blog as either how they found us or as a factor in why they wanted to work here.
[BTW, this is Gleb’s 200th post on Backblaze’s blog. The first was in 2008. — Editor]

Your Email List

In addition to the emails our blog subscribers receive, we send regular emails to our customers, partners, and prospects. These are largely focused on content we think is directly useful or interesting for them. However, once every few months we include a small mention that we’re hiring, and the positions we’re looking for. Often a small blurb is all you need to capture the imagination of people who might find the jobs interesting or who know someone who fits the bill.

Your Social Involvement

Whether it’s Twitter or Facebook, Hacker News or Slashdot, your potential hires are engaging in various communities. Being socially involved helps make people aware of you, reminds them of you when they’re considering a job, and paints a picture of what working with you and your company would be like. Adam was in a Reddit thread where we were discussing our Storage Pods, and that interaction was ultimately part of the reason he left Apple to come to Backblaze.

Convincing People To Join

Once you’ve found someone or they’ve found you, how do you convince them to join? They may be currently employed, have other offers, or have to relocate. Again, while the biggest companies have a number of advantages, you might have more unique advantages than you realize.

Why Should They Join You

Here is a set of items you may be able to offer that larger organizations might not:

Role: Consider the strengths of the role. Perhaps it will have broader scope? More visibility at the executive level? No micromanagement? Ability to take risks? Option to create their own role?

Compensation: In addition to salary, will their options potentially be worth more since they’re getting in early? Can they trade-off salary for more options? Do they get option refreshes?

Benefits: In addition to healthcare, food, and 401(k) plans, are there unique benefits of your company? One company I knew took the entire team for a one-month working retreat abroad each year.

Location: Most people prefer to work close to home. If you’re located outside of the San Francisco Bay Area, you might be at a disadvantage for not being in the heart of tech. But if you find employees close to you, you’ve got a huge advantage. Sometimes it’s micro; even in the Bay Area the difference of 5 miles can save 20 minutes each way every day. We located the Backblaze headquarters in San Mateo, a middle-ground that made it accessible to those coming from San Jose and San Francisco. We also chose a downtown location near a train, restaurants, and cafes: all to make it easier and more pleasant. Also, are you flexible in letting your employees work remotely? Our systems administrator Elliott is about to embark on a long-term cross-country journey working from an RV.

Environment: Open office, cubicle, cafe, work-from-home? Loud/quiet? Social or focused? 24×7 or work-life balance? Different environments appeal to different people.

Team: Who will they be working with? A company with 100,000 people might have 100 brilliant ones you’d want to work with, but ultimately we work with our core team. Who will your prospective hires be working with?

Market: Some people are passionate about gaming, others biotech, still others food. The market you’re targeting will get different people excited.

Product: Have an amazing product people love? Highlight that. If you’re lucky, your potential hire is already a fan.

Mission: Curing cancer, making people happy, and other company missions inspire people to strive to be part of the journey. Our mission is to make storing data astonishingly easy and low-cost. If you care about data, information, knowledge, and progress, our mission helps drive all of them.

Culture: I left this for last, but believe it’s the most important. What is the culture of your company? Finding people who want to work in the culture of your organization is critical. If they like the culture, they’ll fit and continue it. We’ve worked hard to build a culture that’s collaborative, friendly, supportive, and open; one in which people like coming to work. For example, the five founders started with (and still have) the same compensation and equity. That started a culture of “we’re all in this together.” Build a culture that will attract the people you want, and convey what the culture is.

Writing The Job Description

Most job descriptions focus on all the requirements the candidate must meet. While important to communicate, the job description should first sell the job. Why would the appropriate candidate want the job? Then share some of the requirements you think are critical. Remember that people read not just what you say but how you say it. Try to write in a way that conveys what it is like to actually be at the company. Ahin, our VP of Marketing, said the job description itself was one of the things that attracted him to the company.

Orchestrating Interviews

Much can be said about interviewing well. I’m just going to say this: make sure that everyone who is interviewing knows that their job is not only to evaluate the candidate, but give them a sense of the culture, and sell them on the company. At Backblaze, we often have one person interview core prospects solely for company/culture fit.

Onboarding

Hiring success shouldn’t be defined by finding and hiring the right person, but instead by the right person being successful and happy within the organization. Ensure someone (usually their manager) provides them guidance on what they should be concentrating on doing during their first day, first week, and thereafter. Giving new employees opportunities and guidance so that they can achieve early wins and feel socially integrated into the company does wonders for bringing people on board smoothly.

In Closing

Our Director of Production Systems, Chris, said to me the other day that he looks for companies where he can work on “interesting problems with nice people.” I’m hoping you’ll find your own version of that and find this post useful in looking for your early and critical hires.

Of course, I’d be remiss if I didn’t say, if you know of anyone looking for a place with “interesting problems with nice people,” Backblaze is hiring. 😉

The post Early Challenges: Making Critical Hires appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hacker House’s Zero W–powered automated gardener

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/hacker-house-automated-gardener/

Are the plants in your home or office looking somewhat neglected? Then build an automated gardener using a Raspberry Pi Zero W, with help from the team at Hacker House.

Make a Raspberry Pi Automated Gardener

See how we built it, including our materials, code, and supplemental instructions, on Hackster.io: https://www.hackster.io/hackerhouse/automated-indoor-gardener-a90907. With how busy our lives are, it’s sometimes easy to forget to pay a little attention to your thirsty indoor plants until it’s too late and you’re left with a crusty pile of yellow carcasses.

Building an automated gardener

Tired of their plants looking a little too ‘crispy’, Hacker House have created an automated gardener using a Raspberry Pi Zero W alongside some 3D-printed parts, a 5V USB grow light, and a peristaltic pump.


They designed and 3D printed a PLA casing for the project, allowing enough space within for the Raspberry Pi Zero W, the pump, and the added electronics including soldered wiring and two N-channel power MOSFETs. The MOSFETs serve to switch the light and the pump on and off.


Due to the amount of power the light and pump need, the team replaced the Pi’s standard micro USB power supply with a 12V switching supply.

Coding an automated gardener

All the code for the project — a fairly basic Python script — is on the Hacker House GitHub repository. To fit it to your requirements, you may need to edit a few lines of the code, and Hacker House provides information on how to do this. You can also find more details of the build on the hackster.io project page.
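For a flavor of what such a script involves, here is a minimal sketch in the same spirit, assuming the light and pump MOSFET gates are wired to two GPIO pins; the pin numbers and timings are illustrative guesses, not Hacker House’s actual values.

```python
# Minimal day/night cycle sketch for a Pi-based gardener. Pin numbers,
# timings, and wiring are illustrative assumptions, not the Hacker House
# values; adjust them to match your build.
import time
import RPi.GPIO as GPIO

LIGHT_PIN = 17  # MOSFET gate switching the grow light (assumed pin)
PUMP_PIN = 27   # MOSFET gate switching the peristaltic pump (assumed pin)

LIGHT_SECONDS = 12 * 3600  # hours of light per day
PUMP_SECONDS = 10          # pump run time per watering
DAY_SECONDS = 24 * 3600

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(PUMP_PIN, GPIO.OUT, initial=GPIO.LOW)

def water():
    """Run the pump briefly to water the plant."""
    GPIO.output(PUMP_PIN, GPIO.HIGH)
    time.sleep(PUMP_SECONDS)
    GPIO.output(PUMP_PIN, GPIO.LOW)

try:
    while True:
        GPIO.output(LIGHT_PIN, GPIO.HIGH)  # start the "day": light on
        water()                            # water once per cycle
        time.sleep(LIGHT_SECONDS)
        GPIO.output(LIGHT_PIN, GPIO.LOW)   # "night": light off
        time.sleep(DAY_SECONDS - LIGHT_SECONDS)
finally:
    GPIO.cleanup()  # release the pins on exit
```

A real deployment would anchor the schedule to wall-clock time (with cron or a systemd timer, say) rather than relative sleeps, but the switching logic is the same.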


While the project runs with preset timings, there’s no reason why you couldn’t upgrade it to be app-based, for example to set a watering schedule when you’re away on holiday.

To see more from the Hacker House team, be sure to follow them on YouTube. You can also check out some of their previous Raspberry Pi projects featured on our blog, such as the smartphone-connected door lock and gesture-controlled holographic visualiser.

Raspberry Pi and your home garden

Raspberry Pis make great babysitters for your favourite plants, both inside and outside your home. Here at Pi Towers, we have Bert, our Slack- and Twitter-connected potted plant who reminds us when he’s thirsty and in need of water.

[Embedded tweet from Bert: “I’m good. There’s plenty to drink!”]

And outside of the office, we’ve seen plenty of your vegetation-focused projects using Raspberry Pi for planting, monitoring or, well, commenting on social and political events within the media.

If you use a Raspberry Pi within your home gardening projects, we’d love to see how you’ve done it. So be sure to share a link with us either in the comments below, or via our social media channels.


The post Hacker House’s Zero W–powered automated gardener appeared first on Raspberry Pi.

Pirate Site Blockades Enter Germany With Kinox.to as First Target

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-site-blockades-enter-germany-with-kinox-to-as-first-target-180213/

Website blocking has become one of the leading anti-piracy mechanisms of recent years.

It is particularly prevalent across Europe, where thousands of sites are blocked by ISPs following court orders.

This week, these blocking efforts also reached Germany. Following a provisional injunction issued by the federal court in Munich, Internet provider Vodafone must block access to the popular streaming portal Kinox.to.

The injunction was issued on behalf of the German film production and distribution company Constantin Film. The company complained that Kinox facilitates copyright infringement and cited a recent ruling from the European Court of Justice in support of its request, Golem reports.

While these types of blockades are common in Europe, they’re a new sight in Germany. Vodafone users who attempt to access the Kinox site will now be welcomed with a blocking notification instead.

“This portal is temporarily unavailable due to a copyright claim,” it reads, translated from German.


The Kinox streaming site has been a thorn in the side of German authorities and copyright holders for a long time. Last year, one of the site’s admins was detained in Kosovo after a three-year manhunt, but despite these and other actions, the site remains online.

With the blocking efforts, Constantin Film hopes to make it harder for people to access the site, although this measure is also limited.

For now, it seems to be a simple DNS blockade, which means that people can bypass it relatively easily by switching to a free alternative DNS provider such as Google DNS or OpenDNS.
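To see why, note that a DNS blockade only changes the answers the ISP’s resolver gives; it does not make the server itself unreachable. The following sketch, which assumes the dnspython library and uses a placeholder domain, compares answers from the system’s default resolver with those from Google’s public resolver.

```python
# Sketch: detect a DNS-level block by comparing the default resolver's
# answer with Google's public resolver. Requires dnspython
# (pip install dnspython); the domain is a placeholder.
import dns.exception
import dns.resolver

DOMAIN = "example.com"  # placeholder for the site being checked

default_resolver = dns.resolver.Resolver()  # uses the system's (ISP's) DNS

google_resolver = dns.resolver.Resolver(configure=False)
google_resolver.nameservers = ["8.8.8.8"]   # Google public DNS

for label, resolver in (("default", default_resolver), ("8.8.8.8", google_resolver)):
    try:
        answers = resolver.resolve(DOMAIN, "A")
        print(label, [rr.to_text() for rr in answers])
    except dns.exception.DNSException as exc:
        print(label, "lookup failed:", exc)

# If the default resolver fails (or returns a block-page address) while
# 8.8.8.8 returns the site's real records, the block is DNS-only and
# switching resolvers bypasses it.
```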

And there are other workarounds as well, as operators of Kinox point out in a message on their homepage.

“Vodafone users: Use the public Google DNS server 8.8.8.8 and the .TO domain will work again! Otherwise, a VPN or the free Tor Browser can be used!” they write.

While the measure may not be foolproof, the current order is certainly significant. Until now, German courts had denied similar blocking requests on various grounds. This means that more blocking efforts may be on the horizon.


Troubleshooting event publishing issues in Amazon SES

Post Syndicated from Dustin Taylor original https://aws.amazon.com/blogs/ses/troubleshooting-event-publishing-issues-in-amazon-ses/

Over the past year, we’ve released several features that make it easier to track the metrics that are associated with your Amazon SES account. The first of these features, launched in November of last year, was event publishing.

Initially, event publishing let you capture basic metrics related to your email sending and publish them to other AWS services, such as Amazon CloudWatch and Amazon Kinesis Data Firehose. Some examples of these basic metrics include the number of emails that were sent and delivered, as well as the number that bounced or received complaints. A few months ago, we expanded this feature by adding engagement metrics—specifically, information about the number of emails that your customers opened or engaged with by clicking links.

As a former Cloud Support Engineer, I’ve seen Amazon SES customers do some amazing things with event publishing, but I’ve also seen some common issues. In this article, we look at some of these issues, and discuss the steps you can take to resolve them.

Before we begin

This post assumes that your Amazon SES account is already out of the sandbox, that you’ve verified an identity (such as an email address or domain), and that you have the necessary permissions to use Amazon SES and the service that you’ll publish event data to (such as Amazon SNS, CloudWatch, or Kinesis Data Firehose).

We also assume that you’re familiar with the process of creating configuration sets and specifying event destinations for those configuration sets. For more information, see Using Amazon SES Configuration Sets in the Amazon SES Developer Guide.

Amazon SNS event destinations

If you want to receive notifications when events occur—such as when recipients click a link in an email, or when they report an email as spam—you can use Amazon SNS as an event destination.

Occasionally, customers ask us why they’re not receiving notifications when they use an Amazon SNS topic as an event destination. One of the most common reasons for this issue is that they haven’t configured subscriptions for their Amazon SNS topic yet.

A single topic in Amazon SNS can have one or more subscriptions. When you subscribe to a topic, you tell that topic which endpoints (such as email addresses or mobile phone numbers) to contact when it receives a notification. If you haven’t set up any subscriptions, nothing will happen when an email event occurs.
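Creating a subscription is a single API call. As a minimal illustration, here is a boto3 sketch; the topic ARN and email address are placeholders.

```python
# Sketch: subscribing an email endpoint to the SNS topic that receives
# SES events. The topic ARN and address are placeholders.
import boto3

sns = boto3.client("sns")

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:ses-events",  # placeholder
    Protocol="email",               # could also be sqs, lambda, https, ...
    Endpoint="alerts@example.com",  # placeholder address
)
# Note: the endpoint owner must confirm the subscription before any
# notifications are delivered.
```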

For more information about setting up topics and subscriptions, see Getting Started in the Amazon SNS Developer Guide. For information about publishing Amazon SES events to Amazon SNS topics, see Set Up an Amazon SNS Event Destination for Amazon SES Event Publishing in the Amazon SES Developer Guide.

Kinesis Data Firehose event destinations

If you want to store your Amazon SES event data for the long term, choose Amazon Kinesis Data Firehose as a destination for Amazon SES events. With Kinesis Data Firehose, you can stream data to Amazon S3 or Amazon Redshift for storage and analysis.

The process of setting up Kinesis Data Firehose as an event destination is similar to the process for setting up Amazon SNS: you choose the types of events (such as deliveries, opens, clicks, or bounces) that you want to export, and the name of the Kinesis Data Firehose stream that you want to export to. However, there’s one important difference. When you set up a Kinesis Data Firehose event destination, you must also choose the IAM role that Amazon SES uses to send event data to Kinesis Data Firehose.

When you set up the Kinesis Data Firehose event destination, you can choose to have Amazon SES create the IAM role for you automatically. For many users, this is the best solution—it ensures that the IAM role has the appropriate permissions to move event data from Amazon SES to Kinesis Data Firehose.

Customers occasionally run into issues with the Kinesis Data Firehose event destination when they use an existing IAM role. If you use an existing IAM role, or create a new role for this purpose, make sure that the role includes the firehose:PutRecord and firehose:PutRecordBatch permissions. If the role doesn’t include these permissions, then the Amazon SES event data isn’t published to Kinesis Data Firehose. For more information, see Controlling Access with Amazon Kinesis Data Firehose in the Amazon Kinesis Data Firehose Developer Guide.
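As a sketch of what granting those permissions might look like with boto3, the following attaches an inline policy to an existing role; the role name, policy name, and delivery stream ARN are placeholders.

```python
# Sketch: granting an existing IAM role the two Firehose permissions SES
# needs. Role name, policy name, and stream ARN are placeholders.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
        "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/ses-events",
    }],
}

iam.put_role_policy(
    RoleName="ses-event-publishing-role",  # placeholder role
    PolicyName="ses-firehose-put-records",
    PolicyDocument=json.dumps(policy),
)
```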

CloudWatch event destinations

By publishing your Amazon SES event data to Amazon CloudWatch, you can create dashboards that track your sending statistics in real time, as well as alarms that notify you when your event metrics reach certain thresholds.

The amount that you’re charged for using CloudWatch is based on several factors, including the number of metrics you use. In order to give you more control over the specific metrics you send to CloudWatch—and to help you avoid unexpected charges—you can limit the email sending events that are sent to CloudWatch.

When you choose CloudWatch as an event destination, you must choose a value source. The value source can be one of three options: a message tag, a link tag, or an email header. After you choose a value source, you then specify a name and a value. When you send an email using a configuration set that refers to a CloudWatch event destination, Amazon SES only sends the metrics for that email to CloudWatch if the email contains the name and value that you specified as the value source. This requirement is commonly overlooked.

For example, assume that you chose Message Tag as the value source, and specified “CategoryId” as the dimension name and “31415” as the dimension value. When you want to send events for an email to CloudWatch, you must specify the name of the configuration set that uses the CloudWatch destination. You must also include a tag in your message. The name of the tag must be “CategoryId” and the value must be “31415”.
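To make this concrete, here is a hedged boto3 sketch of sending a message that satisfies that requirement. The addresses and configuration set name are placeholders, while the tag matches the example above.

```python
# Sketch: sending an email whose metrics flow to the CloudWatch event
# destination described above. Addresses and configuration set name are
# placeholders; the tag matches the example dimension.
import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Hello from SES."}},
    },
    ConfigurationSetName="my-config-set",            # placeholder name
    Tags=[{"Name": "CategoryId", "Value": "31415"}], # the value source
)
```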

For more information about adding tags and email headers to your messages, see Send Email Using Amazon SES Event Publishing in the Amazon SES Developer Guide. For more information about adding tags to links, see Amazon SES Email Sending Metrics FAQs in the Amazon SES Developer Guide.

Troubleshooting event publishing for open and click data

Occasionally, customers ask why they’re not seeing open and click data for their emails. This issue most often occurs when the customer only sends text versions of their emails. Because of the way Amazon SES tracks open and click events, you can only see open and click data for emails that are sent as HTML. For more information about how Amazon SES modifies your emails when you enable open and click tracking, see Amazon SES Email Sending Metrics FAQs in the Amazon SES Developer Guide.

The process that you use to send HTML emails varies based on the email sending method you use. The Code Examples section of the Amazon SES Developer Guide contains examples of several methods of sending email by using the Amazon SES SMTP interface or an AWS SDK. All of the examples in this section include methods for sending HTML (as well as text-only) emails.
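As a minimal sketch, assuming a verified sender and a configuration set with open and click tracking enabled, the following boto3 call sends both an HTML part (which Amazon SES can instrument) and a text fallback; all names are placeholders.

```python
# Sketch: sending a multipart (HTML plus text) message so that SES can
# insert its open and click tracking into the HTML part. Addresses and
# the configuration set name are placeholders.
import boto3

ses = boto3.client("ses")

ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Monthly update"},
        "Body": {
            "Html": {"Data": '<p>Read more <a href="https://example.com">here</a>.</p>'},
            "Text": {"Data": "Read more at https://example.com."},  # fallback
        },
    },
    ConfigurationSetName="my-config-set",  # placeholder
)
```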

If you encounter any issues that weren’t covered in this post, please open a case in the Support Center and we’ll be more than happy to assist.

Hosting Provider Steadfast Maintains DMCA Safe Harbor Defense For Trial

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-maintains-dmca-safe-harbor-defense-for-trial-180212/

Two years ago, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

The company targeted several companies including CDN provider CloudFlare and the Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

The case against Steadfast is getting close to trial, and to start with an advantage, ALS Scan recently asked the court for partial summary judgment, seeking a ruling that the hosting company contributed to copyright infringement and has no safe harbor protection.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. ALS Scan described the site as a repeat offender, as it had been targeted with dozens of DMCA notices, and accused Steadfast of turning a blind eye to the situation.

Steadfast, for its part, fiercely denied the allegations. The hosting provider admitted that it leased servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements other than by shutting down the entire site, which would have been disproportionate in its view.

A few days ago California District Court Judge George Wu ruled on the matter, denying ALS’s motion for summary judgment.

Both sides made sensible arguments on the contributory infringement issue, but it is by no means undisputed that the hosting provider ‘contributed’ to the infringing activities. The court, therefore, left this question open for the jury to determine at trial.

“Ultimately, both sides have raised triable issues of fact with respect to material contribution. As a result, the Court would deny Plaintiff’s Motion,” Judge Wu writes.

ALS also sought summary judgment on the DMCA safe harbor protection issue, but the court denied this request as well. While it’s clear that the hosting company never terminated a customer for repeat infringements, it’s not clear whether it was ever in a situation where it needed to.

The DMCA requires Internet services to implement a meaningful repeat infringer policy, but in this case, Steadfast’s client Imagebam reportedly had a takedown policy of its own, which complicates the issue.

“While the fact Steadfast has never terminated one of its own customers for infringement is potentially damaging to its ability to fit the safe harbor, Plaintiff has not established that Steadfast faced a situation requiring it to terminate one of its users,” Judge Wu writes.

“Even in the present case it is unclear that Steadfast needed to terminate Flixya’s account given Flixya itself had a policy that was arguably successful at removing infringing images from imagebam.com.”

Judge Wu adds that safe harbor defenses are generally left to the jury, and this is what he decided as well.

As a result, ALS’s entire motion for summary judgment is denied. This is good news for Steadfast, which will have its safe harbor defense available at the upcoming trial. However, the company will likely celebrate this win with caution, as the jury still has to make the ultimate decision.

A copy of the court’s order is available here (pdf).


AWS Hot Startups for February 2018: Canva, Figma, InVision

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-for-february-2018-canva-figma-invision/

Note to readers! Starting next month, we will be publishing our monthly Hot Startups blog post on the AWS Startup Blog. Please come check us out.

As visual communication—whether through social media channels like Instagram or white space-heavy product pages—becomes a central part of everyone’s life, accessible design platforms and tools become more and more important in the world of tech. This trend is why we have chosen to spotlight three design-related startups—namely Canva, Figma, and InVision—as our hot startups for the month of February. Please read on to learn more about these design-savvy companies and be sure to check out our full post here.

Canva (Sydney, Australia)

For a long time, creating designs required expensive software, extensive studying, and time spent waiting for feedback from clients or colleagues. With Canva, a graphic design tool that makes creating designs much simpler and accessible, users have the opportunity to design anything and publish anywhere. The platform—which integrates professional design elements, including stock photography, graphic elements, and fonts for users to build designs either entirely from scratch or from thousands of free templates—is available on desktop, iOS, and Android, making it possible to spin up an invitation, poster, or graphic on a smartphone at any time.

To learn more about Canva, read our full interview with CEO Melanie Perkins here.

Figma (San Francisco, CA)

Figma is a cloud-based design platform that empowers designers to communicate and collaborate more effectively. Using recent advancements in WebGL, Figma offers a design tool that doesn’t require users to install any software or special operating systems. It also allows multiple people to work in a file at the same time—a crucial feature.

As the need for new design talent increases, the industry will need plenty of junior designers to keep up with the demand. Figma is prepared to help students by offering their platform for free. Through this, they “hope to give young designers the resources necessary to kick-start their education and eventually, their careers.”

For more about Figma, check out our full interview with CEO Dylan Field here.

InVision (New York, NY)

Founded in 2011 with the goal of helping improve every digital experience in the world, digital product design platform InVision helps users create a streamlined and scalable product design process, build and iterate on prototypes, and collaborate across organizations. The company, which raised a $100 million Series E last November to bring its total funding to $235 million, currently powers the digital product design process at more than 80 percent of the Fortune 100 and at brands like Airbnb, HBO, Netflix, and Uber.

Learn more about InVision here.

Be sure to check out our full post on the AWS Startups blog!

-Tina

How I built a data warehouse using Amazon Redshift and AWS services in record time

Post Syndicated from Stephen Borg original https://aws.amazon.com/blogs/big-data/how-i-built-a-data-warehouse-using-amazon-redshift-and-aws-services-in-record-time/

This is a customer post by Stephen Borg, the Head of Big Data and BI at Cerberus Technologies.

Cerberus Technologies, in their own words: Cerberus is a company founded in 2017 by a team of visionary iGaming veterans. Our mission is simple – to offer the best tech solutions through a data-driven and a customer-first approach, delivering innovative solutions that go against traditional forms of working and process. This mission is based on the solid foundations of reliability, flexibility and security, and we intend to fundamentally change the way iGaming and other industries interact with technology.

Over the years, I have developed and created a number of data warehouses from scratch. Recently, I built a data warehouse for the iGaming industry single-handedly. To do it, I used the power and flexibility of Amazon Redshift and the wider AWS data management ecosystem. In this post, I explain how I was able to build a robust and scalable data warehouse without the large team of experts typically needed.

In two of my recent projects, I ran into challenges when scaling our data warehouse using on-premises infrastructure. Data was growing at many tens of gigabytes per day, and query performance was suffering. Scaling required major capital investment for hardware and software licenses, and also significant operational costs for maintenance and technical staff to keep it running and performing well. Unfortunately, I couldn’t get the resources needed to scale the infrastructure with data growth, and these projects were abandoned. Thanks to cloud data warehousing, the bottlenecks of infrastructure resources, capital expense, and operational costs have been significantly reduced or have gone away entirely. There is no more excuse for allowing obstacles of the past to delay delivering timely insights to decision makers, no matter how much data you have.

With Amazon Redshift and AWS, I delivered a cloud data warehouse to the business very quickly, and with a small team: me. I didn’t have to order hardware or software, and I no longer needed to install, configure, tune, or keep up with patches and version updates. Instead, I easily set up a robust data processing pipeline and we were quickly ingesting and analyzing data. Now, my data warehouse team can be extremely lean, and focus more time on bringing in new data and delivering insights. In this post, I show you the AWS services and the architecture that I used.

Handling data feeds

I have several different data sources that provide everything needed to run the business. The data includes activity from our iGaming platform, social media posts, clickstream data, marketing and campaign performance, and customer support engagements.

To handle the diversity of data feeds, I developed abstract integration applications using Docker that run on Amazon EC2 Container Service (Amazon ECS) and feed data to Amazon Kinesis Data Streams. These data streams can be used for real time analytics. In my system, each record in Kinesis is preprocessed by an AWS Lambda function to cleanse and aggregate information. My system then routes it to be stored where I need on Amazon S3 by Amazon Kinesis Data Firehose. Suppose that you used an on-premises architecture to accomplish the same task. A team of data engineers would be required to maintain and monitor a Kafka cluster, develop applications to stream data, and maintain a Hadoop cluster and the infrastructure underneath it for data storage. With my stream processing architecture, there are no servers to manage, no disk drives to replace, and no service monitoring to write.

Setting up a Kinesis stream can be done with a few clicks, and the same is true for Kinesis Data Firehose. Firehose can be configured to automatically consume data from a Kinesis data stream and write compressed data every N minutes to Amazon S3. When I want to process a Kinesis data stream, it’s very easy to set up a Lambda function to be executed on each message received; I can just set a trigger from the AWS Lambda Management Console.
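As an illustration of the preprocessing step, here is a minimal sketch of a Lambda handler behind a Kinesis trigger; the payload field names are assumptions for illustration, not the actual Cerberus schema.

```python
# Minimal sketch of a Lambda handler on a Kinesis trigger: decode each
# record, drop malformed JSON, and keep only the fields needed downstream.
# The field names are illustrative assumptions about the payload.
import base64
import json

def handler(event, context):
    cleaned = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        try:
            doc = json.loads(payload)
        except ValueError:
            continue  # skip malformed records instead of failing the batch
        cleaned.append({
            "user_id": doc.get("user_id"),       # assumed field names
            "event_type": doc.get("event_type"),
            "ts": doc.get("timestamp"),
        })
    print(f"kept {len(cleaned)} of {len(event['Records'])} records")
    return {"records_out": len(cleaned)}
```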

I also monitor the duration of function execution using Amazon CloudWatch and AWS X-Ray.

Regardless of the format in which I receive data from our partners, I can send it to Kinesis as JSON data using my own formatters. After Firehose writes this to Amazon S3, I have everything in nearly the same structure I received, but compressed, encrypted, and optimized for reading.

This data is automatically crawled by AWS Glue and placed into the AWS Glue Data Catalog. This means that I can immediately query the data directly on S3 using Amazon Athena or through Amazon Redshift Spectrum. Previously, I used Amazon EMR and an Amazon RDS–based metastore in Apache Hive for catalog management. Now I can avoid the complexity of maintaining Hive Metastore catalogs. Glue takes care of high availability and the operations side so that I know that end users can always be productive.

Working with Amazon Athena and Amazon Redshift for analysis

I found Amazon Athena extremely useful out of the box for ad hoc analysis. Our engineers (me) use Athena to understand new datasets that we receive and to understand what transformations will be needed for long-term query efficiency.

For our data analysts and data scientists, we’ve selected Amazon Redshift. Amazon Redshift has proven to be the right tool for us over and over again. It easily processes 20+ million transactions per day, regardless of the footprint of the tables and the type of analytics required by the business. Latency is low and query performance expectations have been more than met. We use Redshift Spectrum for long-term data retention, which enables me to extend the analytic power of Amazon Redshift beyond local data to anything stored in S3, and without requiring me to load any data. Redshift Spectrum gives me the freedom to store data where I want, in the format I want, and have it available for processing when I need it.

To load data directly into Amazon Redshift, I use AWS Data Pipeline to orchestrate data workflows. I create Amazon EMR clusters on an intra-day basis, which I can easily adjust to run more or less frequently as needed throughout the day. EMR clusters are used together with Amazon RDS, Apache Spark 2.0, and S3 storage. The data pipeline application loads ETL configurations from Spring RESTful services hosted on AWS Elastic Beanstalk. The application then loads data from S3 into memory, aggregates and cleans the data, and then writes the final version of the data to Amazon Redshift. This data is then ready to use for analysis. Spark on EMR also helps with recommendations and personalization use cases for various business users, and I find this easy to set up and deliver what users want. Finally, business users use Amazon QuickSight for self-service BI to slice, dice, and visualize the data depending on their requirements.

Each AWS service in this architecture plays its part in saving precious time that’s crucial for delivery and getting different departments in the business on board. I found the services easy to set up and use, and all have proven to be highly reliable in our production environments. When the architecture was in place, scaling out was either completely handled by the service, or a matter of a simple API call, and crucially doesn’t require me to change one line of code. Increasing shards for Kinesis can be done in a minute by editing a stream. Increasing capacity for Lambda functions can be accomplished by editing the megabytes allocated for processing, and concurrency is handled automatically. EMR cluster capacity can easily be increased by changing the master and slave node types in Data Pipeline, or by using Auto Scaling. Lastly, RDS and Amazon Redshift can be easily upgraded without any major tasks to be performed by our team (again, me).
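For example, resharding a Kinesis stream really is a one-call operation; here is a hedged boto3 sketch with a placeholder stream name and target count.

```python
# Sketch: doubling a stream's capacity with a single API call.
# Stream name and shard count are placeholders.
import boto3

kinesis = boto3.client("kinesis")

kinesis.update_shard_count(
    StreamName="igaming-events",    # placeholder stream name
    TargetShardCount=8,             # e.g. up from 4 shards
    ScalingType="UNIFORM_SCALING",  # the only scaling type supported
)
```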

In the end, using AWS services including Kinesis, Lambda, Data Pipeline, and Amazon Redshift allows me to keep my team lean and highly productive. I eliminated the cost and delays of capital infrastructure, as well as the late night and weekend calls for support. I can now give maximum value to the business while keeping operational costs down. My team pushed out an agile and highly responsive data warehouse solution in record time and we can handle changing business requirements rapidly, and quickly adapt to new data and new user requests.


Additional Reading

If you found this post useful, be sure to check out Deploy a Data Warehouse Quickly with Amazon Redshift, Amazon RDS for PostgreSQL and Tableau Server and Top 8 Best Practices for High-Performance ETL Processing Using Amazon Redshift.


About the Author

Stephen Borg is the Head of Big Data and BI at Cerberus Technologies. He has a background in platform software engineering, and first became involved in data warehousing using the typical RDBMS, SQL, ETL, and BI tools. He quickly became passionate about providing insight to help others optimize the business and add personalization to products.