
MP3 Stream Rippers Are Not Illegal Sites, EFF Tells US Government

Post Syndicated from Ernesto original https://torrentfreak.com/mp3-stream-rippers-are-not-illegal-sites-eff-tells-us-government-171021/

Free music is easy to find nowadays. Just head over to YouTube and you can find millions of tracks including many of the most recent releases.

While some artists happily share their work, the major record labels don’t want tracks to leak outside YouTube’s ecosystem. For this reason, they want YouTube-to-MP3 rippers shut down.

Earlier this month, the RIAA sent its overview of “notorious markets” to the Office of the US Trade Representative (USTR), highlighting several of these sites and asking for help.

“The overall popularity of these sites and the staggering volume of traffic it attracts evidences the enormous damage being inflicted on the U.S. record industry,” the RIAA wrote, calling out Mp3juices.cc, Convert2mp3.net, Savefrom.net, Ytmp3.cc, Convertmp3.io, Flvto.biz, and 2conv.com as the most popular offenders.

This position is shared by many other music industry groups. They see stream ripping as the largest piracy threat online. After shutting down YouTube-MP3, they hope to topple other sites as well, ideally with the backing of the US Government.

However, not everyone shares the belief that stream ripping equals copyright infringement.

In a rebuttal, the Electronic Frontier Foundation (EFF) informs the USTR that the RIAA is trying to twist the law in its favor. Not all stream ripping sites are facilitating copyright infringement by definition, the EFF argues.

“RIAA’s discussion of ‘stream-ripping’ websites misstates copyright law. Websites that simply allow users to extract the audio track from a user-selected online video are not ‘illegal sites’ and are not liable for copyright infringement, unless they engage in additional conduct that meets the definition of infringement,” the EFF writes.

Flvto

While some people may use these sites to ‘pirate’ tracks, there are also legitimate purposes, the digital rights group notes. Some creators specifically allow others to download and modify their work, for example, and in other cases ripping can be seen as fair use.

“There exists a vast and growing volume of online video that is licensed for free downloading and modification, or contains audio tracks that are not subject to copyright,” the EFF stresses.

“Moreover, many audio extractions qualify as non-infringing fair uses under copyright. Providing a service that is capable of extracting audio tracks for these lawful purposes is itself lawful, even if some users infringe.”

The fact that these sites generate revenue from advertising doesn’t make them illegal either. While there are some issues that could make a site liable, such as distributing infringing content to third parties, the EFF argues that many of the sites identified by the RIAA are not clearly involved in such activities.

Instead of solely relying on the characterizations of the RIAA, the US Government should judge these sites independently, in accordance with the law.

“USTR must apply U.S. law as it is, not as particular industry organizations wish it to be. Accordingly, it is inappropriate to describe ‘stream-ripping’ sites as engaging in or facilitating infringement. That logic would discourage U.S. firms from providing many forms of useful, lawful technology that processes or interacts with copyrighted work in digital form, to the detriment of U.S. trade,” the EFF concludes.

It is worth highlighting that most sites the RIAA mentioned specifically advertise themselves as YouTube converters. While this violates YouTube’s Terms of Service, something the streaming platform isn’t happy with, it doesn’t automatically classify them as infringing services.

Ideally, the RIAA and other music industry groups would like YouTube to shut these sites down, but if that doesn’t happen, more lawsuits may follow in the future. Then, the claims from both sides can be properly tested in court.

The full EFF response is available here (pdf). In addition to the stream ripping comments, the digital rights group also defends CDN providers such as Cloudflare, reverse proxies, and domain registrars from MPAA and RIAA piracy complaints.


Enabling Two-Factor Authentication For Your Web Application

Post Syndicated from Bozho original https://techblog.bozho.net/enabling-two-factor-authentication-web-application/

It’s almost always a good idea to support two-factor authentication (2FA), especially for back-office systems. 2FA comes in many different forms, including SMS codes, TOTP apps, and hardware tokens.

Enabling them requires a similar flow:

  • The user goes to their profile page (skip this if you want to force 2FA upon registration)
  • Clicks “Enable two-factor authentication”
  • Enters some data to enable the particular 2FA method (phone number, TOTP verification code, etc.)
  • Next time they log in, in addition to the username and password, the login form requests the second factor (verification code) and sends it along with the credentials

I will focus on Google Authenticator, which uses TOTP (time-based one-time passwords) to generate a sequence of verification codes. The idea is that the server and the client application share a secret key. Based on that key and on the current time, both come up with the same code. Of course, clocks are not perfectly synced, so there’s a window of a few codes that the server accepts as valid.
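
To make the shared key and “window of codes” idea concrete, here is a minimal sketch of the raw TOTP check in plain JavaScript (Node.js). It is for illustration only: the Java implementation below delegates all of this to the GoogleAuth library, and the base32 decoding that Google Authenticator applies to the shared key is omitted here.

const crypto = require('crypto');

// One HOTP value for a given counter (RFC 4226 dynamic truncation, 6 digits).
// `secret` is assumed to be the raw shared key bytes (Google Authenticator
// transports it base32-encoded inside the QR code).
function hotp(secret, counter) {
    const buf = Buffer.alloc(8);
    buf.writeBigUInt64BE(BigInt(counter));          // 8-byte big-endian counter
    const hmac = crypto.createHmac('sha1', secret).update(buf).digest();
    const offset = hmac[hmac.length - 1] & 0x0f;    // dynamic truncation offset
    const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1000000;
    return String(code).padStart(6, '0');
}

// TOTP verification: derive the counter from the current time (30-second steps)
// and accept a small window of neighboring codes to tolerate clock drift.
function verifyTotp(secret, submittedCode, windowSize = 1) {
    const counter = Math.floor(Date.now() / 1000 / 30);
    for (let i = -windowSize; i <= windowSize; i++) {
        if (hotp(secret, counter + i) === submittedCode) {
            return true;
        }
    }
    return false;
}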

How to implement that with Java (on the server)? Using the GoogleAuth library. The flow is as follows:

  • The user goes to their profile page
  • Clicks “Enable two-factor authentication”
  • The server generates a secret key, stores it as part of the user profile and returns a URL to a QR code
  • The user scans the QR code with their Google Authenticator app thus creating a new profile in the app
  • The user enters the verification code shown in the app in a field that has appeared together with the QR code and clicks “confirm”
  • The server marks the 2FA as enabled in the user profile
  • If the user doesn’t scan the code or doesn’t complete the verification, the user profile will contain just an orphaned secret key and 2FA won’t be marked as enabled
  • There should also be an option to disable 2FA later from the user profile page

The most important bit from a theoretical point of view here is the sharing of the secret key. The crypto is symmetric, so both sides (the authenticator app and the server) have the same key. It is shared via a QR code that the user scans. If an attacker has control of the user’s machine at that point, the secret can be leaked and the 2FA abused by the attacker as well. But that’s not in the threat model – in other words, if the attacker has access to the user’s machine, the damage is already done anyway.

Upon login, the flow is as follows:

  • The user enters username and password and clicks “Login”
  • Using an AJAX request, the page asks the server whether this email has 2FA enabled
  • If 2FA is not enabled, just submit the username & password form
  • If 2FA is enabled, the login form is not submitted, but instead an additional field is shown to let the user input the verification code from the authenticator app
  • After the user enters the code and presses login, the form can be submitted, either via the same login button or a new “verify” button; alternatively, the verification input and button could be an entirely new screen (hiding the username/password inputs).
  • The server then checks again whether the user has 2FA enabled and, if so, verifies the verification code. If it matches, login is successful. If not, login fails and the user is allowed to re-enter the credentials and the verification code. Note that you can return different responses depending on whether the username/password or the verification code is wrong. You can also attempt the login before even showing the verification code input; that is arguably better, because it doesn’t reveal to a potential attacker that the user has 2FA enabled.

While I’m speaking of username and password, the same applies to any other authentication method. After you get a success confirmation from an OAuth / OpenID Connect / SAML provider, or after you get a token from SecureLogin, you can request the second factor (code).

In code, the above processes look as follows (using Spring MVC; I’ve merged the controller and service layer for brevity. You can replace the @AuthenticationPrincipal bit with your own way of supplying the currently logged-in user details to the controllers). Assuming the methods are in a controller mapped to “/user/”:

@RequestMapping(value = "/init2fa", method = RequestMethod.POST)
@ResponseBody
public String initTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    GoogleAuthenticatorKey googleAuthenticatorKey = googleAuthenticator.createCredentials();
    user.setTwoFactorAuthKey(googleAuthenticatorKey.getKey());
    dao.update(user);
    // account name shown in the authenticator app; assuming the User entity exposes the user's email
    return GoogleAuthenticatorQRGenerator.getOtpAuthURL(GOOGLE_AUTH_ISSUER, user.getEmail(), googleAuthenticatorKey);
}

@RequestMapping(value = "/confirm2fa", method = RequestMethod.POST)
@ResponseBody
public boolean confirmTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token, @RequestParam("code") int code) {
    User user = getLoggedInUser(token);
    boolean result = googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code);
    user.setTwoFactorAuthEnabled(result);
    dao.update(user);
    return result;
}

@RequestMapping(value = "/disable2fa", method = RequestMethod.GET)
@ResponseBody
public void disableTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    user.setTwoFactorAuthKey(null);
    user.setTwoFactorAuthEnabled(false);
    dao.update(user);
}

@RequestMapping(value = "/requires2fa", method = RequestMethod.POST)
@ResponseBody
public boolean login(@RequestParam("email") String email) {
    // TODO consider verifying the password here in order not to reveal that a given user uses 2FA
    return userService.getUserDetailsByEmail(email).isTwoFactorAuthEnabled();
}

On the client side it’s simple AJAX requests to the above methods (sidenote: I kind of feel the term AJAX is no longer trendy, but I don’t know what else to call them. Async? Background? JavaScript?).

$("#two-fa-init").click(function() {
    $.post("/user/init2fa", function(qrImage) {
	$("#two-fa-verification").show();
	$("#two-fa-qr").prepend($('<img>',{id:'qr',src:qrImage}));
	$("#two-fa-init").hide();
    });
});

$("#two-fa-confirm").click(function() {
    var verificationCode = $("#verificationCode").val().replace(/ /g,'')
    $.post("/user/confirm2fa?code=" + verificationCode, function() {
       $("#two-fa-verification").hide();
       $("#two-fa-qr").hide();
       $.notify("Successfully enabled two-factor authentication", "success");
       $("#two-fa-message").html("Successfully enabled");
    });
});

$("#two-fa-disable").click(function() {
    $.post("/user/disable2fa", function(qrImage) {
       window.location.reload();
    });
});

The login form code depends very much on the existing login form you are using, but the point is to call the /requires2fa endpoint with the email (and password) to check whether 2FA is enabled, and then show a verification code input.
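
For illustration, a minimal jQuery sketch of that flow could look like the following. The element IDs (#login-form, #email, #verification-code-row) are assumptions; adapt them to your own markup.

$("#login-form").submit(function(e) {
    var form = $(this);
    // Second pass: the code input is already visible, so let the form submit
    // normally and have the server verify username, password and code together.
    if ($("#verification-code-row").is(":visible")) {
        return;
    }
    e.preventDefault();
    $.post("/user/requires2fa", { email: $("#email").val() }, function(requires2fa) {
        if (requires2fa === true || requires2fa === "true") {
            // 2FA is enabled: reveal the verification code input and wait
            // for the user to press "Login" again.
            $("#verification-code-row").show();
        } else {
            // No 2FA: submit natively, which bypasses this handler.
            form.get(0).submit();
        }
    });
});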

Overall, the implementation of two-factor authentication is simple, and I’d recommend it for most systems where security is more important than simplicity of the user experience.

The post Enabling Two-Factor Authentication For Your Web Application appeared first on Bozho's tech blog.

UK ‘Pirate’ Kodi Box Seller Handed a Suspended Prison Sentence

Post Syndicated from Andy original https://torrentfreak.com/uk-pirate-kodi-box-seller-handed-a-suspended-prison-sentence-171021/

After being raided by police and Trading Standards in 2015, Middlesbrough-based shopkeeper Brian ‘Tomo’ Thompson found himself in the spotlight.

Accused of selling “fully-loaded” Kodi boxes (those with ‘pirate’ addons installed), Thompson continued to protest his innocence.

“All I want to know is whether I am doing anything illegal. I know it’s a gray area but I want it in black and white,” he said last September.

Unlike other cases, where copyright holders took direct action, Thompson was prosecuted by his local council. At the time, he seemed prepared to martyr himself to test the limits of the law.

“This may have to go to the crown court and then it may go all the way to the European court, but I want to make a point with this and I want to make it easier for people to know what is legal and what isn’t,” he said. “I expect it go against me but at least I will know where I stand.”

In an opinion piece not long after this statement, we agreed with Thompson’s sentiment, noting that barring a miracle, the Middlesbrough man would indeed lose his case, probably in short order. But Thompson’s case turned out to be less than straightforward.

Thompson wasn’t charged with straightforward “making available” under the Copyright, Designs and Patents Act. If he had been, there would’ve been no question that he’d broken the law, thanks to a European Court of Justice decision in the BREIN v Filmspeler case earlier this year, which determined that selling fully loaded boxes in the EU is illegal.

Instead, for reasons best known to the prosecution, ‘Tomo’ stood accused of two offenses under section 296ZB of the Copyright, Designs and Patents Act, which deals with devices and services designed to “circumvent technological measures”. It’s a different aspect of copyright law previously applied to cases where encryption has been broken on official products.

“A person commits an offense if he — in the course of a business — sells or lets for hire, any device, product or component which is primarily designed, produced, or adapted for the purpose of enabling or facilitating the circumvention of effective technological measures,” the law reads.

‘Tomo’ in his store

In January this year, Thompson entered his official ‘not guilty’ plea, setting up a potentially fascinating full trial in which we would’ve heard how ‘circumvention of technological measures’ could possibly relate to streaming illicit content from entirely unprotected far-flung sources.

Last month, however, Thompson suddenly had a change of heart, entering guilty pleas against one count of selling and one count of advertising devices for the purpose of enabling or facilitating the circumvention of effective technological measures.

That plea stomped on what could’ve been a really interesting trial, particularly since the Federation Against Copyright Theft’s own lawyer predicted it could be difficult and complex.

As a result, Thompson appeared at Teesside Crown Court on Friday for sentencing. Prosecutor Cameron Crowe said Thompson advertised and sold the ‘pirate’ devices for commercial gain, fully aware that they would be used to access infringing content and premium subscription services.

Crowe said that Thompson made around £40,000 from the devices while potentially costing Sky around £200,000 in lost subscription fees. When Thompson was raided in June 2015, a diary revealed he’d sold 159 devices in the previous four months, sales which generated £17,000 in revenue.

After his arrest, Thompson changed premises and continued to offer the devices for sale on social media.

Passing sentence, Judge Peter Armstrong told the 55-year-old businessman that he’d receive an 18-month prison term, suspended for two years.

“If anyone was under any illusion as to whether such devices as these, fully loaded Kodi boxes, were illegal or not, they can no longer be in any such doubt,” Judge Armstrong told the court, as reported by Gazette Live.

“I’ve come to the conclusion that in all the circumstances an immediate custodial sentence is not called for. But as a warning to others in future, they may not be so lucky.”

Also sentenced Friday was another local seller, Julian Allen, who sold devices to Thompson, among others. He was arrested following raids on his Geeky Kit businesses in 2015 and pleaded guilty this July to using or acquiring criminal property.

But despite making more than £135,000 from selling ‘pirate’ boxes, he too avoided jail, receiving a 21-month prison sentence suspended for two years instead.

While Thompson’s and Allen’s sentences are likely to be portrayed by copyright holders as a landmark moment, the earlier ruling from the European Court of Justice means that selling these kinds of devices for infringing purposes has always been illegal.

Perhaps the big surprise, given the dramatic lead up to both cases, is the relative leniency of their sentences. All that being said, however, a line has been drawn in the sand and other sellers should be aware.


Firefox 57 coming soon: a Quantum leap (Fedora Magazine)

Post Syndicated from corbet original https://lwn.net/Articles/737022/rss

The upcoming Firefox 57 release presents a challenge to distributors, who have to decide when and how to ship a major update that will break a bunch of older extensions. This Fedora Magazine article describes the plan that Fedora has come up with for this transition. “Users probably shouldn’t ‘hold back at FF56 as my favorite extensions don’t work.’ Recall that security fixes only come from new versions, and they’ll all be WebExtension only. The Extended Support Release version will also switch to WebExtensions only at the next release. This date, June 2018, marks the deadline for ESR users to migrate their extensions.”

Derek Woodroffe’s steampunk tentacle hat

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/steampunk-tentacle-hat/

Halloween: that glorious time of year when you’re officially allowed to make your friends jump out of their skin with your pranks. For those among us who enjoy dressing up, Halloween is also the occasion to go all out with costumes. And so, dear reader, we present to you: a steampunk tentacle hat, created by Derek Woodroffe.

Finished tentacle hat

Extreme Electronics

Derek is an engineer who loves all things electronics. He’s part of Extreme Kits, and he runs the website Extreme Electronics. Raspberry Pi Zero-controlled Tesla coils are Derek’s speciality — he’s even been on one of the Royal Institution’s Christmas Lectures with them! Skip ahead to 15:06 in this video to see Derek in action:

Let There Be Light! // 2016 CHRISTMAS LECTURES with Saiful Islam – Lecture 1

The first Lecture from Professor Saiful Islam’s 2016 series of CHRISTMAS LECTURES, ‘Supercharged: Fuelling the future’. Watch all three Lectures here: http://richannel.org/christmas-lectures 2016 marked the 80th anniversary since the BBC first broadcast the Christmas Lectures on TV. To celebrate, chemist Professor Saiful Islam explores a subject that the lectures’ founder – Michael Faraday – addressed in the very first Christmas Lectures – energy.

Wearables

Wearables are electronically augmented items you can wear. They might take the form of spy eyeglasses, clothes with integrated sensors, or, in this case, headgear adorned with mechanised tentacles.

Why did Derek make this? We’re not entirely sure, but we suspect he’s a fan of the Cthulhu mythos. In any case, we were a little astounded by his project. This is how we reacted when Derek tweeted us about it:

Raspberry Pi on Twitter

@ExtElec @extkits This is beyond incredible and completely unexpected.

In fact, we had to recover from a fit of laughter before we actually managed to type this answer.

Making a steampunk tentacle hat

Derek made the ‘skeleton’ of each tentacle out of a net curtain spring, acrylic rings, and four lengths of fishing line. Two servomotors connect to two ends of fishing line each, and pull them to move the tentacle.

Net curtain spring and acrylic rings forming a mechanical tentacle skeleton - steampunk tentacle hat by Derek Woodroffe
Two servos connecting to lengths of fishing line - steampunk tentacle hat by Derek Woodroffe

Then he covered the tentacles with nylon stockings and liquid latex, glued suckers cut out of MDF onto them, and mounted them on an acrylic base. The eight motors connect to a Raspberry Pi via an I2C 8-port PWM controller board.

Artificial tentacles - steampunk tentacle hat by Derek Woodroffe
Eight servomotors connected to a controller board and a Raspberry Pi - steampunk tentacle hat by Derek Woodroffe

The Pi makes the servos pull the tentacles so that they move in sine waves in both the x and y directions, seemingly of their own accord. Derek cut open the top of a hat to insert the mounted tentacles, and he used more liquid latex to give the whole thing a slimy-looking finish.

steampunk tentacle hat by Derek Woodroffe

Iä! Iä! Cthulhu fhtagn!

You can read more about Derek’s steampunk tentacle hat here. He will be at the Beeston Raspberry Jam in November to show off his build, so if you’re in the Nottingham area, why not drop by?

Wearables for Halloween

This build is already pretty creepy, but just imagine it with a sensor- or camera-powered upgrade that makes the tentacles reach for people nearby. You’d have nightmare fodder for weeks.

With the help of the Raspberry Pi, any Halloween costume can be taken to the next level. How could Pi technology help you to win that coveted ‘Scariest costume’ prize this year? Tell us your ideas in the comments, and be sure to share pictures of you in your get-up with us on Twitter, Facebook, or Instagram.

The post Derek Woodroffe’s steampunk tentacle hat appeared first on Raspberry Pi.

Backing Up Linux to Backblaze B2 with Duplicity and Restic

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

In this post, we’ll discuss two popular command line and open-source programs: one older, Duplicity, and a new player, Restic.

Old School vs. New School

We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Old School (Duplicity)

In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks used are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you undoubtedly are familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.

It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each subsequent incremental backup contains only what has changed since the previous backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

Duplicity is available on many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions, including Ubuntu, Debian, and Fedora. It can also be used on Windows under Cygwin.

We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.

New School (Restic)

With the arrival of non-sequential storage media, such as disk drives, and new ideas such as deduplication, comes the new school approach, which is used by Restic. Data can be written and changed anywhere on the storage medium. This efficiency comes largely through the use of deduplication. Deduplication is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, greatly increasing storage efficiency and flexibility.
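
As a toy illustration of the idea, the following Node.js sketch splits a file into fixed-size chunks, hashes each chunk, and stores a chunk only if its hash hasn’t been seen before. Restic’s real approach is more sophisticated (content-defined chunking plus encryption); this is only meant to show why repeated data costs almost nothing to back up again.

const crypto = require('crypto');
const fs = require('fs');

const CHUNK_SIZE = 1024 * 1024;   // 1 MiB chunks (arbitrary for this example)
const store = new Map();          // hash -> chunk; stands in for the repository

function backupFile(path) {
    const data = fs.readFileSync(path);
    const chunkHashes = [];
    for (let offset = 0; offset < data.length; offset += CHUNK_SIZE) {
        const chunk = data.slice(offset, offset + CHUNK_SIZE);
        const hash = crypto.createHash('sha256').update(chunk).digest('hex');
        if (!store.has(hash)) {
            store.set(hash, chunk);   // new data: store (upload) the chunk
        }
        chunkHashes.push(hash);       // duplicate data: just reference it
    }
    return chunkHashes;               // the file is recorded as a list of chunk references
}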

Restic is a recently available multi-platform command line backup software program that is designed to be fast, efficient, and secure. Restic supports a variety of backends for storing backups, including a local server, SFTP server, HTTP Rest server, and a number of cloud storage providers, including Backblaze B2.

Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.

Restic assumes that the storage location for the repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security from the storage provider.

Restic is open source, free software licensed under the BSD 2-Clause License, and actively developed on GitHub.

There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

Coincidentally with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.

Which is Right for You?

While Duplicity is a popular, widely-available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of modern storage media used by cloud storage providers.

Tell us how you’re backing up Linux

Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.

The post Backing Up Linux to Backblaze B2 with Duplicity and Restic appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Using AWS Step Functions State Machines to Handle Workflow-Driven AWS CodePipeline Actions

Post Syndicated from Marcilio Mendonca original https://aws.amazon.com/blogs/devops/using-aws-step-functions-state-machines-to-handle-workflow-driven-aws-codepipeline-actions/

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. It offers powerful integration with other AWS services, such as AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit, and AWS CloudFormation, and with third-party tools such as Jenkins and GitHub. These services make it possible for AWS customers to successfully automate various tasks, including infrastructure provisioning, blue/green deployments, serverless deployments, AMI baking, database provisioning, and release management.

Developers have been able to use CodePipeline to build sophisticated automation pipelines that often require a single CodePipeline action to perform multiple tasks, fork into different execution paths, and deal with asynchronous behavior. For example, to deploy a Lambda function, a CodePipeline action might first inspect the changes pushed to the code repository. If only the Lambda code has changed, the action can simply update the Lambda code package, create a new version, and point the Lambda alias to the new version. If the changes also affect infrastructure resources managed by AWS CloudFormation, the pipeline action might have to create a stack or update an existing one through the use of a change set. In addition, if an update is required, the pipeline action might enforce a safety policy to infrastructure resources that prevents the deletion and replacement of resources. You can do this by creating a change set and having the pipeline action inspect its changes before updating the stack. Change sets that do not conform to the policy are deleted.

This use case is a good illustration of workflow-driven pipeline actions. These are actions that run multiple tasks, deal with async behavior and loops, need to maintain and propagate state, and fork into different execution paths. Implementing workflow-driven actions directly in CodePipeline can lead to complex pipelines that are hard for developers to understand and maintain. Ideally, a pipeline action should perform a single task and delegate the complexity of dealing with workflow-driven behavior associated with that task to a state machine engine. This would make it possible for developers to build simpler, more intuitive pipelines and allow them to use state machine execution logs to visualize and troubleshoot their pipeline actions.

In this blog post, we discuss how AWS Step Functions state machines can be used to handle workflow-driven actions. We show how a CodePipeline action can trigger a Step Functions state machine and how the pipeline and the state machine are kept decoupled through a Lambda function. The advantages of using state machines include:

  • Simplified logic (complex tasks are broken into multiple smaller tasks).
  • Ease of handling asynchronous behavior (through state machine wait states).
  • Built-in support for choices and processing different execution paths (through state machine choices).
  • Built-in visualization and logging of the state machine execution.

The source code for the sample pipeline, pipeline actions, and state machine used in this post is available at https://github.com/awslabs/aws-codepipeline-stepfunctions.

Overview

This figure shows the components in the CodePipeline-Step Functions integration that will be described in this post. The pipeline contains two stages: a Source stage represented by a CodeCommit Git repository and a Prod stage with a single Deploy action that represents the workflow-driven action.

This action invokes a Lambda function (1) called the State Machine Trigger Lambda, which, in turn, triggers a Step Function state machine to process the request (2). The Lambda function sends a continuation token back to the pipeline (3) to continue its execution later and terminates. Seconds later, the pipeline invokes the Lambda function again (4), passing the continuation token received. The Lambda function checks the execution state of the state machine (5,6) and communicates the status to the pipeline. The process is repeated until the state machine execution is complete. Then the Lambda function notifies the pipeline that the corresponding pipeline action is complete (7). If the state machine has failed, the Lambda function will then fail the pipeline action and stop its execution (7). While running, the state machine triggers various Lambda functions to perform different tasks. The state machine and the pipeline are fully decoupled. Their interaction is handled by the Lambda function.

The Deploy State Machine

The sample state machine used in this post is a simplified version of the use case, with emphasis on infrastructure deployment. The state machine will follow distinct execution paths and thus have different outcomes, depending on:

  • The current state of the AWS CloudFormation stack.
  • The nature of the code changes made to the AWS CloudFormation template and pushed into the pipeline.

If the stack does not exist, it will be created. If the stack exists, a change set will be created and its resources inspected by the state machine. The inspection consists of parsing the change set results and detecting whether any resources will be deleted or replaced. If no resources are being deleted or replaced, the change set is allowed to be executed and the state machine completes successfully. Otherwise, the change set is deleted and the state machine completes execution with a failure as the terminal state.

Let’s dive into each of these execution paths.

Path 1: Create a Stack and Succeed Deployment

The Deploy state machine is shown here. It is triggered by the Lambda function using the following input parameters stored in an S3 bucket.

Create New Stack Execution Path

{
    "environmentName": "prod",
    "stackName": "sample-lambda-app",
    "templatePath": "infra/Lambda-template.yaml",
    "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
    "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ"
}

Note that some values used here are for the use case example only. Account-specific parameters like revisionS3Bucket and revisionS3Key will be different when you deploy this use case in your account.

These input parameters are used by various states in the state machine and passed to the corresponding Lambda functions to perform different tasks. For example, stackName is used to create a stack, check the status of stack creation, and create a change set. The environmentName represents the environment (for example, dev, test, prod) to which the code is being deployed. It is used to prefix the name of stacks and change sets.

With the exception of built-in states such as wait and choice, each state in the state machine invokes a specific Lambda function.  The results received from the Lambda invocations are appended to the state machine’s original input. When the state machine finishes its execution, several parameters will have been added to its original input.

The first stage in the state machine is “Check Stack Existence”. It checks whether a stack with the input name specified in the stackName input parameter already exists. The output of the state adds a Boolean value called doesStackExist to the original state machine input as follows:

{
  "doesStackExist": true,
  "environmentName": "prod",
  "stackName": "sample-lambda-app",
  "templatePath": "infra/lambda-template.yaml",
  "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
  "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ",
}

The following stage, “Does Stack Exist?”, is represented by the Step Functions built-in choice state. It checks the value of doesStackExist to determine whether a new stack needs to be created (doesStackExist=false) or a change set needs to be created and inspected (doesStackExist=true).

If the stack does not exist, the states illustrated in green in the preceding figure are executed. This execution path creates the stack, waits until the stack is created, checks the status of the stack’s creation, and marks the deployment successful after the stack has been created. Except for “Stack Created?” and “Wait Stack Creation,” each of these stages invokes a Lambda function. “Stack Created?” and “Wait Stack Creation” are implemented by using the built-in choice state (to decide which path to follow) and the wait state (to wait a few seconds before proceeding), respectively. Each stage adds the results of their Lambda function executions to the initial input of the state machine, allowing future stages to process them.
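
For a sense of how such a stage maps to code, here is a minimal sketch (not taken from the sample repository, which may differ) of the kind of Lambda function the “Check Stack Creation Status” state could invoke, using the AWS SDK for JavaScript. It reads the stack status and appends it to the state machine input, following the pattern described above.

var AWS = require('aws-sdk');
var cloudFormation = new AWS.CloudFormation();

exports.index = function (event, context, callback) {
    cloudFormation.describeStacks({ StackName: event.stackName }).promise()
        .then(function (response) {
            // e.g. CREATE_IN_PROGRESS, CREATE_COMPLETE, ROLLBACK_COMPLETE, ...
            event.stackCreationStatus = response.Stacks[0].StackStatus;
            callback(null, event);   // the output becomes the input of the next state
        })
        .catch(function (err) {
            callback(err);
        });
};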

Path 2: Safely Update a Stack and Mark Deployment as Successful

Safely Update a Stack and Mark Deployment as Successful Execution Path

If the stack indicated by the stackName parameter already exists, a different path is executed. (See the green states in the figure.) This path will create a change set and use wait and choice states to wait until the change set is created. Afterwards, a stage in the execution path will inspect  the resources affected before the change set is executed.

The inspection procedure represented by the “Inspect Change Set Changes” stage consists of parsing the resources affected by the change set and checking whether any of the existing resources are being deleted or replaced. The following is an excerpt of the algorithm, where changeSetChanges.Changes is the object representing the change set changes:

...
var RESOURCES_BEING_DELETED_OR_REPLACED = "RESOURCES-BEING-DELETED-OR-REPLACED";
var CAN_SAFELY_UPDATE_EXISTING_STACK = "CAN-SAFELY-UPDATE-EXISTING-STACK";
for (var i = 0; i < changeSetChanges.Changes.length; i++) {
    var change = changeSetChanges.Changes[i];
    if (change.Type == "Resource") {
        if (change.ResourceChange.Action == "Delete") {
            return RESOURCES_BEING_DELETED_OR_REPLACED;
        }
        if (change.ResourceChange.Action == "Modify") {
            if (change.ResourceChange.Replacement == "True") {
                return RESOURCES_BEING_DELETED_OR_REPLACED;
            }
        }
    }
}
return CAN_SAFELY_UPDATE_EXISTING_STACK;

The algorithm returns different values to indicate whether the change set can be safely executed (CAN_SAFELY_UPDATE_EXISTING_STACK or RESOURCES_BEING_DELETED_OR_REPLACED). This value is used later by the state machine to decide whether to execute the change set and update the stack or interrupt the deployment.

The output of the “Inspect Change Set Changes” stage is shown here.

{
  "environmentName": "prod",
  "stackName": "sample-lambda-app",
  "templatePath": "infra/lambda-template.yaml",
  "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
  "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ",
  "doesStackExist": true,
  "changeSetName": "prod-sample-lambda-app-change-set-545",
  "changeSetCreationStatus": "complete",
  "changeSetAction": "CAN-SAFELY-UPDATE-EXISTING-STACK"
}

At this point, these parameters have been added to the state machine’s original input:

  • changeSetName, which is added by the “Create Change Set” state.
  • changeSetCreationStatus, which is added by the “Get Change Set Creation Status” state.
  • changeSetAction, which is added by the “Inspect Change Set Changes” state.

The “Safe to Update Infra?” step is a choice state (its JSON spec follows) that simply checks the value of the changeSetAction parameter. If the value is equal to “CAN-SAFELY-UPDATE-EXISTING-STACK“, meaning that no resources will be deleted or replaced, the step will execute the change set by proceeding to the “Execute Change Set” state. The deployment is successful (the state machine completes its execution successfully).

"Safe to Update Infra?": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.taskParams.changeSetAction",
          "StringEquals": "CAN-SAFELY-UPDATE-EXISTING-STACK",
          "Next": "Execute Change Set"
        }
      ],
      "Default": "Deployment Failed"
 }

Path 3: Reject Stack Update and Fail Deployment

Reject Stack Update and Fail Deployment Execution Path

If the changeSetAction parameter is different from “CAN-SAFELY-UPDATE-EXISTING-STACK”, the state machine will interrupt the deployment by deleting the change set and proceeding to the “Deployment Failed” step, which is a built-in Fail state. (Its JSON spec follows.) This state causes the state machine to stop in a failed state and serves to indicate to the Lambda function that the pipeline deployment should be interrupted in a fail state as well.

 "Deployment Failed": {
      "Type": "Fail",
      "Cause": "Deployment Failed",
      "Error": "Deployment Failed"
    }

In all three scenarios, a visual representation of the state machine execution is available in the AWS Step Functions console, which makes it very easy for developers to identify what tasks have been executed or why a deployment has failed. Developers can also inspect the inputs and outputs of each state and look at the state machine Lambda functions’ logs for details. Meanwhile, the corresponding CodePipeline action remains very simple and intuitive for developers, who only need to know whether the deployment was successful or failed.

The State Machine Trigger Lambda Function

The Trigger Lambda function is invoked directly by the Deploy action in CodePipeline. The CodePipeline action must pass a JSON structure to the trigger function through the UserParameters attribute, as follows:

{
  "s3Bucket": "codepipeline-StepFunctions-sample",
  "stateMachineFile": "state_machine_input.json"
}

The s3Bucket parameter specifies the S3 bucket location for the state machine input parameters file. The stateMachineFile parameter specifies the file holding the input parameters. By being able to specify different input parameters to the state machine, we make the Trigger Lambda function and the state machine reusable across environments. For example, the same state machine could be called from a test and prod pipeline action by specifying a different S3 bucket or state machine input file for each environment.

The Trigger Lambda function performs two main tasks: triggering the state machine and checking the execution state of the state machine. Its core logic is shown here:

exports.index = function (event, context, callback) {
    try {
        console.log("Event: " + JSON.stringify(event));
        console.log("Context: " + JSON.stringify(context));
        console.log("Environment Variables: " + JSON.stringify(process.env));
        if (Util.isContinuingPipelineTask(event)) {
            monitorStateMachineExecution(event, context, callback);
        }
        else {
            triggerStateMachine(event, context, callback);
        }
    }
    catch (err) {
        failure(Util.jobId(event), callback, context.invokeid, err.message);
    }
}

Util.isContinuingPipelineTask(event) is a utility function that checks if the Trigger Lambda function is being called for the first time (that is, no continuation token is passed by CodePipeline) or as a continuation of a previous call. In its first execution, the Lambda function will trigger the state machine and send a continuation token to CodePipeline that contains the state machine execution ARN. The state machine ARN is exposed to the Lambda function through a Lambda environment variable called stateMachineArn. Here is the code that triggers the state machine:

function triggerStateMachine(event, context, callback) {
    var stateMachineArn = process.env.stateMachineArn;
    var s3Bucket = Util.actionUserParameter(event, "s3Bucket");
    var stateMachineFile = Util.actionUserParameter(event, "stateMachineFile");
    getStateMachineInputData(s3Bucket, stateMachineFile)
        .then(function (data) {
            var initialParameters = data.Body.toString();
            var stateMachineInputJSON = createStateMachineInitialInput(initialParameters, event);
            console.log("State machine input JSON: " + JSON.stringify(stateMachineInputJSON));
            return stateMachineInputJSON;
        })
        .then(function (stateMachineInputJSON) {
            return triggerStateMachineExecution(stateMachineArn, stateMachineInputJSON);
        })
        .then(function (triggerStateMachineOutput) {
            var continuationToken = { "stateMachineExecutionArn": triggerStateMachineOutput.executionArn };
            var message = "State machine has been triggered: " + JSON.stringify(triggerStateMachineOutput) + ", continuationToken: " + JSON.stringify(continuationToken);
            return continueExecution(Util.jobId(event), continuationToken, callback, message);
        })
        .catch(function (err) {
            console.log("Error triggering state machine: " + stateMachineArn + ", Error: " + err.message);
            failure(Util.jobId(event), callback, context.invokeid, err.message);
        })
}

The Trigger Lambda function fetches the state machine input parameters from an S3 file, triggers the execution of the state machine using the input parameters and the stateMachineArn environment variable, and signals to CodePipeline that the execution should continue later by passing a continuation token that contains the state machine execution ARN. In case any of these operations fail and an exception is thrown, the Trigger Lambda function will fail the pipeline immediately by signaling a pipeline failure through the putJobFailureResult CodePipeline API.
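
The small Util and CodePipeline helpers used above (isContinuingPipelineTask, jobId, continueExecution, success, failure) are not shown in the post. The following is a rough sketch of what they might look like; the actual implementations live in the linked aws-codepipeline-stepfunctions repository and may differ. It assumes the AWS SDK for JavaScript that is available in the Lambda Node.js runtime.

var AWS = require('aws-sdk');
var codepipeline = new AWS.CodePipeline();

// True when CodePipeline is re-invoking the function with a continuation token.
function isContinuingPipelineTask(event) {
    return event["CodePipeline.job"].data.continuationToken !== undefined;
}

function jobId(event) {
    return event["CodePipeline.job"].id;
}

// Tell CodePipeline the job is still in progress and pass the token along,
// so the pipeline calls the Lambda function again later.
function continueExecution(id, continuationToken, callback, message) {
    var params = { jobId: id, continuationToken: JSON.stringify(continuationToken) };
    return codepipeline.putJobSuccessResult(params).promise()
        .then(function () { callback(null, message); });
}

// Report final success for this pipeline action.
function success(id, callback, message) {
    return codepipeline.putJobSuccessResult({ jobId: id }).promise()
        .then(function () { callback(null, message); });
}

// Fail the pipeline action, which stops the pipeline execution.
function failure(id, callback, invokeId, message) {
    var params = {
        jobId: id,
        failureDetails: { message: message, type: 'JobFailed', externalExecutionId: invokeId }
    };
    return codepipeline.putJobFailureResult(params).promise()
        .then(function () { callback(message); });
}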

If the Lambda function is continuing a previous execution, it will extract the state machine execution ARN from the continuation token and check the status of the state machine, as shown here.

function monitorStateMachineExecution(event, context, callback) {
    var stateMachineArn = process.env.stateMachineArn;
    var continuationToken = JSON.parse(Util.continuationToken(event));
    var stateMachineExecutionArn = continuationToken.stateMachineExecutionArn;
    getStateMachineExecutionStatus(stateMachineExecutionArn)
        .then(function (response) {
            if (response.status === "RUNNING") {
                var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " is still " + response.status;
                return continueExecution(Util.jobId(event), continuationToken, callback, message);
            }
            if (response.status === "SUCCEEDED") {
                var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " has: " + response.status;
                return success(Util.jobId(event), callback, message);
            }
            // FAILED, TIMED_OUT, ABORTED
            var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " has: " + response.status;
            return failure(Util.jobId(event), callback, context.invokeid, message);
        })
        .catch(function (err) {
            var message = "Error monitoring execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + ", Error: " + err.message;
            failure(Util.jobId(event), callback, context.invokeid, message);
        });
}

If the state machine is in the RUNNING state, the Lambda function will send the continuation token back to the CodePipeline action. This will cause CodePipeline to call the Lambda function again a few seconds later. If the state machine has SUCCEEDED, then the Lambda function will notify the CodePipeline action that the action has succeeded. In any other case (FAILED, TIMED_OUT, or ABORTED), the Lambda function will fail the pipeline action.

This behavior is especially useful for developers who are building and debugging a new state machine because a bug in the state machine can potentially leave the pipeline action hanging for long periods of time until it times out. The Trigger Lambda function prevents this.

Also, by having the Trigger Lambda function as a means to decouple the pipeline and state machine, we make the state machine more reusable. It can be triggered from anywhere, not just from a CodePipeline action.

The Pipeline in CodePipeline

Our sample pipeline contains two simple stages: the Source stage, represented by a CodeCommit Git repository, and the Prod stage, which contains the Deploy action that invokes the Trigger Lambda function. When the state machine decides that the change set must be rejected (because it replaces or deletes some of the existing production resources), it fails the pipeline without performing any updates to the existing infrastructure. (See the failed Deploy action in red.) Otherwise, the pipeline action succeeds, indicating that the existing provisioned infrastructure was either created (first run) or updated without impacting any resources. (See the green Deploy stage in the pipeline on the left.)

The Pipeline in CodePipeline

The JSON spec for the pipeline’s Prod stage is shown here. We use the UserParameters attribute to pass the S3 bucket and state machine input file to the Lambda function. These parameters are action-specific, which means that we can reuse the state machine in another pipeline action.

{
  "name": "Prod",
  "actions": [
      {
          "inputArtifacts": [
              {
                  "name": "CodeCommitOutput"
              }
          ],
          "name": "Deploy",
          "actionTypeId": {
              "category": "Invoke",
              "owner": "AWS",
              "version": "1",
              "provider": "Lambda"
          },
          "outputArtifacts": [],
          "configuration": {
              "FunctionName": "StateMachineTriggerLambda",
              "UserParameters": "{\"s3Bucket\": \"codepipeline-StepFunctions-sample\", \"stateMachineFile\": \"state_machine_input.json\"}"
          },
          "runOrder": 1
      }
  ]
}

Conclusion

In this blog post, we discussed how state machines in AWS Step Functions can be used to handle workflow-driven actions. We showed how a Lambda function can be used to fully decouple the pipeline and the state machine and manage their interaction. The use of a state machine greatly simplified the associated CodePipeline action, allowing us to build a much simpler and cleaner pipeline while drilling down into the state machine’s execution for troubleshooting or debugging.

Here are two exercises you can complete by using the source code.

Exercise #1: Do not fail the state machine and pipeline action after inspecting a change set that deletes or replaces resources. Instead, create a stack with a different name (think of blue/green deployments). You can do this by creating a state machine transition between the “Safe to Update Infra?” and “Create Stack” stages and passing a new stack name as input to the “Create Stack” stage.

Exercise #2: Add wait logic to the state machine to wait until the change set completes its execution before allowing the state machine to proceed to the “Deployment Succeeded” stage. Use the stack creation case as an example. You’ll have to create a Lambda function (similar to the Lambda function that checks the creation status of a stack) to get the creation status of the change set.

Have fun and share your thoughts!

About the Author

Marcilio Mendonca is a Sr. Consultant in the Canadian Professional Services Team at Amazon Web Services. He has helped AWS customers design, build, and deploy best-in-class, cloud-native AWS applications using VMs, containers, and serverless architectures. Before he joined AWS, Marcilio was a Software Development Engineer at Amazon. Marcilio also holds a Ph.D. in Computer Science. In his spare time, he enjoys playing drums, riding his motorcycle in the Toronto GTA area, and spending quality time with his family.

IoT Cybersecurity: What’s Plan B?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/iot_cybersecuri.html

In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.

What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set, and that it will considerably improve security speaks volumes about the current state of IoT security. (Full disclosure: I helped draft some of the bill’s security requirements.)

The bill would also modify the Computer Fraud and Abuse and the Digital Millennium Copyright Acts to allow security researchers to study the security of IoT devices purchased by the government. It’s a far narrower exemption than our industry needs. But it’s a good first step, which is probably the best thing you can say about this legislation.

However, it’s unlikely this first step will even be taken. I am writing this column in August, and have no doubt that the bill will have gone nowhere by the time you read it in October or later. If hearings are held, they won’t matter. The bill won’t have been voted on by any committee, and it won’t be on any legislative calendar. The odds of this bill becoming law are zero. And that’s not just because of current politics — I’d be equally pessimistic under the Obama administration.

But the situation is critical. The Internet is dangerous — and the IoT gives it not just eyes and ears, but also hands and feet. Security vulnerabilities, exploits, and attacks that once affected only bits and bytes now affect flesh and blood.

Markets, as we’ve repeatedly learned over the past century, are terrible mechanisms for improving the safety of products and services. It was true for automobile, food, restaurant, airplane, fire, and financial-instrument safety. The reasons are complicated, but basically, sellers don’t compete on safety features because buyers can’t efficiently differentiate products based on safety considerations. The race-to-the-bottom mechanism that markets use to minimize prices also minimizes quality. Without government intervention, the IoT remains dangerously insecure.

The US government has no appetite for intervention, so we won’t see serious safety and security regulations, a new federal agency, or better liability laws. We might have a better chance in the EU. Depending on how the General Data Protection Regulation on data privacy pans out, the EU might pass a similar security law in 5 years. No other country has a large enough market share to make a difference.

Sometimes we can opt out of the IoT, but that option is becoming increasingly rare. Last year, I tried and failed to purchase a new car without an Internet connection. In a few years, it’s going to be nearly impossible to not be multiply connected to the IoT. And our biggest IoT security risks will stem not from devices we have a market relationship with, but from everyone else’s cars, cameras, routers, drones, and so on.

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety — and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. E-mail me if you have any ideas.

This essay previously appeared in the September/October issue of IEEE Security & Privacy.

Amazon Elasticsearch Service now supports VPC

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/amazon-elasticsearch-service-now-supports-vpc/

Starting today, you can connect to your Amazon Elasticsearch Service domains from within an Amazon VPC without the need for NAT instances or Internet gateways. VPC support for Amazon ES is easy to configure, reliable, and offers an extra layer of security. With VPC support, traffic between other services and Amazon ES stays entirely within the AWS network, isolated from the public Internet. You can manage network access using existing VPC security groups, and you can use AWS Identity and Access Management (IAM) policies for additional protection. VPC support for Amazon ES domains is available at no additional charge.

Getting Started

Creating an Amazon Elasticsearch Service domain in your VPC is easy. Follow all the steps you would normally follow to create your cluster and then select “VPC access”.

That’s it. There are no additional steps. You can now access your domain from within your VPC!
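If you prefer to script domain creation rather than click through the console, the same option is exposed through the API. The following is a minimal boto3 sketch, assuming the VPCOptions parameter introduced with this feature; the domain name, Elasticsearch version, instance type, subnet ID, and security group ID are placeholders to replace with your own values.

```python
import boto3

# Minimal sketch of creating an Amazon ES domain with VPC access.
# All identifiers below are placeholders, not real resources.
es = boto3.client("es", region_name="us-east-1")

response = es.create_elasticsearch_domain(
    DomainName="my-vpc-domain",
    ElasticsearchVersion="5.5",
    ElasticsearchClusterConfig={
        "InstanceType": "m4.large.elasticsearch",
        "InstanceCount": 2,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 10},
    # VPCOptions is what selects VPC access instead of a public endpoint.
    VPCOptions={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
print(response["DomainStatus"]["DomainName"], response["DomainStatus"]["Created"])
```

Once the domain is active, its VPC endpoint should show up in the console and in the domain description, and it will only be reachable from inside the VPC.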

Things To Know

To support VPCs, Amazon ES places an endpoint into at least one subnet of your VPC. Amazon ES places an Elastic Network Interface (ENI) into the VPC for each data node in the cluster. Each ENI uses a private IP address from the IPv4 range of your subnet and receives a public DNS hostname. If you enable zone awareness, Amazon ES creates endpoints in two subnets in different availability zones, which provides greater data durability.

Set aside three times as many IP addresses as there are nodes in your cluster. If zone awareness is enabled, you can divide that number by two, since the nodes are split across the two subnets; a 10-node domain, for example, needs 30 free addresses, or 15 per subnet with zone awareness. Ideally, you would create separate subnets just for Amazon ES.

A few notes:

  • Currently, you cannot move existing domains to a VPC or vice-versa. To take advantage of VPC support, you must create a new domain and migrate your data.
  • Currently, Amazon ES does not support Amazon Kinesis Firehose integration for domains inside a VPC.

To learn more, see the Amazon ES documentation.

Randall

ACME Support in Apache HTTP Server Project

Post Syndicated from ris original https://lwn.net/Articles/736668/rss

Let’s Encrypt has announced that Automatic Certificate Management Environment (ACME) protocol support is being integrated into the Apache HTTP Server (httpd). “ACME support being built in to one of the world’s most popular Web servers, Apache httpd, is great because it means that deploying HTTPS will be even easier for millions of websites. It’s a huge step towards delivering the ideal certificate issuance and management experience to as many people as possible.”

How to Compete with Giants

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-compete-with-giants/

How to Compete with Giants

This post by Backblaze’s CEO and co-founder Gleb Budman is the sixth in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants

Use the Join button above to receive notification of new posts in this series.

Perhaps your business is competing in a brand new space free from established competitors. Most of us, though, start companies that compete with existing offerings from large, established companies. You need to come up with a better mousetrap — not the first mousetrap.

That’s the challenge Backblaze faced. In this post, I’d like to share some of the lessons I learned from that experience.

Backblaze vs. Giants

Competing with established companies that are orders of magnitude larger can be daunting. How can you succeed?

I’ll set the stage by offering a few sets of giants we compete with:

  • When we started Backblaze, we offered online backup in a market where companies had been offering “online backup” for at least a decade, and even the newer entrants had raised tens of millions of dollars.
  • When we built our storage servers, the alternatives were EMC, NetApp, and Dell — each of which had a market cap of over $10 billion.
  • When we introduced our cloud storage offering, B2, our direct competitors were Amazon, Google, and Microsoft. You might have heard of them.

What did we learn by competing with these giants on a bootstrapped budget? Let’s take a look.

Determine What Success Means

For a long time Apple considered Apple TV to be a hobby, not a real product worth focusing on, because it did not generate a billion in revenue. For a $10 billion per year revenue company, a new business that generates $50 million won’t move the needle and often isn’t worth putting focus on. However, for a startup, getting to $50 million in revenue can be the start of a wildly successful business.

Lesson Learned: Don’t let the giants set your success metrics.

The Advantages Startups Have

The giants have a lot of advantages: more money, people, scale, resources, access, etc. Following their playbook and attacking head-on means you’re simply outgunned. Common paths to failure are trying to build more features, enter more markets, outspend on marketing, and other similar approaches where scale and resources are the primary determinants of success.

But being a startup affords many advantages most giants would salivate over. As a nimble startup you can leverage those to succeed. Let’s break down nine competitive advantages we’ve used that you can use too.

1. Drive Focus

It’s hard to build a $10 billion revenue business doing just one thing, and most giants have a broad portfolio of businesses, numerous products for each, and a variety of customer segments across multiple markets. That adds complexity and distributes management attention.

Startups get the benefit of having everyone in the company be extremely focused, often on a singular mission, product, customer segment, and market. While our competitors sell everything from advertising to Zantac, and are investing in groceries and shipping, Backblaze has focused exclusively on cloud storage. This means all of our best people (i.e. everyone) are focused on our cloud storage business. Where is all of your focus going?

Lesson Learned: Align everyone in your company to a singular focus to dramatically outperform larger teams.

2. Use Lack-of-Scale as an Advantage

You may have heard Paul Graham say “Do things that don’t scale.” There are a host of things you can do specifically because you don’t have the same scale as the giants. Use that as an advantage.

When we look for data center space, we have more options than our largest competitors because there are simply more spaces available with room for 100 cabinets than for 1,000 cabinets. With some searching, we can find data center space that is better/cheaper.

When a flood in Thailand destroyed factories, causing the world’s supply of hard drives to plummet and prices to triple, we started drive farming. The giants certainly couldn’t. It was a bit crazy, but it let us keep prices unchanged for our customers.

Our Chief Cloud Officer, Tim, used to work at Adobe. Because of their size, any new product needed to always launch in a multitude of languages and in global markets. Once launched, they had scale. But getting any new product launched was incredibly challenging.

Lesson Learned: Use lack-of-scale to exploit opportunities that are closed to giants.

3. Build a Better Product

This one is probably obvious. If you’re going to provide the same product, at the same price, to the same customers — why do it? Remember that better does not always mean more features. Here’s one way we built a better product that didn’t require being a bigger company.

All online backup services required customers to choose what to include in their backup. We found that this was complicated for users, since they often didn’t know what needed to be backed up. We flipped the model: back up everything by default and let users exclude files if they want to, with no selection required. This reduced the number of features/options, while making the service easier and better for the user.

This didn’t require the resources of a huge company; it just required understanding customers a bit deeper and thinking about the solution differently. Building a better product is the most classic startup competitive advantage.

Lesson Learned: Dig deep with your customers to understand and deliver a better mousetrap.

4. Provide Better Service

How can you provide better service? Use your advantages. Escalations from your customer care folks to engineering can go through fewer hoops. Fixing an issue and shipping can be quicker. Access to real answers on Twitter or Facebook can be more effective.

A strategic decision we made was to have all customer support people as full-time employees in our headquarters. This ensures they are in close contact with the whole company, so feedback quickly flows both ways.

Having a smaller team and fewer layers enables faster internal communication, which increases customer happiness. And the option to do things that don’t scale — such as help a customer in a unique situation — can go a long way in building customer loyalty.

Lesson Learned: Service your customers better by establishing clear internal communications.

5. Remove The Unnecessary

After determining that the industry standard EMC/NetApp/Dell storage servers would be too expensive to build our own cloud storage upon, we decided to build our own infrastructure. Many said we were crazy to compete with these multi-billion dollar companies and that it would be impossible to build a lower cost storage server. However, not only did it prove to not be impossible — it wasn’t even that hard.

One key trick? Remove the unnecessary. While EMC and others built servers to sell to other companies for a wide variety of use cases, Backblaze needed servers that only Backblaze would run, and for a single use case. As a result we could tailor the servers for our needs by removing redundancy from each server (since we would run redundant servers), and using lower-performance components (since we would get high-performance by running parallel servers).

What do your customers and use cases not need? This can trim costs and complexity while often improving the product for your use case.

Lesson Learned: Don’t think “what can we add” to what the giants offer — think “what can we remove.”

6. Be Easy

How many times have you visited a large company website, particularly one that’s not consumer-focused, only to leave saying, “Huh? I don’t understand what you do.” Keeping your website clear, and your product and pricing simple, will dramatically increase conversion and customer satisfaction. If you can make it 2x easier and thus increase your conversion by 2x, you’ve just allowed yourself to spend half as much acquiring each customer.

Providing unlimited data backup wasn’t specifically about providing more storage — it was about making it easier. Since users didn’t know how much data they needed to back up, charging per gigabyte meant they wouldn’t know the cost. Providing unlimited data backup meant they could just relax.

Customers love easy — and being smaller makes easy easier to deliver. Use that as an advantage in your website, marketing materials, pricing, product, and in every other customer interaction.

Lesson Learned: Ease-of-use isn’t a slogan: it’s a competitive advantage. Treat it as seriously as any other feature of your product.

7. Don’t Be Afraid of Risk

Obviously unnecessary risks are unnecessary, and some risks aren’t worth taking. However, large companies that have given guidance to Wall Street with a $0.01 range on their earnings-per-share are inherently going to be very risk-averse. Use risk tolerance to open up opportunities, and adjust your tolerance level as you scale. In your first year, there are likely an infinite number of ways your business may vaporize; don’t be too worried about taking a risk that might have a 20% downside when the upside is hockey-stick growth.

Using consumer-grade hard drives in our servers may have caused pain and suffering for us years down the line, but they were priced at approximately 50% of enterprise drives. Giants wouldn’t have considered the option. It turns out the consumer drives performed great for us.

Lesson Learned: Use calculated risks as an advantage.

8. Be Open

The larger a company grows, the more it wants to hide information. Some of this is driven by regulatory requirements as a public company. But most of this is cultural. Sharing something might cause a problem, so let’s not. All external communication is treated as a critical press release, with rounds and rounds of editing by multiple teams and approvals. However, customers are often desperate for information. Moreover, sharing information builds trust, understanding, and advocates.

I started blogging at Backblaze before we launched. When we blogged about our Storage Pod and open-sourced the design, many thought we were crazy to share this information. But it was transformative for us, establishing Backblaze as a tech thought leader in storage and giving people a sense of how we were able to provide our service at such a low cost.

Over the years we’ve developed a culture of being open internally and externally, on our blog and with the press, and in communities such as Hacker News and Reddit. Often we’ve been asked, “why would you share that!?” — but it’s the continual openness that builds trust. And that culture of openness is incredibly challenging for the giants.

Lesson Learned: Overshare to build trust and brand where giants won’t.

9. Be Human

As companies scale, typically a smaller percent of founders and executives interact with customers. The people who build the company become more hidden, the language feels “corporate,” and customers start to feel they’re interacting with the cliche “faceless, nameless corporation.” Use your humanity to your advantage. From day one the Backblaze About page listed all the founders, and my email address. While contacting us shouldn’t be the first path for a customer support question, I wanted it to be clear that we stand behind the service we offer; if we’re doing something wrong — I want to know it.

To scale, it’s important to have processes and procedures, but sometimes a situation falls outside of a well-established process. While we want our employees to follow processes, they’re still encouraged to be human and “try to do the right thing.” How do you strike this balance? Simon Sinek gives a good talk about it: make your employees feel safe. If employees feel safe, they’ll be human.

If your customer is a consumer, they’ll appreciate being treated as a human. Even if your customer is a corporation, the purchasing decision-makers are still people.

Lesson Learned: Being human is the ultimate antithesis to the faceless corporation.

Build Culture to Sustain Your Advantages at Scale

Presumably the goal is not to always be competing with giants, but to one day become a giant. Does this mean you’ll lose all of these advantages? Some, yes — but not all. Some of these advantages are cultural, and if you build these into the culture from the beginning, and fight to keep them as you scale, you can keep them as you become a giant.

Tesla still comes across as human, with Elon Musk frequently interacting with people on Twitter. Apple continues to provide great service through their Genius Bar. And, worst case, if you lose these at scale, you’ll still have the other advantages of being a giant such as money, people, scale, resources, and access.

Of course, some new startup will be gunning for you with grand ambitions, so just be sure not to get complacent. 😉

The post How to Compete with Giants appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

More Raspberry Pi labs in West Africa

Post Syndicated from Rachel Churcher original https://www.raspberrypi.org/blog/pi-based-ict-west-africa/

Back in May 2013, we heard from Dominique Laloux about an exciting project to bring Raspberry Pi labs to schools in rural West Africa. Until 2012, 75 percent of teachers there had never used a computer. The project has been very successful, and Dominique has been in touch again to bring us the latest news.

A view of the inside of the new Pi lab building

Preparing the new Pi labs building in Kuma Tokpli, Togo

Growing the project

Thanks to the continuing efforts of a dedicated team of teachers, parents and other supporters, the Centre Informatique de Kuma, now known as INITIC (from the French ‘INItiation aux TIC’), runs two Raspberry Pi labs in schools in Togo, and plans to open a third in December. The second lab was opened last year in Kpalimé, a town in the Plateaux Region in the west of the country.

Student using a Raspberry Pi computer

Using the new Raspberry Pi labs in Kpalimé, Togo

More than 400 students used the new lab intensively during the last school year. Dominique tells us more:

“The report made in early July by the seven teachers who accompanied the students was nothing short of amazing: the young people covered a very impressive number of concepts and skills, from the GUI and the file system, to a solid introduction to word processing and spreadsheets, and many other skills. The lab worked exactly as expected. Its 21 Raspberry Pis worked flawlessly, with the exception of a couple of SD cards that needed re-cloning, and a couple of old screens that needed to be replaced. All the Raspberry Pis worked without a glitch. They are so reliable!”

The teachers and students have enjoyed access to a range of software and resources, all running on Raspberry Pi 2s and 3s.

“Our current aim is to introduce the students to ICT using the Raspberry Pis, rather than introducing them to programming and electronics (a step that will certainly be considered later). We use Ubuntu Mate along with a large selection of applications, from LibreOffice, Firefox, GIMP, Audacity, and Calibre, to special maths, science, and geography applications. There are also special applications such as GnuCash and GanttProject, as well as logic games including PyChess. Since December, students also have access to a local server hosting Kiwix, Wiktionary (a local copy of Wikipedia in four languages), several hundred videos, and several thousand books. They really love it!”

Pi lab upgrade

This summer, INITIC upgraded the equipment in their Pi lab in Kuma Adamé, which has been running since 2014. 21 older model Raspberry Pis were replaced with Pi 2s and 3s, to bring this lab into line with the others, and encourage co-operation between the different locations.

“All 21 first-generation Raspberry Pis worked flawlessly for three years, despite the less-than-ideal conditions in which they were used — tropical conditions, dust, frequent power outages, etc. I brought them all back to Brussels, and they all still work fine. The rationale behind the upgrade was to bring more computing power to the lab, and also to have the same equipment in our two Raspberry Pi labs (and in other planned installations).”

Students and teachers using the upgraded Pi lab in Kuma Adamé

An upgrade of the organisation’s first lab, installed in 2012 in Kuma Tokpli, will be completed in December. This lab currently uses ‘retired’ laptops, which will be replaced with Raspberry Pis and peripherals. INITIC, in partnership with the local community, is also constructing a new building to house the upgraded technology, and the organisation’s third Raspberry Pi lab.

Reliable tech

Dominique has been very impressed with the performance of the Raspberry Pis since 2014.

“Our experience of three years, in two very different contexts, clearly demonstrates that the Raspberry Pi is a very convincing alternative to more ‘conventional’ computers for introducing young students to ICT where resources are scarce. I wish I could convince more communities in the world to invest in such ‘low cost, low consumption, low maintenance’ infrastructure. It really works!”

He goes on to explain that:

“Our goal now is to build at least one new Raspberry Pi lab in another Togolese school each year. That will, of course, depend on how successful we are at gathering the funds necessary for each installation, but we are confident we can convince enough friends to give us the financial support needed for our action.”

A desk with Raspberry Pis and peripherals

Reliable Raspberry Pis in the labs at Kpalimé

Get involved

We are delighted to see the Raspberry Pi being used to bring information technology to new teachers, students, and communities in Togo – it’s wonderful to see this project becoming established and building on its achievements. The mission of the Raspberry Pi Foundation is to put the power of digital making into the hands of people all over the world. Therefore, projects like this, in which people use our tech to fulfil this mission in places with few resources, are wonderful to us.

More information about INITIC and its projects can be found on its website. If you are interested in helping the organisation to meet its goals, visit the How to help page. And if you are involved with a project like this, bringing ICT, computer science, and coding to new places, please tell us about it in the comments below.

The post More Raspberry Pi labs in West Africa appeared first on Raspberry Pi.

ACME Support in Apache HTTP Server Project

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/10/17/acme-support-in-apache-httpd.html

We’re excited that support for getting and managing TLS certificates via the ACME protocol is coming to the Apache HTTP Server Project (httpd). ACME is the protocol used by Let’s Encrypt, and hopefully other Certificate Authorities in the future. We anticipate this feature will significantly aid the adoption of HTTPS for new and existing websites.

We created Let’s Encrypt in order to make getting and managing TLS certificates as simple as possible. For Let’s Encrypt subscribers, this usually means obtaining an ACME client and executing some simple commands. Ultimately though, we’d like for most Let’s Encrypt subscribers to have ACME clients built in to their server software so that obtaining an additional piece of software is not necessary. The less work people have to do to deploy HTTPS the better!

ACME support being built in to one of the world’s most popular Web servers, Apache httpd, is great because it means that deploying HTTPS will be even easier for millions of websites. It’s a huge step towards delivering the ideal certificate issuance and management experience to as many people as possible.

The Apache httpd ACME module is called mod_md. It’s currently in the development version of httpd and a plan is being formulated to backport it to an httpd 2.4.x stable release. The mod_md code is also available on GitHub.

It’s also worth mentioning that the development version of Apache httpd now includes support for an SSLPolicy directive. Properly configuring TLS has traditionally involved making a large number of complex choices. With the SSLPolicy directive, admins simply select a modern, intermediate, or old TLS configuration, and sensible choices will be made for them.

Development of mod_md and the SSLPolicy directive has been funded by Mozilla and carried out primarily by Stefan Eissing of greenbytes. Thank you Mozilla and Stefan!

Let’s Encrypt is currently providing certificates for more than 55 million websites. We look forward to being able to serve even more websites as efforts like this make deploying HTTPS with Let’s Encrypt even easier. If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

An enforcement clarification from the kernel community

Post Syndicated from corbet original https://lwn.net/Articles/736492/rss

The Linux Foundation’s Technical Advisory Board, in response to concerns about exploitative license enforcement around the kernel, has put together this patch adding a document to the kernel describing its view of license enforcement. The document has been signed or acknowledged by a long list of kernel developers. In particular, it seeks to reduce the effect of the “GPLv2 death penalty” by stating that a violator’s license to the software will be reinstated upon a timely return to compliance. “We view legal action as a last resort, to be initiated only when other community efforts have failed to resolve the problem.

Finally, once a non-compliance issue is resolved, we hope the user will feel welcome to join us in our efforts on this project. Working together, we will be stronger.”

See this blog post from Greg Kroah-Hartman for more information.

PureVPN Explains How it Helped the FBI Catch a Cyberstalker

Post Syndicated from Andy original https://torrentfreak.com/purevpn-explains-how-it-helped-the-fbi-catch-a-cyberstalker-171016/

Early October, Ryan S. Lin, 24, of Newton, Massachusetts, was arrested on suspicion of conducting “an extensive cyberstalking campaign” against a 24-year-old Massachusetts woman, as well as her family members and friends.

The Department of Justice described Lin’s offenses as a “multi-faceted” computer hacking and cyberstalking campaign. Launched in April 2016 when he began hacking into the victim’s online accounts, Lin allegedly obtained personal photographs and sensitive information about her medical and sexual histories and distributed that information to hundreds of other people.

Details of what information the FBI compiled on Lin can be found in our earlier report but aside from his alleged crimes (which are both significant and repugnant), it was PureVPN’s involvement in the case that caused the most controversy.

In a report compiled by an FBI special agent, it was revealed that the Hong Kong-based company’s logs helped the authorities net the alleged criminal.

“Significantly, PureVPN was able to determine that their service was accessed by the same customer from two originating IP addresses: the RCN IP address from the home Lin was living in at the time, and the software company where Lin was employed at the time,” the agent’s affidavit reads.

Among many in the privacy community, this revelation was met with disappointment. On the PureVPN website the company claims to carry no logs and on a general basis, it’s expected that so-called “no-logging” VPN providers should provide people with some anonymity, at least as far as their service goes. Now, several days after the furor, the company has responded to its critics.

In a fairly lengthy statement, the company begins by confirming that it definitely doesn’t log what websites a user views or what content he or she downloads.

“PureVPN did not breach its Privacy Policy and certainly did not breach your trust. NO browsing logs, browsing habits or anything else was, or ever will be shared,” the company writes.

However, that’s only half the problem. While it doesn’t log user activity (what sites people visit or content they download), it does log the IP addresses that customers use to access the PureVPN service. These, given the right circumstances, can be matched to external activities thanks to logs carried by other web companies.

PureVPN talks about logs held by Google’s Gmail service to illustrate its point.

“A network log is automatically generated every time a user visits a website. For the sake of this example, let’s say a user logged into their Gmail account. Every time they accessed Gmail, the email provider created a network log,” the company explains.

“If you are using a VPN, Gmail’s network log would contain the IP provided by PureVPN. This is one half of the picture. Now, if someone asks Google who accessed the user’s account, Google would state that whoever was using this IP, accessed the account.

“If the user was connected to PureVPN, it would be a PureVPN IP. The inquirer [in the Lin case, the FBI] would then share timestamps and network logs acquired from Google and ask them to be compared with the network logs maintained by the VPN provider.”

Now, if PureVPN carried no logs – literally no logs – it would not be able to help with this kind of inquiry. That was the case last year when the FBI approached Private Internet Access for information and the company was unable to assist.

However, as is made pretty clear by PureVPN’s explanation, the company does log user IP addresses and timestamps which reveal when a user was logged on to the service. It doesn’t matter that PureVPN doesn’t log what the user allegedly did online, since the third-party service already knows that information to the precise second.

Following the example, Gmail knows that a user sent an email at 10:22am on Monday October 16 from a PureVPN IP address. So, if PureVPN is approached by the FBI, the company can confirm that User X was using the same IP address at exactly the same time, and that his home IP address was XXX.XX.XXX.XX. Effectively, the combined logs link one IP address to the other and the user is revealed. It’s that simple.
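To make the mechanics concrete, here is a toy Python sketch of that correlation. Every account, IP address, and timestamp below is invented for illustration; the point is simply that joining a third-party service log and a VPN connection log on exit IP and time window is enough to tie an account to a home IP address, even though neither log records browsing activity.

```python
from datetime import datetime

# Hypothetical third-party service log: which (VPN exit) IP touched an account, and when.
service_log = [
    {"account": "user@example.com", "ip": "198.51.100.10",
     "ts": datetime(2017, 10, 16, 10, 22)},
]

# Hypothetical VPN connection log: which customer (home) IP used which exit IP,
# and during which session window.
vpn_sessions = [
    {"customer_ip": "203.0.113.45", "exit_ip": "198.51.100.10",
     "start": datetime(2017, 10, 16, 10, 0), "end": datetime(2017, 10, 16, 11, 0)},
]

def correlate(service_log, vpn_sessions):
    """Join service events to VPN sessions on exit IP and timestamp."""
    matches = []
    for event in service_log:
        for session in vpn_sessions:
            if (event["ip"] == session["exit_ip"]
                    and session["start"] <= event["ts"] <= session["end"]):
                matches.append((event["account"], session["customer_ip"], event["ts"]))
    return matches

print(correlate(service_log, vpn_sessions))
# [('user@example.com', '203.0.113.45', datetime.datetime(2017, 10, 16, 10, 22))]
```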

It is for this reason that in TorrentFreak’s annual summary of no-logging VPN providers, the very first question we ask every single company reads as follows:

Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user/users of your service? If so, what information do you hold and for how long?

Clearly, if a company says “yes we log incoming IP addresses and associated timestamps”, any claim to total user anonymity is ended right there and then.

While not completely useless (a logging service will still stop the prying eyes of ISPs and similar surveillance, while also defeating throttling and site-blocking), if you’re a whistle-blower with a job or even your life to protect, this level of protection is entirely inadequate.

The take-home points from this controversy are numerous, but perhaps the most important is for people to read and understand VPN provider logging policies.

Secondly, and just as importantly, VPN providers need to be extremely clear about the information they log. Not tracking browsing or downloading activities is all well and good, but if home IP addresses and timestamps are stored, this needs to be made clear to the customer.

Finally, VPN users should not be evil. There are plenty of good reasons to stay anonymous online but cyberstalking, death threats and ruining people’s lives are not included. Fortunately, the FBI have offline methods for catching this type of offender, and long may that continue.

PureVPN’s blog post is available here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pirate Bay’s Iconic .SE Domain has Expired (Updated)

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bays-iconic-se-domain-has-expired-and-is-for-sale-171016/

When The Pirate Bay first came online during the summer of 2003, its main point of access was thepiratebay.org.

Since then the site has burnt through more than a dozen domains, trying to evade seizures or other legal threats.

For many years thepiratebay.se operated as the site’s main domain name. Earlier this year the site moved back to the good old .org again, and from the looks of it, TPB is ready to say farewell to the Swedish domain.

Thepiratebay.se expired last week and, if nothing happens, it will be deactivated tomorrow. This means that the site might lose control over a piece of its history.

The torrent site moved from the ORG to the SE domain in 2012, fearing that US authorities would seize the former. Around that time the Department of Homeland Security took hundreds of sites offline and the Pirate Bay team feared that they would be next.

Thepiratebay.se has expired

Ironically, however, the next big threat came from Sweden, the Scandinavian country where the site once started.

In 2013, a local anti-piracy group filed a motion targeting two of The Pirate Bay’s domains, ThePirateBay.se and PirateBay.se. This case has been dragging on for years now.

During this time TPB moved back and forth between domains but the .se domain turned out to be a safer haven than most alternatives, despite the legal issues. Many other domains were simply seized or suspended without prior notice.

When the Swedish Court of Appeal eventually ruled that The Pirate Bay’s domain had to be confiscated and forfeited to the state, the site’s operators moved back to the .org domain, where it all started.

Although a Supreme Court appeal is still pending, according to a report from IDG earlier this year the court has placed a lock on the domain. This prevents the owner from changing or transferring it, which may explain why it has expired.

The lock is relevant, as the domain has not only expired but has also been put up for sale again in the SEDO marketplace, with a minimum bid of $90. This sale would be impossible if the domain were locked.

Thepiratebay.se for sale

Perhaps most ironic of all is the fact that TPB moved to .se because it feared that the US-controlled .org domain was easy prey.

Fast forward half a decade and over a dozen domains have come and gone while thepiratebay.org still stands strong, despite entertainment industry pressure.

Update: We updated the article to mention that the domain name is locked by the Swedish Supreme Court. This means that it can’t be updated and would explain why it has expired.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Top 10 Most Pirated Movies of The Week on BitTorrent – 10/16/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-101617/

This week we have two newcomers in our chart.

War for the Planet of the Apes is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Most downloaded movies via torrents

Rank | Last week | Movie name | IMDb rating / trailer
1 | (2) | War for the Planet of the Apes | 7.8 / trailer
2 | (9) | The Dark Tower | 5.9 / trailer
3 | (1) | Spider-Man: Homecoming | 7.8 / trailer
4 | (…) | American Made (Subbed HDrip) | 7.3 / trailer
5 | (3) | Baby Driver | 8.0 / trailer
6 | (…) | Annabelle Creation (Subbed HDRip) | 6.7 / trailer
7 | (7) | Wonder Woman | 8.2 / trailer
8 | (4) | Pirates of the Caribbean: Dead Men Tell No Tales | 6.9 / trailer
9 | (5) | Transformers: The Last Knight | 5.2 / trailer
10 | (8) | Despicable Me 3 | 6.4 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AI in the Cloud Market: AWS & Microsoft Lend a Big Hand

Post Syndicated from Chris De Santis original https://www.anchor.com.au/blog/2017/10/aws-microsoft-launch-ai-platform/

Artificial intelligence (or AI) doesn’t necessarily play a big role in the current cloud hosting market, but Amazon Web Services (AWS) and Microsoft are looking to change that.

AI is starting to grow at an alarming rate and may play a significant role in the near future. According to Bernie Trudel, chairman of the Asia Cloud Computing Association (ACCA), AI “will become the killer application that will drive cloud computing forward”. He adds that, although AI only accounts for 1% of today’s global cloud computing market, its overall IT market share is growing at 52%, and it’s expected to grow rapidly to 10% of cloud revenue by 2025.

Trudel noted that, although the big players in the cloud game are already offering AI capabilities, the cloud-based AI market is still in its early stages. These big players include AWS, Microsoft, Google, and IBM. He also stated that AWS is certainly the leader in the cloud market, but that it is playing catch-up from an AI perspective.

AWS 💘 Microsoft?

Here’s the funny bit: a day or two after Trudel said all of this at Cloud Expo Asia, AWS announced (on their blog) a joint effort with Microsoft to create a new open-source deep-learning interface called Gluon that “allows developers to more easily and quickly build machine learning models”. In other words, Gluon is an interface that developers can use to build and train their own machine learning models, to the benefit of their own cloud applications and technical endeavours.

If you’d like to learn more about Gluon and the details of the project, head over to the AWS blog here.
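For a flavour of what the new interface looks like, here is a minimal sketch of defining and training a tiny model with the Gluon API, assuming the Apache MXNet backend (pip install mxnet); the network shape, toy data, and hyperparameters are arbitrary values chosen for illustration rather than anything taken from the announcement.

```python
import mxnet as mx
from mxnet import autograd, gluon, nd

# Define a tiny two-layer network with the imperative Gluon API.
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation="relu"))
    net.add(gluon.nn.Dense(1))
net.initialize(mx.init.Xavier())

# Toy data: 32 samples with 10 features each, and a single regression target.
X = nd.random.uniform(shape=(32, 10))
y = nd.random.uniform(shape=(32, 1))

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

for epoch in range(5):
    with autograd.record():              # record the forward pass for autodiff
        loss = loss_fn(net(X), y)
    loss.backward()                      # compute gradients
    trainer.step(batch_size=X.shape[0])  # update the parameters
    print(epoch, loss.mean().asscalar())
```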

AWS + Microsoft

The post AI in the Cloud Market: AWS & Microsoft Lend a Big Hand appeared first on AWS Managed Services by Anchor.

Popular Zer0day Torrent Tracker Taken Offline By Mass Copyright Complaint

Post Syndicated from Andy original https://torrentfreak.com/popular-zer0day-torrent-tracker-taken-offline-by-mass-copyright-complaint-171014/

In January 2016, a BitTorrent enthusiast decided to launch a stand-alone tracker, purely for fun.

The Zer0day platform, which hosts no torrents, is a tracker in the purest sense, directing traffic between peers, no matter what content is involved and no matter where people are in the world.

With this type of tracker in short supply, it was soon utilized by The Pirate Bay and the now-defunct ExtraTorrent. By August 2016, it was tracking almost four million peers and a million torrents, a considerable contribution to the BitTorrent ecosystem.

After handling many ups and downs associated with a service of this type, the tracker eventually made it to the end of 2016 intact. This year it grew further still and by the end of September was tracking an impressive 5.5 million peers spread over 1.2 million torrents. Soon after, however, the tracker disappeared from the Internet without warning.

In an effort to find out what had happened, TorrentFreak contacted Zer0day’s operator who told us a familiar story. Without any warning at all, the site’s host pulled the plug on the service, despite having been paid 180 euros for hosting just a week earlier.

“We’re hereby informing you of the termination of your dedicated server due to a breach of our terms of service,” the host informed Zer0day.

“Hosting trackers on our servers that distribute infringing and copyrighted content is prohibited. This server was found to distribute such content. Should we identify additional similar activity in your services, we will be forced to close your account.”

While hosts tend not to worry too much about what their customers are doing, this one had just received a particularly lengthy complaint. Sent by the head of anti-piracy at French collecting society SCPP, it laid out the group’s problems with the Zer0day tracker.

“SCPP has been responsible for the collective management and protection of sound recordings and music videos producers’ rights since 1985. SCPP counts more than 2,600 members including the majority of independent French producers, in addition to independent European producers, and the major international companies: Sony, Universal and Warner,” the complaints reads.

“SCPP administers a catalog of 7,200,000 sound tracks and 77,000 music videos. SCPP is empowered by its members to take legal action in order to put an end to any infringements of the producers’ rights set out in Article L335-4 of the French Intellectual Property Code…..punishable by a three-year prison sentence or a fine of €300,000.”

Noting that it works on behalf of a number of labels and distributors including BMG, Sony Music, Universal Music, Warner Music and others, SCPP listed countless dozens of albums under its protection, each allegedly tracked by the Zer0day platform.

“It has come to our attention that these music albums are illegally being communicated to the public (made available for download) by various users of the BitTorrent-Network,” the complaint reads.

Noting that Zer0day is involved in the process, the anti-piracy outfit presented dozens of hash codes relating to protected works, demanding that the site stop facilitation of infringement on each and every one of them.

“We have proof that your tracker udp://tracker.zer0day.to:1337/announce provided peers of the BitTorrent-Network with information regarding these torrents, to be specific IP Addresses of peers that were offering without authorization the full albums for download, and that this information enabled peers to download files that contain the sound recordings to which our members producers have the exclusive rights.

“These sound recordings are thus being illegally communicated to the public, and your tracker is enabling the seeders to do so.”

Rather than take the hashes down from the tracker, SCPP actually demanded that Zer0day create a permanent blacklist within 24 hours, to ensure the corresponding torrents wouldn’t be tracked again.

“You should understand that this letter constitutes a notice to you that you may be liable for the infringing activity occurring on your service. In addition, if you ignore this notice, you may also be liable for any resulting infringement,” the complaint added.

But despite all the threats, SCPP didn’t receive the response they’d demanded since the operator of the site refused to take any action.

“Obviously, ‘info hashes’ are not copyrightable nor point to specific copyrighted content, or even have any meaning. Further, I cannot verify that request strings parameters (‘info hashes’) you sent me contain copyrighted material,” he told SCPP.

“Like the website says; for content removal kindly ask the indexing site to remove the listing and the .torrent file. Also, tracker software does not have an option to block request strings parameters (‘info hashes’).”
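For context on what the operator is referring to: an “info hash” is just the 20-byte SHA-1 digest of the bencoded “info” dictionary inside a .torrent file, and an announce request carries little more than that digest plus connection details. Below is a rough Python sketch of how the digest is derived; the third-party bencodepy package and the example.torrent filename are assumptions used purely for illustration.

```python
import hashlib

import bencodepy  # assumed third-party bencode library (pip install bencodepy)

def info_hash(torrent_path):
    """Return the hex info hash: SHA-1 of the torrent's bencoded 'info' dictionary."""
    with open(torrent_path, "rb") as f:
        metadata = bencodepy.decode(f.read())
    return hashlib.sha1(bencodepy.encode(metadata[b"info"])).hexdigest()

# A typical tracker announce request then looks roughly like:
#   GET /announce?info_hash=<url-encoded 20-byte digest>&peer_id=...&port=6881&left=0
# i.e. the tracker only ever handles this opaque digest and peer connection details,
# never file names or file content.
print(info_hash("example.torrent"))
```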

The net effect of non-compliance with SCPP was fairly dramatic and swift. Zer0day’s host took down the whole tracker instead and currently it remains offline. Whether it reappears depends on the site’s operator finding a suitable web host, but at the moment he says he has no idea where one will appear from.

“Currently I’m searching for some virtual private server as a temporary home for the tracker,” he concludes.

As mentioned in an earlier article detailing the problems sites like Zer0day.to face, trackers aren’t absolutely essential for the functioning of BitTorrent transfers. Nevertheless, their existence certainly improves matters for file-sharers so when they go down, millions can be affected.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Backblaze Release 5.1 – RMM Compatibility for Mass Deployments

Post Syndicated from Yev original https://www.backblaze.com/blog/rmm-for-mass-deployments/

diagram of Backblaze remote monitoring and management

Introducing Backblaze Computer Backup Release 5.1

This is a relatively minor release in terms of the core Backblaze Computer Backup service functionality, but is a big deal for Backblaze for Business as we’ve updated our Mac and PC clients to be RMM (Remote Monitoring and Management) compatible.

What Is New?

  • Updated Mac and PC clients to better handle large file uploads
  • Updated PC downloader to improve stability
  • Added RMM support for PC and Mac clients

What Is RMM?

RMM stands for “Remote Monitoring and Management.” It’s a way to administer computers that might be distributed geographically, without having access to the actual machine. If you are a systems administrator working with anywhere from a few distributed computers to a few thousand, you’re familiar with RMM and how it makes life easier.

The new clients allow administrators to deploy Backblaze Computer Backup through most “silent” installation/mass deployment tools. Two popular RMM tools are Munki and Jamf. We’ve written up knowledge base articles for both of these.

  • Learn more about Munki
  • Learn more about Jamf

Do I Need To Use RMM Tools?

No — unless you are a systems administrator or someone who is deploying Backblaze to a lot of people all at once, you do not have to worry about RMM support.

Release Version Number:

Mac:  5.1.0
PC:  5.1.0

Availability:

October 12, 2017

Upgrade Methods:

  • “Check for Updates” on the Backblaze Client (right click on the Backblaze icon and then select “Check for Updates”)
  • Download from: https://secure.backblaze.com/update.htm
  • Auto-update will begin in a couple of weeks

Updating Backblaze on Mac | Updating Backblaze on Windows

Questions:

If you have any questions, please contact Backblaze Support at www.backblaze.com/help.

The post Backblaze Release 5.1 – RMM Compatibility for Mass Deployments appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.