Tag Archives: ps

Canada’s Supreme Court Orders Google to Remove Search Results Worldwide

Post Syndicated from Andy original https://torrentfreak.com/canadas-supreme-court-orders-google-remove-search-results-worldwide-170629/

Back in 2014, the case of Equustek Solutions Inc. v. Jack saw two Canadian entities battle over stolen intellectual property used to manufacture competing products.

Google had no direct links to the case, yet it became embroiled when Equustek Solutions claimed that Google’s search results helped to send visitors to websites operated by the defendants (former Equustek employees) who were selling unlawful products.

Google voluntarily removed links to the sites from its Google.ca (Canada) results, but Equustek demanded a more comprehensive response. It got one.

In a ruling handed down by a court in British Columbia, Google was ordered to remove the infringing websites’ listings from its central database in the United States, meaning that the ruling had worldwide implications.

Google filed an appeal hoping for a better result, arguing that it does not operate servers in British Columbia, nor does it operate any local offices. It also questioned whether the injunction could be enforced outside Canada’s borders.

Ultimately, the British Columbia Court of Appeal disappointed the search giant. In a June 2015 ruling, the Court decided that Google does indeed do business in the region. It also found that a decision to restrict infringement was unlikely to offend any overseas nation.

“The plaintiffs have established, in my view, that an order limited to the google.ca search site would not be effective. I am satisfied that there was a basis, here, for giving the injunction worldwide effect,” Justice Groberman wrote.

Undeterred, Google took its case all the way to the Supreme Court of Canada, hoping to limit the scope of the injunction by arguing that it violates freedom of expression. That effort has now failed.

In a 7-2 majority decision released Wednesday, Google was branded a “determinative player” in facilitating harm to Equustek.

“This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders,” wrote Justice Rosalia Abella.

“We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.”

With Google now required to delist the sites on a global basis, the big question is what happens when other players attempt to apply the ruling to their particular business sector. Unsurprisingly that hasn’t taken long.

The International Federation of the Phonographic Industry (IFPI), which supported Equustek’s position in the long-running case, welcomed the decision and said that Google must “take on the responsibility” to ensure it does not direct users to illegal sites.

“Canada’s highest court has handed down a decision that is very good news for rights holders both in Canada and around the world. Whilst this was not a music piracy case, search engines play a prominent role in directing users to illegal content online including illegal music sites,” said IFPI CEO, Frances Moore.

“If the digital economy is to grow to its full potential, online intermediaries, including search engines, must play their part by ensuring that their services are not used to facilitate the infringement of intellectual property rights.”

Graham Henderson, President and CEO of Music Canada, which represents Sony, Universal, Warner and others, also welcomed the ruling.

“Today’s decision confirms that online service providers cannot turn a blind eye to illegal activity that they facilitate; on the contrary, they have an affirmative duty to take steps to prevent the Internet from becoming a black market,” Henderson said.

But for every voice of approval from groups like IFPI and Music Canada, others raised concerns over the scope of the decision and its potential to create a legal and political minefield. In particular, University of Ottawa professor Michael Geist raised a number of interesting scenarios.

“What happens if a Chinese court orders [Google] to remove Taiwanese sites from the index? Or if an Iranian court orders it to remove gay and lesbian sites from the index? Since local content laws differ from country to country, there is a great likelihood of conflicts,” Geist said.

But rather than painting Google as the loser in this battle, Geist believes the decision actually grants the search giant more power.

“When it comes to Internet jurisdiction, exercising restraint and limiting the scope of court orders is likely to increase global respect for the law and the effectiveness of judicial decisions. Yet this decision demonstrates what many have feared: the temptation for courts will be to assert jurisdiction over online activities and leave it to the parties to sort out potential conflicts,” Geist says.

“In doing so, the Supreme Court of Canada has lent its support to global takedowns and vested more power in Internet intermediaries, who may increasingly emerge as the arbiters of which laws to follow online.”

Only time will tell how Google will react, but it’s clear there will be plenty of entities ready to test the limits and scope of the company’s responses to the ruling.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-power-bundle-for-amazon-workspaces-more-vcpus-memory-and-storage/

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MatLab, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts, (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces):
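
For example, here is a hedged CLI sketch of launching a Power WorkSpace. The directory ID, user name, and bundle ID below are placeholders; you would first look up the actual Power bundle ID for your Region with describe-workspace-bundles:

aws workspaces describe-workspace-bundles --owner AMAZON
aws workspaces create-workspaces --workspaces "DirectoryId=d-1234567890,UserName=jdoe,BundleId=wsb-0123456789"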

One really interesting benefit to using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access will boost productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.
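
If you want to flip an existing WorkSpace between the two billing modes, the same property is exposed through the CLI; a minimal sketch (the WorkSpace ID is a placeholder):

aws workspaces modify-workspace-properties --workspace-id ws-0123456789 --workspace-properties RunningMode=AUTO_STOP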

Jeff;

Now Available – Developer Preview of AWS SDK for Java 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-developer-preview-of-aws-sdk-for-java-2-0/

The AWS Developer Tools Team has been hard at work on the AWS SDK for Java and is launching a Developer Preview of version 2.0 today.

This version is a major rewrite of the older, 1.11.x codebase. Built on top of Java 8 with a focus on consistency, immutability and ease of use, the new SDK includes frequently requested features such as support for non-blocking I/O and the ability to choose the desired HTTP implementation at runtime. The new non-blocking I/O support is more efficient than the existing, thread-based implementation of the Async variants of the service clients. Each non-blocking request returns a CompletableFuture object.

The version 2.0 SDK includes a number of changes to the earlier APIs. For example, it replaces the existing mix of client constructors and mutable methods with a consistent model based on client builders and immutable clients. The SDK also collapses the disparate collection of classes used to configure regions into a single Region class, and provides a new set of APIs for streaming.

The SDK is available on GitHub. You can send public feedback by opening GitHub issues and you can also send pull requests in the usual way.

To learn more about this SDK, read AWS SDK for Java 2.0 – Developer Preview on the AWS Developer Blog.

Jeff;

Validating AWS CloudFormation Templates

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/devops/validating-aws-cloudformation-templates/

For their continuous integration and continuous deployment (CI/CD) pipelines, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.

Scripted validation solution

For CloudFormation syntax validation, one option is to use the AWS CLI to call the validate-template command. For security and resource management, another approach is to run a Jenkins pipeline from an Amazon EC2 instance under an EC2 role that has been granted only the necessary permissions.
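
As a quick illustration, the syntax check from the CLI is a single call (the template path is a placeholder):

aws cloudformation validate-template --template-body file://my_cf.json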

What if you need more control over your CloudFormation templates, such as managing parameters or attributes? What if you have many development teams where permissions to the AWS environment required by one team are either too open or not open enough for another team?

To have more control over the contents of your CloudFormation template, you can use the cf-validator Python script, which shows you how to validate different template aspects. With this script, you can validate:

  • JSON syntax
  • IAM capabilities
  • Root tags
  • Parameters
  • CloudFormation resources
  • Attributes
  • Reference resources

You can download this script from the cf-validator GitHub repo. Use the following command to run the script:

python cf-validator.py

The script takes the following parameters (a complete example invocation follows the list):

  • --cf_path [Required]

    The location of the CloudFormation template in JSON format. Supported location types:

    • File system – Path to the CloudFormation template on the file system
    • Web – URL, for example, https://my-file.com/my_cf.json
    • Amazon S3 – Amazon S3 bucket, for example, s3://my_bucket/my_cf.json
  • --cf_rules [Required]

    The location of the JSON file with the validation rules. This parameter supports the same locations as --cf_path. The next section of this post has more information about defining rules.

  • --cf_res [Optional]

    The location of the JSON file with the defined AWS resources, which need to be confirmed before launching the CloudFormation template. A later section of this post has more information about resource validation.

  • --allow_cap [Optional][yes/no]

    Controls whether you allow the creation of IAM resources by the CloudFormation template, such as policies, roles, or IAM users. The default value is no.

  • --region [Optional]

    The AWS region where the existing resources were created. The default value is us-east-1.
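
Putting these parameters together, an example invocation that validates a template stored in S3 against a rule file and a resource file might look like this (the bucket and file names are placeholders):

python cf-validator.py --cf_path s3://my_bucket/my_cf.json --cf_rules s3://my_bucket/cf-rules.json --cf_res s3://my_bucket/cf-resources.json --allow_cap no --region us-east-1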

Defining rules

All rules are defined in the JSON format file. Rules consist of the following keys:

  • “allow_root_keys”

    Lists allowed root CloudFormation keys. Examples of root keys are Parameters, Resources, Outputs, and so on. An empty list means that any key is allowed.

  • “allow_parameters”

    Lists allowed CloudFormation parameters. For instance, to force each CloudFormation template to use only the set of parameters defined in your pipeline, list them under this key. An empty list means that any parameter is allowed.

  • “allow_resources”

    Lists the AWS resources allowed for creation by a CloudFormation template. The format of the resource is the same as resource types in CloudFormation, but without the “AWS::” prefix. Examples: EC2::Instance, EC2::Volume, and so on. If you allow the creation of all resources from the given group, you can use a wildcard. For instance, if you allow all resources related to CloudFormation, you can add CloudFormation::* to the list instead of typing CloudFormation::Init, CloudFormation::Stack, and so on. An empty list means that all resources are allowed.

  • “require_ref_attributes”

    Lists attributes (per resource) that have to be defined in CloudFormation. The value must be referenced and cannot be hardcoded. For instance, you can require that each EC2 instance must be created from a specific AMI where Image ID has to be a passed-in parameter. An empty list means that you are not requiring specific attributes to be present for a given resource.

  • “allow_additional_attributes”

    Lists additional attributes (per resource) that can be defined and have any value in the CloudFormation template. An empty list means that any additional attribute is allowed. If you specify additional attributes for this key, then any resource attribute defined in a CloudFormation template that is not listed in this key or in the require_ref_attributes key causes validation to fail.

  • “not_allow_attributes”

    Lists attributes (per resource) that are not allowed in the CloudFormation template. This key takes precedence over the require_ref_attributes and allow_additional_attributes keys.

Rule file example

The following is an example of a rule file:

{
  "allow_root_keys" : ["AWSTemplateFormatVersion", "Description", "Parameters", "Conditions", "Resources", "Outputs"],
  "allow_parameters" : [],
  "allow_resources" : [
    "CloudFormation::*",
    "CloudWatch::Alarm",
    "EC2::Instance",
    "EC2::Volume",
    "EC2::VolumeAttachment",
    "ElasticLoadBalancing::LoadBalancer",
    "IAM::Role",
    "IAM::Policy",
    "IAM::InstanceProfile"
  ],
  "require_ref_attributes" :
    {
      "EC2::Instance" : [ "InstanceType", "ImageId", "SecurityGroupIds", "SubnetId", "KeyName", "IamInstanceProfile" ],
      "ElasticLoadBalancing::LoadBalancer" : ["SecurityGroups", "Subnets"]
    },
  "allow_additional_attributes" : {},
  "not_allow_attributes" : {}
}

Validating resources

You can use the --cf_res parameter to validate that the resources you are planning to reference in the CloudFormation template exist and are available. As a value for this parameter, point to the JSON file with defined resources. The format should be as follows:

[
  { "Type" : "SG",
    "ID" : "sg-37c9b448A"
  },
  { "Type" : "AMI",
    "ID" : "ami-e7e523f1"
  },
  { "Type" : "Subnet",
    "ID" : "subnet-034e262e"
  }
]

Summary

At this moment, this CloudFormation template validation script supports only security groups, AMIs, and subnets. But anyone with some knowledge of Python and the boto3 package can add support for additional resource types, as needed.
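
As a rough illustration of what those existence checks amount to, here is the same idea expressed with the AWS CLI (the script itself uses boto3, and the IDs below are placeholders; each call exits with a non-zero code if the ID cannot be found):

aws ec2 describe-security-groups --group-ids sg-12345678 --region us-east-1
aws ec2 describe-images --image-ids ami-12345678 --region us-east-1
aws ec2 describe-subnets --subnet-ids subnet-12345678 --region us-east-1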

For more tips, please visit our AWS CloudFormation blog.

Operation ‘Pirate On Demand’ Blocks Pirate IPTV Portals

Post Syndicated from Andy original https://torrentfreak.com/operation-pirate-on-demand-blocks-pirate-iptv-portals-170628/

Via cheap set-top boxes, IPTV services (Internet Protocol TV) allow people to access thousands of live TV channels in their living rooms for a nominal fee.

Some of these services are available for just a few euros, dollars or pounds per month, often in HD quality.

While service levels can vary, some of the best also offer comprehensive Video On Demand (VOD), with hundreds and in some cases thousands of movies and TV shows on tap, supported by catch-up TV. Given their professional nature, the best IPTV products are proving a real thorn in the side for rights holders, who hope to charge ten times the money while delivering a lesser product.

As a result, crackdowns against IPTV providers, resellers and other people in the chain are underway across the world, and in Europe in particular. Today’s news comes from Italy, where Operation “Pirate On Demand” is hoping to make a dent in IPTV piracy.

The operation is being headed up by the Guardia di Finanza (GdF), a department under Italy’s Minister of Economy and Finance. Part of the Italian Armed Forces, GdF says it has targeted nine sites involved in the unlawful distribution of content offered officially by local media giants Mediaset and Sky.

The authorities received the assistance of a specialized team from the local anti-piracy group DCP, which operates on behalf of a broad range of entertainment industry companies.

According to GdF, a total of 89 servers were behind the portals which together delivered an estimated 178 terabytes of pirate content, ranging from TV shows and sports, to movies and children’s entertainment.

The nine portals are in the process of being blocked with some displaying the following message.

Seizure notice on the affected sites

The investigation began in September 2016 and was coordinated by Giangiacomo Pilia, the prosecutor at the Cagliari Court. Thus far, two people have been arrested.

A person arrested in the Varese area, who police believe is the commercial director of an illicit platform, has been charged with breaching copyright law.

A second individual arrested in Macerata is also suspected of copyright offenses, having technically managed the platform. Computer equipment, decoders, smart cards, and other electronic devices were also seized.

In addition to blocking various web portals, measures will now be taken to block the servers being used to supply the IPTV services. The GdF has also delivered a veiled threat to people who subscribed to the illicit services.

“It is also in the hands of investigators the position of those who have actively accessed the platforms by purchasing pirated subscriptions and thus benefiting by taking advantage,” GdF said.

The moves this week are the latest to take place under the Operation “Pirate On Demand” banner. Back in March, authorities moved to shut down and block 15 portals offering illegal IPTV access to Mediaset and Sky channels.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Desert To Data in 7 Days – Our New Phoenix Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/data-center-design/

We are pleased to announce that Backblaze is now storing some of our customers’ data in our newest data center in Phoenix. Our Sacramento facility was slated to store about 500 petabytes of data and was starting to fill up so it was time to expand. After visiting multiple locations in the US and Canada, we selected Phoenix as it had the right combination of power, networking, price and more that we were seeking. Let’s take you through the process of getting the Phoenix data center up and running.

Day 0 – Designing the Data Center

After we selected the Phoenix location as our next DC (data center), we had to negotiate the contract. We’re going to skip that part of the process because, unless you’re a lawyer, it’s a long, boring process. Let’s just say we wanted to be ready to move in once the contract was signed. That meant we had to gather up everything we needed and order a bunch of other things like networking equipment, racks, storage pods, cables, etc. We decided to use our Sacramento DC as the staging point and started gathering what was going to be needed in Phoenix.

In actuality, for some items we started the process several months ago as lead times for things like network switches, Storage Pods, and even hard drives can be measured in months and delays are normal. For example, depending on our move in date, the network providers we wanted would only be able to provide limited bandwidth, so we had to prepare for that possibility. It helps to have a procurement person who knows what they are doing, can work the schedule, and is creatively flexible – thanks Amanda.

So by Day 0, we had amassed multiple pallets of cabinets, network gear, PDUs, tools, hard drives, carts, Guido, and more. And yes, for all you Guido fans he is still with us and he now resides in Phoenix. Everything was wrapped and loaded into a 53-foot semi-truck that was driven the 755 miles (1,215 km) from Sacramento, California to Phoenix, Arizona.

Day 1 – Move In Day

We sent a crew of 5 people to Phoenix with the goal of going from empty space to being ready to accept data in one week. The truck from Sacramento arrived mid-morning, and work started on unloading and marshaling the pallets and boxes into one area, while the racks were placed near their permanent location on the DC floor.

Day 2 – Building the Racks

Day 2 was spent primarily working with the racks. First they were positioned in their precise locations on the data center floor. They were then anchored down and tied together. We started with 2 rows of twenty-two racks each, with twenty being for storage pods and two being for networking equipment. By the end of the week there would be 4 rows of racks installed.

Day 3 – Networking and Power, Part 1

While one team continued to work on the racks, another team began the process of getting the racks connected to the electricity and running the network cables to the network distribution racks. Once that was done, networking gear and rack-based PDUs (Power Distribution Units) were installed in the racks.

Day 4 – Rack Storage Pods

The truck from Sacramento brought 100 Storage Pods, a combination of 45 drive and 60 drive systems. Why did we use 45 drive units here? It has to do with the size (in racks and power) of the initial installation commitment and the ramp (increase) of installations over time. Contract stuff: boring yes, important yes. Basically, to optimize our spend we wanted to use as much of the initial space we were allotted as possible. Since we had a number of empty 45 drive chassis available in Sacramento we decided to put them to use.

Day 5 – Drive Day

Our initial set-up goal was to build out five Backblaze Vaults. Each Vault is comprised of twenty Storage Pods. Four of the Vaults were filled with 45 drive Storage Pods and one was filled with 60 drive Storage Pods. That’s 4,800 hard drives to install – thank goodness we don’t use those rubber bands around the drives anymore.

Day 6 – Networking and Power, Part 2

With the storage pods in place, Day 6 was spent routing network and power cables to the individual pods. A critical part of the process is to label every wire so you know where it comes from and where it goes to. Once labeled, wires are bundled together and secured to the racks in a standard pattern. Not only does this make things look neat, it standardizes where you’ll find each cable across the hundreds of racks that are in the DC.

Day 7 – Test, Repair, Test, Ready

With all the power and networking finished, it was time to test the installation. Most of the Storage Pods lit up with no problem, but a few failed. These failures were quickly dealt with, and one by one each Backblaze Vault was registered into our monitoring and administration systems. By the end of the day, all five Vaults were ready.

Moving Forward

The Phoenix data center was ready for operation except that the network carriers we wanted to use could only provide a limited amount of bandwidth to start. It would take a few more weeks before the final network lines would be provisioned and operational. Even with the limited bandwidth we kicked off the migration of customer data from Sacramento to Phoenix to help balance out the workload. A few weeks later, once the networking was sorted out, we started accepting external customer data.

We’d like to thank our data center build team for documenting their work in pictures and allowing us to share some of them with our readers.

Questions About Our New Data Center

Now that we have a second DC, you might have a few questions, such as can you store your data there and so on. Here’s the status of things today…

    Q: Does the new DC mean Backblaze has multi-region storage?
    A: Not yet. Right now we consider the Phoenix DC and the Sacramento DC to be in the same region.

    Q: Will you ever provide multi-region support?
    A: Yes, we expect to provide multi-region support in the future, but we don’t have a date for that capability yet.

    Q: Can I pick which data center will store my data?
    A: Not yet. This capability is part of our plans when we provide multi-region support.

    Q: Which data center is my data being stored in?
    A: Chances are that your data is in the Sacramento data center, given that it currently stores about 90% of our customers’ data.

    Q: Will my data be split across the two data centers?
    A: It is possible that one portion of your data will be stored in the Sacramento DC and another portion of your data will be stored in the Phoenix DC. This will be completely invisible to you and you should see no difference in storage or data retrieval times.

    Q: Can my data be replicated from one DC to the other?
    A: Not today. As noted above, your data will be in one DC or the other. That said, files uploaded to the Backblaze Vaults in either DC are stored redundantly across 20 Backblaze Storage Pods within that DC. This translates to 99.999999% durability for the data stored this way.

    Q: Do you plan on opening more data centers?
    A: Yes. We are actively looking for new locations.

If you have any additional questions, please let us know in the comments or on social media. Thanks.

The post Desert To Data in 7 Days – Our New Phoenix Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Milestone: 100 Million Certificates Issued

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/06/28/hundred-million-certs.html

Let’s Encrypt has reached a milestone: we’ve now issued more than 100,000,000 certificates. This number reflects at least a few things:

First, it illustrates the strong demand for our services. We’d like to thank all of the sysadmins, web developers, and everyone else managing servers for prioritizing protecting your visitors with HTTPS.

Second, it illustrates our ability to scale. I’m incredibly proud of the work our engineering teams have done to make this volume of issuance possible. I’m also very grateful to our operational partners, including IdenTrust, Akamai, and Sumo Logic.

Third, it illustrates the power of automated certificate management. If getting and managing certificates from Let’s Encrypt always required manual steps there is simply no way we’d be able to serve as many sites as we do. We’d like to thank our community for creating a wide range of clients for automating certificate issuance and management.
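
For instance, with a client such as Certbot, issuance and renewal can be reduced to commands like these (the domain and webroot path are placeholders):

certbot certonly --webroot -w /var/www/example -d example.com
certbot renew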

The total number of certificates we’ve issued is an interesting number, but it doesn’t reflect much about tangible progress towards our primary goal: a 100% HTTPS Web. To understand that progress we need to look at this graph:

Percentage of HTTPS Page Loads in Firefox.

When Let’s Encrypt’s service first became available, less than 40% of page loads on the Web used HTTPS. It took the Web 20 years to get to that point. In the 19 months since we launched, encrypted page loads have gone up by 18 percentage points, to nearly 58%. That’s an incredible rate of change for the Web. Contributing to this trend is what we’re most proud of.

If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

Here’s to the next 100,000,000 certificates, and a more secure and privacy-respecting Web for everyone!

Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The CloudFormation stack resource requires the templates of the child stacks to be stored in an S3 bucket. The location of the template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stacks (EC2 instances in an Auto Scaling group, elastic load balancer).
Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance type for web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance type for database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value:   DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable in the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. This updates the configuration files to use the location of the child template files. The location of the template files is provided as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?
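
The validate-env.py script itself lives in the codepipeline-nested-cfn repository. Purely as an illustration of the kind of conformance check it performs, a similar spot check could be done from the shell; the stack name, resource type, and expected value below are hypothetical:

INSTANCE_IDS=$(aws cloudformation describe-stack-resources --stack-name test-stack \
  --query "StackResources[?ResourceType=='AWS::EC2::Instance'].PhysicalResourceId" --output text)
# compare the reported instance type against the value expected in env-details.yml
aws ec2 describe-instances --instance-ids $INSTANCE_IDS --query "Reservations[].Instances[].InstanceType" --output text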

Upon successful completion of the test action, CloudFormation deletes the test stack and proceeds to the UAT stage in the pipeline.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
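
For reference, the create-and-execute change set flow that the pipeline’s CloudFormation actions perform corresponds roughly to these CLI calls (the stack name, change set name, and parameter value are placeholders; only one of the master stack’s parameters is shown):

aws cloudformation create-change-set --stack-name prod-stack --change-set-name prod-changes --template-body file://master-stack.yml --parameters ParameterKey=Environment,ParameterValue=prod --capabilities CAPABILITY_IAM
aws cloudformation execute-change-set --stack-name prod-stack --change-set-name prod-changes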

Setting up a continuous delivery pipeline

I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.

My repo contains the following files:

The continuous delivery pipeline is ready just seconds after clicking Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline successfully enables continuous delivery.


Implementing a change in nested stack

To make changes to a child stack in a nested stack (for example, to update a parameter value or add or change resources), update the master stack. The changes must be made in the appropriate template or configuration files and then checked in to the AWS CodeCommit repository. This triggers the following deployment process:

Conclusion

In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in codepipeline-nested-cfn GitHub repository.

About the author

Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

mkosi — A Tool for Generating OS Images

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/mkosi-a-tool-for-generating-os-images.html

Introducing mkosi

After blogging about casync I realized I never blogged about the
mkosi tool that combines nicely with it. mkosi has been around for a
while already, and it’s time to make it a bit better known. mkosi
stands for Make Operating System Image, and is a tool for precisely
that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite
well known and popular. But mkosi has a number of features that I
think make it interesting for a variety of use-cases that other tools
don’t cover that well.

What is mkosi?

What are those use-cases, and what exactly sets mkosi apart?
mkosi is definitely a tool with a focus on developer’s needs for
building OS images, for testing and debugging, but also for generating
production images with cryptographic protection. A typical use-case
would be to add a mkosi.default file to an existing project (for
example, one written in C or Python), and thus making it easy to
generate an OS image for it. mkosi will put together the image with
development headers and tools, compile your code in it, run your test
suite, then throw away the image again, and build a new one, this time
without development headers and tools, and install your build
artifacts in it. This final image is then “production-ready”, and only
contains your built program and the minimal set of packages you
configured otherwise. Such an image could then be deployed with
casync (or any other tool of course) to be delivered to your set of
servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on
today’s technology, not yesteryear’s. Specifically this means that
we’ll generate GPT partition tables, not MBR/DOS ones. When you tell
mkosi to generate a bootable image for you, it will make it bootable
on EFI, not on legacy BIOS. The GPT images generated follow
specifications such as the Discoverable Partitions Specification, so
that /etc/fstab can remain unpopulated and tools such as
systemd-nspawn can automatically dissect the image and boot from
them.

So, let’s have a look on the specific images it can generate:

  1. Raw GPT disk image, with ext4 as root
  2. Raw GPT disk image, with btrfs as root
  3. Raw GPT disk image, with a read-only squashfs as root
  4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
  5. A btrfs subvolume on disk, similar to the plain directory
  6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional
options are available:

  1. A swap partition may be added in
  2. The system may be made bootable on EFI systems
  3. Separate partitions for /home and /srv may be added in
  4. The root, /home and /srv partitions may be optionally encrypted with LUKS
  5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
  6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build
images based on the following Linux distributions:

  1. Fedora
  2. Debian
  3. Ubuntu
  4. ArchLinux
  5. openSUSE

Note though that not all distributions are supported at the same
feature level currently. Also, as mkosi is based on dnf
--installroot, debootstrap, pacstrap and zypper, and those
packages are not packaged universally on all distributions, you might
not be able to build images for all those distributions on arbitrary
host distributions. For example, Fedora doesn’t package zypper,
hence you cannot build an openSUSE image easily on Fedora, but you can
still build Fedora (obviously…), Debian, Ubuntu and ArchLinux images
on it just fine.

The GPT images are put together in a way that they aren’t just
compatible with UEFI systems, but also with VM and container managers
(that is, at least the smart ones, i.e. VM managers that know UEFI,
and container managers that grok GPT disk images) to a large
degree. In fact, the idea is that you can use mkosi to build a
single GPT image that may be used to:

  1. Boot on bare-metal boxes
  2. Boot in a VM
  3. Boot in a systemd-nspawn container
  4. Directly run a systemd service off it, using systemd’s RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used
if available to ensure the image is not tampered with (yes, you read
that right, systemd-nspawn and systemd’s RootImage= setting
automatically do dm-verity these days if the image has it.)

Mode of Operation

The simplest usage of mkosi is by simply invoking it without
parameters (as root):

# mkosi

Without any configuration this will create a GPT disk image for you,
will call it image.raw and drop it in the current directory. The
distribution used will be the same one as your host runs.

Of course in most cases you want more control about how the image is
put together, i.e. select package sets, select the distribution, size
partitions and so on. Most of that you can actually specify on the
command line, but it is recommended to instead create a couple of
mkosi.$SOMETHING files and directories in some directory. Then,
simply change to that directory and run mkosi without any further
arguments. The tool will then look in the current working directory
for these files and directories and make use of them (similar to how
make looks for a Makefile…). Every single file/directory is
optional, but if they exist they are honored. Here’s a list of the
files/directories mkosi currently looks for:

  1. mkosi.default — This is the main configuration file, here you
    can configure what kind of image you want, which distribution, which
    packages and so on.

  2. mkosi.extra/ — If this directory exists, then mkosi will copy
    everything inside it into the images built. You can place arbitrary
    directory hierarchies in here, and they’ll be copied over whatever is
    already in the image, after it was put together by the distribution’s
    package manager. This is the best way to drop additional static files
    into the image, or override distribution-supplied ones.

  3. mkosi.build — This executable file is supposed to be a build
    script. When it exists, mkosi will build two images, one after the
    other in the mode already mentioned above: the first version is the
    build image, and may include various build-time dependencies such as
    a compiler or development headers. The build script is also copied
    into it, and then run inside it. The script should then build
    whatever shall be built and place the result in $DESTDIR (don’t
    worry, popular build tools such as Automake or Meson all honor
    $DESTDIR anyway, so there’s not much to do here explicitly). It may
    also run a test suite, or anything else you like. After the script
    finished, the build image is removed again, and a second image (the
    final image) is built. This time, no development packages are
    included, and the build script is not copied into the image again —
    however, the build artifacts from the first run (i.e. those placed in
    $DESTDIR) are copied into the image.

  4. mkosi.postinst — If this executable script exists, it is invoked
    inside the image (inside a systemd-nspawn invocation) and can
    adjust the image as it likes at a very late point in the image
    preparation. If mkosi.build exists, i.e. the dual-phased
    development build process used, then this script will be invoked
    twice: once inside the build image and once inside the final
    image. The first parameter passed to the script clarifies which phase
    it is run in.

  5. mkosi.nspawn — If this file exists, it should contain a
    container configuration file for systemd-nspawn (see
    systemd.nspawn(5) for details), which shall be shipped along with
    the final image and
    shall be included in the check-sum calculations (see below).

  6. mkosi.cache/ — If this directory exists, it is used as package
    cache directory for the builds. This directory is effectively bind
    mounted into the image at build time, in order to speed up building
    images. The package installers of the various distributions will
    place their package files here, so that subsequent runs can reuse
    them.

  7. mkosi.passphrase — If this file exists, it should contain a
    pass-phrase to use for the LUKS encryption (if that’s enabled for the
    image built). This file should not be readable to other users.

  8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an
    X.509 key pair to use for signing the kernel and initrd for UEFI
    SecureBoot, if that’s enabled.

How to use it

So, let’s come back to our most trivial example, without any of the
mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create a build file image.raw in the current
directory. How do we use it? Of course, we could dd it onto some USB
stick and boot it on a bare-metal device. However, it’s much simpler
to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up, and just work for you.

Now, let’s make things more interesting. Let’s still not use any of
the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it’s no
longer GPT + ext4, but GPT + btrfs. Moreover, the system is made
bootable on UEFI systems, and finally, the output is now called
foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except
that this uses full VM virtualization rather than container
virtualization. (Note that the way to run a UEFI qemu/kvm instance
appears to change all the time and is different on the various
distributions. It’s quite annoying, and I can’t really tell you what
the right qemu command line is to make this work on your system.)

Of course, it’s not all raw GPT disk images with mkosi. Let’s try
a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as plain directory you can’t boot
it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask
mkosi to generate a compressed GPT image with a root squashfs,
compress the result with xz, and generate a SHA256SUMS file with
the hashes of the generated artifacts. The image will contain the
SSH client as well as everybody’s favorite editor.

Now, let’s make use of the various mkosi.$SOMETHING files. Let’s
say we are working on some Automake-based project and want to make it
easy to generate a disk image off the development tree with the
version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let’s add a build script:

# cat > mkosi.build <<EOF
#!/bin/sh
cd $SRCDIR
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
make install
EOF
# chmod +x mkosi.build

And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi

Let’s try it out:

# systemd-nspawn -bi image.raw

Of course, if you do this you’ll notice that building an image like
this can be quite slow. And slow build times are actively hurtful to
your productivity as a developer. Hence let’s make things a bit
faster. First, let’s make use of a package cache shared between runs:

# mkdir mkosi.cache

Building images now should already be substantially faster (and
generate less network traffic) as the packages will now be downloaded
only once and reused. However, you’ll notice that unpacking all those
packages and the rest of the work is still quite slow. But mkosi can
help you with that. Simply use mkosi‘s incremental build feature. In
this mode mkosi will make a copy of the build and final images
immediately before dropping in your build sources or artifacts, so
that building an image becomes a lot quicker: instead of always
starting totally from scratch a build will now reuse everything it can
reuse from a previous run, and immediately begin with building your
sources rather than the build image to build your sources in. To
enable the incremental build feature use -i:

# mkosi -i

Note that if you use this option, the package list is not updated
anymore from your distribution’s servers, as the cached copy is made
after all packages are installed, and hence until you actually delete
the cached copy the distribution’s network servers aren’t contacted
again and no RPMs or DEBs are downloaded. This means the distribution
you use becomes “frozen in time” this way. (Which might be a bad
thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you’ll notice that it
won’t overwrite the generated image when it already exists. You can
either delete the file yourself first (rm image.raw) or let mkosi
do it for you right before building a new image, with mkosi -f. You
can also tell mkosi to not only remove any such pre-existing images,
but also remove any cached copies of the incremental feature, by using
-f twice.

I wrote mkosi originally in order to test systemd, and quickly
generate a disk image of various distributions with the most current
systemd version from git, without all that affecting my host system. I
regularly use mkosi for that today, in incremental mode. The two
commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw

And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw

The latter I use only if I want to regenerate everything based on the
very newest set of RPMs provided by Fedora, instead of a cached
snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git
tree:
mkosi.default
and
mkosi.build. This
way, any developer who wants to quickly test something with current
systemd git, or wants to prepare a patch based on it and test it can
check out the systemd repository and simply run mkosi in it and a
few minutes later he has a bootable image he can test in
systemd-nspawn or KVM. casync has similar files:
mkosi.default,
mkosi.build.

Random Interesting Features

  1. As mentioned already, mkosi will generate dm-verity enabled
    disk images if you ask for it. For that use the --verity switch on
    the command line or Verity= setting in mkosi.default. Of course,
    dm-verity implies that the root volume is read-only. In this mode
    the top-level dm-verity hash will be placed along-side the output
    disk image in a file named the same way, but with the .roothash
    suffix. If the image is to be created bootable, the root hash is also
    included on the kernel command line in the roothash= parameter,
    which current systemd versions can use to both find and activate the
    root partition in a dm-verity protected way. BTW: it’s a good idea
    to combine this dm-verity mode with the raw_squashfs image mode,
    to generate a genuinely protected, compressed image suitable for
    running in your IoT device.

  2. As indicated above, mkosi can automatically create a checksum
    file SHA256SUMS for you (--checksum) covering all the files it
    outputs (which could be the image file itself, a matching .nspawn
    file using the mkosi.nspawn file mentioned above, as well as the
    .roothash file for the dm-verity root hash). It can then
    optionally sign this with gpg (--sign). Note that systemd’s
    machinectl pull-tar and machinectl pull-raw commands can download
    these files and the SHA256SUMS file automatically and verify things
    on download. In other words: what mkosi outputs is perfectly
    ready for downloads using these two systemd commands. (A small
    verification sketch follows at the end of this list.)

  3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To
    make use of that, place your X.509 key pair in two files
    mkosi.secureboot.crt and mkosi.secureboot.key, and set
    SecureBoot= or --secure-boot. If so, mkosi will sign the
    kernel/initrd/kernel command line combination during the build. Of
    course, if you use this mode, you should also use
    Verity=/--verity=, otherwise the setup makes only partial
    sense. Note that mkosi will not help you with actually enrolling
    the keys you use in your UEFI BIOS.

  4. mkosi has minimal support for git checkouts: when it recognizes
    it is run in a git checkout and you use the mkosi.build script
    stuff, the source tree will be copied into the build image, but with
    all files excluded by .gitignore removed.

  5. There’s support for encryption in place. Use --encrypt= or
    Encrypt=. Note that the UEFI ESP is never encrypted though, and the
    root partition only if explicitly requested. The /home and /srv
    partitions are unconditionally encrypted if that’s enabled.

  6. Images may be built with all documentation removed.

  7. The password for the root user and additional kernel command line
    arguments may be configured for the image to generate.
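
As a quick illustration of what that checksum file buys you on the
consuming side, here is a small, self-contained Python sketch that
verifies all files listed in a SHA256SUMS file. It assumes the file
uses the usual sha256sum line format of "<hex digest>  <filename>";
in practice, sha256sum -c does the same job.

import hashlib
import sys

def verify(sums_file='SHA256SUMS'):
    """Check every file listed in a SHA256SUMS file against its actual
    SHA256 digest. Returns True if everything matches."""
    all_ok = True
    with open(sums_file) as f:
        for line in f:
            expected, name = line.split(maxsplit=1)
            name = name.strip().lstrip('*')  # sha256sum marks binary mode with '*'
            h = hashlib.sha256()
            with open(name, 'rb') as data:
                for chunk in iter(lambda: data.read(1 << 20), b''):
                    h.update(chunk)
            if h.hexdigest() != expected:
                print('MISMATCH:', name)
                all_ok = False
    return all_ok

if __name__ == '__main__':
    sys.exit(0 if verify() else 1)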

Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies,
listed in the
README. Most
notably you need a somewhat recent systemd version to make use of its
full feature set: systemd 233. Older mkosi versions are already
packaged for various distributions, but much of what I describe above
is only available in the most recent release, mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn’t
available in Fedora, but there’s a COPR.

Future

It is my intention to continue turning mkosi into a tool suitable
for:

  1. Testing and debugging projects
  2. Building images for secure devices
  3. Building portable service images
  4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and
systemd/sd-boot native support for A/B IoT style partition
setups. The idea is that the combination of systemd, casync and
mkosi provides generic building blocks for building secure,
auto-updating devices in a generic way, even though all pieces may be
used individually, too.

FAQ

  1. Why are you reinventing the wheel again? This is exactly like
    $SOMEOTHERPROJECT!
    — Well, to my knowledge there’s no tool that
    integrates this nicely with your project’s development tree, and can
    do dm-verity and UEFI SecureBoot and all that stuff for you. So
    nope, I don’t think this is exactly like $SOMEOTHERPROJECT, thank you
    very much.

  2. What about creating MBR/DOS partition images? — That’s really
    out of focus for me. This is an exercise in figuring out how generic
    OSes and devices in the future should be built and an attempt to
    commoditize OS image building. And no, the future doesn’t speak MBR,
    sorry. That said, I’d be quite interested in adding support for
    booting on Raspberry Pi, possibly using a hybrid approach, i.e. using
    a GPT disk label, but arranging things in a way that the Raspberry Pi
    boot protocol (which is built around DOS partition tables) can still
    work.

  3. Is this portable? — Well, that depends on what you mean by
    portable. No, this tool runs on Linux only, and as it uses
    systemd-nspawn during the build process it doesn’t run on
    non-systemd systems either. But then again, you should be able to
    create images for any architecture you like with it. But of course,
    if you want the image to be bootable on bare metal, only systems
    doing UEFI are supported (though systemd-nspawn should still work
    fine on them).

  4. Where can I get this stuff? — Try
    GitHub. And some distributions
    carry packaged versions, but I think none of them ship the current
    v3 yet.

  5. Is this a systemd project? — Yes, it’s hosted under the
    systemd GitHub umbrella. And yes,
    during run-time systemd-nspawn in a current version is required. But
    no, the code bases are otherwise separate, if only because systemd
    is a C project, and mkosi is written in Python.

  6. Requiring systemd 233 is a pretty steep requirement, no?
    — Yes, but the feature we need kind of matters (systemd-nspawn’s
    --overlay= switch), and again, this isn’t supposed to be a tool for
    legacy systems.

  7. Can I run the resulting images in LXC or Docker? — Hmm, I am
    neither an LXC nor a Docker guy. If you select directory or subvolume
    as image type, LXC should be able to boot the generated images just
    fine, but I didn’t try. Last time I looked, Docker doesn’t permit
    running proper init systems as PID 1 inside the container, as they
    define their own run-time without intention to emulate a proper
    system. Hence, no I don’t think it will work, at least not with an
    unpatched Docker version. That said, again, don’t ask me questions
    about Docker, it’s not precisely my area of expertise, and quite
    frankly I am not a fan. To my knowledge neither LXC nor Docker are
    able to run containers directly off GPT disk images, hence the
    various raw_xyz image types are definitely not compatible with
    either. That means if you want to generate a single raw disk image
    that can be booted unmodified both in a container and on bare-metal,
    then systemd-nspawn is the container manager to go for
    (specifically, its -i/--image= switch).

Should you care? Is this a tool for you?

Well, that’s up to you really.

If you hack on some complex project and need a quick way to compile
and run your project on a specific current Linux distribution, then
mkosi is an excellent way to do that. Simply drop the mkosi.default
and mkosi.build files in your git tree and everything will be
easy. (And of course, as indicated above: if the project you are
hacking on happens to be called systemd or casync be aware that
those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great
choice too, as it will make it reasonably easy to generate secure
images that are protected against offline modification, by using
dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a
VM or systemd-nspawn container, or a portable service then mkosi
is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd
init systems, old VM managers, Docker, … then no, mkosi is not for
you, but there are plenty of well-established alternatives around that
cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to
accept your patches and other contributions.

Oh, and one unrelated last thing: don’t forget to submit your talk
proposal and/or buy a ticket for
All Systems Go! 2017 in Berlin — the
conference where things like systemd, casync and mkosi are
discussed, along with a variety of other Linux userspace projects used
for building systems.

Cox: Supreme Court Suggests That Pirates Shouldn’t Lose Internet Access

Post Syndicated from Ernesto original https://torrentfreak.com/cox-supreme-court-suggests-that-pirates-shouldnt-lose-internet-access-170627/

In December 2015, a Virginia federal jury held Internet provider Cox Communications responsible for the copyright infringements of its subscribers.

The ISP refused to disconnect alleged pirates and was found guilty of willful contributory copyright infringement. In addition, it was ordered to pay music publisher BMG Rights Management $25 million in damages.

Cox has since filed an appeal and this week it submitted an additional piece of evidence from the US Supreme Court, stating that this strongly supports its side of the argument.

Last week the Supreme Court issued an important ruling in Packingham v. North Carolina, finding that it’s unconstitutional to bar convicted sex offenders from social media. The Court described the Internet as an important tool for people to exercise free speech rights.

While nothing in the ruling refers to online piracy, it could turn out to be crucial in the case between Cox and BMG. The Internet provider now argues that if convicted criminals have the right to use the Internet, accused file-sharers should have it too.

“Packingham is directly relevant to what constitute ‘appropriate circumstances’ to terminate Internet access to Cox’s customers. The decision emphatically establishes the centrality of Internet access to protected First Amendment activity..,” Cox writes in its filing at the Court of Appeals.

“As the Court recognized, Internet sources are often ‘the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge’.”

Citing the Supreme Court ruling, Cox notes that the Government “may not suppress lawful speech as the means to suppress unlawful speech.” This would be the case if entire households lost Internet access because a copyright holder accused someone of repeated copyright infringements.

“The Court’s analysis strongly suggests that at least intermediate scrutiny must apply to any law that purports to restrict the ability of a class of persons to access the Internet,” the ISP writes (pdf).

In its case against BMG, Cox was held liable because it failed to take appropriate action against frequent pirates, solely based on allegations of piracy monitoring outfit Rightscorp. Cox doesn’t believe these one-sided complaints should be enough for people to be disconnected from the Internet.

If convicted sex offenders still have the right to use social media, accused pirates should not be barred from the Internet on a whim, the argument goes.

“And if it offends the Constitution to cut off a portion of Internet access to convicted criminals, then the district court’s erroneous interpretation of Section 512(i) of the DMCA — which effectively invokes the state’s coercive power to require ISPs to terminate all Internet access to merely accused infringers — cannot stand,” Cox writes.

Whether the Court of Appeals will agree has yet to be seen, but with the stakes at hand this issue is far from resolved. In addition to the case between BMG and Cox, the MPAA recently filed a lawsuit against Grande Communications, which centers around the same issue.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

By Suhas Sadanandan, Director of Engineering 

When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.

New Yahoo Mail Tech Stack

In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:

Performance

A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.

Reliability

With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

Product agility and launch velocity

We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

Developer effectiveness and satisfaction

In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasise predictable flows and easy debugging.

Accessibility

The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

Open source 

We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

Backblaze B2, Cloud Storage on a Budget: One Year Later

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-cloud-storage-on-a-budget-one-year-later/

B2 Cloud Storage Review

A year ago, Backblaze B2 Cloud Storage came out of beta and became available for everyone to use. We were pretty excited, even though it seemed like everyone and their brother had a cloud storage offering. Now that we are a year down the road let’s see how B2 has fared in the real world of tight budgets, maxed-out engineering schedules, insanely funded competition, and more. Spoiler alert: We’re still pretty excited…

Cloud Storage on a Budget

There are dozens of companies offering cloud storage and the landscape is cluttered with incomprehensible pricing models, cleverly disguised transfer and download charges, and differing levels of service that seem to be driven more by marketing departments than customer needs.

Backblaze B2 keeps things simple: A single performant level of service, a single affordable price for storage ($0.005/GB/month), a single affordable price for downloads ($0.02/GB), and a single list of transaction charges – all on a single pricing page.

Who’s Using B2?

By making cloud storage affordable, companies and organizations now have a way to store their data in the cloud and still be able to access and restore it as quickly as needed. You don’t have to choose between price and performance. Here are a few examples:

  • Media & Entertainment: KLRU-TV, Austin PBS, is using B2 to preserve their video catalog of the world-renowned musical anthology series, Austin City Limits.
  • LTO Migration: The Girl Scouts San Diego were able to move their daily incremental backups from LTO tape to the cloud, saving money and time, while helping automate their entire backup process.
  • Cloud Migration: Vintage Aerial found it cost-effective to discard their internal data server and store their unique hi-resolution images in B2 Cloud Storage.
  • Backup: Ahuja and Clark, a boutique accounting firm, was able to save over 80% on the cost to back up all their corporate and client data.

How is B2 Being Used?

B2 Cloud Storage can be accessed in four ways: using the Web GUI, using the CLI, using the API library, and using a product or service integrated with B2. While many customers are using the Web GUI, CLI and API to store and retrieve data, the most prolific use of B2 occurs via our integration partners. Each integration partner has certified they have met our best practices for integrating to B2 and we’ve tested each of the integrations submitted to us. Here are a few of the highlights.

  • NAS Devices – Synology and QNAP have integrations which allow their NAS devices to sync their data to/from B2.
  • Backup and Sync – CloudBerry, GoodSync, and Retrospect are just a few of the services that can backup and/or sync data to/from B2.
  • Hybrid Cloud – 45 Drives and OpenIO are solutions that allow you to setup and operate a hybrid data storage cloud environment.
  • Desktop Apps – CyberDuck, MountainDuck, Dropshare, and more give users an easy way to store and use data in B2 right from their desktop.
  • Digital Asset Management – Cantemo, Cubix, CatDV, and axle Video, let you catalog your digital assets and then store them in B2 for fast retrieval when they are needed.

If you have an application or service that stores data in the cloud and it isn’t integrated with Backblaze B2, then your customers are probably paying too much for cloud storage.

What’s New in B2?

B2 Fireball – our rapid data ingest service. We send you a storage device, and you load it up with up to 40 TB of data and send it back, then we load the data into your B2 account. The cost is $550 per trip plus shipping. Save your network bandwidth with the B2 Fireball.

Lowered the download price – When we introduced B2, we set the price to download a gigabyte of data to be $0.05/GB – the same as most competitors. A year in, we reevaluated the price based on usage and decided to lower the price to $0.02/GB.

B2 User Groups – Backblaze Groups functionality is now available in B2. An administrator can invite users to a B2 centric Group to centralize the storage location for that group of users. For example, multiple members of a department working on a project will be able to archive their work-in-process activities into a single B2 bucket.

Time Machine backup – You may know that you can use your Synology NAS as the destination for your Time Machine backup. With B2 you can also sync your Synology NAS to B2 for a true 3-2-1 backup solution. If your system crashes or is lost, you can restore your Time Machine image directly from B2 to your new machine.

Life Cycle Rules – Create rules that allow you to manage the length of time deleted files will remain in your B2 bucket before they are deleted. A great option for managing the cleanup of outdated file versions to save on storage costs.

Large Files – In the B2 Web GUI you can upload files as large as 500 MB using either the upload or drag-and-drop functionality. The B2 CLI and API support the ability to upload/download files as large as 10 TB.

5 MB file part size – When working with large files, the minimum file part size can now be set as low as 5 MB versus the previous low setting of 100 MB. Now the range of a file part when working with large files can be from 5 MB to 5 GB. This increases the throughput of your data uploads and downloads.

SHA-1 at the end – This feature allows you to compute the SHA-1 checksum and append it to the end of the request body versus doing the computation before the file is sent. This is especially useful for those applications which stream data to/from B2.
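
To make that concrete, here is a small Python sketch of the idea, assuming (per B2’s API documentation) that the upload request declares the header value hex_digits_at_end for X-Bz-Content-Sha1 and that Content-Length covers the data plus the 40 trailing digest characters. The helper names are mine and not part of any B2 SDK; treat this as an illustration rather than a drop-in client.

import hashlib

def body_with_trailing_sha1(chunks):
    """Yield the data chunks unchanged, then yield the 40-character hex SHA-1
    digest as the final piece of the request body, so the checksum is computed
    while streaming instead of up front."""
    sha1 = hashlib.sha1()
    for chunk in chunks:
        sha1.update(chunk)
        yield chunk
    yield sha1.hexdigest().encode('ascii')

def file_chunks(path, size=1 << 20):
    """Read a local file in 1 MiB pieces; any iterable of bytes works."""
    with open(path, 'rb') as f:
        while True:
            piece = f.read(size)
            if not piece:
                return
            yield piece

# Feed body_with_trailing_sha1(file_chunks('backup.tar')) to your HTTP client
# of choice as the request body for the upload call.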

Cache-Control – When data is downloaded from B2 into a browser, the length of time the file remains in the browser cache can be set at the bucket level using the b2_create_bucket and b2_update_bucket API calls. Setting this policy is optional.

Customized delimiters – Used in the API, this allows you to specify a delimiter to use for a given purpose. A common use is to set a delimiter in the file name string. Then use that delimiter to detect a folder name within the string.

Looking Ahead

Over the past year we added nearly 30,000 new B2 customers to the fold and are welcoming more and more each day as B2 continues to grow. We have plans to expand our storage footprint by adding more data centers as we look forward to moving towards a multi-region environment.

For those of you who are B2 customers – thank you for helping build B2. If you have an interesting way you are using B2, tell us in the comments below.

The post Backblaze B2, Cloud Storage on a Budget: One Year Later appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Scratch 2.0: all-new features for your Raspberry Pi

Post Syndicated from Rik Cross original https://www.raspberrypi.org/blog/scratch-2-raspberry-pi/

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to a GPIO pin is now:

Flashing an LED with Scratch 2.0

To react to a button connected to a GPIO pin, simply set the pin as input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Responding to a button press on Scratch 2.0
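
For readers who are more comfortable with text-based code, here is a rough Python equivalent of those two scripts, assuming the gpiozero library is installed and using illustrative pin numbers (an LED on GPIO 17 and a button on GPIO 2); adjust the numbers to match your wiring.

from time import sleep
from gpiozero import LED, Button

led = LED(17)      # illustrative pin numbers, match them to your wiring
button = Button(2)

led.blink(on_time=1, off_time=1)   # flash the LED repeatedly, like the Scratch loop

while True:
    if button.is_pressed:          # like the Scratch cat saying "Pressed"
        print("Pressed")
    sleep(0.1)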

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used — in this case to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0
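
As a rough comparison for anyone coming from text-based languages, the same idea of a parameterised block can be expressed as a Python function, here using the standard turtle module.

import turtle

def draw_shape(sides, length):
    """Draw a regular polygon with the given number of sides and side length,
    a text-based analogue of the parameterised Scratch block."""
    for _ in range(sides):
        turtle.forward(length)
        turtle.right(360 / sides)

draw_shape(5, 100)   # a pentagon
draw_shape(8, 60)    # an octagon
turtle.done()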

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together a guide for getting started with Scratch 2.0 on the Raspberry Pi online (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

Separating the Paranoid from the Hacked

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/separating_the_.html

Sad story of someone whose computer became owned by a griefer:

The trouble began last year when he noticed strange things happening: files went missing from his computer; his Facebook picture was changed; and texts from his daughter didn’t reach him or arrived changed.

“Nobody believed me,” says Gary. “My wife and my brother thought I had lost my mind. They scheduled an appointment with a psychiatrist for me.”

But he built up a body of evidence and called in a professional cybersecurity firm. It found that his email addresses had been compromised, his phone records hacked and altered, and an entire virtual internet interface created.

“All my communications were going through a man-in-the-middle unauthorised server,” he explains.

It’s the “psychiatrist” quote that got me. I regularly get e-mails from people explaining in graphic detail how their whole lives have been hacked. Most of them are just paranoid. But a few of them are probably legitimate. And I have no way of telling them apart.

This problem isn’t going away. As computers permeate even more aspects of our lives, it’s going to get even more debilitating. And we don’t have any way, other than hiring a “professional cybersecurity firm,” of telling the paranoids from the victims.

Cybercrime Officials Shutdown Large eBook Portal, Three Arrested

Post Syndicated from Andy original https://torrentfreak.com/cybercrime-officials-shutdown-large-ebook-portal-three-arrested-170626/

Back in February 2015, German anti-piracy outfit GVU filed a complaint against the operators of large eBook portal Lul.to.

Targeted mainly at the German audience, the site carried around 160,000 eBooks, 28,000 audiobooks, plus newspapers and periodicals. Its motto was “Read and Listen” and it claimed to be both the largest German eBook portal and the largest DRM-free platform in the world.

Unlike most file-sharing sites, Lul.to charged around 30,000 customers a small fee to access content, around $0.23 per download. However, all that came to an end last week when authorities moved to shut the platform down.

According to the General Prosecutor’s Office, searches in several locations led to the discovery of around 55,000 euros in bitcoin, 100,000 euros in bank deposits, 10,000 euros in cash, plus a “high-quality” motorcycle.

As is often the case following significant action, the site has been completely taken down and now displays the following seizure notice.

Lul.to seized (translated from German)

Authorities report that three people were arrested and are being detained while investigations continue.

It is not yet clear how many times the site’s books were downloaded by users but investigators believe that the retail value of the content offered on the site was around 392,000 euros. By volume, investigators seized more than 11 terabytes of data.

The German Publishers & Booksellers Association welcomed the shutdown of the platform.

“Intervening against lul.to is an important success in the fight against Internet piracy. By blocking one of the largest illegal providers for e-books and audiobooks, many publishers and retailers can breathe,” said CEO Alexander Skipis.

“Piracy is not an excusable offense, it’s the theft of intellectual property, which is the basis for the work of authors, publishers, and bookshops. Portals like lul.to harm the media market massively. The success of the investigation is another example of the fact that such illegal models ultimately can not hold up.”

Last week in a separate case in Denmark, three men aged between 26 and 71 were handed suspended sentences for offering subscription access to around 198 pirate textbooks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Top 10 Most Pirated Movies of The Week on BitTorrent – 06/26/17

Post Syndicated from Ernesto original https://torrentfreak.com/top-10-pirated-movies-week-bittorrent-062617/

This week we have two newcomers in our chart.

Kong: Skull Island is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the weekly movie download chart.

This week’s most downloaded movies are:
Most downloaded movies via torrents

Rank | Rank last week | Movie name | IMDb Rating / Trailer
1 | (…) | Kong: Skull Island | 6.9 / trailer
2 | (…) | King Arthur: Legend of the Sword | 7.2 / trailer
3 | (1) | Wonder Woman (TC) | 8.2 / trailer
4 | (3) | The Fate of the Furious | 6.7 / trailer
5 | (8) | The Mummy 2017 (HDTS) | 5.8 / trailer
6 | (2) | Power Rangers | 6.5 / trailer
7 | (5) | The Boss Baby | 6.5 / trailer
8 | (4) | Chips | 5.8 / trailer
9 | (6) | John Wick: Chapter 2 | 8.0 / trailer
10 | (9) | Logan | 8.6 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Synchronizing Amazon S3 Buckets Using AWS Step Functions

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/synchronizing-amazon-s3-buckets-using-aws-step-functions/

Constantin Gonzalez is a Principal Solutions Architect at AWS

In my free time, I run a small blog that uses Amazon S3 to host static content and Amazon CloudFront to distribute it world-wide. I use a home-grown, static website generator to create and upload my blog content onto S3.

My blog uses two S3 buckets: one for staging and testing, and one for production. As a website owner, I want to update the production bucket with all changes from the staging bucket in a reliable and efficient way, without having to create and populate a new bucket from scratch. Therefore, to synchronize files between these two buckets, I use AWS Lambda and AWS Step Functions.

In this post, I show how you can use Step Functions to build a scalable synchronization engine for S3 buckets and learn some common patterns for designing Step Functions state machines while you do so.

Step Functions overview

Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly.

While this particular example focuses on synchronizing objects between two S3 buckets, it can be generalized to any other use case that involves coordinated processing of any number of objects in S3 buckets, or other, similar data processing patterns.

Bucket replication options

Before I dive into the details on how this particular example works, take a look at some alternatives for copying or replicating data between two Amazon S3 buckets:

  • The AWS CLI provides customers with a powerful aws s3 sync command that can synchronize the contents of one bucket with another.
  • S3DistCP is a powerful tool for users of Amazon EMR that can efficiently load, save, or copy large amounts of data between S3 buckets and HDFS.
  • The S3 cross-region replication functionality enables automatic, asynchronous copying of objects across buckets in different AWS regions.

In this use case, you are looking for a slightly different bucket synchronization solution that:

  • Works within the same region
  • Is more scalable than a CLI approach running on a single machine
  • Doesn’t require managing any servers
  • Uses a more finely grained cost model than the hourly based Amazon EMR approach

You need a scalable, serverless, and customizable bucket synchronization utility.

Solution architecture

Your solution needs to do three things:

  1. Copy all objects from a source bucket into a destination bucket, but leave out objects that are already present, for efficiency.
  2. Delete all "orphaned" objects from the destination bucket that aren’t present on the source bucket, because you don’t want obsolete objects lying around.
  3. Keep track of all objects for #1 and #2, regardless of how many objects there are.

In the beginning, you read in the source and destination buckets as parameters and perform basic parameter validation. Then, you operate two separate, independent loops, one for copying missing objects and one for deleting obsolete objects. Each loop is a sequence of Step Functions states that read in chunks of S3 object lists and use the continuation token to decide in a choice state whether to continue the loop or not.

This solution is based on the following architecture that uses Step Functions, Lambda, and two S3 buckets:

As you can see, this setup involves no servers, just two main building blocks:

  • Step Functions manages the overall flow of synchronizing the objects from the source bucket with the destination bucket.
  • A set of Lambda functions carry out the individual steps necessary to perform the work, such as validating input, getting lists of objects from source and destination buckets, copying or deleting objects in batches, and so on.

To understand the synchronization flow in more detail, look at the Step Functions state machine diagram for this example.

Walkthrough

Here’s a detailed discussion of how this works.

To follow along, use the code in the sync-buckets-state-machine GitHub repo. The code comes with a ready-to-run deployment script in Python that takes care of all the IAM roles, policies, Lambda functions, and of course the Step Functions state machine deployment using AWS CloudFormation, as well as instructions on how to use it.

Fine print: Use at your own risk

Before I start, here are some disclaimers:

  • Educational purposes only.

    The following example and code are intended for educational purposes only. Make sure that you customize, test, and review it on your own before using any of this in production.

  • S3 object deletion.

    In particular, using the code included below may delete objects on S3 in order to perform synchronization. Make sure that you have backups of your data. In particular, consider using the Amazon S3 Versioning feature to protect yourself against unintended data modification or deletion.

Step Functions execution starts with an initial set of parameters that contain the source and destination bucket names in JSON:

{
    "source":       "my-source-bucket-name",
    "destination":  "my-destination-bucket-name"
}

Armed with this data, Step Functions execution proceeds as follows.

Step 1: Detect the bucket region

First, you need to know the regions where your buckets reside. In this case, take advantage of the Step Functions Parallel state. This allows you to use a Lambda function get_bucket_location.py inside two different, parallel branches of task states:

  • FindRegionForSourceBucket
  • FindRegionForDestinationBucket

Each task state receives one bucket name as an input parameter, then detects the region corresponding to "their" bucket. The output of these functions is collected in a result array containing one element per parallel function.
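
As an illustration, a minimal region-detection handler could look like the sketch below. The input and output shapes here are my own assumptions; the actual get_bucket_location.py in the repo may differ in detail.

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # Assumed input shape: {"bucket": "my-bucket-name"}
    bucket = event['bucket']
    location = s3.get_bucket_location(Bucket=bucket)['LocationConstraint']
    # S3 reports None for us-east-1, so normalize that case
    return {bucket: location or 'us-east-1'}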

Step 2: Combine the parallel states

The output of a parallel state is a list with all the individual branches’ outputs. To combine them into a single structure, use a Lambda function called combine_dicts.py in its own CombineRegionOutputs task state. The function combines the two outputs from step 1 into a single JSON dict that provides you with the necessary region information for each bucket.
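
The combining function itself can be tiny. Here is a sketch that merges the branch outputs into a single bucket-to-region mapping, matching the assumed output shape of the previous sketch; the real combine_dicts.py may use different key names.

def handler(event, context):
    # The Parallel state hands this function a list with one output per branch.
    combined = {}
    for branch_output in event:
        combined.update(branch_output)
    return combined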

Step 3: Validate the input

In this walkthrough, you only support buckets that reside in the same region, so you need to decide if the input is valid or if the user has given you two buckets in different regions. To find out, use a Lambda function called validate_input.py in the ValidateInput task state that tests if the two regions from the previous step are equal. The output is a Boolean.

Step 4: Branch the workflow

Use another type of Step Functions state, a Choice state, which branches into a Failure state if the comparison in step 3 yields false, or proceeds with the remaining steps if the comparison was successful.

Step 5: Execute in parallel

The actual work is happening in another Parallel state. Both branches of this state are very similar to each other and they re-use some of the Lambda function code.

Each parallel branch implements a looping pattern across the following steps:

  1. Use a Pass state to inject either the string value "source" (InjectSourceBucket) or "destination" (InjectDestinationBucket) into the listBucket attribute of the state document.

    The next step uses either the source or the destination bucket, depending on the branch, while executing the same, generic Lambda function. You don’t need two Lambda functions that differ only slightly. This step illustrates how to use Pass states as a way of injecting constant parameters into your state machine and as a way of controlling step behavior while re-using common step execution code.

  2. The next step UpdateSourceKeyList/UpdateDestinationKeyList lists objects in the given bucket.

    Remember that the previous step injected either "source" or "destination" into the state document’s listBucket attribute. This step uses the same list_bucket.py Lambda function to list objects in an S3 bucket. The listBucket attribute of its input decides which bucket to list. In the left branch of the main parallel state, use the list of source objects to work through copying missing objects. The right branch uses the list of destination objects to check if they have a corresponding object in the source bucket and eliminate any orphaned objects. Orphans don’t have a source object of the same S3 key. (A sketch of such a listing handler appears right after this list.)

  3. This step performs the actual work. In the left branch, the CopySourceKeys step uses the copy_keys.py Lambda function to go through the list of source objects provided by the previous step, then copies any missing object into the destination bucket. Its sister step in the other branch, DeleteOrphanedKeys, uses its destination bucket key list to test whether each object from the destination bucket has a corresponding source object, then deletes any orphaned objects. (A simplified copy sketch appears at the end of this step.)

  4. The S3 ListObjects API action is designed to be scalable across many objects in a bucket. Therefore, it returns object lists in chunks of configurable size, along with a continuation token. If the API result has a continuation token, it means that there are more objects in this list. You can work from token to token to continue getting object list chunks, until you get no more continuation tokens.
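
Here is the listing sketch referenced in step 2 above: a minimal list_bucket-style handler built around list_objects_v2 and its continuation token. The input and output key names are assumptions made for this sketch; the repo's list_bucket.py may differ.

import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # Assumed input: the state document, containing "source", "destination",
    # "listBucket" ("source" or "destination"), and optionally a "token"
    # carried over from the previous chunk.
    bucket = event[event['listBucket']]
    kwargs = {'Bucket': bucket, 'MaxKeys': 1000}
    if event.get('token'):
        kwargs['ContinuationToken'] = event['token']

    response = s3.list_objects_v2(**kwargs)
    event['keys'] = [obj['Key'] for obj in response.get('Contents', [])]
    # The following Choice state loops again only if a continuation token exists.
    event['token'] = response.get('NextContinuationToken')
    return event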

By breaking down large amounts of work into chunks, you can make sure each chunk is completed within the timeframe allocated for the Lambda function, and within the maximum input/output data size for a Step Functions state.

This approach comes with a slight tradeoff: the more objects you process at one time in a given chunk, the faster you are done. There’s less overhead for managing individual chunks. On the other hand, if you process too many objects within the same chunk, you risk going over time and space limits of the processing Lambda function or the Step Functions state so the work cannot be completed.

In this particular case, use a Lambda function that maximizes the number of objects listed from the S3 bucket that can be stored in the input/output state data. This is currently up to 32,768 bytes, assuming (based on some experimentation) that the execution of the COPY/DELETE requests in the processing states can always complete in time.

A more sophisticated approach would use the Step Functions retry/catch state attributes to account for any time limits encountered and adjust the list size accordingly.
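
To make the CopySourceKeys step concrete, here is a heavily simplified sketch of a copy handler. It checks each listed source key against the destination bucket and copies it over if missing; the real copy_keys.py in the repo structures this differently, so treat the shape below purely as an illustration.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def handler(event, context):
    # Assumed input: {"source": ..., "destination": ..., "keys": [...]}
    # produced by the previous listing step.
    source = event['source']
    destination = event['destination']

    for key in event['keys']:
        try:
            s3.head_object(Bucket=destination, Key=key)   # already present?
        except ClientError:
            s3.copy_object(
                Bucket=destination,
                Key=key,
                CopySource={'Bucket': source, 'Key': key},
            )
    return event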

Step 6: Test for completion

Because the presence of a continuation token in the S3 ListObjects output signals that you are not done processing all objects yet, use a Choice state to test for its presence. If a continuation token exists, it branches into the UpdateSourceKeyList step, which uses the token to get to the next chunk of objects. If there is no token, you’re done. The state machine then branches into the FinishCopyBranch/FinishDeleteBranch state.

By using Choice states like this, you can create loops just like in the old times, when you didn’t have for statements and used branches in assembly code instead!

Step 7: Success!

Finally, you’re done, and can step into your final Success state.

Lessons learned

When implementing this use case with Step Functions and Lambda, I learned the following things:

  • Sometimes, it is necessary to manipulate the JSON state of a Step Functions state machine with just a few lines of code that hardly seem to warrant their own Lambda function. This is ok, and the cost is actually pretty low given Lambda’s 100 millisecond billing granularity. The upside is that functions like these can be helpful to make the data more palatable for the following steps or for facilitating Choice states. An example here would be the combine_dicts.py function.
  • Pass states can be useful beyond debugging and tracing, they can be used to inject arbitrary values into your state JSON and guide generic Lambda functions into doing specific things.
  • Choice states are your friend because you can build while-loops with them. This allows you to reliably grind through large amounts of data with the patience of an engine that currently supports execution times of up to 1 year.

    Currently, there is an execution history limit of 25,000 events. Each Lambda task state execution takes up 5 events, while each choice state takes 2 events for a total of 7 events per loop. This means you can loop about 3500 times with this state machine. For even more scalability, you can split up work across multiple Step Functions executions through object key sharding or similar approaches.

  • It’s not necessary to spend a lot of time coding exception handling within your Lambda functions. You can delegate all exception handling to Step Functions and instead simplify your functions as much as possible.

  • Step Functions are great replacements for shell scripts. This could have been a shell script, but then I would have had to worry about where to execute it reliably, how to scale it if it went beyond a few thousand objects, etc. Think of Step Functions and Lambda as tools for scripting at a cloud level, beyond the boundaries of servers or containers. "Serverless" here also means "boundary-less".

Summary

This approach gives you scalability by breaking down any number of S3 objects into chunks, then using Step Functions to control logic to work through these objects in a scalable, serverless, and fully managed way.

To take a look at the code or tweak it for your own needs, use the code in the sync-buckets-state-machine GitHub repo.

To see more examples, please visit the Step Functions Getting Started page.

Enjoy!

Traveling “Kodi Repair Men” Are Apparently a Thing Now

Post Syndicated from Andy original https://torrentfreak.com/traveling-kodi-repair-men-are-apparently-a-thing-now-170625/

Earlier this month, third-party Kodi add-on ZemTV and the TVAddons library were sued in a federal court in Texas.

The complaint, filed by American satellite and broadcast provider Dish Network, accused the pair of copyright infringement and demanded $150,000 for each offense.

With that case continuing, there has been significant fallout. Not only has the TVAddons repository disappeared but addon developers have been falling like dominos.

Of course, there are large numbers of people out there who are able to acquire and install new addons to restore performance to their faltering setups. These enthusiasts can weather the storms, with most understanding that such setbacks are all part of the piracy experience.

However, unlike most other types of Internet piracy, the world of augmented Kodi setups has a somewhat unusual characteristic.

Although numbers are impossible to come by, it’s likely that the majority of users have no idea how the software in their ‘pirate’ box actually works. This is because, through convenience or lack of knowledge, they bought their device already set up. So what can these people do?

Well, for some it’s a case of trawling the Internet for help and advice to learn how to reprogram the hardware themselves. It may take time, but those with the patience will be glad they did since it will help them deal with similar problems in the future.

For others, it’s taking the misguided route of trying to get the entirely legal (and probably sick-to-the-teeth) official Kodi team to solve their problems on Twitter. Pro tip: Don’t bother, they’re not interested.

Kodi.tv are not interested in piracy problems

It’s likely that the remainder will take their device back to where they bought it, complain like crazy, and then get things fixed for a small fee. But for those running out of options, never fear – there’s another innovative solution available.

In a local pub this week I overheard a discussion about “everybody’s Kodi going off” which wasn’t a big shock given recent developments. However, what did surprise me was the revelation that a local guy is now touring pubs in the area doing on-site “Kodi repairs.”

To put things back in working order using a laptop, he’s charging $25/£20/€23 or, for those with an Amazon Firestick, a $50/£40 trade-in for a new, fully-loaded stick. Apparently, the whole thing takes about 15 to 20 minutes and is conveniently carried out while having a drink. While obviously illegal, it’s amazing how quickly opportunists step in to make a few bucks.

That being said, the notion of ‘Kodi repair men’ appearing in the flesh is perhaps not such a surprise after all. Countless millions of these devices have been sold, and they invariably go wrong when pirate sources have issues. In reality, it would be more of a surprise if repairers didn’t exist because there’s clearly a lot of demand.

But exist they do and some are even doing home visits. One, who offers to assist people “for a small call out charge” via his Facebook page, has been receiving glowing reviews, like the one shown below.

Thanks for the help KodiMan

In many cases, these “repair men” are actually the same people selling the pre-configured boxes in the first place. Like pirate DVD sellers, PlayStation modders, and similar characters before them, they’re heroes to many people, particularly those in cash-deprived areas. They’re seen as Robin Hoods who can cut subscription TV prices by 95% and ensure sporting events keep flowing for next to nothing.

What remains to be seen though is how busy these people will be in the future. When people’s devices stop working there’s obviously a lot of bad feeling, so paying each time for “repairs” could eventually become tiresome. That’s certainly what copyright holders are hoping for, so expect further action against more addon providers in the future.

But in the meantime and despite the trouble, ‘pirate’ Kodi devices are still selling like hot cakes. Despite suggestions to the contrary, they’re easily purchased from sites like eBay, and plenty of local publications are carrying ads. But for those prepared to do the work themselves, everything is a lot cheaper and easier to fix when it goes wrong.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.