Tag Archives: tor

Canada’s Supreme Court Orders Google to Remove Search Results Worldwide

Post Syndicated from Andy original https://torrentfreak.com/canadas-supreme-court-orders-google-remove-search-results-worldwide-170629/

Back in 2014, the case of Equustek Solutions Inc. v. Jack saw two Canadian entities battle over stolen intellectual property used to manufacture competing products.

Google had no direct links to the case, yet it became embroiled when Equustek Solutions claimed that Google’s search results helped to send visitors to websites operated by the defendants (former Equustek employees) who were selling unlawful products.

Google voluntarily removed links to the sites from its Google.ca (Canada) results, but Equustek demanded a more comprehensive response. It got one.

In a ruling handed down by a court in British Columbia, Google was ordered to remove the infringing websites’ listings from its central database in the United States, meaning that the ruling had worldwide implications.

Google filed an appeal hoping for a better result, arguing that it does not operate servers in British Columbia, nor does it operate any local offices. It also questioned whether the injunction could be enforced outside Canada’s borders.

Ultimately, the British Columbia Court of Appeal disappointed the search giant. In a June 2015 ruling, the Court decided that Google does indeed do business in the region. It also found that a decision to restrict infringement was unlikely to offend any overseas nation.

“The plaintiffs have established, in my view, that an order limited to the google.ca search site would not be effective. I am satisfied that there was a basis, here, for giving the injunction worldwide effect,” Justice Groberman wrote.

Undeterred, Google took its case all the way to the Supreme Court of Canada, hoping to limit the scope of the injunction by arguing that it violates freedom of expression. That effort has now failed.

In a 7-2 majority decision released Wednesday, Google was branded a “determinative player” in facilitating harm to Equustek.

“This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders,” wrote Justice Rosalia Abella.

“We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.”

With Google now required to delist the sites on a global basis, the big question is what happens when other players attempt to apply the ruling to their particular business sector. Unsurprisingly that hasn’t taken long.

The International Federation of the Phonographic Industry (IFPI), which supported Equustek’s position in the long-running case, welcomed the decision and said that Google must “take on the responsibility” to ensure it does not direct users to illegal sites.

“Canada’s highest court has handed down a decision that is very good news for rights holders both in Canada and around the world. Whilst this was not a music piracy case, search engines play a prominent role in directing users to illegal content online including illegal music sites,” said IFPI CEO, Frances Moore.

“If the digital economy is to grow to its full potential, online intermediaries, including search engines, must play their part by ensuring that their services are not used to facilitate the infringement of intellectual property rights.”

Graham Henderson, President and CEO of Music Canada, which represents Sony, Universal, Warner and others, also welcomed the ruling.

“Today’s decision confirms that online service providers cannot turn a blind eye to illegal activity that they facilitate; on the contrary, they have an affirmative duty to take steps to prevent the Internet from becoming a black market,” Henderson said.

But for every voice of approval from groups like IFPI and Music Canada, others raised concerns over the scope of the decision and its potential to create a legal and political minefield. In particular, University of Ottawa professor Michael Geist raised a number of interesting scenarios.

“What happens if a Chinese court orders [Google] to remove Taiwanese sites from the index? Or if an Iranian court orders it to remove gay and lesbian sites from the index? Since local content laws differ from country to country, there is a great likelihood of conflicts,” Geist said.

But rather than painting Google as the loser in this battle, Geist believes the decision actually grants the search giant more power.

“When it comes to Internet jurisdiction, exercising restraint and limiting the scope of court orders is likely to increase global respect for the law and the effectiveness of judicial decisions. Yet this decision demonstrates what many have feared: the temptation for courts will be to assert jurisdiction over online activities and leave it to the parties to sort out potential conflicts,” Geist says.

“In doing so, the Supreme Court of Canada has lent its support to global takedowns and vested more power in Internet intermediaries, who may increasingly emerge as the arbiters of which laws to follow online.”

Only time will tell how Google will react, but it’s clear there will be plenty of entities ready to test the limits and scope of the company’s responses to the ruling.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-power-bundle-for-amazon-workspaces-more-vcpus-memory-and-storage/

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MATLAB, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces).
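
If you prefer the command line, a minimal sketch looks like the following; the directory ID, user name, and bundle ID are placeholders, and the Power bundle’s ID differs by Region (you can look it up first):

# list the Amazon-provided bundles to find the Power bundle ID in your Region
aws workspaces describe-workspace-bundles --owner AMAZON

# launch a WorkSpace from that bundle (the IDs below are placeholders)
aws workspaces create-workspaces \
    --workspaces "DirectoryId=d-1234567890,UserName=jdoe,BundleId=wsb-0123456789"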

One really interesting benefit to using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access will boost productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.
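
You can also switch an existing WorkSpace between the two billing modes after launch; as a rough sketch (the WorkSpace ID is a placeholder):

# move a WorkSpace to hourly (AutoStop) billing
aws workspaces modify-workspace-properties \
    --workspace-id ws-0123456789 \
    --workspace-properties RunningMode=AUTO_STOP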

Jeff;

Now Available – Developer Preview of AWS SDK for Java 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-developer-preview-of-aws-sdk-for-java-2-0/

The AWS Developer Tools Team has been hard at work on the AWS SDK for Java and is launching a Developer Preview of version 2.0 today.

This version is a major rewrite of the older, 1.11.x codebase. Built on top of Java 8 with a focus on consistency, immutability and ease of use, the new SDK includes frequently requested features such as support for non-blocking I/O and the ability to choose the desired HTTP implementation at runtime. The new non-blocking I/O support is more efficient than the existing, thread-based implementation of the Async variants of the service clients. Each non-blocking request returns a CompletableFuture object.

The version 2.0 SDK includes a number of changes to the earlier APIs. For example, it replaces the existing mix of client constructors and mutable methods with a consistent model based on client builders and immutable clients. The SDK also collapses the disparate collection of classes used to configure regions into a single Region class, and provides a new set of APIs for streaming.

The SDK is available on GitHub. You can send public feedback by opening GitHub issues and you can also send pull requests in the usual way.

To learn more about this SDK, read AWS SDK for Java 2.0 – Developer Preview on the AWS Developer Blog.

Jeff;

 

Validating AWS CloudFormation Templates

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/devops/validating-aws-cloudformation-templates/

For their continuous integration and continuous deployment (CI/CD) pipelines, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.

Scripted validation solution

For CloudFormation syntax validation, one option is to use the AWS CLI to call the validate-template command. For security and resource management, another approach is to run a Jenkins pipeline from an Amazon EC2 instance under an EC2 role that has been granted only the necessary permissions.
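
For reference, the syntax check itself is a single CLI call; the template locations here are placeholders:

# validate a template stored locally
aws cloudformation validate-template --template-body file://my_cf.json

# or validate a template already uploaded to S3
aws cloudformation validate-template --template-url https://s3.amazonaws.com/my_bucket/my_cf.json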

What if you need more control over your CloudFormation templates, such as managing parameters or attributes? What if you have many development teams where permissions to the AWS environment required by one team are either too open or not open enough for another team?

To have more control over the contents of your CloudFormation template, you can use the cf-validator Python script, which shows you how to validate different template aspects. With this script, you can validate:

  • JSON syntax
  • IAM capabilities
  • Root tags
  • Parameters
  • CloudFormation resources
  • Attributes
  • Reference resources

You can download this script from the cf-validator GitHub repo. Use the following command to run the script:

python cf-validator.py

The script takes the following parameters (a complete example invocation follows the list):

  • --cf_path [Required]

    The location of the CloudFormation template in JSON format. Supported location types:

    • File system – Path to the CloudFormation template on the file system
    • Web – URL, for example, https://my-file.com/my_cf.json
    • Amazon S3 – Amazon S3 bucket, for example, s3://my_bucket/my_cf.json
  • --cf_rules [Required]

    The location of the JSON file with the validation rules. This parameter supports the same locations as --cf_path. The next section of this post has more information about defining rules.

  • --cf_res [Optional]

    The location of the JSON file with the defined AWS resources, which need to be confirmed before launching the CloudFormation template. A later section of this post has more information about resource validation.

  • --allow_cap [Optional][yes/no]

    Controls whether you allow the creation of IAM resources by the CloudFormation template, such as policies, roles, or IAM users. The default value is no.

  • --region [Optional]

    The AWS region where the existing resources were created. The default value is us-east-1.
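
Putting the parameters together, a typical invocation might look like this (bucket and file names are placeholders):

python cf-validator.py \
    --cf_path s3://my_bucket/my_cf.json \
    --cf_rules s3://my_bucket/cf-rules.json \
    --cf_res s3://my_bucket/cf-resources.json \
    --allow_cap no \
    --region us-east-1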

Defining rules

All rules are defined in the JSON format file. Rules consist of the following keys:

  • “allow_root_keys”

    Lists allowed root CloudFormation keys. Examples of root keys are Parameters, Resources, Outputs, and so on. An empty list means that any key is allowed.

  • “allow_parameters”

    Lists allowed CloudFormation parameters. For instance, to force each CloudFormation template to use only the set of parameters defined in your pipeline, list them under this key. An empty list means that any parameter is allowed.

  • “allow_resources”

    Lists the AWS resources allowed for creation by a CloudFormation template. The format of the resource is the same as resource types in CloudFormation, but without the “AWS::” prefix. Examples: EC2::Instance, EC2::Volume, and so on. If you allow the creation of all resources from the given group, you can use a wildcard. For instance, if you allow all resources related to CloudFormation, you can add CloudFormation::* to the list instead of typing CloudFormation::Init, CloudFormation::Stack, and so on. An empty list means that all resources are allowed.

  • “require_ref_attributes”

    Lists attributes (per resource) that have to be defined in CloudFormation. The value must be referenced and cannot be hardcoded. For instance, you can require that each EC2 instance must be created from a specific AMI where Image ID has to be a passed-in parameter. An empty list means that you are not requiring specific attributes to be present for a given resource.

  • “allow_additional_attributes”

    Lists additional attributes (per resource) that can be defined and have any value in the CloudFormation template. An empty list means that any additional attribute is allowed. If you specify additional attributes for this key, then any resource attribute defined in a CloudFormation template that is not listed in this key or in the require_ref_attributes key causes validation to fail.

  • “not_allow_attributes”

    Lists attributes (per resource) that are not allowed in the CloudFormation template. This key takes precedence over the require_ref_attributes and allow_additional_attributes keys.

Rule file example

The following is an example of a rule file:

{
  "allow_root_keys" : ["AWSTemplateFormatVersion", "Description", "Parameters", "Conditions", "Resources", "Outputs"],
  "allow_parameters" : [],
  "allow_resources" : [
    "CloudFormation::*",
    "CloudWatch::Alarm",
    "EC2::Instance",
    "EC2::Volume",
    "EC2::VolumeAttachment",
    "ElasticLoadBalancing::LoadBalancer",
    "IAM::Role",
    "IAM::Policy",
    "IAM::InstanceProfile"
  ],
  "require_ref_attributes" :
    {
      "EC2::Instance" : [ "InstanceType", "ImageId", "SecurityGroupIds", "SubnetId", "KeyName", "IamInstanceProfile" ],
      "ElasticLoadBalancing::LoadBalancer" : ["SecurityGroups", "Subnets"]
    },
  "allow_additional_attributes" : {},
  "not_allow_attributes" : {}
}

Validating resources

You can use the --cf_res parameter to validate that the resources you are planning to reference in the CloudFormation template exist and are available. As a value for this parameter, point to the JSON file with defined resources. The format should be as follows:

[
  { "Type" : "SG",
    "ID" : "sg-37c9b448A"
  },
  { "Type" : "AMI",
    "ID" : "ami-e7e523f1"
  },
  { "Type" : "Subnet",
    "ID" : "subnet-034e262e"
  }
]

Summary

At this moment, this CloudFormation template validation script supports only security groups, AMIs, and subnets. But anyone with some knowledge of Python and the boto3 package can add support for additional resource types, as needed.
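
For a sense of what these resource checks amount to, the equivalent manual checks with the AWS CLI look roughly like this (using the example IDs from the file above); each command returns an error if the resource does not exist in the selected region:

aws ec2 describe-security-groups --group-ids sg-37c9b448A --region us-east-1
aws ec2 describe-images --image-ids ami-e7e523f1 --region us-east-1
aws ec2 describe-subnets --subnet-ids subnet-034e262e --region us-east-1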

For more tips, please visit our AWS CloudFormation blog.

Operation ‘Pirate On Demand’ Blocks Pirate IPTV Portals

Post Syndicated from Andy original https://torrentfreak.com/operation-pirate-on-demand-blocks-pirate-iptv-portals-170628/

Via cheap set-top boxes, IPTV services (Internet Protocol TV) allow people to access thousands of live TV channels in their living rooms for a nominal fee.

Some of these services are available for just a few euros, dollars or pounds per month, often in HD quality.

While service levels can vary, some of the best also offer comprehensive Video On Demand (VOD), with hundreds and in some cases thousands of movies and TV shows on tap, supported by catch-up TV. Given their professional nature, the best IPTV products are proving a real thorn in the side for rights holders, who hope to charge ten times the money while delivering a lesser product.

As a result, crackdowns against IPTV providers, resellers and other people in the chain are underway across the world, but Europe in particular. Today’s news comes from Italy, where Operation “Pirate On Demand” is hoping to make a dent in IPTV piracy.

The operation is being headed up by the Guardia di Finanza (GdF), a department under Italy’s Minister of Economy and Finance. Part of the Italian Armed Forces, GdF says it has targeted nine sites involved in the unlawful distribution of content offered officially by local media giants Mediaset and Sky.

The authorities received assistance of a specialized team from the local anti-piracy group DCP, which operates on behalf of a broad range of entertainment industry companies.

According to GdF, a total of 89 servers were behind the portals which together delivered an estimated 178 terabytes of pirate content, ranging from TV shows and sports, to movies and children’s entertainment.

The nine portals are in the process of being blocked with some displaying the following message.

Seizure notice on the affected sites

The investigation began in September 2016 and was coordinated by Giangiacomo Pilia, the prosecutor at the Cagliari Court. Thus far, two people have been arrested.

A person arrested in the Varese area, who police believe is the commercial director of an illicit platform, has been charged with breaching copyright law.

A second individual arrested in Macerata is also suspected of copyright offenses, having technically managed the platform. Computer equipment, decoders, smart cards, and other electronic devices were also seized.

In addition to blocking various web portals, measures will now be taken to block the servers being used to supply the IPTV services. The GdF has also delivered a veiled threat to people who subscribed to the illicit services.

“The position of those who actively accessed the platforms by purchasing pirated subscriptions, and thus benefited from them, is also in the hands of investigators,” GdF said.

The moves this week are the latest to take place under the Operation “Pirate On Demand” banner. Back in March, authorities moved to shut down and block 15 portals offering illegal IPTV access to Mediaset and Sky channels.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Ripples from Stack Clash

Post Syndicated from corbet original https://lwn.net/Articles/726580/rss

In one sense, the Stack Clash vulnerability
that was announced on June 19 has not had a huge impact: thus far, at
least, there have been few (if any) stories of active exploits in the
wild. At other levels, though, this would appear to be an important
vulnerability, in that it has raised a number of questions about how the
community handles security issues and what can be expected in the future.
The indications, unfortunately, are not all positive.

Desert To Data in 7 Days – Our New Phoenix Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/data-center-design/

We are pleased to announce that Backblaze is now storing some of our customers’ data in our newest data center in Phoenix. Our Sacramento facility was slated to store about 500 petabytes of data and was starting to fill up, so it was time to expand. After visiting multiple locations in the US and Canada, we selected Phoenix as it had the right combination of power, networking, price and more that we were seeking. Let’s take you through the process of getting the Phoenix data center up and running.

Day 0 – Designing the Data Center

After we selected the Phoenix location as our next DC (data center), we had to negotiate the contract. We’re going to skip that part of the process because, unless you’re a lawyer, it’s a long, boring process. Let’s just say we wanted to be ready to move in once the contract was signed. That meant we had to gather up everything we needed and order a bunch of other things like networking equipment, racks, storage pods, cables, etc. We decided to use our Sacramento DC as the staging point and started gathering what was going to be needed in Phoenix.

In actuality, for some items we started the process several months ago as lead times for things like network switches, Storage Pods, and even hard drives can be measured in months and delays are normal. For example, depending on our move in date, the network providers we wanted would only be able to provide limited bandwidth, so we had to prepare for that possibility. It helps to have a procurement person who knows what they are doing, can work the schedule, and is creatively flexible – thanks Amanda.

So by Day 0, we had amassed multiple pallets of cabinets, network gear, PDUs, tools, hard drives, carts, Guido, and more. And yes, for all you Guido fans he is still with us and he now resides in Phoenix. Everything was wrapped and loaded into a 53-foot semi-truck that was driven the 755 miles (1,215 km) from Sacramento, California to Phoenix, Arizona.

Day 1 – Move In Day

We sent a crew of 5 people to Phoenix with the goal of going from empty space to being ready to accept data in one week. The truck from Sacramento arrived mid-morning and work started unloading and marshaling the pallets and boxes into one area, while the racks were placed near their permanent location on the DC floor.

Day 2 – Building the Racks

Day 2 was spent primarily working with the racks. First they were positioned to their precise location on the data center floor. They were then anchored down and tied together. We started with 2 rows of twenty-two racks each, with twenty being for storage pods and two being for networking equipment. By the end of the week there will be 4 rows of racks installed.

Day 3 – Networking and Power, Part 1

While one team continued to work on the racks, another team began the process of getting the racks connected to electricity and running the network cables to the network distribution racks. Once that was done, networking gear and rack-based PDUs (Power Distribution Units) were installed in the racks.

Day 4 – Rack Storage Pods

The truck from Sacramento brought 100 Storage Pods, a combination of 45 drive and 60 drive systems. Why did we use 45 drive units here? It has to do with the size (in racks and power) of the initial installation commitment and the ramp (increase) of installations over time. Contract stuff: boring yes, important yes. Basically, to optimize our spend we wanted to use as much of the initial space we were allotted as possible. Since we had a number of empty 45 drive chassis available in Sacramento, we decided to put them to use.

Day 5 – Drive Day

Our initial set-up goal was to build out five Backblaze Vaults. Each Vault is comprised of twenty Storage Pods. Four of the Vaults were filled with 45 drive Storage Pods and one was filled with 60 drive Storage Pods. That’s 4,800 hard drives to install – thank goodness we don’t use those rubber bands around the drives anymore.

Day 6 – Networking and Power, Part 2

With the storage pods in place, Day 6 was spent routing network and power cables to the individual pods. A critical part of the process is to label every wire so you know where it comes from and where it goes to. Once labeled, wires are bundled together and secured to the racks in a standard pattern. Not only does this make things look neat, it standardizes where you’ll find each cable across the hundreds of racks that are in the DC.

Day 7 – Test, Repair, Test, Ready

With all the power and networking finished, it was time to test the installation. Most of the Storage Pods light up with no problem, but there were a few that failed. These failures are quickly dealt with, and one by one each Backblaze Vault is registered into our monitoring and administration systems. By the end of the day, all five Vaults were ready.

Moving Forward

The Phoenix data center was ready for operation except that the network carriers we wanted to use could only provide a limited amount of bandwidth to start. It would take a few more weeks before the final network lines would be provisioned and operational. Even with the limited bandwidth we kicked off the migration of customer data from Sacramento to Phoenix to help balance out the workload. A few weeks later, once the networking was sorted out, we started accepting external customer data.

We’d like to thank our data center build team for documenting their work in pictures and allowing us to share some of them with our readers.

Questions About Our New Data Center

Now that we have a second DC, you might have a few questions, such as can you store your data there and so on. Here’s the status of things today…

    Q: Does the new DC mean Backblaze has multi-region storage?
    A: Not yet. Right now we consider the Phoenix DC and the Sacramento DC to be in the same region.

    Q: Will you ever provide multi-region support?
    A: Yes, we expect to provide multi-region support in the future, but we don’t have a date for that capability yet.

    Q: Can I pick which data center will store my data?
    A: Not yet. This capability is part of our plans when we provide multi-region support.

    Q: Which data center is my data being stored in?
    A: Chances are that your data is in the Sacramento data center given it currently stores about 90% of our customers’ data.

    Q: Will my data be split across the two data centers?
    A: It is possible that one portion of your data will be stored in the Sacramento DC and another portion of your data will be stored in the Phoenix DC. This will be completely invisible to you and you should see no difference in storage or data retrieval times.

    Q: Can my data be replicated from one DC to the other?
    A: Not today. As noted above, your data will be in one DC or the other. That said, files uploaded to the Backblaze Vaults in either DC are stored redundantly across 20 Backblaze Storage Pods within that DC. This translates to 99.999999% durability for the data stored this way.

    Q: Do you plan on opening more data centers?
    A: Yes. We are actively looking for new locations.

If you have any additional questions, please let us know in the comments or on social media. Thanks.

The post Desert To Data in 7 Days – Our New Phoenix Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Code Club International movement

Post Syndicated from Katherine Leadbetter original https://www.raspberrypi.org/blog/code-club-international/

Over the past few years, Code Club has made strides toward world domination! There are now more than 10,000 Code Clubs running in 125 countries. More than 140,000 kids have taken part in our clubs in places as diverse as the northernmost tip of Canada and the favelas of Rio de Janeiro.

In the first video from our Code Club International network, we find out about Code Clubs around the world from the people supporting these communities.

Global communities

Code Club currently has official local partners in twelve countries. Our passionate and motivated partner organisations are responsible for championing their countries’ Code Clubs. In March we brought the partners together for the first time, and they shared what it means to be part of the Code Club community:

You can help Code Club make a difference around the world

We invited our international Code Club partners to join us in London and discuss why we think Code Club is so special. Whether you’re a seasoned pro, a budding educator, or simply want to give back to your local community, there’s a place for you among our incredible Code Club volunteers.

Of course, Code Clubs aren’t restricted to countries with official partner communities – they can be started anywhere in the world! Code Clubs are up and running in a number of unexpected places, from Kosovo to Kazakhstan.

Code Club International

Code Club partners gathered together at the International Meetup

The geographical spread of Code Clubs means we hear of clubs overcoming a range of different challenges. One club in Zambia, run by volunteer Mwiza Simbeye, started as a way to get kids off the streets of Lusaka and teach them useful skills. Many children attending had hardly used a computer before writing their first line of code at the club. And it’s making a difference! As Mwiza told us, ‘you only need to see the light shine in the eyes of [Code Club] participants to see how much they enjoy these sessions.’

Code Club International

Student Joyce codes in Scratch at her Code Club in Nunavut, Canada

In the Nunavut region of Canada, Talia Metuq was first introduced to coding at a Code Club. In an area comprised of 25 Inuit communities that are inaccessible via roads and currently combating severe social and economic deprivation, computer science was not on the school timetable. Code Club, along with club volunteer Ryan Oliver, is starting to change that. After graduating from Code Club, Talia went on to study 3D modelling in Vancouver. She has now returned to Nunavut and is helping inspire more children to pursue digital making.

Start a Code Club

Code Clubs are volunteer-led extra-curricular coding clubs for children age 9 to 13. Children that attend learn to code games, animations, and websites using the projects we provide. Working with volunteers and with other children in their club, they grow their digital skillset.

You can run a Code Club anywhere if you have a venue, volunteers, and kids ready to learn coding. Help us achieve our goal of having a Code Club in every community in the world!

To find out how to start a Code Club outside of the UK, you can visit the Code Club International website. If you are in the UK, head to the Code Club UK website for more information.

Code Club International

Help the Code Club International community grow

On the Code Club site, we currently have projects in 28 languages, allowing more young people than ever to learn programming in their native language. But that’s not enough! We are always on the lookout for volunteers to translate projects and resources. If you are proficient in translating from English and would like to help, please visit the website to find out more.

We are also looking for official local partners in Italy and Germany to join our international network – if you know of, or are part of, an enthusiastic non-profit organisation that might be interested in joining us, you can learn more here.

The post The Code Club International movement appeared first on Raspberry Pi.

FACT Threatens Users of ‘Pirate’ Kodi Add-Ons

Post Syndicated from Ernesto original https://torrentfreak.com/fact-threatens-users-of-pirate-kodi-add-ons-170628/

In the UK there’s a war going on against streaming pirates. At least, that’s what the local anti-piracy body FACT would like the public to know.

The popular media streaming platform Kodi is at the center of the controversy. While Kodi is perfectly legal, many people use it in conjunction with third party add-ons that offer pirated content.

FACT hopes to curb this trend. The group has already taken action against sellers of Kodi devices pre-loaded with these add-ons and they’re keeping a keen eye on developers of illicit add-ons too.

However, according to FACT, the ‘crackdown’ doesn’t stop there. Users of pirate add-ons are also at risk, they claim.

“And then we’ll also be looking at, at some point, the end user. The reason for end users to come into this is that they are committing criminal offences,” FACT’s chief executive Kieron Sharp told the Independent.

While people who stream pirated content are generally hard to track, since they don’t broadcast their IP-address to the public, FACT says that customer data could be obtained directly from sellers of fully-loaded Kodi boxes.

“When we’re working with the police against a company that’s selling IPTV boxes or illicit streaming devices on a large scale, they have records of who they’ve sold them to,” Sharp noted.

While the current legal efforts are focused on the supply side, including these sellers, the end users may also be targeted in the future.

“We have a number of cases coming before the courts in terms of those people who have been providing, selling and distributing illicit streaming devices. It’s something for the very near future, when we’ll consider whether we go any further than that, in terms of customers.”

The comments above make it clear that FACT wants users of these pirate devices to feel vulnerable and exposed. But threatening talk is much easier than action.

It will be very hard to get someone convicted, simply because they bought a device that can access both legal and illegal content. A receipt doesn’t prove intent, and even if it did, it’s pretty much impossible to prove that a person streamed specific pirated content.

But let’s say FACT was able to prove that someone bought a fully-loaded Kodi box and streamed content without permission. How would that result in a conviction? Contrary to claims in the mainstream press, watching a pirated stream isn’t an offense covered by the new Digital Economy Act.

In theory, there could be other ways, but given the complexity of the situation, one would think that FACT would be better off spending its efforts elsewhere.

If FACT was indeed interested in going after individuals then they could easily target people who use torrents. These people broadcast their IP-addresses to the public, which makes them easy to identify. In addition, you can see what they are uploading, and they would also be liable under the Digital Economy Act.

However, after FACT’s decades-long association with the MPAA ended, its main partner in the demonization of Kodi-enabled devices is now the Premier League, who are far more concerned about piracy of live broadcasts (streaming) than content made available after the fact via torrents.

So, given the challenges of having a meaningful criminal prosecution of an end-user as suggested, that leaves us with the probability of FACT sowing fear, uncertainty, and doubt. In other words, scaring the public to deter them from buying or using a fully-loaded Kodi box.

This would also fit in with FACT’s recent claims that some pirate devices are a fire hazard. While it’s kind of FACT to be concerned about the well-being of pirates, as an anti-piracy organization their warnings also serve as a deterrent.

This strategy could pay off to a degree but there’s also some risk involved. Every day new “Kodi” related articles appear in the UK tabloid press, many of them with comments from FACT. Some of these may scare prospective users, but the same headlines also make these boxes known to a much wider public.

In fact, in what is quite a serious backfire, some recent pieces published by the popular Trinity Mirror group (which include FACT comments) actually provide a nice list of pirate addons that are still operational following recent crackdowns.

So are we just sowing fear now or educating a whole new audience?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Milestone: 100 Million Certificates Issued

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/06/28/hundred-million-certs.html

Let’s Encrypt has reached a milestone: we’ve now issued more than 100,000,000 certificates. This number reflects at least a few things:

First, it illustrates the strong demand for our services. We’d like to thank all of the sysadmins, web developers, and everyone else managing servers for prioritizing protecting your visitors with HTTPS.

Second, it illustrates our ability to scale. I’m incredibly proud of the work our engineering teams have done to make this volume of issuance possible. I’m also very grateful to our operational partners, including IdenTrust, Akamai, and Sumo Logic.

Third, it illustrates the power of automated certificate management. If getting and managing certificates from Let’s Encrypt always required manual steps there is simply no way we’d be able to serve as many sites as we do. We’d like to thank our community for creating a wide range of clients for automating certificate issuance and management.

The total number of certificates we’ve issued is an interesting number, but it doesn’t reflect much about tangible progress towards our primary goal: a 100% HTTPS Web. To understand that progress we need to look at this graph:

Percentage of HTTPS Page Loads in Firefox.

When Let’s Encrypt’s service first became available, less than 40% of page loads on the Web used HTTPS. It took the Web 20 years to get to that point. In the 19 months since we launched, encrypted page loads have gone up by 18 percentage points, to nearly 58%. That’s an incredible rate of change for the Web. Contributing to this trend is what we’re most proud of.

If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

Here’s to the next 100,000,000 certificates, and a more secure and privacy-respecting Web for everyone!

Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The CloudFormation stack resource requires the templates of the child stacks to be stored in the S3 bucket. The location of the template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stacks (EC2 instances in an Auto Scaling group, elastic load balancer).
Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance type for web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance type for database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value:   DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable in the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. This updates the configuration files to use the location of the child template files. The location of the template files is provided as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?

Upon successful completion of the test action, CloudFormation deletes the test stack and proceeds to the UAT stage in the pipeline.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
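
Outside the pipeline, the same change-set flow can be reproduced from the CLI. The sketch below uses placeholder stack and change set names and shows only one parameter; a real run must supply (or reuse) the remaining master stack parameters:

# create a change set against the existing production master stack
aws cloudformation create-change-set \
    --stack-name prod-master-stack \
    --change-set-name prod-release \
    --template-body file://master-stack.yml \
    --parameters ParameterKey=Environment,ParameterValue=prod \
    --capabilities CAPABILITY_NAMED_IAM

# review the proposed changes, then execute them
aws cloudformation describe-change-set --stack-name prod-master-stack --change-set-name prod-release
aws cloudformation execute-change-set --stack-name prod-master-stack --change-set-name prod-release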
 

Setting up a continuous delivery pipeline

 
I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.
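
If you prefer the CLI to the console, launching the pipeline stack looks roughly like this; the stack name and repository name are placeholders, and the template’s other parameters (SNS topic, artifact bucket, and so on) are omitted here:

aws cloudformation create-stack \
    --stack-name nested-cfn-pipeline \
    --template-body file://codepipeline-cfn-codebuild.yml \
    --parameters ParameterKey=CFNTemplateRepoName,ParameterValue=my-cfn-templates \
    --capabilities CAPABILITY_NAMED_IAM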

My repo contains the following files:

The continuous delivery pipeline is ready just seconds after clicking Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline successfully enables continuous delivery.


 

Implementing a change in nested stack

 
To make changes to a child stack in a nested stack (for example, to update a parameter value or add or change resources), update the master stack. The changes must be made in the appropriate template or configuration files and then checked in to the AWS CodeCommit repository. This triggers the following deployment process:

 

Conclusion

 
In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in the codepipeline-nested-cfn GitHub repository.


About the author

 
Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

mkosi — A Tool for Generating OS Images

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/mkosi-a-tool-for-generating-os-images.html

Introducing mkosi

After blogging about casync I realized I never blogged about the
mkosi tool that combines nicely with it. mkosi has been around for a
while already, and it's time to make it a bit better known. mkosi
stands for Make Operating System Image, and is a tool for precisely
that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite
well known and popular. But mkosi has a number of features that I
think make it interesting for a variety of use-cases that other tools
don’t cover that well.

What is mkosi?

What are those use-cases, and what precisely sets mkosi apart?
mkosi is definitely a tool with a focus on developers’ needs for
building OS images, for testing and debugging, but also for generating
production images with cryptographic protection. A typical use-case
would be to add a mkosi.default file to an existing project (for
example, one written in C or Python), and thus making it easy to
generate an OS image for it. mkosi will put together the image with
development headers and tools, compile your code in it, run your test
suite, then throw away the image again, and build a new one, this time
without development headers and tools, and install your build
artifacts in it. This final image is then “production-ready”, and only
contains your built program and the minimal set of packages you
configured otherwise. Such an image could then be deployed with
casync (or any other tool of course) to be delivered to your set of
servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on
today’s technology, not yesteryear’s. Specifically this means that
we’ll generate GPT partition tables, not MBR/DOS ones. When you tell
mkosi to generate a bootable image for you, it will make it bootable
on EFI, not on legacy BIOS. The GPT images generated follow
specifications such as the Discoverable Partitions Specification, so
that /etc/fstab can remain unpopulated and tools such as
systemd-nspawn can automatically dissect the image and boot from
them.

So, let’s have a look on the specific images it can generate:

  1. Raw GPT disk image, with ext4 as root
  2. Raw GPT disk image, with btrfs as root
  3. Raw GPT disk image, with a read-only squashfs as root
  4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
  5. A btrfs subvolume on disk, similar to the plain directory
  6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional
options are available:

  1. A swap partition may be added in
  2. The system may be made bootable on EFI systems
  3. Separate partitions for /home and /srv may be added in
  4. The root, /home and /srv partitions may be optionally encrypted with LUKS
  5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
  6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build
images based on the following Linux distributions:

  1. Fedora
  2. Debian
  3. Ubuntu
  4. ArchLinux
  5. openSUSE

Note though that not all distributions are supported at the same
feature level currently. Also, as mkosi is based on dnf
--installroot, debootstrap, pacstrap and zypper, and those
packages are not packaged universally on all distributions, you might
not be able to build images for all those distributions on arbitrary
host distributions. For example, Fedora doesn’t package zypper,
hence you cannot build an openSUSE image easily on Fedora, but you can
still build Fedora (obviously…), Debian, Ubuntu and ArchLinux images
on it just fine.

The GPT images are put together in a way that they aren’t just
compatible with UEFI systems, but also with VM and container managers
(that is, at least the smart ones, i.e. VM managers that know UEFI,
and container managers that grok GPT disk images) to a large
degree. In fact, the idea is that you can use mkosi to build a
single GPT image that may be used to:

  1. Boot on bare-metal boxes
  2. Boot in a VM
  3. Boot in a systemd-nspawn container
  4. Directly run a systemd service off it, using systemd’s RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used
if available to ensure the image is not tampered with (yes, you read
that right, systemd-nspawn and systemd’s RootImage= setting
automatically do dm-verity these days if the image has it.)

Mode of Operation

The simplest usage of mkosi is by simply invoking it without
parameters (as root):

# mkosi

Without any configuration this will create a GPT disk image for you,
will call it image.raw and drop it in the current directory. The
distribution used will be the same one as your host runs.

Of course in most cases you want more control about how the image is
put together, i.e. select package sets, select the distribution, size
partitions and so on. Most of that you can actually specify on the
command line, but it is recommended to instead create a couple of
mkosi.$SOMETHING files and directories in some directory. Then,
simply change to that directory and run mkosi without any further
arguments. The tool will then look in the current working directory
for these files and directories and make use of them (similar to how
make looks for a Makefile…). Every single file/directory is
optional, but if they exist they are honored. Here’s a list of the
files/directories mkosi currently looks for:

  1. mkosi.default — This is the main configuration file, here you
    can configure what kind of image you want, which distribution, which
    packages and so on.

  2. mkosi.extra/ — If this directory exists, then mkosi will copy
    everything inside it into the images built. You can place arbitrary
    directory hierarchies in here, and they’ll be copied over whatever is
    already in the image, after it was put together by the distribution’s
    package manager. This is the best way to drop additional static files
    into the image, or override distribution-supplied ones.

  3. mkosi.build — This executable file is supposed to be a build
    script. When it exists, mkosi will build two images, one after the
    other in the mode already mentioned above: the first version is the
    build image, and may include various build-time dependencies such as
    a compiler or development headers. The build script is also copied
    into it, and then run inside it. The script should then build
    whatever shall be built and place the result in $DESTDIR (don’t
    worry, popular build tools such as Automake or Meson all honor
    $DESTDIR anyway, so there’s not much to do here explicitly). It may
    also run a test suite, or anything else you like. After the script
    finishes, the build image is removed again, and a second image (the
    final image) is built. This time, no development packages are
    included, and the build script is not copied into the image again —
    however, the build artifacts from the first run (i.e. those placed in
    $DESTDIR) are copied into the image.

  4. mkosi.postinst — If this executable script exists, it is invoked
    inside the image (inside a systemd-nspawn invocation) and can
    adjust the image as it likes at a very late point in the image
    preparation. If mkosi.build exists, i.e. the dual-phased
    development build process is used, then this script will be invoked
    twice: once inside the build image and once inside the final
    image. The first parameter passed to the script clarifies which phase
    it is run in (see the sketch after this list).

  5. mkosi.nspawn — If this file exists, it should contain a
    container configuration file for systemd-nspawn (see
    systemd.nspawn(5)
    for details), which shall be shipped along with the final image and
    shall be included in the check-sum calculations (see below).

  6. mkosi.cache/ — If this directory exists, it is used as package
    cache directory for the builds. This directory is effectively bind
    mounted into the image at build time, in order to speed up building
    images. The package installers of the various distributions will
    place their package files here, so that subsequent runs can reuse
    them.

  7. mkosi.passphrase — If this file exists, it should contain a
    pass-phrase to use for the LUKS encryption (if that’s enabled for the
    image built). This file should not be readable by other users.

  8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an
    X.509 key pair to use for signing the kernel and initrd for UEFI
    SecureBoot, if that’s enabled.
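
To make the mkosi.postinst mechanism from item 4 a bit more concrete,
here is a minimal sketch. It assumes the phase name is passed as the
first argument using the strings "build" and "final" (treat those
exact strings as an assumption and verify them against your mkosi
version), and it enables the httpd service used in the example
configuration further below:

#!/bin/sh
# Minimal mkosi.postinst sketch; $1 names the phase (assumed: "build" or "final")
set -e
if [ "$1" = "final" ]; then
    # Only touch the final image, e.g. enable the web server pulled in via Packages=
    systemctl enable httpd.service
fi

Like mkosi.build, the script needs to be marked executable
(chmod +x mkosi.postinst) to be picked up.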

How to use it

So, let’s come back to our most trivial example, without any of the
mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create a GPT disk image file image.raw in the
current directory. How do we use it? Of course, we could dd it onto
some USB stick and boot it on a bare-metal device. However, it’s much
simpler to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up, and just work for you.

Now, let’s make things more interesting. Let’s still not use any of
the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it’s no
longer GPT + ext4, but GPT + btrfs. Moreover, the system is made
bootable on UEFI systems, and finally, the output is now called
foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except
that this uses full VM virtualization rather than container
virtualization. (Note that the way to run a UEFI qemu/kvm instance
appears to change all the time and is different on the various
distributions. It’s quite annoying, and I can’t really tell you what
the right qemu command line is to make this work on your system.)

Of course, it’s not all raw GPT disk images with mkosi. Let’s try
a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as plain directory you can’t boot
it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask
mkosi to generate a GPT image with a compressed squashfs root
partition, compress the whole result with xz, and generate a SHA256SUMS
file with the hashes of the generated artifacts. The image will contain
the SSH client as well as everybody’s favorite editor.

Now, let’s make use of the various mkosi.$SOMETHING files. Let’s
say we are working on some Automake-based project and want to make it
easy to generate a disk image off the development tree with the
version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let’s add a build script:

# cat > mkosi.build <<'EOF'
#!/bin/sh
cd $SRCDIR
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
make install
EOF
# chmod +x mkosi.build

And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi

Let’s try it out:

# systemd-nspawn -bi image.raw

Of course, if you do this you’ll notice that building an image like
this can be quite slow. And slow build times are actively hurtful to
your productivity as a developer. Hence let’s make things a bit
faster. First, let’s make use of a package cache shared between runs:

# mkdir mkosi.cache

Building images now should already be substantially faster (and
generate less network traffic) as the packages will now be downloaded
only once and reused. However, you’ll notice that unpacking all those
packages and the rest of the work is still quite slow. But mkosi can
help you with that. Simply use mkosi’s incremental build feature. In
this mode mkosi will make a copy of the build and final images
immediately before dropping in your build sources or artifacts, so
that building an image becomes a lot quicker: instead of always
starting from scratch, a build will now reuse everything it can from a
previous run and immediately begin building your sources, rather than
first rebuilding the image your sources are built in. To enable the
incremental build feature use -i:

# mkosi -i

Note that if you use this option, the package list is not updated
anymore from your distribution’s servers, as the cached copy is made
after all packages are installed, and hence until you actually delete
the cached copy the distribution’s network servers aren’t contacted
again and no RPMs or DEBs are downloaded. This means the distribution
you use becomes “frozen in time” this way. (Which might be a bad
thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you’ll notice that it
won’t overwrite the generated image when it already exists. You can
either delete the file yourself first (rm image.raw) or let mkosi
do it for you right before building a new image, with mkosi -f. You
can also tell mkosi to not only remove any such pre-existing images,
but also remove any cached copies of the incremental feature, by using
-f twice.

I wrote mkosi originally in order to test systemd, and quickly
generate a disk image of various distributions with the most current
systemd version from git, without all that affecting my host system. I
regularly use mkosi for that today, in incremental mode. The two
commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw

And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw

The latter I use only if I want to regenerate everything based on the
very newest set of RPMs provided by Fedora, instead of a cached
snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git
tree: mkosi.default and mkosi.build. This way, any developer who wants
to quickly test something with current systemd git, or wants to
prepare a patch based on it and test it, can check out the systemd
repository, simply run mkosi in it, and a few minutes later have a
bootable image to test in systemd-nspawn or KVM. casync has similar
files: mkosi.default and mkosi.build.
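
Spelled out, that workflow is roughly the following (the repository
URL is the canonical GitHub one; the output name assumes the default
image.raw, which the tree’s mkosi.default may override):

# git clone https://github.com/systemd/systemd.git
# cd systemd
# mkosi -i
# systemd-nspawn -bi image.raw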

Random Interesting Features

  1. As mentioned already, mkosi will generate dm-verity enabled
    disk images if you ask for it. For that use the --verity switch on
    the command line or the Verity= setting in mkosi.default. Of course,
    dm-verity implies that the root volume is read-only. In this mode
    the top-level dm-verity hash will be placed alongside the output
    disk image in a file named the same way, but with the .roothash
    suffix. If the image is made bootable, the root hash is also
    included on the kernel command line in the roothash= parameter,
    which current systemd versions can use to both find and activate the
    root partition in a dm-verity protected way. BTW: it’s a good idea
    to combine this dm-verity mode with the raw_squashfs image mode,
    to generate a genuinely protected, compressed image suitable for
    running on your IoT device (a combined example follows after this list).

  2. As indicated above, mkosi can automatically create a check-sum
    file SHA256SUMS for you (--checksum) covering all the files it
    outputs (which could be the image file itself, a matching .nspawn
    file using the mkosi.nspawn file mentioned above, as well as the
    .roothash file for the dm-verity root hash.) It can then
    optionally sign this with gpg (--sign). Note that systemd’s
    machinectl pull-tar and machinectl pull-raw commands can download
    these files and the SHA256SUMS file automatically and verify things
    on download. In other words: what mkosi outputs is perfectly
    ready for downloads using these two systemd commands.

  3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To
    make use of that, place your X.509 key pair in the two files
    mkosi.secure-boot.crt and mkosi.secure-boot.key mentioned above, and
    set SecureBoot= or --secure-boot. If so, mkosi will sign the
    kernel/initrd/kernel command line combination during the build. Of
    course, if you use this mode, you should also use
    Verity=/--verity=, otherwise the setup makes only partial
    sense. Note that mkosi will not help you with actually enrolling
    the keys you use in your UEFI BIOS.

  4. mkosi has minimal support for GIT checkouts: when it recognizes
    it is run in a git checkout and you use the mkosi.build script
    stuff, the source tree will be copied into the build image, but with
    all files excluded by .gitignore removed.

  5. There’s support for encryption in place. Use --encrypt= or
    Encrypt=. Note that the UEFI ESP is never encrypted though, and the
    root partition only if explicitly requested. The /home and /srv
    partitions are unconditionally encrypted if that’s enabled.

  6. Images may be built with all documentation removed.

  7. The password for the root user and additional kernel command line
    arguments may be configured for the image being generated.
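
Putting the first three of these features together, a hedged example
command line could look like this (the option names follow the
descriptions above, but double-check them against mkosi --help for the
version you have installed):

# mkosi -d fedora -t raw_squashfs --bootable --verity --checksum --sign

This should produce the image itself, a matching .roothash file and a
SHA256SUMS file plus a gpg signature for it. Adding --secure-boot on
top additionally signs the kernel/initrd, provided the
mkosi.secure-boot.crt/.key pair described earlier is in place.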

Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies,
listed in the README. Most
notably you need a somewhat recent systemd version to make use of its
full feature set: systemd 233. Older mkosi versions are already
packaged for various distributions, but much of what I describe above
is only available in the most recent release, mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn’t
available in Fedora, but there’s a COPR.

Future

It is my intention to continue turning mkosi into a tool suitable
for:

  1. Testing and debugging projects
  2. Building images for secure devices
  3. Building portable service images
  4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and
systemd/sd-boot native support for A/B IoT style partition
setups. The idea is that the combination of systemd, casync and
mkosi provides generic building blocks for building secure,
auto-updating devices, even though all the pieces may be used
individually, too.

FAQ

  1. Why are you reinventing the wheel again? This is exactly like
    $SOMEOTHERPROJECT!
    — Well, to my knowledge there’s no tool that
    integrates this nicely with your project’s development tree, and can
    do dm-verity and UEFI SecureBoot and all that stuff for you. So
    nope, I don’t think this is exactly like $SOMEOTHERPROJECT, thank you
    very much.

  2. What about creating MBR/DOS partition images? — That’s really
    out of focus for me. This is an exercise in figuring out how generic
    OSes and devices in the future should be built and an attempt to
    commoditize OS image building. And no, the future doesn’t speak MBR,
    sorry. That said, I’d be quite interested in adding support for
    booting on the Raspberry Pi, possibly using a hybrid approach, i.e.
    using a GPT disk label, but arranging things in a way that the
    Raspberry Pi boot protocol (which is built around DOS partition
    tables) can still work.

  3. Is this portable? — Well, depends what you mean by
    portable. No, this tool runs on Linux only, and as it uses
    systemd-nspawn during the build process it doesn’t run on
    non-systemd systems either. But then again, you should be able to
    create images for any architecture you like with it; of course, if
    you want the image bootable on bare-metal systems, only systems doing
    UEFI are supported (systemd-nspawn should still work fine on
    the others).

  4. Where can I get this stuff? — Try
    GitHub. And some distributions
    carry packaged versions, but I think none of them carry the current
    v3 yet.

  5. Is this a systemd project? — Yes, it’s hosted under the
    systemd GitHub umbrella. And yes,
    during run-time systemd-nspawn in a current version is required. But
    no, the code-bases are otherwise separate, if only because systemd
    is a C project and mkosi is written in Python.

  6. Requiring systemd 233 is a pretty steep requirement, no?
    Yes, but the feature we need kind of matters (systemd-nspawn’s
    --overlay= switch), and again, this isn’t supposed to be a tool for
    legacy systems.

  7. Can I run the resulting images in LXC or Docker? — Humm, I am
    not an LXC nor Docker guy. If you select directory or subvolume
    as image type, LXC should be able to boot the generated images just
    fine, but I didn’t try. Last time I looked, Docker doesn’t permit
    running proper init systems as PID 1 inside the container, as they
    define their own run-time without intention to emulate a proper
    system. Hence, no I don’t think it will work, at least not with an
    unpatched Docker version. That said, again, don’t ask me questions
    about Docker, it’s not precisely my area of expertise, and quite
    frankly I am not a fan. To my knowledge neither LXC nor Docker are
    able to run containers directly off GPT disk images, hence the
    various raw_xyz image types are definitely not compatible with
    either. That means if you want to generate a single raw disk image
    that can be booted unmodified both in a container and on bare-metal,
    then systemd-nspawn is the container manager to go for
    (specifically, its -i/--image= switch).

Should you care? Is this a tool for you?

Well, that’s up to you really.

If you hack on some complex project and need a quick way to compile
and run your project on a specific current Linux distribution, then
mkosi is an excellent way to do that. Simply drop the mkosi.default
and mkosi.build files in your git tree and everything will be
easy. (And of course, as indicated above: if the project you are
hacking on happens to be called systemd or casync be aware that
those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great
choice too, as it will make it reasonably easy to generate secure
images that are protected against offline modification, by using
dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a
VM or systemd-nspawn container, or a portable service, then mkosi
is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd
init systems, old VM managers, Docker, … then no, mkosi is not for
you, but there are plenty of well-established alternatives around that
cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to
accept your patches and other contributions.

Oh, and one unrelated last thing: don’t forget to submit your talk
proposal and/or buy a ticket for
All Systems Go! 2017 in Berlin — the
conference where things like systemd, casync and mkosi are
discussed, along with a variety of other Linux userspace projects used
for building systems.

GitHub announces Open Source Friday

Post Syndicated from ris original https://lwn.net/Articles/726598/rss

GitHub has announced
a new program that aims to make it easier for people to contribute to open
source projects. “Open Source Friday isn’t limited to
individuals. Your team, department, or company can take part,
too. Contributing to the software you already use isn’t altruistic—it’s an
investment in the tools your company relies on. And you can always start
small: spend two hours every Friday working on an open source project
relevant to your business. Whether you’re an aspiring contributor or active
maintainer of open source software, we help you track and share your Friday
contributions. We also provide a framework for regular contribution, along
with resources to help you convince your employers to join in.”

Cox: Supreme Court Suggests That Pirates Shouldn’t Lose Internet Access

Post Syndicated from Ernesto original https://torrentfreak.com/cox-supreme-court-suggests-that-pirates-shouldnt-lose-internet-access-170627/

In December 2015, a Virginia federal jury held Internet provider Cox Communications responsible for the copyright infringements of its subscribers.

The ISP refused to disconnect alleged pirates and was found guilty of willful contributory copyright infringement. In addition, it was ordered to pay music publisher BMG Rights Management $25 million in damages.

Cox has since filed an appeal and this week it submitted an additional piece of evidence from the US Supreme Court, stating that this strongly supports its side of the argument.

Last week the Supreme Court issued an important verdict in Packingham v. North Carolina, ruling that it’s unconstitutional to bar convicted sex offenders from social media. The Court described the Internet as an important tool for people to exercise free speech rights.

While nothing in the ruling refers to online piracy, it could turn out to be crucial in the case between Cox and BMG. The Internet provider now argues that if convicted criminals have the right to use the Internet, accused file-sharers should have it too.

“Packingham is directly relevant to what constitute ‘appropriate circumstances’ to terminate Internet access to Cox’s customers. The decision emphatically establishes the centrality of Internet access to protected First Amendment activity…,” Cox writes in its filing at the Court of Appeals.

“As the Court recognized, Internet sources are often ‘the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge’.”

Citing the Supreme Court ruling, Cox notes that the Government “may not suppress lawful speech as the means to suppress unlawful speech.” This would be the case if entire households lost Internet access because a copyright holder accused someone of repeated copyright infringements.

“The Court’s analysis strongly suggests that at least intermediate scrutiny must apply to any law that purports to restrict the ability of a class of persons to access the Internet,” the ISP writes (pdf).

In its case against BMG, Cox was held liable because it failed to take appropriate action against frequent pirates, solely based on allegations of piracy monitoring outfit Rightscorp. Cox doesn’t believe these one-sided complaints should be enough for people to be disconnected from the Internet.

If convicted sex offenders still have the right to use social media, accused pirates should not be barred from the Internet on a whim, the argument goes.

“And if it offends the Constitution to cut off a portion of Internet access to convicted criminals, then the district court’s erroneous interpretation of Section 512(i) of the DMCA — which effectively invokes the state’s coercive power to require ISPs to terminate all Internet access to merely accused infringers — cannot stand,” Cox writes.

Whether the Court of Appeals will agree has yet to be seen, but with the stakes at hand this issue is far from resolved. In addition to the case between BMG and Cox, the MPAA recently filed a lawsuit against Grande Communications, which centers around the same issue.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Yahoo Mail’s New Tech Stack, Built for Performance and Reliability

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/162320493306

By Suhas Sadanandan, Director of Engineering 

When it comes to performance and reliability, there is perhaps no application where this matters more than with email. Today, we announced a new Yahoo Mail experience for desktop based on a completely rewritten tech stack that embodies these fundamental considerations and more.

We built the new Yahoo Mail experience using a best-in-class front-end tech stack with open source technologies including React, Redux, Node.js, react-intl (open-sourced by Yahoo), and others. A high-level architectural diagram of our stack is below.


New Yahoo Mail Tech Stack

In building our new tech stack, we made use of the most modern tools available in the industry to come up with the best experience for our users by optimizing the following fundamentals:

Performance

A key feature of the new Yahoo Mail architecture is blazing-fast initial loading (aka, launch).

We introduced new network routing which sends users to their nearest geo-located email servers (proximity-based routing). This has resulted in a significant reduction in time to first byte and should be immediately noticeable to our international users in particular.

We now do server-side rendering to allow our users to see their mail sooner. This change will be immediately noticeable to our low-bandwidth users. Our application is isomorphic, meaning that the same code runs on the server (using Node.js) and the client. Prior versions of Yahoo Mail had programming logic duplicated on the server and the client because we used PHP on the server and JavaScript on the client.   

Using efficient bundling strategies (JavaScript code is separated into application, vendor, and lazy loaded bundles) and pushing only the changed bundles during production pushes, we keep the cache hit ratio high. By using react-atomic-css, our homegrown solution for writing modular and scoped CSS in React, we get much better CSS reuse.  

In prior versions of Yahoo Mail, the need to run various experiments in parallel resulted in additional branching and bloating of our JavaScript and CSS code. While rewriting all of our code, we solved this issue using Mendel, our homegrown solution for bucket testing isomorphic web apps, which we have open sourced.  

Rather than using custom libraries, we use native HTML5 APIs and ES6 heavily and use PolyesterJS, our homegrown polyfill solution, to fill the gaps. These factors have further helped us to keep payload size minimal.

With all the above optimizations, we have been able to reduce our JavaScript and CSS footprint by approximately 50% compared to the previous desktop version of Yahoo Mail, helping us achieve a blazing-fast launch.

In addition to initial launch improvements, key features like search and message read (when a user opens an email to read it) have also benefited from the above optimizations and are considerably faster in the latest version of Yahoo Mail.

We also significantly reduced the memory consumed by Yahoo Mail on the browser. This is especially noticeable during a long running session.

Reliability

With this new version of Yahoo Mail, we have a 99.99% success rate on core flows: launch, message read, compose, search, and actions that affect messages. Accomplishing this over several billion user actions a day is a significant feat. Client-side errors (JavaScript exceptions) are reduced significantly when compared to prior Yahoo Mail versions.

Product agility and launch velocity

We focused on independently deployable components. As part of the re-architecture of Yahoo Mail, we invested in a robust continuous integration and delivery flow. Our new pipeline allows for daily (or more) pushes to all Mail users, and we push only the bundles that are modified, which keeps the cache hit ratio high.

Developer effectiveness and satisfaction

In developing our tech stack for the new Yahoo Mail experience, we heavily leveraged open source technologies, which allowed us to ensure a shorter learning curve for new engineers. We were able to implement a consistent and intuitive onboarding program for 30+ developers and are now using our program for all new hires. During the development process, we emphasise predictable flows and easy debugging.

Accessibility

The accessibility of this new version of Yahoo Mail is state of the art and delivers outstanding usability (efficiency) in addition to accessibility. It features six enhanced visual themes that can provide accommodation for people with low vision and has been optimized for use with Assistive Technology including alternate input devices, magnifiers, and popular screen readers such as NVDA and VoiceOver. These features have been rigorously evaluated and incorporate feedback from users with disabilities. It sets a new standard for the accessibility of web-based mail and is our most-accessible Mail experience yet.

Open source 

We have open sourced some key components of our new Mail stack, like Mendel, our solution for bucket testing isomorphic web applications. We invite the community to use and build upon our code. Going forward, we plan on also open sourcing additional components like react-atomic-css, our solution for writing modular and scoped CSS in React, and lazy-component, our solution for on-demand loading of resources.

Many of our company’s best technical minds came together to write a brand new tech stack and enable a delightful new Yahoo Mail experience for our users.

We encourage our users and engineering peers in the industry to test the limits of our application, and to provide feedback by clicking on the Give Feedback call out in the lower left corner of the new version of Yahoo Mail.

Backblaze B2, Cloud Storage on a Budget: One Year Later

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/backblaze-b2-cloud-storage-on-a-budget-one-year-later/

B2 Cloud Storage Review

A year ago, Backblaze B2 Cloud Storage came out of beta and became available for everyone to use. We were pretty excited, even though it seemed like everyone and their brother had a cloud storage offering. Now that we are a year down the road let’s see how B2 has fared in the real world of tight budgets, maxed-out engineering schedules, insanely funded competition, and more. Spoiler alert: We’re still pretty excited…

Cloud Storage on a Budget

There are dozens of companies offering cloud storage and the landscape is cluttered with incomprehensible pricing models, cleverly disguised transfer and download charges, and differing levels of service that seem to be driven more by marketing departments than customer needs.

Backblaze B2 keeps things simple: A single performant level of service, a single affordable price for storage ($0.005/GB/month), a single affordable price for downloads ($0.02/GB), and a single list of transaction charges – all on a single pricing page.

Who’s Using B2?

By making cloud storage affordable, companies and organizations now have a way to store their data in the cloud and still be able to access and restore it as quickly as needed. You don’t have to choose between price and performance. Here are a few examples:

  • Media & Entertainment: KLRU-TV, Austin PBS, is using B2 to preserve their video catalog of the world-renowned musical anthology series, Austin City Limits.
  • LTO Migration: The Girl Scouts San Diego were able to move their daily incremental backups from LTO tape to the cloud, saving money and time, while helping automate their entire backup process.
  • Cloud Migration: Vintage Aerial found it cost effective to discard their internal data server and store their unique hi-resolution images in B2 Cloud Storage.
  • Backup: Ahuja and Clark, a boutique accounting firm, was able to save over 80% on the cost to backup all their corporate and client data.

How is B2 Being Used?

B2 Cloud Storage can be accessed in four ways: using the Web GUI, using the CLI, using the API library, and using a product or service integrated with B2. While many customers are using the Web GUI, CLI and API to store and retrieve data, the most prolific use of B2 occurs via our integration partners. Each integration partner has certified they have met our best practices for integrating to B2 and we’ve tested each of the integrations submitted to us. Here are a few of the highlights.

  • NAS Devices – Synology and QNAP have integrations which allow their NAS devices to sync their data to/from B2.
  • Backup and Sync – CloudBerry, GoodSync, and Retrospect are just a few of the services that can backup and/or sync data to/from B2.
  • Hybrid Cloud – 45 Drives and OpenIO are solutions that allow you to setup and operate a hybrid data storage cloud environment.
  • Desktop Apps – CyberDuck, MountainDuck, Dropshare, and more allow users an easy way to store and use data in B2 right from your desktop.
  • Digital Asset Management – Cantemo, Cubix, CatDV, and axle Video, let you catalog your digital assets and then store them in B2 for fast retrieval when they are needed.

If you have an application or service that stores data in the cloud and it isn’t integrated with Backblaze B2, then your customers are probably paying too much for cloud storage.

What’s New in B2?

B2 Fireball – our rapid data ingest service. We send you a storage device, and you load it up with up to 40 TB of data and send it back, then we load the data into your B2 account. The cost is $550 per trip plus shipping. Save your network bandwidth with the B2 Fireball.

Lowered the download price – When we introduced B2, we set the price to download a gigabyte of data to be $0.05/GB – the same as most competitors. A year in, we reevaluated the price based on usage and decided to lower the price to $0.02/GB.

B2 User Groups – Backblaze Groups functionality is now available in B2. An administrator can invite users to a B2 centric Group to centralize the storage location for that group of users. For example, multiple members of a department working on a project will be able to archive their work-in-process activities into a single B2 bucket.

Time Machine backup – You may know that you can use your Synology NAS as the destination for your Time Machine backup. With B2 you can also sync your Synology NAS to B2 for a true 3-2-1 backup solution. If your system crashes or is lost, you can restore your Time Machine image directly from B2 to your new machine.

Life Cycle Rules – Create rules that allow you to manage the length of time deleted files will remain in your B2 bucket before they are deleted. A great option for managing the cleanup of outdated file versions to save on storage costs.

Large Files – In the B2 Web GUI you can upload files as large as 500 MB using either the upload or drag-and-drop functionality. The B2 CLI and API support the ability to upload/download files as large as 10 TB.

5 MB file part size – When working with large files, the minimum file part size can now be set as low as 5 MB versus the previous low setting of 100 MB. Now the range of a file part when working with large files can be from 5 MB to 5 GB. This increases the throughput of your data uploads and downloads.

SHA-1 at the end – This feature allows you to compute the SHA-1 checksum and append it to the end of the request body versus doing the computation before the file is sent. This is especially useful for those applications which stream data to/from B2.

Cache-Control – When data is downloaded from B2 into a browser, the length of time the file remains in the browser cache can be set at the bucket level using the b2_create_bucket and b2_update_bucket API calls. Setting this policy is optional.

Customized delimiters – Used in the API, this allows you to specify a delimiter to use for a given purpose. A common use is to set a delimiter in the file name string and then use that delimiter to detect a folder name within the string (for example, with “/” as the delimiter, the file name photos/2017/picture.jpg can be treated as living in the photos/2017/ folder).

Looking Ahead

Over the past year we added nearly 30,000 new B2 customers to the fold and are welcoming more and more each day as B2 continues to grow. We have plans to expand our storage footprint by adding more data centers as we look forward to moving towards a multi-region environment.

For those of you who are B2 customers – thank you for helping build B2. If you have an interesting way you are using B2, tell us in the comments below.

The post Backblaze B2, Cloud Storage on a Budget: One Year Later appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Fighting Leakers at Apple

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/fighting_leaker.html

Apple is fighting its own battle against leakers, using people and tactics from the NSA.

According to the hour-long presentation, Apple’s Global Security team employs an undisclosed number of investigators around the world to prevent information from reaching competitors, counterfeiters, and the press, as well as hunt down the source when leaks do occur. Some of these investigators have previously worked at U.S. intelligence agencies like the National Security Agency (NSA), law enforcement agencies like the FBI and the U.S. Secret Service, and in the U.S. military.

The information is from an internal briefing, which was leaked.

Scratch 2.0: all-new features for your Raspberry Pi

Post Syndicated from Rik Cross original https://www.raspberrypi.org/blog/scratch-2-raspberry-pi/

We’re very excited to announce that Scratch 2.0 is now available as an offline app for the Raspberry Pi! This new version of Scratch allows you to control the Pi’s GPIO (General Purpose Input and Output) pins, and offers a host of other exciting new features.

Offline accessibility

The most recent update to Raspbian includes the app, which makes Scratch 2.0 available offline on the Raspberry Pi. This is great news for clubs and classrooms, where children can now use Raspberry Pis instead of connected laptops or desktops to explore block-based programming and physical computing.

Controlling GPIO with Scratch 2.0

As with Scratch 1.4, Scratch 2.0 on the Raspberry Pi allows you to create code to control and respond to components connected to the Pi’s GPIO pins. This means that your Scratch projects can light LEDs, sound buzzers and use input from buttons and a range of sensors to control the behaviour of sprites. Interacting with GPIO pins in Scratch 2.0 is easier than ever before, as text-based broadcast instructions have been replaced with custom blocks for setting pin output and getting current pin state.

Scratch 2.0 GPIO blocks

To add GPIO functionality, first click ‘More Blocks’ and then ‘Add an Extension’. You should then select the ‘Pi GPIO’ extension option and click OK.

Scratch 2.0 GPIO extension

In the ‘More Blocks’ section you should now see the additional blocks for controlling and responding to your Pi GPIO pins. To give an example, the entire code for repeatedly flashing an LED connected to a GPIO pin is now:

Flashing an LED with Scratch 2.0

To react to a button connected to a GPIO pin, simply set the pin as input, and use the ‘gpio (x) is high?’ block to check the button’s state. In the example below, the Scratch cat will say “Pressed” only when the button is being held down.

Responding to a button press on Scratch 2.0

Cloning sprites

Scratch 2.0 also offers some additional features and improvements over Scratch 1.4. One of the main new features of Scratch 2.0 is the ability to create clones of sprites. Clones are instances of a particular sprite that inherit all of the scripts of the main sprite.

The scripts below show how cloned sprites are used — in this case to allow the Scratch cat to throw a clone of an apple sprite whenever the space key is pressed. Each apple sprite clone then follows its ‘when I start as a clone’ script.

Cloning sprites with Scratch 2.0

The cloning functionality avoids the need to create multiple copies of a sprite, for example multiple enemies in a game or multiple snowflakes in an animation.

Custom blocks

Scratch 2.0 also allows the creation of custom blocks, allowing code to be encapsulated and used (possibly multiple times) in a project. The code below shows a simple custom block called ‘jump’, which is used to make a sprite jump whenever it is clicked.

Custom 'jump' block on Scratch 2.0

These custom blocks can also optionally include parameters, allowing further generalisation and reuse of code blocks. Here’s another example of a custom block that draws a shape. This time, however, the custom block includes parameters for specifying the number of sides of the shape, as well as the length of each side.

Custom shape-drawing block with Scratch 2.0

The custom block can now be used with different numbers provided, allowing lots of different shapes to be drawn.

Drawing shapes with Scratch 2.0

Peripheral interaction

Another feature of Scratch 2.0 is the addition of code blocks to allow easy interaction with a webcam or a microphone. This opens up a whole new world of possibilities, and for some examples of projects that make use of this new functionality see Clap-O-Meter which uses the microphone to control a noise level meter, and a Keepie Uppies game that uses video motion to control a football. You can use the Raspberry Pi or USB cameras to detect motion in your Scratch 2.0 projects.

Other new features include a vector image editor and a sound editor, as well as lots of new sprites, costumes and backdrops.

Update your Raspberry Pi for Scratch 2.0

Scratch 2.0 is available in the latest Raspbian release, under the ‘Programming’ menu. We’ve put together a guide for getting started with Scratch 2.0 on the Raspberry Pi online (note that GPIO functionality is only available via the desktop version). You can also try out Scratch 2.0 on the Pi by having a go at a project from the Code Club projects site.

As always, we love to see the projects you create using the Raspberry Pi. Once you’ve upgraded to Scratch 2.0, tell us about your projects via Twitter, Instagram and Facebook, or by leaving us a comment below.

The post Scratch 2.0: all-new features for your Raspberry Pi appeared first on Raspberry Pi.

T411, France’s Most-Visited Torrent Site, Has Been Shut Down

Post Syndicated from Andy original https://torrentfreak.com/t411-frances-most-visited-torrent-site-has-disappeared-170627/

As the number one torrent site among French speakers and one of the most popular sites in France, T411’s rise to stardom is the product of more than a decade of twists and turns.

After a prolonged battle against 31 Canadian media organizations including the CRIA, the torrent site known as QuebecTorrent closed its doors in 2008 following the handing down of a permanent injunction.

“I just wanna say thanks to all the people who supported the cause and me all along,” admin Sebastian Doditz told TorrentFreak at the time.

Initially, it was believed that the 109,000 members of the site would be left homeless, but shortly after, another torrent site appeared. Called Torrent411, with the slogan The Torrent Yellow Pages (411 is the directory assistance number in Canada), it launched with around 109,000 members – the number that QuebecTorrent closed with.

No surprise then that all QuebecTorrent user accounts had been transferred to T411, including ratios and even some content categories that were previously excluded due to copyright holder disputes.

“Welcome to one and all!” a notice on the site read. “It is with great pleasure that we launch the Torrent411.com site today. All the team of Torrent411.com wishes you the most cordial of welcomes! Here you will find all the torrents imaginable which will be for you for thousands of hours to come! Filled with surprises that await you!”

Even following its resurrection, pressure on the site continued to build. In 2011, it was forced to move to T411.me, to avoid problems with its .com domain, but against the odds, it continued to grow.

As shown in the image to the right (courtesy OpenTrackers), in 2013 the site had more than 5.3 million members, 336,000 torrents, and 4.7m seeders. That made it a significant site indeed.

In early 2015, the site decided to move again, from .me to .io, following action to have the site blocked in France.

But later in the year, there was yet more trouble when the site found itself reported to the United States Trade Representative, identified as a “rogue site” by the RIAA.

With a number of copyright holders on its back, it’s clear that T411’s troubles weren’t going away anytime soon, but now there’s a crisis from which the site is unlikely to recover.

On Sunday, T411 simply stopped responding on its latest T411.al domain. No warning and no useful messages have been forthcoming from its operators. For a site of this scale and resilience, that’s not something one expects.

Message greeting site visitors

Even though the site itself has been down, there have been some very basic signs of life. For example, the site’s Wiki remained operational which indicates the T411.al domain is at least partially intact, at least for now. But for those hoping for good news, none will be forthcoming.

Moments ago, French journalist Tristan Brossat confirmed that T411 has been shut down in a joint operation between French and Swedish police.

He reports that “the brains” behind the site (reportedly two Ukrainians) have been arrested. Servers hosted at a Swedish company have been seized.

Anti-piracy activity against France-connected torrent sites has been high during recent months. Last November, torrent icon What.cd shutdown following action by French authorities.

Soon after, the cybercrime unit of the French military police targeted the country’s largest pirate site, Zone-Telechargement (1,2).

Update: A source familiar with developments informs TF that one of those arrested in Sweden was a developer. In France, he reports that moderators have been arrested.

Update2: The arrests in Sweden took place in the Huddinge Municipality in Stockholm County, east central Sweden. The men are said to be around 30 years old and are suspected of copyright infringement and money laundering offenses.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.