Tag Archives: R

Canada’s Supreme Court Orders Google to Remove Search Results Worldwide

Post Syndicated from Andy original https://torrentfreak.com/canadas-supreme-court-orders-google-remove-search-results-worldwide-170629/

Back in 2014, the case of Equustek Solutions Inc. v. Jack saw two Canadian entities battle over stolen intellectual property used to manufacture competing products.

Google had no direct links to the case, yet it became embroiled when Equustek Solutions claimed that Google’s search results helped to send visitors to websites operated by the defendants (former Equustek employees) who were selling unlawful products.

Google voluntarily removed links to the sites from its Google.ca (Canada) results, but Equustek demanded a more comprehensive response. It got one.

In a ruling handed down by a court in British Columbia, Google was ordered to remove the infringing websites’ listings from its central database in the United States, meaning that the ruling had worldwide implications.

Google filed an appeal hoping for a better result, arguing that it does not operate servers in British Columbia, nor does it operate any local offices. It also questioned whether the injunction could be enforced outside Canada’s borders.

Ultimately, the British Columbia Court of Appeal disappointed the search giant. In a June 2015 ruling, the Court decided that Google does indeed do business in the region. It also found that a decision to restrict infringement was unlikely to offend any overseas nation.

“The plaintiffs have established, in my view, that an order limited to the google.ca search site would not be effective. I am satisfied that there was a basis, here, for giving the injunction worldwide effect,” Justice Groberman wrote.

Undeterred, Google took its case all the way to the Supreme Court of Canada, hoping to limit the scope of the injunction by arguing that it violates freedom of expression. That effort has now failed.

In a 7-2 majority decision released Wednesday, Google was branded a “determinative player” in facilitating harm to Equustek.

“This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders,” wrote Justice Rosalia Abella.

“We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.”

With Google now required to delist the sites on a global basis, the big question is what happens when other players attempt to apply the ruling to their particular business sector. Unsurprisingly, that hasn’t taken long.

The International Federation of the Phonographic Industry (IFPI), which supported Equustek’s position in the long-running case, welcomed the decision and said that Google must “take on the responsibility” to ensure it does not direct users to illegal sites.

“Canada’s highest court has handed down a decision that is very good news for rights holders both in Canada and around the world. Whilst this was not a music piracy case, search engines play a prominent role in directing users to illegal content online including illegal music sites,” said IFPI CEO, Frances Moore.

“If the digital economy is to grow to its full potential, online intermediaries, including search engines, must play their part by ensuring that their services are not used to facilitate the infringement of intellectual property rights.”

Graham Henderson, President and CEO of Music Canada, which represents Sony, Universal, Warner and others, also welcomed the ruling.

“Today’s decision confirms that online service providers cannot turn a blind eye to illegal activity that they facilitate; on the contrary, they have an affirmative duty to take steps to prevent the Internet from becoming a black market,” Henderson said.

But for every voice of approval from groups like IFPI and Music Canada, others raised concerns over the scope of the decision and its potential to create a legal and political minefield. In particular, University of Ottawa professor Michael Geist raised a number of interesting scenarios.

“What happens if a Chinese court orders [Google] to remove Taiwanese sites from the index? Or if an Iranian court orders it to remove gay and lesbian sites from the index? Since local content laws differ from country to country, there is a great likelihood of conflicts,” Geist said.

But rather than painting Google as the loser in this battle, Geist believes the decision actually grants the search giant more power.

“When it comes to Internet jurisdiction, exercising restraint and limiting the scope of court orders is likely to increase global respect for the law and the effectiveness of judicial decisions. Yet this decision demonstrates what many have feared: the temptation for courts will be to assert jurisdiction over online activities and leave it to the parties to sort out potential conflicts,” Geist says.

“In doing so, the Supreme Court of Canada has lent its support to global takedowns and vested more power in Internet intermediaries, who may increasingly emerge as the arbiters of which laws to follow online.”

Only time will tell how Google will react, but it’s clear there will be plenty of entities ready to test the limits and scope of the company’s responses to the ruling.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Power Bundle for Amazon WorkSpaces – More vCPUs, Memory, and Storage

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-power-bundle-for-amazon-workspaces-more-vcpus-memory-and-storage/

Are you tired of hearing me talk about Amazon WorkSpaces yet? I hope not, because we have a lot of customer-driven additions on the roadmap! Our customers in the developer and analyst community have been asking for a workstation-class machine that will allow them to take advantage of the low cost and flexibility of WorkSpaces. Developers want to run Visual Studio, IntelliJ, Eclipse, and other IDEs. Analysts want to run complex simulations and statistical analysis using MATLAB, GNU Octave, R, and Stata.

New Power Bundle
Today we are extending the current set of WorkSpaces bundles with a new Power bundle. With four vCPUs, 16 GiB of memory, and 275 GB of storage (175 GB on the system volume and another 100 GB on the user volume), this bundle is designed to make developers, analysts, (and me) smile. You can launch them in all of the usual ways: Console, CLI (create-workspaces), or API (CreateWorkSpaces):
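If you prefer to script the launch, here is a minimal sketch using Python and boto3; the directory ID, user name, and bundle ID below are placeholders you would replace with your own values:

import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

# Launch a single WorkSpace from the Power bundle in AutoStop mode.
# DirectoryId, UserName, and BundleId are illustrative placeholders.
response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-0123456789",
            "UserName": "jbarr",
            "BundleId": "wsb-xxxxxxxxx",  # the Power bundle ID in your Region
            "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
        }
    ]
)

# Requests that failed validation show up here; the rest are provisioned asynchronously.
print(response["FailedRequests"])
print(response["PendingRequests"])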

One really interesting benefit to using a cloud-based virtual desktop for simulations and statistical analysis is the ease of access to data that’s already stored in the cloud. Analysts can mine and analyze petabytes of data stored in S3 that is effectively local (with respect to access time) to the WorkSpace. This low-latency access will boost productivity and also simplifies the use of other AWS data analysis tools such as Amazon Redshift, Amazon Redshift Spectrum, Amazon QuickSight, and Amazon Athena.

Like the existing bundles, the new Power bundle can be used in either billing configuration, AlwaysOn or AutoStop (read Amazon WorkSpaces Update – Hourly Usage and Expanded Root Volume to learn more). The bundle is available in all AWS Regions where WorkSpaces is available and you can launch one today! Visit the WorkSpaces Pricing page for pricing in your region.

Jeff;

[$] Distributing filesystem images and updates with casync

Post Syndicated from jake original https://lwn.net/Articles/726625/rss

Recently, Lennart Poettering announced a new tool called casync for efficiently distributing filesystem and disk images. Deployment of virtual machines or containers often requires such an image to be distributed for them. These images typically contain most or all of an entire operating system and its requisite data files; they can be quite large. The images also often need updates, which can take up considerable bandwidth depending on how efficient the update mechanism is. Poettering developed casync as an efficient tool for distributing such filesystem images, as well as for their updates.

[$] An introduction to asynchronous Python

Post Syndicated from jake original https://lwn.net/Articles/726600/rss

In his PyCon 2017 talk, Miguel Grinberg wanted to introduce asynchronous programming with Python to complete beginners. There is a lot of talk about asynchronous Python, especially with the advent of the asyncio module, but there are multiple ways to create asynchronous Python programs, many of which have been available for quite some time. In the talk, Grinberg took something of a step back from the intricacies of those solutions to look at what asynchronous processing means at a higher level.
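To give a flavor of the coroutine style the talk works up to, here is a minimal asyncio example (my own illustration, not code from the talk): two coroutines that sleep concurrently rather than one after the other.

import asyncio

async def greet(name, delay):
    # await hands control back to the event loop while this coroutine sleeps.
    await asyncio.sleep(delay)
    print("Hello,", name)

async def main():
    # gather() runs both coroutines concurrently; total time is ~2s, not 3s.
    await asyncio.gather(greet("PyCon", 1), greet("world", 2))

loop = asyncio.get_event_loop()
loop.run_until_complete(main())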

Now Available – Developer Preview of AWS SDK for Java 2.0

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-developer-preview-of-aws-sdk-for-java-2-0/

The AWS Developer Tools Team has been hard at work on the AWS SDK for Java and is launching a Developer Preview of version 2.0 today.

This version is a major rewrite of the older, 1.11.x codebase. Built on top of Java 8 with a focus on consistency, immutability and ease of use, the new SDK includes frequently requested features such as support for non-blocking I/O and the ability to choose the desired HTTP implementation at runtime. The new non-blocking I/O support is more efficient than the existing, thread-based implementation of the Async variants of the service clients. Each non-blocking request returns a CompletableFuture object.

The version 2.0 SDK includes a number of changes to the earlier APIs. For example, it replaces the existing mix of client constructors and mutable methods with a consistent model based on client builders and immutable clients. The SDK also collapses the disparate collection of classes used to configure regions into a single Region class, and provides a new set of APIs for streaming.

The SDK is available on GitHub. You can send public feedback by opening GitHub issues and you can also send pull requests in the usual way.

To learn more about this SDK, read AWS SDK for Java 2.0 – Developer Preview on the AWS Developer Blog.

Jeff;

 

NotPetya Ransomware Wreaking Havoc

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/0IfKiBP5jIo/

The latest splash has been made by the Petya or NotPetya Ransomware that exploded in Ukraine and is infecting companies all over the world. It’s getting some people in deep trouble as there’s no way to recover the files once encrypted. The malware seems to be trying to hide its intent as it doesn’t really […]

The post NotPetya Ransomware…

Read the full post at darknet.org.uk

Validating AWS CloudFormation Templates

Post Syndicated from Remek Hetman original https://aws.amazon.com/blogs/devops/validating-aws-cloudformation-templates/

For their continuous integration and continuous deployment (CI/CD) pipeline path, many companies use tools like Jenkins, Chef, and AWS CloudFormation. Usually, the process is managed by two or more teams. One team is responsible for designing and developing an application, CloudFormation templates, and so on. The other team is generally responsible for integration and deployment.

One of the challenges that a CI/CD team has is to validate the CloudFormation templates provided by the development team. Validation provides early warning about any incorrect syntax and ensures that the development team follows company policies in terms of security and the resources created by CloudFormation templates.

In this post, I focus on the validation of AWS CloudFormation templates for syntax as well as in the context of business rules.

Scripted validation solution

For CloudFormation syntax validation, one option is to use the AWS CLI to call the validate-template command. For security and resource management, another approach is to run a Jenkins pipeline from an Amazon EC2 instance under an EC2 role that has been granted only the necessary permissions.
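For reference, the same syntax check can be done in a few lines of Python with boto3. This is a generic sketch (the template filename is a placeholder), separate from the cf-validator script described below:

import boto3
from botocore.exceptions import ClientError

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Read a local template (placeholder filename) and ask CloudFormation to validate it.
with open("my-template.json") as f:
    body = f.read()

try:
    result = cfn.validate_template(TemplateBody=body)
    print("Template is valid. Declared parameters:",
          [p["ParameterKey"] for p in result.get("Parameters", [])])
except ClientError as err:
    print("Template validation failed:", err)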

What if you need more control over your CloudFormation templates, such as managing parameters or attributes? What if you have many development teams where permissions to the AWS environment required by one team are either too open or not open enough for another team?

To have more control over the contents of your CloudFormation template, you can use the cf-validator Python script, which shows you how to validate different template aspects. With this script, you can validate:

  • JSON syntax
  • IAM capabilities
  • Root tags
  • Parameters
  • CloudFormation resources
  • Attributes
  • Reference resources

You can download this script from the cf-validator GitHub repo. Use the following command to run the script:

python cf-validator.py

The script takes the following parameters:

  • --cf_path [Required]

    The location of the CloudFormation template in JSON format. Supported location types:

    • File system – Path to the CloudFormation template on the file system
    • Web – URL, for example, https://my-file.com/my_cf.json
    • Amazon S3 – Amazon S3 bucket, for example, s3://my_bucket/my_cf.json
  • --cf_rules [Required]

    The location of the JSON file with the validation rules. This parameter supports the same locations as --cf_path. The next section of this post has more information about defining rules.

  • --cf_res [Optional]

    The location of the JSON file with the defined AWS resources, which need to be confirmed before launching the CloudFormation template. A later section of this post has more information about resource validation.

  • --allow_cap [Optional][yes/no]

    Controls whether you allow the creation of IAM resources by the CloudFormation template, such as policies, rules, or IAM users. The default value is no.

  • --region [Optional]

    The AWS region where the existing resources were created. The default value is us-east-1.
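Putting it together, a typical invocation might look like this (the bucket and file names are illustrative placeholders):

python cf-validator.py --cf_path s3://my_bucket/my_cf.json --cf_rules s3://my_bucket/cf_rules.json --cf_res s3://my_bucket/cf_resources.json --allow_cap no --region us-east-1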

Defining rules

All rules are defined in a JSON file. Rules consist of the following keys:

  • “allow_root_keys”

    Lists allowed root CloudFormation keys. Examples of root keys are Parameters, Resources, Outputs, and so on. An empty list means that any key is allowed.

  • “allow_parameters”

    Lists allowed CloudFormation parameters. For instance, to force each CloudFormation template to use only the set of parameters defined in your pipeline, list them under this key. An empty list means that any parameter is allowed.

  • “allow_resources”

    Lists the AWS resources allowed for creation by a CloudFormation template. The format of the resource is the same as resource types in CloudFormation, but without the “AWS::” prefix. Examples: EC2::Instance, EC2::Volume, and so on. If you allow the creation of all resources from the given group, you can use a wildcard. For instance, if you allow all resources related to CloudFormation, you can add CloudFormation::* to the list instead of typing CloudFormation::Init, CloudFormation::Stack, and so on. An empty list means that all resources are allowed.

  • “require_ref_attributes”

    Lists attributes (per resource) that have to be defined in CloudFormation. The value must be referenced and cannot be hardcoded. For instance, you can require that each EC2 instance must be created from a specific AMI where Image ID has to be a passed-in parameter. An empty list means that you are not requiring specific attributes to be present for a given resource.

  • “allow_additional_attributes”

    Lists additional attributes (per resource) that can be defined and have any value in the CloudFormation template. An empty list means that any additional attribute is allowed. If you specify additional attributes for this key, then any resource attribute defined in a CloudFormation template that is not listed in this key or in the require_ref_attributes key causes validation to fail.

  • “not_allow_attributes”

    Lists attributes (per resource) that are not allowed in the CloudFormation template. This key takes precedence over the require_ref_attributes and allow_additional_attributes keys.

Rule file example

The following is an example of a rule file:

{
  "allow_root_keys" : ["AWSTemplateFormatVersion", "Description", "Parameters", "Conditions", "Resources", "Outputs"],
  "allow_parameters" : [],
  "allow_resources" : [
    "CloudFormation::*",
    "CloudWatch::Alarm",
    "EC2::Instance",
    "EC2::Volume",
    "EC2::VolumeAttachment",
    "ElasticLoadBalancing::LoadBalancer",
    "IAM::Role",
    "IAM::Policy",
    "IAM::InstanceProfile"
  ],
  "require_ref_attributes" :
    {
      "EC2::Instance" : [ "InstanceType", "ImageId", "SecurityGroupIds", "SubnetId", "KeyName", "IamInstanceProfile" ],
      "ElasticLoadBalancing::LoadBalancer" : ["SecurityGroups", "Subnets"]
    },
  "allow_additional_attributes" : {},
  "not_allow_attributes" : {}
}

Validating resources

You can use the --cf_res parameter to validate that the resources you are planning to reference in the CloudFormation template exist and are available. As a value for this parameter, point to the JSON file with defined resources. The format should be as follows:

[
  { "Type" : "SG",
    "ID" : "sg-37c9b448A"
  },
  { "Type" : "AMI",
    "ID" : "ami-e7e523f1"
  },
  { "Type" : "Subnet",
    "ID" : "subnet-034e262e"
  }
]

Summary

At the moment, this CloudFormation template validation script supports only security groups, AMIs, and subnets. But anyone with some knowledge of Python and the boto3 package can add support for additional resource types, as needed.
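For example, a check for an existing security group could look roughly like this (a sketch of the general approach, not code taken from the cf-validator repo; the group ID is a placeholder):

import boto3
from botocore.exceptions import ClientError

def security_group_exists(sg_id, region="us-east-1"):
    # Returns True if the security group can be described, False otherwise.
    ec2 = boto3.client("ec2", region_name=region)
    try:
        ec2.describe_security_groups(GroupIds=[sg_id])
        return True
    except ClientError:
        return False

print(security_group_exists("sg-12345678"))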

For more tips, please visit our AWS CloudFormation blog.

The mkosi OS generation tool

Post Syndicated from ris original https://lwn.net/Articles/726655/rss

Last week Lennart Poettering introduced casync, a tool for distributing system images. This week he introduces mkosi, a tool for making OS images. “mkosi is definitely a tool with a focus on developer’s needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), and thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then “production-ready”, and only contains your built program and the minimal set of packages you configured otherwise. Such an image could then be deployed with casync (or any other tool of course) to be delivered to your set of servers, or IoT devices or whatever you are building.”

Operation ‘Pirate On Demand’ Blocks Pirate IPTV Portals

Post Syndicated from Andy original https://torrentfreak.com/operation-pirate-on-demand-blocks-pirate-iptv-portals-170628/

Via cheap set-top boxes, IPTV services (Internet Protocol TV) allow people to access thousands of live TV channels in their living rooms for a nominal fee.

Some of these services are available for just a few euros, dollars or pounds per month, often in HD quality.

While service levels can vary, some of the best also offer comprehensive Video On Demand (VOD), with hundreds and in some cases thousands of movies and TV shows on tap, supported by catch-up TV. Given their professional nature, the best IPTV products are proving a real thorn in the side for rights holders, who hope to charge ten times the money while delivering a lesser product.

As a result, crackdowns against IPTV providers, resellers and other people in the chain are underway across the world, but Europe in particular. Today’s news comes from Italy, where Operation “Pirate On Demand” is hoping to make a dent in IPTV piracy.

The operation is being headed up by the Guardia di Finanza (GdF), a department under Italy’s Minister of Economy and Finance. Part of the Italian Armed Forces, GdF says it has targeted nine sites involved in the unlawful distribution of content offered officially by local media giants Mediaset and Sky.

The authorities received assistance from a specialized team at the local anti-piracy group DCP, which operates on behalf of a broad range of entertainment industry companies.

According to GdF, a total of 89 servers were behind the portals which together delivered an estimated 178 terabytes of pirate content, ranging from TV shows and sports, to movies and children’s entertainment.

The nine portals are in the process of being blocked with some displaying the following message.

Seizure notice on the affected sites

The investigation began in September 2016 and was coordinated by Giangiacomo Pilia, the prosecutor at the Cagliari Court. Thus far, two people have been arrested.

A person arrested in the Varese area, who police believe is the commercial director of an illicit platform, has been charged with breaching copyright law.

A second individual arrested in Macerata is also suspected of copyright offenses, having technically managed the platform. Computer equipment, decoders, smart cards, and other electronic devices were also seized.

In addition to blocking various web portals, measures will now be taken to block the servers being used to supply the IPTV services. The GdF has also delivered a veiled threat to people who subscribed to the illicit services.

“The position of those who have actively accessed the platforms by purchasing pirated subscriptions, thus taking advantage of them, is also in the hands of investigators,” GdF said.

The moves this week are the latest to take place under the Operation “Pirate On Demand” banner. Back in March, authorities moved to shut down and block 15 portals offering illegal IPTV access to Mediaset and Sky channels.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Ripples from Stack Clash

Post Syndicated from corbet original https://lwn.net/Articles/726580/rss

In one sense, the Stack Clash vulnerability that was announced on June 19 has not had a huge impact: thus far, at least, there have been few (if any) stories of active exploits in the wild. At other levels, though, this would appear to be an important vulnerability, in that it has raised a number of questions about how the community handles security issues and what can be expected in the future. The indications, unfortunately, are not all positive.

Desert To Data in 7 Days – Our New Phoenix Data Center

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/data-center-design/

We are pleased to announce that Backblaze is now storing some of our customers’ data in our newest data center in Phoenix. Our Sacramento facility was slated to store about 500 petabytes of data and was starting to fill up, so it was time to expand. After visiting multiple locations in the US and Canada, we selected Phoenix as it had the right combination of power, networking, price, and more that we were seeking. Let’s take you through the process of getting the Phoenix data center up and running.

Day 0 – Designing the Data Center

After we selected the Phoenix location as our next DC (data center), we had to negotiate the contract. We’re going to skip that part of the process because, unless you’re a lawyer, it’s a long, boring process. Let’s just say we wanted to be ready to move in once the contract was signed. That meant we had to gather up everything we needed and order a bunch of other things like networking equipment, racks, storage pods, cables, etc. We decided to use our Sacramento DC as the staging point and started gathering what was going to be needed in Phoenix.

In actuality, for some items we started the process several months ago, as lead times for things like network switches, Storage Pods, and even hard drives can be measured in months and delays are normal. For example, depending on our move-in date, the network providers we wanted would only be able to provide limited bandwidth, so we had to prepare for that possibility. It helps to have a procurement person who knows what they are doing, can work the schedule, and is creatively flexible – thanks, Amanda.

So by Day 0, we had amassed multiple pallets of cabinets, network gear, PDUs, tools, hard drives, carts, Guido, and more. And yes, for all you Guido fans he is still with us and he now resides in Phoenix. Everything was wrapped and loaded into a 53-foot semi-truck that was driven the 755 miles (1,215 km) from Sacramento, California to Phoenix, Arizona.

Day 1 – Move In Day

We sent a crew of 5 people to Phoenix with the goal of going from empty space to being ready to accept data in one week. The truck from Sacramento arrived mid-morning, and work started on unloading and marshaling the pallets and boxes into one area, while the racks were placed near their permanent location on the DC floor.

Day 2 – Building the Racks

Day 2 was spent primarily working with the racks. First they were positioned to their precise location on the data center floor. They were then anchored down and tied together. We started with 2 rows of twenty-two racks each, with twenty being for storage pods and two being for networking equipment. By the end of the week there will be 4 rows of racks installed.

Day 3 – Networking and Power, Part 1

While one team continued to work on the racks, another team began the process of getting the racks connected to the electricity and running the network cables to the network distribution racks. Once that was done, networking gear and rack-based PDUs (Power Distribution Units) were installed in the racks.

Day 4 – Rack Storage Pods

The truck from Sacramento brought 100 Storage Pods, a combination of 45-drive and 60-drive systems. Why did we use 45-drive units here? It has to do with the size (in racks and power) of the initial installation commitment and the ramp (increase) of installations over time. Contract stuff: boring yes, important yes. Basically, to optimize our spend, we wanted to use as much of the initial space we were allotted as possible. Since we had a number of empty 45-drive chassis available in Sacramento, we decided to put them to use.

Day 5 – Drive Day

Our initial set-up goal was to build out five Backblaze Vaults. Each Vault is comprised of twenty Storage Pods. Four of the Vaults were filled with 45-drive Storage Pods and one was filled with 60-drive Storage Pods. That’s 4,800 hard drives to install – thank goodness we don’t use those rubber bands around the drives anymore.

Day 6 – Networking and Power, Part 2

With the storage pods in place, Day 6 was spent routing network and power cables to the individual pods. A critical part of the process is to label every wire so you know where it comes from and where it goes to. Once labeled, wires are bundled together and secured to the racks in a standard pattern. Not only does this make things look neat, it standardizes where you’ll find each cable across the hundreds of racks that are in the DC.

Day 7 – Test, Repair, Test, Ready

With all the power and networking finished, it was time to test the installation. Most of the Storage Pods lit up with no problem, but there were a few that failed. These failures were quickly dealt with, and one by one each Backblaze Vault was registered into our monitoring and administration systems. By the end of the day, all five Vaults were ready.

Moving Forward

The Phoenix data center was ready for operation except that the network carriers we wanted to use could only provide a limited amount of bandwidth to start. It would take a few more weeks before the final network lines would be provisioned and operational. Even with the limited bandwidth we kicked off the migration of customer data from Sacramento to Phoenix to help balance out the workload. A few weeks later, once the networking was sorted out, we started accepting external customer data.

We’d like to thank our data center build team for documenting their work in pictures and allowing us to share some of them with our readers.

Questions About Our New Data Center

Now that we have a second DC, you might have a few questions, such as can you store your data there and so on. Here’s the status of things today…

    Q: Does the new DC mean Backblaze has multi-region storage?
    A: Not yet. Right now we consider the Phoenix DC and the Sacramento DC to be in the same region.

    Q: Will you ever provide multi-region support?
    A: Yes, we expect to provide multi-region support in the future, but we don’t have a date for that capability yet.

    Q: Can I pick which data center will store my data?
    A: Not yet. This capability is part of our plans when we provide multi-region support.

    Q: Which data center is my data being stored in?
    A: Chances are that your data is in the Sacramento data center, given that it currently stores about 90% of our customers’ data.

    Q: Will my data be split across the two data centers?
    A: It is possible that one portion of your data will be stored in the Sacramento DC and another portion of your data will be stored in the Phoenix DC. This will be completely invisible to you and you should see no difference in storage or data retrieval times.

    Q: Can my data be replicated from one DC to the other?
    A: Not today. As noted above, your data will be in one DC or the other. That said, files uploaded to the Backblaze Vaults in either DC are stored redundantly across 20 Backblaze Storage Pods within that DC. This translates to 99.999999% durability for the data stored this way.

    Q: Do you plan on opening more data centers?
    A: Yes. We are actively looking for new locations.

If you have any additional questions, please let us know in the comments or on social media. Thanks.

The post Desert To Data in 7 Days – Our New Phoenix Data Center appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

The Code Club International movement

Post Syndicated from Katherine Leadbetter original https://www.raspberrypi.org/blog/code-club-international/

Over the past few years, Code Club has made strides toward world domination! There are now more than 10,000 Code Clubs running in 125 countries. More than 140,000 kids have taken part in our clubs in places as diverse as the northernmost tip of Canada and the favelas of Rio de Janeiro.

In the first video from our Code Club International network, we find out about Code Clubs around the world from the people supporting these communities.

Global communities

Code Club currently has official local partners in twelve countries. Our passionate and motivated partner organisations are responsible for championing their countries’ Code Clubs. In March we brought the partners together for the first time, and they shared what it means to be part of the Code Club community:

You can help Code Club make a difference around the world

We invited our international Code Club partners to join us in London and discuss why we think Code Club is so special. Whether you’re a seasoned pro, a budding educator, or simply want to give back to your local community, there’s a place for you among our incredible Code Club volunteers.

Of course, Code Clubs aren’t restricted to countries with official partner communities – they can be started anywhere in the world! Code Clubs are up and running in a number of unexpected places, from Kosovo to Kazakhstan.

Code Club International

Code Club partners gathered together at the International Meetup

The geographical spread of Code Clubs means we hear of clubs overcoming a range of different challenges. One club in Zambia, run by volunteer Mwiza Simbeye, started as a way to get kids off the streets of Lusaka and teach them useful skills. Many children attending had hardly used a computer before writing their first line of code at the club. And it’s making a difference! As Mwiza told us, ‘you only need to see the light shine in the eyes of [Code Club] participants to see how much they enjoy these sessions.’

Code Club International

Student Joyce codes in Scratch at her Code Club in Nunavut, Canada

In the Nunavut region of Canada, Talia Metuq was first introduced to coding at a Code Club. In an area comprised of 25 Inuit communities that are inaccessible via roads and currently combating severe social and economic deprivation, computer science was not on the school timetable. Code Club, along with club volunteer Ryan Oliver, is starting to change that. After graduating from Code Club, Talia went on to study 3D modelling in Vancouver. She has now returned to Nunavut and is helping inspire more children to pursue digital making.

Start a Code Club

Code Clubs are volunteer-led extra-curricular coding clubs for children aged 9 to 13. Children who attend learn to code games, animations, and websites using the projects we provide. Working with volunteers and with other children in their club, they grow their digital skillset.

You can run a Code Club anywhere if you have a venue, volunteers, and kids ready to learn coding. Help us achieve our goal of having a Code Club in every community in the world!

To find out how to start a Code Club outside of the UK, you can visit the Code Club International website. If you are in the UK, head to the Code Club UK website for more information.

Code Club International

Help the Code Club International community grow

On the Code Club site, we currently have projects in 28 languages, allowing more young people than ever to learn programming in their native language. But that’s not enough! We are always on the lookout for volunteers to translate projects and resources. If you are proficient in translating from English and would like to help, please visit the website to find out more.

We are also looking for official local partners in Italy and Germany to join our international network – if you know of, or are part of, an enthusiastic non-profit organisation that might be interested in joining us, you can learn more here.

The post The Code Club International movement appeared first on Raspberry Pi.

FACT Threatens Users of ‘Pirate’ Kodi Add-Ons

Post Syndicated from Ernesto original https://torrentfreak.com/fact-threatens-users-of-pirate-kodi-add-ons-170628/

In the UK there’s a war going on against streaming pirates. At least, that’s what the local anti-piracy body FACT would like the public to know.

The popular media streaming platform Kodi is at the center of the controversy. While Kodi is perfectly legal, many people use it in conjunction with third party add-ons that offer pirated content.

FACT hopes to curb this trend. The group has already taken action against sellers of Kodi devices pre-loaded with these add-ons and they’re keeping a keen eye on developers of illicit add-ons too.

However, according to FACT, the ‘crackdown’ doesn’t stop there. Users of pirate add-ons are also at risk, they claim.

“And then we’ll also be looking at, at some point, the end user. The reason for end users to come into this is that they are committing criminal offences,” FACT’s chief executive Kieron Sharp told the Independent.

While people who stream pirated content are generally hard to track, since they don’t broadcast their IP-address to the public, FACT says that customer data could be obtained directly from sellers of fully-loaded Kodi boxes.

“When we’re working with the police against a company that’s selling IPTV boxes or illicit streaming devices on a large scale, they have records of who they’ve sold them to,” Sharp noted.

While the current legal efforts are focused on the supply side, including these sellers, the end users may also be targeted in the future.

“We have a number of cases coming before the courts in terms of those people who have been providing, selling and distributing illicit streaming devices. It’s something for the very near future, when we’ll consider whether we go any further than that, in terms of customers.”

The comments above make it clear that FACT wants users of these pirate devices to feel vulnerable and exposed. But threatening talk is much easier than action.

It will be very hard to get someone convicted, simply because they bought a device that can access both legal and illegal content. A receipt doesn’t prove intent, and even if it did, it’s pretty much impossible to prove that a person streamed specific pirated content.

But let’s say FACT was able to prove that someone bought a fully-loaded Kodi box and streamed content without permission. How would that result in a conviction? Contrary to claims in the mainstream press, watching a pirated stream isn’t an offense covered by the new Digital Economy Act.

In theory, there could be other ways, but given the complexity of the situation, one would think that FACT would be better off spending its efforts elsewhere.

If FACT was indeed interested in going after individuals then they could easily target people who use torrents. These people broadcast their IP-addresses to the public, which makes them easy to identify. In addition, you can see what they are uploading, and they would also be liable under the Digital Economy Act.

However, after FACT’s decades-long association with the MPAA ended, its main partner in the demonization of Kodi-enabled devices is now the Premier League, who are far more concerned about piracy of live broadcasts (streaming) than content made available after the fact via torrents.

So, given the challenges of having a meaningful criminal prosecution of an end-user as suggested, that leaves us with the probability of FACT sowing fear, uncertainty, and doubt. In other words, scaring the public to deter them from buying or using a fully-loaded Kodi box.

This would also fit in with FACT’s recent claims that some pirate devices are a fire hazard. While it’s kind of FACT to be concerned about the well-being of pirates, as an anti-piracy organization their warnings also serve as a deterrent.

This strategy could pay off to a degree but there’s also some risk involved. Every day new “Kodi” related articles appear in the UK tabloid press, many of them with comments from FACT. Some of these may scare prospective users, but the same headlines also make these boxes known to a much wider public.

In fact, in what is quite a serious backfire, some recent pieces published by the popular Trinity Mirror group (which include FACT comments) actually provide a nice list of pirate addons that are still operational following recent crackdowns.

So are we just sowing fear now or educating a whole new audience?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Milestone: 100 Million Certificates Issued

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/06/28/hundred-million-certs.html

Let’s Encrypt has reached a milestone: we’ve now issued more than 100,000,000 certificates. This number reflects at least a few things:

First, it illustrates the strong demand for our services. We’d like to thank all of the sysadmins, web developers, and everyone else managing servers for prioritizing protecting your visitors with HTTPS.

Second, it illustrates our ability to scale. I’m incredibly proud of the work our engineering teams have done to make this volume of issuance possible. I’m also very grateful to our operational partners, including IdenTrust, Akamai, and Sumo Logic.

Third, it illustrates the power of automated certificate management. If getting and managing certificates from Let’s Encrypt always required manual steps there is simply no way we’d be able to serve as many sites as we do. We’d like to thank our community for creating a wide range of clients for automating certificate issuance and management.

The total number of certificates we’ve issued is an interesting number, but it doesn’t reflect much about tangible progress towards our primary goal: a 100% HTTPS Web. To understand that progress we need to look at this graph:

Percentage of HTTPS Page Loads in Firefox.

When Let’s Encrypt’s service first became available, less than 40% of page loads on the Web used HTTPS. It took the Web 20 years to get to that point. In the 19 months since we launched, encrypted page loads have gone up by 18 percentage points, to nearly 58%. That’s an incredible rate of change for the Web. Contributing to this trend is what we’re most proud of.

If you’re as excited about the potential for a 100% HTTPS Web as we are, please consider getting involved, making a donation, or sponsoring Let’s Encrypt.

Here’s to the next 100,000,000 certificates, and a more secure and privacy-respecting Web for everyone!

Continuous Delivery of Nested AWS CloudFormation Stacks Using AWS CodePipeline

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/continuous-delivery-of-nested-aws-cloudformation-stacks-using-aws-codepipeline/

In CodePipeline Update – Build Continuous Delivery Workflows for CloudFormation Stacks, Jeff Barr discusses infrastructure as code and how to use AWS CodePipeline for continuous delivery. In this blog post, I discuss the continuous delivery of nested CloudFormation stacks using AWS CodePipeline, with AWS CodeCommit as the source repository and AWS CodeBuild as a build and testing tool. I deploy the stacks using CloudFormation change sets following a manual approval process.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • Build and Test (AWS CodeBuild and AWS CloudFormation)
  • Staging (AWS CloudFormation and manual approval)
  • Production (AWS CloudFormation and manual approval)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

CloudFormation templates, test scripts, and the build specification are stored in AWS CodeCommit repositories. These files are used in the Source stage of the pipeline in AWS CodePipeline.

The AWS::CloudFormation::Stack resource type is used to create child stacks from a master stack. The CloudFormation stack resource requires the templates of the child stacks to be stored in an S3 bucket. The location of the template file is provided as a URL in the properties section of the resource definition.

The following template creates three child stacks:

  • Security (IAM, security groups).
  • Database (an RDS instance).
  • Web stack (EC2 instances in an Auto Scaling group, elastic load balancer).

Description: Master stack which creates all required nested stacks

Parameters:
  TemplatePath:
    Type: String
    Description: S3Bucket Path where the templates are stored
  VPCID:
    Type: "AWS::EC2::VPC::Id"
    Description: Enter a valid VPC Id
  PrivateSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ1
  PrivateSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of private subnet in AZ2
  PublicSubnet1:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ1
  PublicSubnet2:
    Type: "AWS::EC2::Subnet::Id"
    Description: Enter a valid SubnetId of public subnet in AZ2
  S3BucketName:
    Type: String
    Description: Name of the S3 bucket to allow access to the Web Server IAM Role.
  KeyPair:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: Enter a valid KeyPair Name
  AMIId:
    Type: "AWS::EC2::Image::Id"
    Description: Enter a valid AMI ID to launch the instance
  WebInstanceType:
    Type: String
    Description: Enter one of the possible instance type for web server
    AllowedValues:
      - t2.large
      - m4.large
      - m4.xlarge
      - c4.large
  WebMinSize:
    Type: String
    Description: Minimum number of instances in auto scaling group
  WebMaxSize:
    Type: String
    Description: Maximum number of instances in auto scaling group
  DBSubnetGroup:
    Type: String
    Description: Enter a valid DB Subnet Group
  DBUsername:
    Type: String
    Description: Enter a valid Database master username
    MinLength: 1
    MaxLength: 16
    AllowedPattern: "[a-zA-Z][a-zA-Z0-9]*"
  DBPassword:
    Type: String
    Description: Enter a valid Database master password
    NoEcho: true
    MinLength: 1
    MaxLength: 41
    AllowedPattern: "[a-zA-Z0-9]*"
  DBInstanceType:
    Type: String
    Description: Enter one of the possible instance type for database
    AllowedValues:
      - db.t2.micro
      - db.t2.small
      - db.t2.medium
      - db.t2.large
  Environment:
    Type: String
    Description: Select the appropriate environment
    AllowedValues:
      - dev
      - test
      - uat
      - prod

Resources:
  SecurityStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/security-stack.yml"
      Parameters:
        S3BucketName:
          Ref: S3BucketName
        VPCID:
          Ref: VPCID
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: SecurityStack

  DatabaseStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/database-stack.yml"
      Parameters:
        DBSubnetGroup:
          Ref: DBSubnetGroup
        DBUsername:
          Ref: DBUsername
        DBPassword:
          Ref: DBPassword
        DBServerSecurityGroup:
          Fn::GetAtt: SecurityStack.Outputs.DBServerSG
        DBInstanceType:
          Ref: DBInstanceType
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value:   DatabaseStack

  ServerStack:
    Type: "AWS::CloudFormation::Stack"
    Properties:
      TemplateURL:
        Fn::Sub: "https://s3.amazonaws.com/${TemplatePath}/server-stack.yml"
      Parameters:
        VPCID:
          Ref: VPCID
        PrivateSubnet1:
          Ref: PrivateSubnet1
        PrivateSubnet2:
          Ref: PrivateSubnet2
        PublicSubnet1:
          Ref: PublicSubnet1
        PublicSubnet2:
          Ref: PublicSubnet2
        KeyPair:
          Ref: KeyPair
        AMIId:
          Ref: AMIId
        WebSG:
          Fn::GetAtt: SecurityStack.Outputs.WebSG
        ELBSG:
          Fn::GetAtt: SecurityStack.Outputs.ELBSG
        DBClientSG:
          Fn::GetAtt: SecurityStack.Outputs.DBClientSG
        WebIAMProfile:
          Fn::GetAtt: SecurityStack.Outputs.WebIAMProfile
        WebInstanceType:
          Ref: WebInstanceType
        WebMinSize:
          Ref: WebMinSize
        WebMaxSize:
          Ref: WebMaxSize
        Environment:
          Ref: Environment
      Tags:
        - Key: Name
          Value: ServerStack

Outputs:
  WebELBURL:
    Description: "URL endpoint of web ELB"
    Value:
      Fn::GetAtt: ServerStack.Outputs.WebELBURL

During the Validate stage, AWS CodeBuild checks for changes to the AWS CodeCommit source repositories. It uses the ValidateTemplate API to validate the CloudFormation template and copies the child templates and configuration files to the appropriate location in the S3 bucket.

The following AWS CodeBuild build specification validates the CloudFormation templates listed under the TEMPLATE_FILES environment variable and copies them to the S3 bucket specified in the TEMPLATE_BUCKET environment variable in the AWS CodeBuild project. Optionally, you can use the TEMPLATE_PREFIX environment variable to specify a path inside the bucket. This updates the configuration files to use the location of the child template files. The location of the template files is provided as a parameter to the master stack.

version: 0.1

environment_variables:
  plaintext:
    CHILD_TEMPLATES: |
      security-stack.yml
      server-stack.yml
      database-stack.yml
    TEMPLATE_FILES: |
      master-stack.yml
      security-stack.yml
      server-stack.yml
      database-stack.yml
    CONFIG_FILES: |
      config-prod.json
      config-test.json
      config-uat.json

phases:
  install:
    commands:
      - npm install jsonlint -g
  pre_build:
    commands:
      - echo "Validating CFN templates"
      - |
        for cfn_template in $TEMPLATE_FILES; do
          echo "Validating CloudFormation template file $cfn_template"
          aws cloudformation validate-template --template-body file://$cfn_template
        done
      - |
        for conf in $CONFIG_FILES; do
          echo "Validating CFN parameters config file $conf"
          jsonlint -q $conf
        done
  build:
    commands:
      - echo "Copying child stack templates to S3"
      - |
        for child_template in $CHILD_TEMPLATES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$child_template"
          else
            aws s3 cp "$child_template" "s3://$TEMPLATE_BUCKET/$TEMPLATE_PREFIX/$child_template"
          fi
        done
      - echo "Updating template configurtion files to use the appropriate values"
      - |
        for conf in $CONFIG_FILES; do
          if [ "X$TEMPLATE_PREFIX" = "X" ]; then
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET/" $conf
          else
            echo "Replacing \"TEMPLATE_PATH_PLACEHOLDER\" for \"$TEMPLATE_BUCKET/$TEMPLATE_PREFIX\" in $conf"
            sed -i -e "s/TEMPLATE_PATH_PLACEHOLDER/$TEMPLATE_BUCKET\/$TEMPLATE_PREFIX/" $conf
          fi
        done

artifacts:
  files:
    - master-stack.yml
    - config-*.json

After the template files are copied to S3, CloudFormation creates a test stack and triggers AWS CodeBuild as a test action.

Then the AWS CodeBuild build specification executes validate-env.py, the Python script used to determine whether resources created using the nested CloudFormation stacks conform to the specifications provided in the CONFIG_FILE.

version: 0.1

environment_variables:
  plaintext:
    CONFIG_FILE: env-details.yml

phases:
  install:
    commands:
      - pip install --upgrade pip
      - pip install boto3 --upgrade
      - pip install pyyaml --upgrade
      - pip install yamllint --upgrade
  pre_build:
    commands:
      - echo "Validating config file $CONFIG_FILE"
      - yamllint $CONFIG_FILE
  build:
    commands:
      - echo "Validating resources..."
      - python validate-env.py
      - exit $?
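The validate-env.py script itself lives in the author's GitHub repository and isn't reproduced in this post. As a rough sketch of the kind of check it performs, you could compare the resources of the deployed test stack against an expected list read from the config file (the config format and stack names below are assumptions for illustration only):

import sys
import boto3
import yaml

# Assumed config format: a list of entries such as
#   {"StackName": "...", "LogicalId": "...", "ResourceType": "AWS::EC2::Instance"}
with open("env-details.yml") as f:
    expected = yaml.safe_load(f)

cfn = boto3.client("cloudformation")
failures = 0

for item in expected:
    resources = cfn.describe_stack_resources(StackName=item["StackName"])["StackResources"]
    found = any(
        r["LogicalResourceId"] == item["LogicalId"]
        and r["ResourceType"] == item["ResourceType"]
        for r in resources
    )
    if not found:
        print("Missing or mismatched resource:", item)
        failures += 1

# A non-zero exit code fails the test action in the pipeline.
sys.exit(1 if failures else 0)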

Upon successful completion of the test action, CloudFormation deletes the test stack and proceeds to the UAT stage in the pipeline.

During this stage, CloudFormation creates a change set against the UAT stack and then executes the change set. This updates the UAT environment and makes it available for acceptance testing. The process continues to a manual approval action. After the QA team validates the UAT environment and provides an approval, the process moves to the Production stage in the pipeline.

During this stage, CloudFormation creates a change set for the nested production stack and the process continues to a manual approval step. Upon approval (usually by a designated executive), the change set is executed and the production deployment is completed.
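The change sets are created and executed by the CloudFormation actions in the pipeline, but for reference the equivalent boto3 calls look roughly like this (stack name, change set name, template URL, and parameter values are placeholders):

import boto3

cfn = boto3.client("cloudformation")

# Create a change set against the existing production stack.
cfn.create_change_set(
    StackName="prod-master-stack",
    ChangeSetName="prod-release-1",
    TemplateURL="https://s3.amazonaws.com/my-artifact-bucket/master-stack.yml",
    Parameters=[{"ParameterKey": "Environment", "ParameterValue": "prod"}],  # plus the remaining template parameters
    Capabilities=["CAPABILITY_NAMED_IAM"],
    ChangeSetType="UPDATE",
)

# Wait for the change set to be ready, review it, then (after approval) execute it.
cfn.get_waiter("change_set_create_complete").wait(
    StackName="prod-master-stack", ChangeSetName="prod-release-1"
)
cfn.execute_change_set(StackName="prod-master-stack", ChangeSetName="prod-release-1")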
 

Setting up a continuous delivery pipeline

 
I used a CloudFormation template to set up my continuous delivery pipeline. The codepipeline-cfn-codebuild.yml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repositories.
  • SNS topics to send approval notifications.
  • S3 bucket name where the artifacts will be stored.

The CFNTemplateRepoName parameter points to the AWS CodeCommit repository where CloudFormation templates, configuration files, and build specification files are stored.

My repo contains the following files:

The continuous delivery pipeline is ready just seconds after clicking Create Stack. After it’s created, the pipeline executes each stage. Upon manual approvals for the UAT and Production stages, the pipeline successfully enables continuous delivery.


 

Implementing a change in nested stack

 
To make changes to a child stack in a nested stack (for example, to update a parameter value or add or change resources), update the master stack. The changes must be made in the appropriate template or configuration files and then checked in to the AWS CodeCommit repository. This triggers the following deployment process:

 

Conclusion

 
In this post, I showed how you can use AWS CodePipeline, AWS CloudFormation, AWS CodeBuild, and a manual approval process to create a continuous delivery pipeline for both infrastructure as code and application deployment.

For more information about AWS CodePipeline, see the AWS CodePipeline documentation. You can get started in just a few clicks. All CloudFormation templates, AWS CodeBuild build specification files, and the Python script that performs the validation are available in codepipeline-nested-cfn GitHub repository.


About the author

 
Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps, or Alexa, he can be found solving problems in Project Euler. He also enjoys watching educational documentaries.