Tag Archives: hosting

CoderDojo: 2000 Dojos ever

Post Syndicated from Giustina Mizzoni original https://www.raspberrypi.org/blog/2000-dojos-ever/

Every day of the week, we verify new Dojos all around the world, and each Dojo is championed by passionate volunteers. Last week, a huge milestone for the CoderDojo community went by relatively unnoticed: in the history of the movement, more than 2000 Dojos have now been verified!

CoderDojo banner — 2000 Dojos

2000 Dojos

This is a phenomenal achievement for a movement that’s just six years old and powered by volunteers. Presently, there are more than 1650 active Dojos running weekly, fortnightly, or monthly, and all of them are free for participants — for example, the Dojos run by Joel Bayubasire in Kampala, Uganda:

Joel Bayubasire with Ninjas at his Ugandan Dojo — 2000 Dojos

Empowering refugee children

This week, Joel set up his second Dojo and verified it on our global map. Joel is a Congolese refugee living in Kampala, Uganda, where he is currently completing his PhD in Economics at Madison International Institute and Business School.

Joel understands first-hand the challenges faced by refugees who were forced to leave their country due to war or conflict. Uganda is currently hosting more than 1.2 million refugees, 60% of whom are children (World Bank, 2017). As refugees, children are only allowed to attend local schools until the age of 12. This results in lower educational attainment, which will likely affect their future employment prospects.

Two girls at a laptop. Joel Bayubasire CoderDojo — 2000 Dojos

Joel is motivated to overcome these challenges because he understands the power of education. He has therefore initiated a number of community-based activities to provide educational opportunities for refugee children. As part of this, he founded his first Dojo earlier in the year, with the aim of giving these children a chance to compete in today’s global knowledge-based economy.

Two boys at a laptop. Joel Bayubasire CoderDojo — 2000 Dojos

Aware that securing volunteer mentors would be a challenge, Joel trained eight young people from the community to become youth mentors to their peers. He explains:

I believe that the mastery of computer coding allows talented young people to thrive professionally and enables them to not only be consumers but creators of the interconnected world of today!

Based on the success of Joel’s first Dojo, he has now expanded the CoderDojo initiative in his community; his plan is to provide computer science training for more than 300 refugee youths in Kampala by 2019. If you’d like to learn more about Joel’s efforts, head to this website.

Join the movement

If you are interested in creating opportunities for the young people in your community, then join the growing CoderDojo movement — you can volunteer to start a Dojo or to support an existing one today!

The post CoderDojo: 2000 Dojos ever appeared first on Raspberry Pi.

Looking Forward to 2018

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2017/12/07/looking-forward-to-2018.html

Let’s Encrypt had a great year in 2017. We more than doubled the number of active (unexpired) certificates we service to 46 million, we just about tripled the number of unique domains we service to 61 million, and we did it all while maintaining a stellar security and compliance track record. Most importantly, though, the Web went from 46% encrypted page loads to 67% according to statistics from Mozilla – a gain of 21 percentage points in a single year – incredible. We’re proud to have contributed to that, and we’d like to thank all of the other people and organizations who also worked hard to create a more secure and privacy-respecting Web.

While we’re proud of what we accomplished in 2017, we are spending most of the final quarter of the year looking forward rather than back. As we wrap up our own planning process for 2018, I’d like to share some of our plans with you, including both the things we’re excited about and the challenges we’ll face. We’ll cover service growth, new features, infrastructure, and finances.

Service Growth

We are planning to double the number of active certificates and unique domains we service in 2018, to 90 million and 120 million, respectively. This anticipated growth is due to continuing high expectations for HTTPS growth in general in 2018.

Let’s Encrypt helps to drive HTTPS adoption by offering a free, easy to use, and globally available option for obtaining the certificates required to enable HTTPS. HTTPS adoption on the Web took off at an unprecedented rate from the day Let’s Encrypt launched to the public.

One of the reasons Let’s Encrypt is so easy to use is that our community has done great work making client software that works well for a wide variety of platforms. We’d like to thank everyone involved in the development of over 60 client software options for Let’s Encrypt. We’re particularly excited that support for the ACME protocol and Let’s Encrypt is being added to the Apache httpd server.

Other organizations and communities are also doing great work to promote HTTPS adoption, and thus stimulate demand for our services. For example, browsers are starting to make their users more aware of the risks associated with unencrypted HTTP (e.g. Firefox, Chrome). Many hosting providers and CDNs are making it easier than ever for all of their customers to use HTTPS. Government agencies are waking up to the need for stronger security to protect constituents. The media community is working to Secure the News.

New Features

We’ve got some exciting features planned for 2018.

First, we’re planning to introduce an ACME v2 protocol API endpoint and support for wildcard certificates along with it. Wildcard certificates will be free and available globally just like our other certificates. We are planning to have a public test API endpoint up by January 4, and we’ve set a date for the full launch: Tuesday, February 27.

Later in 2018 we plan to introduce ECDSA root and intermediate certificates. ECDSA is generally considered to be the future of digital signature algorithms on the Web because it is more efficient than RSA. Let’s Encrypt currently signs ECDSA keys from subscribers, but we sign them with the RSA key from one of our intermediate certificates. Once we have an ECDSA root and intermediates, our subscribers will be able to deploy certificate chains which are entirely ECDSA.
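As a rough illustration of what a fully ECDSA deployment starts from (this is not Let’s Encrypt tooling, and the domain and file names are placeholders), here is a sketch using the Python cryptography library to generate a P-256 subscriber key and a CSR that also requests a wildcard name:

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generate a P-256 ECDSA key; today Let's Encrypt signs such keys with an RSA intermediate.
key = ec.generate_private_key(ec.SECP256R1())

# Build a CSR covering the apex domain and a wildcard name (wildcards arrive with ACME v2).
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("example.com"), x509.DNSName("*.example.com")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("example.com.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("example.com.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))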

Infrastructure

Our CA infrastructure is capable of issuing millions of certificates per day, with multiple layers of redundancy for stability and a wide variety of security safeguards, both physical and logical. Our infrastructure also generates and signs nearly 20 million OCSP responses daily, and serves those responses nearly 2 billion times per day. We expect issuance and OCSP numbers to double in 2018.

Our physical CA infrastructure currently occupies approximately 70 units of rack space, split between two datacenters, consisting primarily of compute servers, storage, HSMs, switches, and firewalls.

Issuing more certificates puts the most stress on the storage for our databases. We regularly invest in more and faster storage for our database servers, and that will continue in 2018.

We’ll need to add a few additional compute servers in 2018, and we’ll also start aging out hardware in 2018 for the first time since we launched. We’ll age out about ten 2u compute servers and replace them with new 1u servers, which will save space and be more energy efficient while providing better reliability and performance.

We’ll also add another infrastructure operations staff member, bringing that team to a total of six people. This is necessary in order to make sure we can keep up with demand while maintaining a high standard for security and compliance. Infrastructure operations staff are systems administrators responsible for building and maintaining all physical and logical CA infrastructure. The team also manages a 24/7/365 on-call schedule and they are primary participants in both security and compliance audits.

Finances

We pride ourselves on being an efficient organization. In 2018 Let’s Encrypt will secure a large portion of the Web with a budget of only $3.0M. For an overall increase in our budget of only 13%, we will be able to issue and service twice as many certificates as we did in 2017. We believe this represents an incredible value and that contributing to Let’s Encrypt is one of the most effective ways to help create a more secure and privacy-respecting Web.

Our 2018 fundraising efforts are off to a strong start with Platinum sponsorships from Mozilla, Akamai, OVH, Cisco, Google Chrome and the Electronic Frontier Foundation. The Ford Foundation has renewed their grant to Let’s Encrypt as well. We are seeking additional sponsorship and grant assistance to meet our full needs for 2018.

We had originally budgeted $2.91M for 2017 but we’ll likely come in under budget for the year at around $2.65M. The difference between our 2017 expenses of $2.65M and the 2018 budget of $3.0M consists primarily of the additional infrastructure operations costs previously mentioned.

Support Let’s Encrypt

We depend on contributions from our community of users and supporters in order to provide our services. If your company or organization would like to sponsor Let’s Encrypt please email us at [email protected]. We ask that you make an individual contribution if it is within your means.

We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!

Running Windows Containers on Amazon ECS

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/running-windows-containers-on-amazon-ecs/

This post was developed and written by Jeremy Cowan, Thomas Fuller, Samuel Karp, and Akram Chetibi.

Containers have revolutionized the way that developers build, package, deploy, and run applications. Initially, containers only supported code and tooling for Linux applications. With the release of Docker Engine for Windows Server 2016, Windows developers have started to realize the gains that their Linux counterparts have experienced for the last several years.

This week, we’re adding support for running production workloads in Windows containers using Amazon Elastic Container Service (Amazon ECS). Amazon ECS now provides an ECS-Optimized Windows Server Amazon Machine Image (AMI). This AMI is based on the EC2 Windows Server 2016 Datacenter AMI and provides improved instance and container launch time performance. It includes Docker 17.06.2-ee-5 Enterprise Edition and version 1.16 of the ECS agent, which now runs as a native Windows service.

In this post, I discuss the benefits of this new support, and walk you through getting started running Windows containers with Amazon ECS.

When AWS released the Windows Server 2016 Base with Containers AMI, the ECS agent ran as a process, which made it difficult to monitor and manage. As a service, the agent can be health-checked, managed, and restarted just like any other Windows service. The AMI also includes pre-cached images for Windows Server Core 2016 and Windows Server Nano Server 2016. Because these images are cached in the AMI, launching new Windows containers is significantly faster. When Docker images include a layer that’s already cached on the instance, Docker re-uses that layer instead of pulling it from the Docker registry.

The ECS agent and an accompanying ECS PowerShell module used to install, configure, and run the agent come pre-installed on the AMI. This guarantees there is a specific platform version available on the container instance at launch. Because the software is included, you don’t have to download it from the internet. This saves startup time.

The Windows-compatible ECS-optimized AMI also reports CPU and memory utilization and reservation metrics to Amazon CloudWatch. Using the CloudWatch integration with ECS, you can create alarms that trigger dynamic scaling events to automatically add or remove capacity for your EC2 instances and ECS tasks.

Getting started

To help you get started running Windows containers on ECS, I’ve forked the ECS reference architecture to build an ECS cluster made up of Windows instances instead of Linux instances. You can pull the latest version of the reference architecture for Windows.

The reference architecture is a layered CloudFormation stack, in that it calls other stacks to create the environment. Within the stack, the ecs-windows-cluster.yaml file contains the instructions for bootstrapping the Windows instances and configuring the ECS cluster. To configure the instances outside of AWS CloudFormation (for example, through the CLI or the console), you can add the following commands to your instance’s user data:

Import-Module ECSTools
Initialize-ECSAgent

Or

Import-Module ECSTools
Initialize-ECSAgent -Cluster MyCluster -EnableIAMTaskRole

If you don’t specify a cluster name when you initialize the agent, the instance is joined to the default cluster.

Adding -EnableIAMTaskRole when initializing the agent adds support for IAM roles for tasks. Previously, enabling this setting meant running a complex script and setting an environment variable before you could assign roles to your ECS tasks.

When you enable IAM roles for tasks on Windows, it consumes port 80 on the host. If you have tasks that listen on port 80 on the host, I recommend configuring a service for them that uses load balancing. You can use port 80 on the load balancer, and the traffic can be routed to another host port on your container instances. For more information, see Service Load Balancing.

Create a cluster

To create a new ECS cluster, choose Launch stack, or pull the GitHub project to your local machine and run the following command:

aws cloudformation create-stack --template-body file://<path to master-windows.yaml> --stack-name <name>

Upload your container image

Now that you have a cluster running, the next step is to build an image and push it to a container repository. You use a repository hosted in Amazon Elastic Container Registry (Amazon ECR) for this, but you could also use Docker Hub. To build and push an image to a repository, install Docker on your Windows* workstation. You also need to create a repository and assign the necessary permissions to the account that pushes your image to Amazon ECR. For detailed instructions, see Pushing an Image.

* If you are building an image that is based on Windows layers, then you must use a Windows environment to build and push your image to the registry.
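If you prefer to script the registry setup, the repository and a login token can also be created with boto3; the repository name and region below are assumptions, and the docker build, tag, and push commands still run on your Windows workstation:

import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # region is an assumption

# Create a repository to hold the Windows image (the name is hypothetical).
ecr.create_repository(repositoryName="windows-simple-iis")

# Fetch a temporary token for docker login.
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
registry = auth["proxyEndpoint"]

# Use the credentials with docker login, then docker build, docker tag, and docker push as usual.
print("docker login -u", username, "-p <token>", registry)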

Write your task definition

Now that your image is built and ready, the next step is to run your Windows containers using a task.

Start by creating a new task definition based on the windows-simple-iis image from Docker Hub.

  1. Open the ECS console.
  2. Choose Task Definitions, Create new task definition.
  3. Scroll to the bottom of the page and choose Configure via JSON.
  4. Copy and paste the following JSON into that field.
  5. Choose Save, Create.
{
   "family": "windows-simple-iis",
   "containerDefinitions": [
   {
     "name": "windows_sample_app",
     "image": "microsoft/iis",
     "cpu": 100,
     "entryPoint":["powershell", "-Command"],
     "command":["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html><head><title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center><h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p></body></html>'; C:\\ServiceMonitor.exe w3svc"],
     "portMappings": [
     {
       "protocol": "tcp",
       "containerPort": 80,
       "hostPort": 8080
     }
     ],
     "memory": 500,
     "essential": true
   }
   ]
}

You can now go back into the Task Definition page and see windows-simple-iis as an available task definition.
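Alternatively, if you save that JSON to a file, you can register the same task definition with boto3 instead of pasting it into the console; the file name and region here are assumptions:

import json
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# The JSON above uses the same keys that register_task_definition expects.
with open("windows-simple-iis.json") as f:
    task_def = json.load(f)

ecs.register_task_definition(
    family=task_def["family"],
    containerDefinitions=task_def["containerDefinitions"],
)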

There are a few important aspects of the task definition file to note when working with Windows containers. First, the hostPort is configured as 8080, which is necessary because the ECS agent currently uses port 80 on the host to enable IAM roles for tasks, a feature required for least-privilege security configurations.

There are also some fairly standard task parameters that are intentionally not included. For example, network mode is not available with Windows at the time of this release, so keep that setting blank to allow Docker to configure WinNAT, the only option available today.

Also, some parameters work differently with Windows than they do with Linux. The CPU limits that you define in the task definition are absolute, whereas on Linux they are weights. For information about other task parameters that are supported or possibly different with Windows, see the documentation.

Run your containers

At this point, you are ready to run containers. There are two options to run containers with ECS:

  1. Task
  2. Service

A task is typically a short-lived process that ECS creates; it can’t be configured to be actively monitored or scaled. A service is meant for longer-running containers and can be configured to use a load balancer, minimum/maximum capacity settings, and a number of other knobs and switches to help ensure that your code keeps running. In both cases, you are able to pick a placement strategy and a specific IAM role for your container.

  1. Select the task definition that you created above and choose Action, Run Task.
  2. Leave the settings on the next page at their default values.
  3. Select the ECS cluster created when you ran the CloudFormation template.
  4. Choose Run Task to start the process of scheduling a Docker container on your ECS cluster.
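The console steps above can also be scripted; here is a minimal boto3 sketch that schedules one copy of the task, assuming a cluster named windows-cluster created by the CloudFormation template (the cluster name and region are assumptions):

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

# Schedule one copy of the task definition registered earlier on the Windows cluster.
response = ecs.run_task(
    cluster="windows-cluster",            # hypothetical cluster name
    taskDefinition="windows-simple-iis",
    count=1,
)
print(response["tasks"][0]["lastStatus"])  # PENDING until the image layers have been pulled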

You can now go to the cluster and watch the status of your task. It may take 5–10 minutes for the task to go from PENDING to RUNNING, mostly because it takes time to download all of the layers necessary to run the microsoft/iis image. After the status is RUNNING, you should see the following results:

You may have noticed that the example task definition is named windows-simple-iis:2. This is because I created a second version of the task definition, which is one of the powerful capabilities of using ECS. You can make the task definitions part of your source code and then version them. You can also roll out new versions and practice blue/green deployment switching to reduce downtime and improve the velocity of your deployments!

After the task has moved to RUNNING, you can see your website hosted in ECS. Find the public IP or DNS for your ECS host. Remember that you are hosting on port 8080. Make sure that the security group allows ingress from your client IP address to that port and that your VPC has an internet gateway associated with it. You should see a page that looks like the following:

This is a nice start to deploying a simple single-instance task, but what if you had a Web API that needed to scale out and in based on usage? This is where you could define a service and collect CloudWatch data to add and remove instances of the task. You could also use CloudWatch alarms to add more ECS container instances and keep up with demand. The former is built into the configuration of your service.

  1. Select the task definition and choose Create Service.
  2. Associate a load balancer.
  3. Set up Auto Scaling.

The following screenshot shows an example where you would add an additional task instance when the CPU Utilization CloudWatch metric is over 60% on average over three consecutive measurements. This may not be aggressive enough for your requirements; it’s meant to show you the option to scale tasks the same way you scale ECS instances with an Auto Scaling group. The difference is that these tasks start much faster because all of the base layers are already on the ECS host.
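As a rough sketch of what that service configuration amounts to behind the scenes, the following boto3 code wires a CPU alarm to a step scaling policy for a service; the cluster and service names are hypothetical, the one-minute period is an assumption, and the thresholds mirror the example above:

import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

resource_id = "service/windows-cluster/windows-simple-iis"  # hypothetical cluster/service names

# Allow the service's desired count to move between 1 and 4 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Step scaling policy: add one task whenever the attached alarm fires.
policy = autoscaling.put_scaling_policy(
    PolicyName="scale-out-on-cpu",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="StepScaling",
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",
        "Cooldown": 300,
        "StepAdjustments": [{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 1}],
    },
)

# Alarm: average CPU above 60% for three consecutive one-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="windows-simple-iis-cpu-high",
    Namespace="AWS/ECS",
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "windows-cluster"},
        {"Name": "ServiceName", "Value": "windows-simple-iis"},
    ],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=60.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)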

Do not confuse task dynamic scaling with ECS instance dynamic scaling. To add additional hosts, see Tutorial: Scaling Container Instances with CloudWatch Alarms.

Conclusion

This is just scratching the surface of the flexibility that you get from using containers and Amazon ECS. For more information, see the Amazon ECS Developer Guide and ECS Resources.

– Jeremy, Thomas, Samuel, Akram

The Pi Towers Secret Santa Babbage

Post Syndicated from Mark Calleja original https://www.raspberrypi.org/blog/secret-santa-babbage/

Tired of pulling names out of a hat for office Secret Santa? Upgrade your festive tradition with a Raspberry Pi, thermal printer, and everybody’s favourite microcomputer mascot, Babbage Bear.

Raspberry Pi Babbage Bear Secret Santa

The name’s Santa. Secret Santa.

It’s that time of year again, when the cosiness gets turned up to 11 and everyone starts thinking about jolly fat men, reindeer, toys, and benevolent home invasion. At Raspberry Pi, we’re running a Secret Santa pool: everyone buys a gift for someone else in the office. Obviously, the person you buy for has to be picked in secret and at random, or the whole thing wouldn’t work. With that in mind, I created Secret Santa Babbage to do the somewhat mundane task of choosing gift recipients. This could’ve just been done with some names in a hat, but we’re Raspberry Pi! If we don’t make a Python-based Babbage robot wearing a jaunty hat and programmed to spread Christmas cheer, who will?

Secret Santa Babbage

Ho ho ho!

Mecha-Babbage Xmas shenanigans

The script the robot runs is pretty basic: a list of names entered as comma-separated strings is shuffled at the press of a GPIO button, then a name is popped off the end and stored as a variable. The name is matched to a photo of the person stored on the Raspberry Pi, and a thermal printer pinched from Alex’s super awesome PastyCam (blog post forthcoming, maybe) prints out the picture and name of the person you will need to shower with gifts at the Christmas party. (Well, OK — with one gift. No more than five quid’s worth. Nothing untoward.) There’s also a redo function, just in case you pick yourself: press another button and the last picked name — still stored as a variable — is appended to the list again, which is shuffled once more, and a new name is popped off the end.

Secret Santa Babbage prototyping

Prototyping!

As the build was a bit of a rush job undertaken at the request of our ‘Director of Vibe’ Emily, there are a few things I’d like to improve about this functionality that I didn’t get around to — more on that later. To add some extra holiday spirit to the project at the last minute, I used Pygame to play a WAV file of Santa’s jolly laugh while Babbage chooses a name for you. The file is included in the GitHub repo along with everything else, because ‘tis the season, etc., etc.
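For anyone curious what that logic looks like in code, here is a minimal Python sketch of the flow described above; the GPIO pin numbers, file names, and name list are hypothetical, not taken from the original script:

import random
import subprocess
from signal import pause

import pygame
from gpiozero import Button

names = "Alice,Bob,Carol,Dave".split(",")   # comma-separated names (placeholders)
pick_button = Button(17)                    # button hidden in Babbage's paw
redo_button = Button(22)                    # redo button, in case you pick yourself

pygame.mixer.init()
laugh = pygame.mixer.Sound("santa_laugh.wav")   # jolly laugh WAV (placeholder file name)

last_pick = None

def pick_name():
    global last_pick
    laugh.play()
    random.shuffle(names)                   # shuffle the list...
    last_pick = names.pop()                 # ...and pop a name off the end
    # Print the matching photo on the thermal printer via CUPS.
    subprocess.run(["lp", "photos/{}.png".format(last_pick)])

def redo():
    # Append the last picked name back onto the list and draw again.
    if last_pick is not None:
        names.append(last_pick)
    pick_name()

pick_button.when_pressed = pick_name
redo_button.when_pressed = redo
pause()                                     # keep the script running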

Secret Santa Babbage prototyping

Editor’s note: Considering these desk adornments, Mark’s Secret Santa gift-giver has a lot to go on.

Writing the code for Xmas Mecha-Babbage was fairly straightforward, though it uses some tricky bits for managing the thermal printer. You’ll need to install the drivers to make it go, as well as the CUPS package for managing the print hosting. You can find instructions for these things here, thanks to the wonderful Adafruit crew. Also, for reasons I couldn’t fathom, this will all only work on a Pi 2 and not a Pi 3, as there are some compatibility issues with the thermal printer otherwise. (I also tested the script on a Pi Zero W…no dice.)

Building a Christmassy throne

The hardest (well, fiddliest) parts of making the whole build were constructing the throne and wiring the bear. Using MakerCase, Inkscape, a bit of ingenuity, and a laser cutter, I was able to rig up a Christmassy plywood throne which has a hole through the seat so I could run the wires down from Babbage and to the Pi inside. I finished the throne by rubbing a couple of fingers of beeswax into it; as well as making the wood shine just a little bit and protecting it against getting wet, this had the added bonus of making it smell awesome.

Secret Santa Babbage inside

Next year’s iteration will be mulled wine–scented.

I next soldered two LEDs to some lengths of wire, and then ran the wires through holes at the top of the throne and down the back along a small channel I had carved with a narrow chisel to connect them to the Pi’s GPIO pins. The green LED will remain on as long as Babbage is running his program, and the red one will light up while he is processing your request. Once the red LED goes off again, the next person can have a go. I also laser-cut a final piece of wood to overlay the back of Babbage’s Xmas throne and cover the wiring a bit.

Creating a Xmas cyborg bear

Taking two 6 mm tactile buttons, I clipped the spiky metal legs off one side of each (the buttons were going into a stuffed Christmas toy, after all) and soldered a length of wire to each of the remaining legs. Next, I made a small incision into Babbage with my trusty Swiss army knife (in a place that actually made me cringe a little) and fed the buttons up into his paws. At some point in this process I was standing in the office wrestling with the bear and muttering to myself, which elicited some very strange looks from my colleagues.

Secret Santa Babbage throne

Poor Babbage…

One thing to note here is to make sure the wires remain attached at the solder points while you push them up into Babbage’s paws. The first time I tried it, I snapped one of my connections and had to start again. It helped to remove some stuffing to create a tunnel and then replace it afterwards, and you can use your fingertip to support the joints as you poke the wire in. Finally, a couple of squirts of hot glue to keep Babbage’s furry cheeks firmly on the seat, and done!

Secret Santa Babbage

Next year: Game of Thrones–inspired candy cane throne

The Secret Santa Babbage masterpiece

The whole build process was the perfect holiday mix of cheerful and macabre, and while getting the thermal printer to work was a little time-consuming, the finished product definitely raised some smiles around the office and added a bit of interesting digital flavour to a staid office tradition. And it also helped people who are new to the office or from other branches of the Foundation to know for whom they will be buying a gift.

Secret Santa Babbage

Ready to dispense Christmas cheer!

There are a few ways in which I’ll polish this project before next year, such as having the script write the names to external text files to create a record that will persist in case of a reboot, and maybe having Secret Santa Babbage play you a random Christmas carol when you squeeze his paw instead of just laughing merrily every time. (I also thought about adding electric shocks for those people who are on the naughty list, but HR said no. Bah, humbug!)

Make your own

The code and laser cut plans for the whole build are available here. If you plan to make your own, let us know which stuffed toy you will be turning into a Secret Santa cyborg! And if you’ve been working on any other Christmas-themed Raspberry Pi projects, we’d like to see those too, so tag us on social media to share the festive maker cheer.

The post The Pi Towers Secret Santa Babbage appeared first on Raspberry Pi.

European Commission Steps Up Fight Against Online Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/european-commission-steps-up-fight-against-online-piracy-171130/

The European Commission has had copyright issues at the top of its agenda for a while, resulting in several controversial proposals.

This week it presented a series of new measures to ensure that copyright holders are well protected, targeting both online piracy and counterfeit goods.

“Today we boost our collective ability to catch the ‘big fish’ behind fake goods and pirated content which harm our companies and our jobs – as well as our health and safety in areas such as medicines or toys,” Commissioner Elżbieta Bieńkowska announced.

The Commission notes that it’s stepping up the fight against counterfeiting and piracy. However, many of the proposals are not entirely new for those who follow anti-piracy issues around the globe.

One of the main goals is to focus on the people who facilitate copyright infringement, such as pirate site operators, and try to cut their revenue streams.

“The Commission seeks to deprive commercial-scale IP infringers of the revenue flows that make their criminal activity lucrative – this is the so-called ‘follow the money’ approach which focuses on the ‘big fish’ rather than individuals,” they write.

Instead of using legislation to reach this goal, the Commission prefers to continue its support for voluntary agreements between copyright holders and third-party services. This includes deals with advertising and payment services to cut their ties with pirate sites.

“Such agreements can lead to faster action against counterfeiting and piracy than court actions,” the Commission writes.

Another tool to fight piracy appears on the agenda for the first time. The European Commission notes that it will also support the quest for new anti-piracy initiatives, including the use of blockchain technology.

“Supporting industry-led initiatives to combat IP infringements, including work on Memoranda of Understanding and exploring the potential of new technologies such as blockchain to combat IP infringements in supply chains,” the suggestion reads.

No concrete examples were given, but earlier this week European Parliament member Brando Benifei wrote an article on the issue in Euractiv.

Benifei mentions that blockchain technology can help independent artists collect royalty payments without the need for middlemen. In a similar vein, blockchains can also be used to track the unauthorized distribution of works.

In addition to broadening the anti-piracy horizon, the European Commission also released new guidance on how the current IPR Enforcement Directive (IPRED) should be interpreted, taking into account various recent developments, including landmark EU Court of Justice rulings.

The guidance explains how and when it’s appropriate to issue website blocking orders, for example. In general, blocking injunctions are warranted when they are proportional and aimed at preventing concrete infringements.

The comprehensive guidance also covers the issue of filtering. Interestingly, the Commission clarifies that third-party services can’t be required to “install and operate excessively broad, unspecific and expensive filtering systems.”

This appears to run counter to the mandatory piracy filters that were suggested as part of the copyright reform proposal.

However, the Commission notes that in some specific cases, hosting providers (e.g. YouTube) can be ordered to monitor uploads. This is in line with a recent communication which recommended that online services should implement measures to automatically detect and remove suspected illegal content.

While the new plans continue down the path of stronger copyright protections, not all rightsholders are happy. IFPI is glad that the main problems are highlighted, but would have liked to see more concrete plans.

“We are disappointed that despite the European Commission recognizing the need to modernize IPRED and years of evidence gathering, today’s result is merely guidance to EU Member State governments. Soft law does not give right holders the tools they need to take effective action against pirate services,” IFPI writes.

On the other side of the divide, opposition to the previously announced EU copyright reform plans continues as well. Earlier today a group of over 80 organizations urged EU member states to speak out against several controversial copyright proposals, including the upload filter.

“The signatories warn the Member states that the discussion around the Copyright Directive are on the verge of causing irreparable damage to our fundamental rights and freedoms, our economy and competitiveness, our education and research, our innovation and competition, our creativity and our culture,” they say.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

AWS PrivateLink Update – VPC Endpoints for Your Own Applications & Services

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-privatelink-update-vpc-endpoints-for-your-own-applications-services/

Earlier this month, my colleague Colm MacCárthaigh told you about AWS PrivateLink and showed you how to use it to access AWS services such as Amazon Kinesis Streams, AWS Service Catalog, EC2 Systems Manager, the EC2 APIs, and the ELB APIs by way of VPC Endpoints. The endpoint (represented by one or more Elastic Network Interfaces or ENIs) resides within your VPC and has IP addresses drawn from the VPC’s subnets, without the need for an Internet or NAT Gateway. This model is clear and easy to understand, not to mention secure and scalable!

Endpoints for Private Connectivity
Today we are building upon the initial launch and extending the PrivateLink model, allowing you to set up and use VPC Endpoints to access your own services and those made available by others. Even before we launched PrivateLink for AWS services, we had a lot of requests for this feature, so I expect it to be pretty popular. For example, one customer told us that they plan to create hundreds of VPCs, each hosting and providing a single microservice (read Microservices on AWS to learn more).

Companies can now create services and offer them for sale to other AWS customers, for access via a private connection. They create a service that accepts TCP traffic, host it behind a Network Load Balancer, and then make the service available, either directly or in AWS Marketplace. They will be notified of new subscription requests and can choose to accept or reject each one. I expect that this feature will be used to create a strong, vibrant ecosystem of service providers in 2018.

The service provider and the service consumer run in separate VPCs and AWS accounts and communicate solely through the endpoint, with all traffic flowing across Amazon’s private network. Service consumers don’t have to worry about overlapping IP addresses, arrange for VPC peering, or use a VPC Gateway. You can also use AWS Direct Connect to connect your existing data center to one of your VPCs in order to allow your cloud-based applications to access services running on-premises, or vice versa.

Providing and Consuming Services
This new feature puts a lot of power at your fingertips. You can set it all up using the VPC APIs, the VPC CLI, or the AWS Management Console. I’ll use the console, and will show you how to provide and then consume a service. I am going to do both within a single AWS account, but that’s just for demo purposes.

Let’s talk about providing a service. It must run behind a Network Load Balancer and must be accessible over TCP. It can be hosted on EC2 instances, ECS containers, or on-premises (configured as an IP target), and should be able to scale in order to meet the expected level of demand. For low latency and fault tolerance, we recommend using an NLB with targets in every AZ of its region. Here’s mine:

I open up the VPC Console and navigate to Endpoint Services, then click on Create Endpoint Service:

I choose my NLB (just one in this case, but I can choose two or more and they will be mapped to consumers on a round-robin basis). By clicking on Acceptance required, I get to control access to my endpoint on a request-by-request basis:

I click on Create service and my service is ready immediately:

If I was going to make this service available in AWS Marketplace, I would go ahead and create a listing now. Since I am going to be the producer and the consumer in this blog post, I’ll skip that step. I will, however, copy the Service name for use in the next step.

I return to the VPC Dashboard and navigate to Endpoints, then click on Create endpoint. Then I select Find service by name, paste the service name, and click on Verify to move ahead. Then I select the desired AZs, and a subnet in each one, pick my security groups, and click on Create endpoint:

Because I checked Acceptance required when I created the endpoint service, the connection is pending acceptance:

Back on the endpoint service side (typically in a separate AWS account), I can see and accept the pending request:

The endpoint becomes available and ready to use within a minute or so. If I was creating a service and selling access on a paid basis, I would accept the request as part of a larger, and perhaps automated, onboarding workflow for a new customer.
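For reference, the same provider and consumer round trip can be scripted with boto3; the ARN, IDs, and subnets below are placeholders rather than values from this walkthrough:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Provider side: expose the NLB as an endpoint service that requires acceptance.
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/0123456789abcdef"
    ],
)["ServiceConfiguration"]

# Consumer side (typically a separate account): create an interface endpoint for the service.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service["ServiceName"],
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)["VpcEndpoint"]

# Provider side again: accept the pending connection request.
ec2.accept_vpc_endpoint_connections(
    ServiceId=service["ServiceId"],
    VpcEndpointIds=[endpoint["VpcEndpointId"]],
)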

On the consumer side, my new endpoint is accessible via DNS name:

Services provided by AWS and services in AWS Marketplace are accessible through split-horizon DNS. Accessing the service through this name will resolve to the “best” endpoint, taking Region and Availability Zone into consideration.

In the Marketplace
As I noted earlier, this new PrivateLink feature creates an opportunity for new and existing sellers in AWS Marketplace. The following SaaS offerings are already available as endpoints and I expect many more to follow (read Sell on AWS Marketplace to get started):

CA Technologies: CA App Experience Analytics Essentials.

Aqua Security: Aqua Container Image Security Scanner.

Dynatrace: Cloud-Native Monitoring powered by AI.

Cisco Stealthwatch: Public Cloud Monitoring – Metered, Public Cloud Monitoring – Contracts.

SigOpt: ML Optimization & Tuning.

Available Today
This new PrivateLink feature is available now and you can start using it today!

Jeff;


Using AWS CodeCommit Pull Requests to request code reviews and discuss code

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/devops/using-aws-codecommit-pull-requests-to-request-code-reviews-and-discuss-code/

Thank you to Michael Edge, Senior Cloud Architect, for a great blog on CodeCommit pull requests.

~~~~~~~

AWS CodeCommit is a fully managed service for securely hosting private Git repositories. CodeCommit now supports pull requests, which allows repository users to review, comment upon, and interactively iterate on code changes. Used as a collaboration tool between team members, pull requests help you to review potential changes to a CodeCommit repository before merging those changes into the repository. Each pull request goes through a simple lifecycle, as follows:

  • The new features to be merged are added as one or more commits to a feature branch. The commits are not merged into the destination branch.
  • The pull request is created, usually from the difference between two branches.
  • Team members review and comment on the pull request. The pull request might be updated with additional commits that contain changes made in response to comments, or include changes made to the destination branch.
  • Once team members are happy with the pull request, it is merged into the destination branch. The commits are applied to the destination branch in the same order they were added to the pull request.

Commenting is an integral part of the pull request process, and is used to collaborate between the developers and the reviewer. Reviewers add comments and questions to a pull request during the review process, and developers respond to these with explanations. Pull request comments can be added to the overall pull request, a file within the pull request, or a line within a file.

To make the comments more useful, sign in to the AWS Management Console as an AWS Identity and Access Management (IAM) user. The username will then be associated with the comment, indicating the owner of the comment. Pull request comments are a great quality improvement tool as they allow the entire development team visibility into what reviewers are looking for in the code. They also serve as a record of the discussion between team members at a point in time, and shouldn’t be deleted.

AWS CodeCommit is also introducing the ability to add comments to a commit, another useful collaboration feature that allows team members to discuss code changed as part of a commit. This helps you discuss changes made in a repository, including why the changes were made, whether further changes are necessary, or whether changes should be merged. As is the case with pull request comments, you can comment on an overall commit, on a file within a commit, or on a specific line or change within a file, and other repository users can respond to your comments. Comments are not restricted to commits; they can also be used to comment on the differences between two branches, or between two tags. Commit comments are separate from pull request comments, i.e. you will not see commit comments when reviewing a pull request – you will only see pull request comments.
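Commit comments can also be posted through the API; the following boto3 sketch adds a line-level comment to a commit, with a placeholder repository name, commit ID, and file path:

import boto3

codecommit = boto3.client("codecommit", region_name="us-east-2")

# Comment on a specific line of a file as it exists after the commit.
codecommit.post_comment_for_compared_commit(
    repositoryName="codecommit-demo",
    afterCommitId="<full commit id>",      # placeholder
    content="Why was this line changed?",
    location={
        "filePath": "src/main/java/com/amazon/helloworld/Main.java",
        "filePosition": 5,                 # hypothetical line number
        "relativeFileVersion": "AFTER",
    },
)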

A pull request example

Let’s get started by running through an example. We’ll take a typical pull request scenario and look at how we’d use CodeCommit and the AWS Management Console for each of the steps.

To try out this scenario, you’ll need:

  • An AWS CodeCommit repository with some sample code in the master branch. We’ve provided sample code below.
  • Two AWS Identity and Access Management (IAM) users, both with the AWSCodeCommitPowerUser managed policy applied to them.
  • Git installed on your local computer, and access configured for AWS CodeCommit.
  • A clone of the AWS CodeCommit repository on your local computer.

In the course of this example, you’ll sign in to the AWS CodeCommit console as one IAM user to create the pull request, and as the other IAM user to review the pull request. To learn more about how to set up your IAM users and how to connect to AWS CodeCommit with Git, see the following topics:

  • Information on creating an IAM user with AWS Management Console access.
  • Instructions on how to access CodeCommit using Git.
  • If you’d like to use the same ‘hello world’ application as used in this article, here is the source code:
package com.amazon.helloworld;

public class Main {
	public static void main(String[] args) {

		System.out.println("Hello, world");
	}
}

The scenario below uses the us-east-2 region.

Creating the branches

Before we jump in and create a pull request, we’ll need at least two branches. In this example, we’ll follow a branching strategy similar to the one described in GitFlow. We’ll create a new branch for our feature from the main development branch (the default branch). We’ll develop the feature in the feature branch. Once we’ve written and tested the code for the new feature in that branch, we’ll create a pull request that contains the differences between the feature branch and the main development branch. Our team lead (the second IAM user) will review the changes in the pull request. Once the changes have been reviewed, the feature branch will be merged into the development branch.

Figure 1: Pull request link

Sign in to the AWS CodeCommit console with the IAM user you want to use as the developer. You can use an existing repository or you can go ahead and create a new one. We won’t be merging any changes to the master branch of your repository, so it’s safe to use an existing repository for this example. You’ll find the Pull requests link has been added just above the Commits link (see Figure 1), and below Commits you’ll find the Branches link. Click Branches and create a new branch called ‘develop’, branched from the ‘master’ branch. Then create a new branch called ‘feature1’, branched from the ‘develop’ branch. You’ll end up with three branches, as you can see in Figure 2. (Your repository might contain other branches in addition to the three shown in the figure).

Figure 2: Create a feature branch

If you haven’t cloned your repo yet, go to the Code link in the CodeCommit console and click the Connect button. Follow the instructions to clone your repo (detailed instructions are here). Open a terminal or command line and paste the git clone command supplied in the Connect instructions for your repository. The example below shows cloning a repository named codecommit-demo:

git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo

If you’ve previously cloned the repo you’ll need to update your local repo with the branches you created. Open a terminal or command line and make sure you’re in the root directory of your repo, then run the following command:

git remote update origin

You’ll see your new branches pulled down to your local repository.

$ git remote update origin
Fetching origin
From https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
 * [new branch]      develop    -> origin/develop
 * [new branch]      feature1   -> origin/feature1

You can also see your new branches by typing:

git branch --all

$ git branch --all
* master
  remotes/origin/develop
  remotes/origin/feature1
  remotes/origin/master

Now we’ll make a change to the ‘feature1’ branch. Open a terminal or command line and check out the feature1 branch by running the following command:

git checkout feature1

$ git checkout feature1
Branch feature1 set up to track remote branch feature1 from origin.
Switched to a new branch 'feature1'

Make code changes

Edit a file in the repo using your favorite editor and save the changes. Commit your changes to the local repository, and push your changes to CodeCommit. For example:

git commit -am 'added new feature'
git push origin feature1

$ git commit -am 'added new feature'
[feature1 8f6cb28] added new feature
1 file changed, 1 insertion(+), 1 deletion(-)

$ git push origin feature1
Counting objects: 9, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (9/9), 617 bytes | 617.00 KiB/s, done.
Total 9 (delta 2), reused 0 (delta 0)
To https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
   2774a53..8f6cb28  feature1 -> feature1

Creating the pull request

Now we have a ‘feature1’ branch that differs from the ‘develop’ branch. At this point we want to merge our changes into the ‘develop’ branch. We’ll create a pull request to notify our team members to review our changes and check whether they are ready for a merge.

In the AWS CodeCommit console, click Pull requests. Click Create pull request. On the next page select ‘develop’ as the destination branch and ‘feature1’ as the source branch. Click Compare. CodeCommit will check for merge conflicts and highlight whether the branches can be automatically merged using the fast-forward option, or whether a manual merge is necessary. A pull request can be created in both situations.

Figure 3: Create a pull request

After comparing the two branches, the CodeCommit console displays the information you’ll need in order to create the pull request. In the ‘Details’ section, the ‘Title’ for the pull request is mandatory, and you may optionally provide comments to your reviewers to explain the code change you have made and what you’d like them to review. In the ‘Notifications’ section, there is an option to set up notifications to notify subscribers of changes to your pull request. Notifications will be sent on creation of the pull request as well as for any pull request updates or comments. And finally, you can review the changes that make up this pull request. This includes both the individual commits (a pull request can contain one or more commits, available in the Commits tab) as well as the changes made to each file, i.e. the diff between the two branches referenced by the pull request, available in the Changes tab. After you have reviewed this information and added a title for your pull request, click the Create button. You will see a confirmation screen, as shown in Figure 4, indicating that your pull request has been successfully created, and can be merged without conflicts into the ‘develop’ branch.

Figure 4: Pull request confirmation page
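If you’d rather create the pull request from code than from the console, a minimal boto3 sketch looks like this (the title and description are placeholders):

import boto3

codecommit = boto3.client("codecommit", region_name="us-east-2")

response = codecommit.create_pull_request(
    title="Add new feature",                   # mandatory, just as in the console
    description="Please review the feature1 changes before they are merged into develop.",
    targets=[{
        "repositoryName": "codecommit-demo",
        "sourceReference": "feature1",
        "destinationReference": "develop",
    }],
)
print(response["pullRequest"]["pullRequestId"])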

Reviewing the pull request

Now let’s view the pull request from the perspective of the team lead. If you set up notifications for this CodeCommit repository, creating the pull request would have sent an email notification to the team lead, and he/she can use the links in the email to navigate directly to the pull request. In this example, sign in to the AWS CodeCommit console as the IAM user you’re using as the team lead, and click Pull requests. You will see the same information you did during creation of the pull request, plus a record of activity related to the pull request, as you can see in Figure 5.

Figure 5: Team lead reviewing the pull request

Commenting on the pull request

You now perform a thorough review of the changes and make a number of comments using the new pull request comment feature. To gain an overall perspective on the pull request, you might first go to the Commits tab and review how many commits are included in this pull request. Next, you might visit the Changes tab to review the changes, which displays the differences between the feature branch code and the develop branch code. At this point, you can add comments to the pull request as you work through each of the changes. Let’s go ahead and review the pull request. During the review, you can add review comments at three levels:

  • The overall pull request
  • A file within the pull request
  • An individual line within a file

The overall pull request
In the Changes tab near the bottom of the page you’ll see a ‘Comments on changes’ box. We’ll add comments here related to the overall pull request. Add your comments as shown in Figure 6 and click the Save button.

Figure 6: Pull request comment

A specific file in the pull request
Hovering your mouse over a filename in the Changes tab will cause a blue ‘comments’ icon to appear to the left of the filename. Clicking the icon will allow you to enter comments specific to this file, as in the example in Figure 7. Go ahead and add comments for one of the files changed by the developer. Click the Save button to save your comment.

Figure 7: File comment

A specific line in a file in the pull request
A blue ‘comments’ icon will appear as you hover over individual lines within each file in the pull request, allowing you to create comments against lines that have been added, removed or are unchanged. In Figure 8, you add comments against a line that has been added to the source code, encouraging the developer to review the naming standards. Go ahead and add line comments for one of the files changed by the developer. Click the Save button to save your comment.

Figure 8: Line comment

A pull request that has been commented at all three levels will look similar to Figure 9. The pull request comment is shown expanded in the ‘Comments on changes’ section, while the comments at file and line level are shown collapsed. A ‘comment’ icon indicates that comments exist at file and line level. Clicking the icon will expand and show the comment. Since you are expecting the developer to make further changes based on your comments, you won’t merge the pull request at this stage, but will leave it open awaiting feedback. Each comment you made results in a notification being sent to the developer, who can respond to the comments. This is great for remote working, where developers and team lead may be in different time zones.

Figure 9: Fully commented pull request
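Comments at any of these levels can also be posted via the API; the sketch below adds a line-level pull request comment with boto3, using a placeholder pull request ID and line number:

import boto3

codecommit = boto3.client("codecommit", region_name="us-east-2")

# Look up the commit IDs the pull request currently references.
pr = codecommit.get_pull_request(pullRequestId="1")["pullRequest"]   # placeholder ID
target = pr["pullRequestTargets"][0]

codecommit.post_comment_for_pull_request(
    pullRequestId="1",
    repositoryName="codecommit-demo",
    beforeCommitId=target["destinationCommit"],
    afterCommitId=target["sourceCommit"],
    content="Please review the naming standards for this variable.",
    location={                                 # omit location to comment on the whole pull request
        "filePath": "src/main/java/com/amazon/helloworld/Main.java",
        "filePosition": 12,                    # hypothetical line number
        "relativeFileVersion": "AFTER",
    },
)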

Adding a little complexity

A typical development team is going to be creating pull requests on a regular basis. It’s highly likely that the team lead will merge other pull requests into the ‘develop’ branch while pull requests on feature branches are in the review stage. This may result in a change to the ‘Mergable’ status of a pull request. Let’s add this scenario into the mix and check out how a developer will handle this.

To test this scenario, we could create a new pull request and ask the team lead to merge this to the ‘develop’ branch. But for the sake of simplicity we’ll take a shortcut. Clone your CodeCommit repo to a new folder, switch to the ‘develop’ branch, and make a change to one of the same files that were changed in your pull request. Make sure you change a line of code that was also changed in the pull request. Commit and push this back to CodeCommit. Since you’ve just changed a line of code in the ‘develop’ branch that has also been changed in the ‘feature1’ branch, the ‘feature1’ branch cannot be cleanly merged into the ‘develop’ branch. Your developer will need to resolve this merge conflict.

A developer reviewing the pull request would see the pull request now looks similar to Figure 10, with a ‘Resolve conflicts’ status rather than the ‘Mergable’ status it had previously (see Figure 5).

Figure 10: Pull request with merge conflicts

Reviewing the review comments

Once the team lead has completed his review, the developer will review the comments and make the suggested changes. As a developer, you’ll see the list of review comments made by the team lead in the pull request Activity tab, as shown in Figure 11. The Activity tab shows the history of the pull request, including commits and comments. You can reply to the review comments directly from the Activity tab, by clicking the Reply button, or you can do this from the Changes tab. The Changes tab shows the comments for the latest commit, as comments on previous commits may be associated with lines that have changed or been removed in the current commit. Comments for previous commits are available to view and reply to in the Activity tab.

In the Activity tab, use the shortcut link (which looks like this </>) to move quickly to the source code associated with the comment. In this example, you will make further changes to the source code to address the pull request review comments, so let’s go ahead and do this now. But first, you will need to resolve the ‘Resolve conflicts’ status.

Figure 11: Pull request activity

Resolving the ‘Resolve conflicts’ status

The ‘Resolve conflicts’ status indicates there is a merge conflict between the ‘develop’ branch and the ‘feature1’ branch. This will require manual intervention to restore the pull request back to the ‘Mergable’ state. We will resolve this conflict next.

Open a terminal or command line and check out the develop branch by running the following command:

git checkout develop

$ git checkout develop
Switched to branch 'develop'
Your branch is up-to-date with 'origin/develop'.

To incorporate the changes the team lead made to the ‘develop’ branch, merge the remote ‘develop’ branch with your local copy:

git pull

$ git pull
remote: Counting objects: 9, done.
Unpacking objects: 100% (9/9), done.
From https://git-codecommit.us-east-2.amazonaws.com/v1/repos/codecommit-demo
   af13c82..7b36f52  develop    -> origin/develop
Updating af13c82..7b36f52
Fast-forward
 src/main/java/com/amazon/helloworld/Main.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Then checkout the ‘feature1’ branch:

git checkout feature1

$ git checkout feature1
Switched to branch 'feature1'
Your branch is up-to-date with 'origin/feature1'.

Now merge the changes from the ‘develop’ branch into your ‘feature1’ branch:

git merge develop

$ git merge develop
Auto-merging src/main/java/com/amazon/helloworld/Main.java
CONFLICT (content): Merge conflict in src/main/java/com/amazon/helloworld/Main.java
Automatic merge failed; fix conflicts and then commit the result.

Yes, this fails. The file Main.java has been changed in both branches, resulting in a merge conflict that can’t be resolved automatically. However, Main.java will now contain markers that indicate where the conflicting code is, and you can use these to resolve the issues manually. Edit Main.java using your favorite IDE, and you’ll see it looks something like this:

package com.amazon.helloworld;

import java.util.*;

/**
 * This class prints a hello world message
 */

public class Main {
   public static void main(String[] args) {

<<<<<<< HEAD
        Date todaysdate = Calendar.getInstance().getTime();

        System.out.println("Hello, earthling. Today's date is: " + todaysdate);
=======
      System.out.println("Hello, earth");
>>>>>>> develop
   }
}

The code between HEAD and ‘===’ is the code the developer added in the ‘feature1’ branch (HEAD represents ‘feature1’ because this is the current checked out branch). The code between ‘===’ and ‘>>> develop’ is the code added to the ‘develop’ branch by the team lead. We’ll resolve the conflict by manually merging both changes, resulting in an updated Main.java:

package com.amazon.helloworld;

import java.util.*;

/**
 * This class prints a hello world message
 */

public class Main {
   public static void main(String[] args) {

        Date todaysdate = Calendar.getInstance().getTime();

        System.out.println("Hello, earth. Today's date is: " + todaysdate);
   }
}

After saving the change you can add and commit it to your local repo:

git add src/
git commit -m 'fixed merge conflict by merging changes'
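At this point it is worth a quick sanity check that the conflict is fully resolved before you move on. The following standard git commands (optional, and not part of the original walkthrough) confirm that the working tree is clean and that the merge of ‘develop’ is now part of the ‘feature1’ history:

# confirm there is nothing left to commit and no unmerged paths
git status

# show the last few commits as a graph; the merge of 'develop' should appear at the top
git log --oneline --graph --decorate -5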

Fixing issues raised by the reviewer

Now you are ready to address the comments made by the team lead. If you no longer have the ‘feature1’ branch checked out, switch to it by running the following command:

git checkout feature1

$ git checkout feature1
Branch feature1 set up to track remote branch feature1 from origin.
Switched to a new branch 'feature1'

Edit the source code in your favorite IDE and make the changes to address the comments. In this example, the developer has updated the source code as follows:

package com.amazon.helloworld;

import java.util.*;

/**
 *  This class prints a hello world message
 *
 * @author Michael Edge
 * @see HelloEarth
 * @version 1.0
 */

public class Main {
   public static void main(String[] args) {

        Date todaysDate = Calendar.getInstance().getTime();

        System.out.println("Hello, earth. Today's date is: " + todaysDate);
   }
}

After saving the changes, commit and push to the CodeCommit ‘feature1’ branch as you did previously:

git commit -am 'updated based on review comments'
git push origin feature1
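After pushing, you can also inspect the updated pull request from the command line instead of the console. The sketch below assumes the AWS CLI is installed and configured for the same account and region; the pull request ID is a placeholder you would replace with your own:

# fetch the pull request details, including its status and latest revision
aws codecommit get-pull-request --pull-request-id <your-pull-request-id>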

Responding to the reviewer

Now that you’ve fixed the code issues, you will want to respond to the review comments. In the AWS CodeCommit console, check that your latest commit appears in the pull request Commits tab. You now have a pull request consisting of more than one commit. The pull request in Figure 12 has four commits, which originated from the following activities:

  • 8th Nov: the original commit used to initiate this pull request
  • 10th Nov, 3 hours ago: the commit by the team lead to the ‘develop’ branch, merged into our ‘feature1’ branch
  • 10th Nov, 24 minutes ago: the commit by the developer that resolved the merge conflict
  • 10th Nov, 4 minutes ago: the final commit by the developer addressing the review comments

Figure 12: Pull request with multiple commits

Let’s reply to the review comments provided by the team lead. In the Activity tab, reply to the pull request comment and save it, as shown in Figure 13.

Figure 13: Replying to a pull request comment

At this stage, your code has been committed and you’ve updated your pull request comments, so you are ready for a final review by the team lead.

Final review

The team lead reviews the code changes and comments made by the developer. As team lead, you own the ‘develop’ branch and it’s your decision on whether to merge the changes in the pull request into the ‘develop’ branch. You can close the pull request with or without merging using the Merge and Close buttons at the bottom of the pull request page (see Figure 13). Clicking Close will allow you to add comments on why you are closing the pull request without merging. Merging will perform a fast-forward merge, incorporating the commits referenced by the pull request. Let’s go ahead and click the Merge button to merge the pull request into the ‘develop’ branch.

Figure 14: Merging the pull request

After merging a pull request, development of that feature is complete and the feature branch is no longer needed. It’s common practice to delete the feature branch after merging. CodeCommit provides a check box during merge to automatically delete the associated feature branch, as seen in Figure 14. Clicking the Merge button will merge the pull request into the ‘develop’ branch, as shown in Figure 15. This will update the status of the pull request to ‘Merged’, and will close the pull request.
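For teams that prefer to script this step, the same fast-forward merge can be performed with the AWS CLI rather than the console. This is a sketch under the assumption that the CodeCommit pull request merge commands are available in your CLI version; the pull request ID is a placeholder, and the repository name matches the one used earlier in this walkthrough:

# merge the pull request into its destination branch using a fast-forward merge
aws codecommit merge-pull-request-by-fast-forward \
    --pull-request-id <your-pull-request-id> \
    --repository-name codecommit-demo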

Conclusion

This blog has demonstrated how pull requests can be used to request a code review, and enable reviewers to get a comprehensive summary of what is changing, provide feedback to the author, and merge the code into production. For more information on pull requests, see the documentation.

Ares Kodi Project Calls it Quits After Hollywood Cease & Desist

Post Syndicated from Andy original https://torrentfreak.com/ares-kodi-project-calls-it-quits-after-hollywood-cease-desist-171117/

This week has been particularly bad for those involved in the Kodi addon scene. Following cease-and-desist notices from the MPA-led anti-piracy coalition Alliance for Creativity and Entertainment, several addon developers and repositories shut down.

With Columbia, Disney, Paramount, Twentieth Century Fox, Universal, Warner, Netflix, Amazon and Sky TV all lined up for war, the third-party developers had little choice but to quit. One of those affected was the leader of the hugely popular Ares Project, which quietly disappeared mid-week.

The Ares Wizard was an extremely popular and important piece of software which allowed people to switch Kodi builds, install third-party addons, install popular repositories, change system settings, and carry out backups. It’s installed on huge numbers of machines worldwide but it will soon fall into disrepair.

The mighty Ares Wizard in action

“[This week] I was subject to a hand-delivered notice to cease-and-desist from MPA & ACE,” Ares Project leader Tekto informs TorrentFreak.

“Given the notice, we obviously shut down the repo and wizard as requested.”

The news that Ares Project is done and never coming back will be a huge blow to the community. The project just celebrated its second birthday and has grown exponentially since it first arrived on the scene.

“Ares Project started in Oct 2015. Originally it was to be a tool to setup up the video cache on Kodi correctly. However, many ideas were thrown into the pot and it became a wee bit more; such as a wizard to install community provided builds, common addons and few other tweaks and options,” Tekto says.

“For my own part I started blogging earlier that year as part of a longer-term goal to be self-funding. I always disliked seeing begging bowls out to support ‘server’ costs, many of which were cheap £5-10 per month servers that were used to gain £100s in donations.

“The blog, via affiliate links and ads, could and would provide the funds to cover our hosting costs without resorting to begging for money every weekend.”

Intrigued by this first wave of actions by ACE in Europe, TorrentFreak asked for a copy of the MPA/ACE cease-and-desist notice but unfortunately, Tekto flat-out refused. All he would tell us is that he’d agreed not to give out any copies or screenshots and that he was adhering to that 100%.

That only leaves speculation as to what grounds the MPA/ACE cited for closing the project but to be fair, it doesn’t take much thought to find a direct comparison. Earlier this year, in the BREIN v Filmspeler case, the European Court of Justice (ECJ) ruled that selling “fully-loaded” Kodi boxes amounted to illegally communicating copyrighted content to the public.

With that in mind, it doesn’t take much of a leap to see how this ruling could also apply to someone distributing “fully-loaded” Kodi software builds or addons via a website. It had previously been considered a legal gray area, of course, and it was in that space that the Ares team believed it operated. After all, it took ECJ clarification for local courts in the Netherlands to be satisfied with the legal position.

“There was never any question that what we were doing was illegal. We didn’t and never have hosted any content, we always prevented discussions about illegal paid services, and never sold any devices, pre-loaded or otherwise. That used to be enough to occupy the ‘gray’ area which meant we were safe to develop our applications. That changed in 2017 as we were to discover,” Tekto notes.

Up until this week and apparently oblivious to how the earlier ECJ ruling might affect their operation, things had been going extremely well for Ares. In mid-2016, the group moved to its own support forum that attracted 100,000 signed-up members and 300,000 visitors every month.

“This was quite an achievement in terms of viral marketing but ultimately this would become part of our downfall,” Tekto says.

“The recent innovation of the ‘basket driven’ Ares Portal system seems to have triggered the legal move to shut the project down completely. This simple system gave access to hundreds of add-ons. The system removed the need for builds, blogs and YouTubers – you just shopped on the site for addons and then installed them to your device with a simple 6 digit code.”

While Ares and Tekto still didn’t believe they were doing anything illegal (addons were linked, not hosted) it is now pretty clear to them that the previous gray area has been well and truly closed, at least as far as the MPA/ACE alliance is concerned. And with that in mind, the show is over. Done. Finished.

“We are not criminals or malicious hackers, we weren’t even careful about hiding our identities. You couldn’t meet a more ordinary bunch of folks in truth,” he says.

“There was never any question we would close our doors if what we were doing crossed any boundaries of legality. So with the notice served on us, we are closing our doors and removing all our websites and applications. It’s a sad day in many ways, but nobody wants to be facing court or a potential custodial sentence, for what is essentially a hobby.”

Finally, Tekto says that others like him might want to consider their positions carefully, before they too get a knock at the door. In the meantime, he gives thanks to the project’s supporters, who have remained loyal over the past two years.

“It just leaves me to thank our users for their support and step away from the Kodi scene,” he concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Twitter Sued Over Slow Response to DMCA Takedown Request

Post Syndicated from Ernesto original https://torrentfreak.com/twitter-sued-over-slow-response-to-dmca-takedown-request-171112/

In common with many other user-generated content sites, Twitter is used by some of its members to host or link to copyright-infringing material.

If rightsholders submit a takedown request, Twitter swiftly takes the infringing content down. Over the past several months the company has processed thousands of requests and complied with most of them.

However, a new lawsuit filed in a California federal court suggests that Twitter’s takedown efforts aren’t perfect.

Rhode Island-based photographer Kristen Pierson filed a complaint against Twitter, accusing the company of hosting and linking to one of her works without permission.

The photo in question, taken at an Alice in Chains concert in 2006, was posted by Twitter user Karen Juanita. After Pierson found out she sent a DMCA takedown notice to Twitter on April 26 of this year.

Twitter promptly replied that it had “disabled access” to the photo, but this didn’t happen right away. While Twitter noted that it could take some time for the removal to propagate, it appears that something went wrong.

Twitter’s response

According to the complaint, it took 90 days before the photo was effectively taken down. It seems unlikely that Twitter intentionally waited three months, but Pierson is not looking for an excuse. Instead, she’s demanding damages from the social media outfit.

“Twitter had actual knowledge of the direct infringement and contributory infringement. Pierson provided notice to Twitter in compliance with the DMCA, and Twitter failed to expeditiously disable access to or remove the Copyrighted Photograph from their servers,” the complaint notes.

“Alternatively, Twitter directly infringed Pierson’s copyrights by continuing to allow public access to the Copyrighted Photograph on Twitter’s server or on servers controlled by Twitter.”

Theoretically, damages could go up to $150,000, should willful copyright infringement be proven. However, it’s more likely that both parties will settle their differences, or that the case will be dismissed for other reasons.

This isn’t the first time that Twitter has been sued for failing to promptly remove infringing content. Several photographers, including Pierson herself, have done so before. In most cases, these lawsuits are settled after a few weeks, behind closed doors.

A copy of the complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Sci-Hub Won’t Be Blocked by US ISPs Anytime Soon

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-wont-be-blocked-by-us-isps-anytime-soon-171111/

Sci-Hub, often referred to as the “Pirate Bay of Science,” hasn’t had a particularly good run in US courts so far.

Following a $15 million defeat against Elsevier in June, the American Chemical Society won a default judgment of $4.8 million in copyright damages late last week.

In addition, the publisher was granted an unprecedented injunction, requiring various third-party services to stop providing access to the site.

The order specifically mentions domain registrars and hosting companies, but also search engines and ISPs, although only those who are in “active concert or participation” with the site. This order sparked fears that Google, Comcast, and others would be ordered to take action, but that’s not the case.

After the news broke ACS issued a press release clarifying that it would not go after search engines and ISPs when they are not in “active participation” with Sci-Hub. The problem is that this can be interpreted quite broadly, leaving plenty of room for uncertainty.

Luckily, ACS Director Glenn Ruskin was willing to provide more clarity. He stressed that search engines and ISPs won’t be targeted for simply linking users to Sci-Hub. Companies that host the content are a target though.

“The court’s affirmative ruling does not apply to search engines writ large, but only to those entities who have been in active concert or participation with Sci-Hub, such as websites that host ACS content stolen by Sci-Hub,” Ruskin said.

When we asked whether this means that ISPs such as Comcast are not likely to be targeted, the answer was affirmative.

“That is correct, unless the internet service provider has been in active concert or participation with SciHub. Simply linking to SciHub does not rise to be in active concert or participation,” Ruskin clarified.

The above suggests that ACS will go after domain name registrars, hosting companies, and perhaps Cloudflare, but not any further. Still, even if that’s the case there is cause for worry among several digital rights activists.

The Electronic Frontier Foundation believes that these type of orders set a dangerous precedent. The concept of “active concert or participation” should only cover close associates and co-conspirators, not everyone who provides a service to the defendant. Domain registrars and registries have often been compelled to take action in similar cases, but EFF says this goes too far.

“The courts need to limit who can be bound by orders like this one, to prevent them from being abused,” EFF Senior Staff Attorney Mitch Stoltz informs TorrentFreak.

“In particular, domain name registrars and registries shouldn’t be ordered to help take down a website because of a dispute over the site’s contents. That invites others to use the domain name system as a tool for censorship.”

News of the Sci-Hub injunction has sparked controversy and confusion in recent days, not least because Sci-hub.cc became unavailable soon after. Instead of showing the usual search box, visitors now see a “403 Forbidden” error message. On top of that, the bulletproof Tor version of the site also went offline.

The error message indicates that there’s a hosting issue. While it’s easy to conclude that the court’s injunction has something to do with this, that might not necessarily be the case. Sci-Hub’s hosting company isn’t tied to the US and has a history of protecting sites from takedown efforts.

We reached out to Sci-Hub founder Alexandra Elbakyan for comment but we’re yet to receive a response. The site hasn’t posted any relevant updates on its social media pages either.

That said, the site is far from done. In addition to the Tor domain, Sci-Hub has several other backups in place such as Sci-Hub.io and Sci-Hub.ac, which are up and running as usual.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

piwheels: making “pip install” fast

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/piwheels/

TL;DR pip install numpy used to take ages, and now it’s super fast thanks to piwheels.

The Python Package Index (PyPI) is a package repository for Python modules. Members of the Python community publish software and libraries in it as an easy method of distribution. If you’ve ever used pip install, PyPI is the service that hosts the software you installed. You may have noticed that some installations can take a long time on the Raspberry Pi. That usually happens when modules have been implemented in C and require compilation.

XKCD comic of two people sword-fighting on office chairs while their code is compiling

No more slacking off! pip install numpy takes just a few seconds now \o/

Wheels for Python packages

A general solution to this problem exists: Python wheels are a standard for distributing pre-built versions of packages, saving users from having to build from source. However, when C code is compiled, it’s compiled for a particular architecture, so package maintainers usually publish wheels for 32-bit and 64-bit Windows, macOS, and Linux. Although Raspberry Pi runs Linux, its architecture is ARM, so those x86 Linux wheels are not compatible.

A comic of snakes biting their own tails to roll down a sand dune like wheels

What Python wheels are not

Pip works by browsing PyPI for a wheel matching the user’s architecture — and if it doesn’t find one, it falls back to the source distribution (usually a tarball or zip of the source code). Then the user has to build it themselves, which can take a long time, or may require certain dependencies. And if pip can’t find a source distribution, the installation fails.
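To make the architecture matching concrete, you can ask pip to download (rather than install) a package and look at the filename it picks, since the platform tag is encoded right in the wheel’s name. The commands and filenames below are illustrative only; the exact versions and tags you see will depend on your machine and on what’s published at the time:

# download whichever file pip would select, without installing it
pip download numpy --no-deps -d /tmp/wheeltest

# on an x86-64 Linux PC this typically fetches a pre-built wheel, e.g.
#   numpy-1.13.3-cp35-cp35m-manylinux1_x86_64.whl
# on a Raspberry Pi without piwheels, pip falls back to the source distribution, e.g.
#   numpy-1.13.3.zip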

Developing piwheels

In order to solve this problem, I decided to build wheels of every package on PyPI. I wrote some tooling for automating the process and used a postgres database to monitor the status of builds and log the output. Using a Pi 3 in my living room, I attempted to build wheels of the latest version of all 100 000 packages on PyPI and to host them on a web server on the Pi. This took a total of ten days, and my proof-of-concept seemed to show that it generally worked and was likely to be useful! You could install packages directly from the server, and installations were really fast.

A Raspberry Pi 3 sitting atop a Pi 2 on cloth

This Pi 3 was the piwheels beta server, sitting atop my SSH gateway Pi 2 at home

I proceeded to plan for version 2, which would attempt to build every version of every package — about 750 000 versions in total. I estimated this would take 75 days for one Pi, but I intended to scale it up to use multiple Pis. Our web hosts Mythic Beasts provide dedicated Pi 3 hosting, so I fired up 20 of them to share the load. With some help from Dave Jones, who created an efficient queuing system for the builders, we were able to make this run like clockwork. In under two weeks, it was complete! Read ALL about the first build run drama on my blog.

A list of the mythic beasts cloud Pis

ALL the cloud Pis

Improving piwheels

We analysed the failures, made some tweaks, installed some key dependencies, and ran the build again to raise our success rate from 76% to 83%. We also rebuilt packages for Python 3.5 (the new default in Raspbian Stretch). The wheels we build are tagged ‘armv7l’, but because our Raspbian image is compatible with all Pi models, they’re really built for ARMv6, so they work on the Pi 3, Pi 2, Pi 1 and Pi Zero alike. This means the ‘armv6l’-tagged wheels we provide are really just the ‘armv7l’-tagged wheels renamed.
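If you’re curious which platform tag your own Pi reports (and hence which wheel tag pip will look for), a one-liner is enough. This is just a quick check, not something piwheels requires you to run:

# print the machine architecture as Python sees it
python3 -c "import platform; print(platform.machine())"
# prints 'armv7l' on a Pi 2 or Pi 3, and 'armv6l' on a Pi 1 or Pi Zero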

The piwheels monitor interface created by Dave Jones

The wonderful piwheels monitor interface created by Dave

Now, you might be thinking “Why didn’t you just cross-compile?” I really wanted to have full compatibility, and building natively on Pis seemed to be the best way to achieve that. I had easy access to the Pis, and it really didn’t take all that long. Plus, you know, I wanted to eat my own dog food.

You might also be thinking “Why don’t you just apt install python3-numpy?” It’s true that some Python packages are distributed via the Raspbian/Debian archives too. However, if you’re in a virtual environment, or you need a more recent version than the one packaged for Debian, you need pip.

How it works

The piwheels package repository is now running as a service, hosted on a Pi 3 in the Mythic Beasts data centre in London. The pip package in Raspbian Stretch is configured to use piwheels as an additional index, so it falls back to PyPI if we’re missing a package. Just run sudo apt upgrade to get the configuration change. You’ll find that pip installs are much faster now! If you want to use piwheels on Raspbian Jessie, that’s possible too — find the instructions in our FAQs. And now, every time you pip install something, your files come from a web server running on a Raspberry Pi (that capable little machine)!
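Under the hood, the Raspbian Stretch change is just an extra index URL in pip’s system-wide configuration. The sketch below shows roughly what that looks like; the exact file path and URL are assumptions based on the index address shown later in this post, so check your own /etc/pip.conf rather than treating this as authoritative:

# view the system-wide pip configuration
cat /etc/pip.conf
# [global]
# extra-index-url=https://www.piwheels.hostedpi.com/simple/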

Try it for yourself in a virtual environment:

sudo apt install virtualenv python3-virtualenv -y
virtualenv -p /usr/bin/python3 testpip
source testpip/bin/activate
pip install numpy

This takes about 20 minutes on a Pi 3, 2.5 hours on a Pi 1, or just a few seconds on either if you use piwheels.

If you’re interested to see the details, try pip install numpy -v for verbose output. You’ll see that pip discovers two indexes to search:

2 location(s) to search for versions of numpy:
  * https://pypi.python.org/simple/numpy/
  * https://www.piwheels.hostedpi.com/simple/numpy/

Then it searches both indexes for available files. From this list of files, it determines the latest version available. Next it looks for a Python version and architecture match, and then opts for a wheel over a source distribution. If a new package or version is released, piwheels will automatically pick it up and add it to the build queue.
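If you want to see this selection logic play out explicitly, pip’s standard options let you steer it; these are generic pip flags rather than piwheels-specific features:

# install using only the piwheels index (URL as shown in the verbose output above)
pip install numpy --index-url https://www.piwheels.hostedpi.com/simple/

# force a build from the source distribution, ignoring wheels on either index
pip install numpy --no-binary :all: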

A flowchart of how piwheels works

How piwheels works

For users unfamiliar with virtual environments, I should mention that doing this isn’t a requirement — it’s just an easy way of testing installations in a sandbox. Most pip usage will require sudo pip3 install {package}, which installs at a system level.

If you come across any issues with any packages from piwheels, please let us know in a GitHub issue.

Taking piwheels further

We currently provide over 670 000 wheels for more than 96 000 packages, all compiled natively on Raspberry Pi hardware. Moreover, we’ll keep building new packages as they are released.

Note that, at present, we have built wheels for Python 3.4 and 3.5 — we’re planning to add support for Python 3.6 and 2.7. The fact that piwheels is currently missing Python 2 wheels does not affect users: until we rebuild for Python 2, PyPI will be used as normal; it’ll just take longer than installing a Python 3 package for which we have a wheel. But remember, Python 2 end-of-life is less than three years away!

Many thanks to Dave Jones for his contributions to the project, and to Mythic Beasts for providing the excellent hosted Pi service.

Screenshot of the mythic beasts Raspberry Pi 3 server service website

Related/unrelated, check out my poster from the PyCon UK poster session:

A poster about Python and Raspberry Pi

Click to download the PDF!

The post piwheels: making “pip install” fast appeared first on Raspberry Pi.

US Court Grants ISPs and Search Engine Blockade of Sci-Hub

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-grants-isps-and-search-engine-blockade-of-sci-hub-171106/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site.

In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

The broad request was later adopted in a recommendation from Magistrate Judge John Anderson. This triggered a protest from the tech industry trade group CCIA, which represents global tech firms including Google, Facebook, and Microsoft, and which warned against the broad implications. However, this amicus brief was denied.

Just before the weekend, US District Judge Leonie Brinkema issued a final decision which is a clear win for ACS. The publisher was awarded the maximum statutory damages of $4.8 million for 32 infringing works, as well as a permanent injunction.

The injunction is not limited to domain name registrars and hosting companies, but extends to search engines and ISPs too, which can be ordered to stop linking to or offering services to Sci-Hub.

“Ordered that any person or entity in active concert or participation with Defendant Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works,” the injunction reads.

part of the injunction

There is a small difference with the recommendation from the Magistrate Judge. Instead of applying the injunction to all persons “in privity” with Sci-Hub, it now applies to those who are “in active concert or participation” with the pirate site.

The injunction means that Internet providers, such as Comcast, can be requested to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States. The same is true for search engine blocking of copyright-infringing sites.

It’s clear that the affected Internet services will not be happy with the outcome. While the CCIA’s attempt to be heard in the case failed, it’s likely that they will protest the injunction when ACS tries to enforce it.

Previously, Cloudflare objected to a similar injunction where the RIAA argued that it was “in active concert or participation” with the pirate site MP3Skull. Here, Cloudflare countered that the DMCA protects the company from liability for the copyright infringements of its customers, limiting the scope of anti-piracy injunctions.

However, a Florida federal court ruled that the DMCA doesn’t apply in these cases.

It’s likely that ISPs and search engines will lodge similar protests if ACS tries to enforce the injunction against them.

While this case is crucial for copyright holders and Internet services, Sci-Hub itself doesn’t seem too bothered by the blocking prospect or the millions in damages it must pay on paper.

It already owes Elsevier $15 million, which it can’t pay, and a few million more or less doesn’t change anything. Also, the site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

The full order is available here (pdf) and a copy of the injunction can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Spam from Flock

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2095

A couple of weeks ago I received a mail from a site called Flock. It said that some guy had invited me to join their social network. I would expect whoever invites me somewhere to do it in a personal mail, without handing my e-mail address around. However, some people don’t think before acting – one should expect such things.

I wasn’t interested in joining and left that mail unanswered. However, during the next few days I got an avalanche of mails from Flock. Apparently they subscribe every e-mail address they can lay their hands on to their spam.

One of their e-mails contained an unsubscription link. I clicked on it, only to learn that I had been unsubscribed from this invitation, and would continue to receive other e-mails from Flock. (Probably these, or at least some of them, can be unsubscribed from too – after you make an account with Flock and fill in all the personal info they might like to have. Guess what for.)

Naturally, that was the “enough is enough” line. I blocked all mail from Flock for the entire mail hosting that holds my e-mail – happily, I am the one responsible for it. So far, the only reaction has been one thank-you from another victim of Flock whose mail is hosted there.

I am not evil. If Flock sends me a notarized, legally binding declaration that they have stopped all spamming activities, I will happily unblock them. Until then, they will stay on my hosting’s blacklist. Unsubscribing me or other specific people, even completely, doesn’t count. Any attempts of theirs to communicate, other than sending such a declaration, will be automatically deleted before reaching me.

My suggestion to all mail providers around is to do the same. Think about how much money you lose to spam, and decide whether you want those losses to increase or decrease.

(Update: Forgot to add that the “unsubscription” does not unsubscribe you. As expected – spammers are spammers. Strange, eh?)

US Court Disarms Canada’s Global Site Blocking Order Against Google

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-disarms-canadas-global-site-blocking-order-against-google-171103/

Google regularly removes infringing websites from its search results, but the company is also wary of abuse.

When the Canadian company Equustek Solutions requested the company to remove websites that offered unlawful and competing products, it refused to do so globally.

This resulted in a legal battle that came to a climax in June, when the Supreme Court of Canada ordered Google to remove a company’s websites from its search results. Not just in Canada, but all over the world.

With options to appeal exhausted in Canada, Google took the case to a federal court in the US. The search engine requested an injunction to disarm the Canadian order, arguing that a worldwide blocking order violates the First Amendment.

Surprisingly, Equustek decided not to defend itself, and without opposition a California District Court sided with Google yesterday.

During a hearing, Google attorney Margaret Caruso stressed that it should not be possible for foreign countries to implement measures that run contrary to core values of the United States.

The search engine argued that the Canadian order violated Section 230 of the Communications Decency Act, which immunizes Internet services from liability for content created by third parties. With this law, Congress specifically chose not to deter harmful online speech by imposing liability on Internet services.

In an order, signed shortly after the hearing, District Judge Edward Davila concludes that Google qualifies for Section 230 immunity in this case. As such, he rules that the Canadian Supreme Court’s global blocking order goes too far.

“Google is harmed because the Canadian order restricts activity that Section 230 protects. In addition, the balance of equities favors Google because the injunction would deprive it of the benefits of U.S. federal law,” Davila writes.

Rendering the order unenforceable is not just in the interest of Google, the District Court writes. It’s also best for the general public as free speech is clearly at stake here.

“Congress recognized that free speech on the internet would be severely restricted if websites were to face tort liability for hosting user-generated content. It responded by enacting Section 230, which grants broad immunity to online intermediaries,” Judge Davila writes.

“The Canadian order would eliminate Section 230 immunity for service providers that link to third-party websites. By forcing intermediaries to remove links to third-party material, the Canadian order undermines the policy goals of Section 230 and threatens free speech on the global internet.”

The preliminary injunction

The Court signed a preliminary injunction which prevents Equustek from enforcing the Canadian order in the United States, which is exactly what Google was after. Since the Canadian company chose not to represent itself in the US case, this will likely stand.

The ruling is important in the broader scheme. If foreign courts are allowed to grant worldwide blockades, free speech could be severely hampered. Today it’s a relatively unknown Canadian company, but what if the Chinese Government asked Google to block the websites of VPN providers?

A copy of the full order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

We’re switching to a DCO for source code contributions (GitLab blog)

Post Syndicated from jake original https://lwn.net/Articles/738048/rss

The GitLab open-source (and open-core) project hosting site has announced that it is moving away from its Contributor License Agreement (CLA) to a Developers Certificate of Origin (DCO), which is what is used by the Linux kernel, for example, to cover contributions made to its code base. “A Contributor License Agreement (CLA) is the industry standard for open source contributions to other projects, but it’s unpopular with developers, who don’t want to enter into legal terms and are put off by having to review a lengthy contract and potentially give up some of their rights. Contributors find the agreement unnecessarily restrictive, and it’s deterring developers of open source projects from using GitLab. We were approached by Debian developers to consider dropping the CLA, and that’s what we’re doing.” LWN looked at some of the background of this issue back in June.

Kim Dotcom Wins Settlement Over Military-Style Police Raid

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-wins-settlement-military-style-police-raid-171103/

It’s been spoken about thousands of times in the past half-decade but the 2012 raid on Kim Dotcom’s home in New Zealand was extraordinary by any standard.

At the behest of the US Government, 72 police officers – including some from the elite heavily armed Special Tactics Group (STG) – descended on Dotcom’s Coatesville mansion. Two helicopters were used during the raid, footage from which was later released to the public as the scale and nature of the operation became clear.

To be clear, no one in the Dotcom residence had any history of violence. Nevertheless, considerable force was used to attack rooms in the building, all of it aimed at detaining the founder of what was then the world’s most famous file-hosting site. The FBI, it seems, would stop at nothing in pursuit of the man they claimed was the planet’s most notorious copyright infringer.

As the dust settled, it became clear that the overwhelming use of force was not only unprecedented but also completely unnecessary, a point Dotcom himself became intent on pressing home.

The entrepreneur was particularly angry at the treatment received by former wife Mona, who was seven months pregnant with twins at the time. So, in response, the Megaupload founder and his wife sued the police, hoping to hold the authorities to account for their actions.

The case has dragged on for years but this morning came news of a breakthrough. According to information released by Kim Dotcom, the lawsuit has been resolved after a settlement was reached with the police.

“Today, Mona and I are glad to reach a confidential settlement of our case against the New Zealand Police. We have respect for the Police in this country. They work hard and have, with this one exception, treated me and my family with courtesy and respect,” Dotcom said.

“We were shocked at the uncharacteristic handling of my arrest for a non-violent Internet copyright infringement charge brought by the United States, which is not even a crime in New Zealand.”

Dotcom said police could have simply asked to be let in, at which point he could have been arrested. Instead, under pressure from US authorities and “special interests in Hollywood”, they turned the whole event into a massive publicity stunt aimed at pleasing the US.

“The New Zealand Police we know do not carry guns. They try to resolve matters in a non-violent manner, unlike what we see from the United States. We are sad that our officers, good people simply doing their job, were tainted by US priorities and arrogance,” Dotcom said.

“We sued the Police because we believed their military-style raid on a family with children in a non-violent case went far beyond what a civilised community should expect from its police force. New Zealanders deserve and should expect better.”

Kim Dotcom has developed a reputation for fighting back across all aspects of his long-running case, and this particular action was no different. He’d planned to take the case all the way to the High Court but in the end decided that doing so wouldn’t be in the best interests of his family.

Noting that New Zealand has a new government “for the better”, Dotcom said that raking up the past would only serve to further disrupt his family.

“Our children are now settled and integrated safely here into their community and they love it. We do not want to relive past events. We do not want to disrupt our children’s new lives. We do not want to revictimise them. We want them to grow up happy,” he said.

“That is why we chose New Zealand to be our family home in the first place. We are fortunate to live here. Under the totality of the circumstances, we thought settlement was best for our children.”

According to NZ Herald, the Dotcoms aren’t the only ones to have made peace with the police. Other people arrested in 2012, including Dotcom associates Bram van der Kolk and Mathias Ortmann, were paid six-figure sums to settle. The publication speculates that, as the main target of the raid, Dotcom’s settlement amount would have been higher.

But while this matter is now closed, others remain. It was previously determined that Kiwi spy agency the Government Communications Security Bureau (GCSB) unlawfully spied on the Dotcoms over an extended period. Ron Mansfield, New Zealand counsel for the Dotcoms, says that case will continue.

“The GCSB refuses to disclose what it did or the actual private communications it stole. The Dotcoms understandably believe that they are entitled to know this. That action is pending appeal in the Court of Appeal,” he says.

Also before the Court of Appeal is the case to extradite Dotcom and his associates to the United States. That hearing is set for February 2018 but whatever the outcome, a further appeal to the Supreme Court is likely, meaning that Dotcom will remain in New Zealand until 2020, at least.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pirate-Friendly Coinhive’s DNS Hacked, User Hashes Stolen

Post Syndicated from Andy original https://torrentfreak.com/pirate-friendly-coinhives-dns-hacked-user-hashes-stolen-171025/

Just over a month ago, a Javascript cryptocurrency miner was silently added to The Pirate Bay. Noticed by users who observed their CPU usage going through the roof, it later transpired the site was trialing a miner operated by Coinhive.

Many users were disappointed that The Pirate Bay had added the Javascript-based Monero coin miner without their permission. However, it didn’t take long for people to see the potential benefits, with a raft of other sites adding the miner in the hope of generating additional revenue.

Now, however, Coinhive has an unexpected and potentially serious problem to deal with. The company has just revealed that on Monday night its DNS records maintained at Cloudflare were accessed by a third party, allowing an unnamed attacker to redirect user mining traffic to a server they controlled.

“The DNS records for coinhive.com have been manipulated to redirect requests for the coinhive.min.js to a third party server. This third party server hosted a modified version of the JavaScript file with a hardcoded site key. This essentially let the attacker ‘steal’ hashes from our users,” Coinhive said in a statement.

The company hasn’t revealed how long the unauthorized redirect stayed in place, but it appears that all coins mined on sites hosting Coinhive’s script were ‘stolen’ during that period, instead of being credited to their accounts.

Coinhive stresses that no user account information was leaked and that its website and database servers were uncompromised. But while that’s good news, the method that the hackers used to access the company’s DNS provider lay in a basic security error.

Back in 2014, crowdfunding platform Kickstarter – which Coinhive used – fell victim to a security breach. After being advised of the fact by law enforcement officials, Kickstarter shut down unauthorized access, began strengthening its systems, while advising customers to do the same.

While Coinhive did respond to the warning to ensure that its data was safe, something slipped through the net. One piece of information – its Cloudflare account password – remained unchanged after the Kickstarter attack. It now seems the most likely culprit for this week’s DNS breach.

“The root cause for this incident was an insecure password for our Cloudflare account that was probably leaked with the Kickstarter data breach back in 2014,” Coinhive says.

“We have learned hard lessons about security and used 2FA and unique passwords with all services since, but we neglected to update our years old Cloudflare account.”

While not mentioning Coinhive explicitly, Kickstarter warned earlier this month that the 2014 incident may not be completely over. In an update posted on the site Oct 6, Kickstarter noted that some of its customers had recently been hearing more information about the breach from notification service Have I been pwned?.

In the meantime, Coinhive has issued an apology and indicated it will find ways to reimburse sites which have lost revenue as a result of the DNS hack.

“We’re deeply sorry about this severe oversight,” the company said. “Our current plan is to credit all sites with an additional 12 hours of their daily average hashrate. Please give us a few hours to roll this out.”

Based on earlier calculations carried out by TF, The Pirate Bay (if it was mining during the breach) could potentially be owed around $200 for the lost hashes, give or take. After turning off mining in September, the site reactivated it in October, with no opt-out. The situation appears fluid.

While the hack is obviously a disappointment, Coinhive appears to have advised its users quickly and transparently, which under the circumstances is exactly what’s required. The fact that it’s offering compensation to users will also be welcomed.

The breach is the latest controversy to hit the company. Earlier this month, Cloudflare began banning sites which implemented Coinhive mining without informing their users. The CDN company said it considered non-advised mining as malware.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Blockchain? It’s All Greek To Me…

Post Syndicated from Bozho original https://techblog.bozho.net/blockchain-its-all-greek-to-me/

The blockchain hype is huge, the ICO craze (“Coindike”) is generating millions if not billions of “funding” for businesses that claim to revolutionize basically anything.

I’ve been following all of that for a while. I got my first (and only) Bitcoin several years ago, I know how the technology works, I’ve implemented the data structure part, I’ve tried (with varying success) to install an Ethereum wallet almost as soon as Ethereum appeared, and I’ve read and subscribed to newsletters about dozens of projects and new cryptocurrencies, including storj.io, siacoin, namecoin, etc. I would say I’m at least above average in terms of knowledge of how cryptocurrencies, blockchains, smart contracts, the EVM and proof-of-whatever operate. And I’ve voiced my concerns about the technology in general.

Now it’s rant time.

I’ve been reading whitepapers of various projects, I’ve been to various meetups and talks, I’ve been reading the professed future applications of the blockchain, and I have to admit – it’s all Greek to me. I have no clue what these people are talking about. And why would all of that make any sense. I still think I’m not clever enough to understand the upcoming revolution, but there’s also a cynical side of me that says “this is all a scam”.

Why does “X on the blockchain” somehow make it magical and superior to a good old centralized solution? No, spare me the cliches about the “immutable ledger”, “lack of central authority” and the like. These are the phrases that a person learns after reading literally one article about blockchain. Have you actually written anything apart from a complex-sounding whitepaper or a hello-world smart contract? Do you really know how the overlay network works, how the economic incentives behind that network work, how all the cryptography works? Maybe there are many, many people who indeed know that, know it better than me, and are thus able to imagine the business case behind “X on the blockchain”.

I can’t. I can’t see why it would be useful to abandon a centralized database that you can query in dozens of ways, test easily and scale trivially in favour of the clunky, write-only, low-throughput, hard-to-debug privacy nightmare that is any public blockchain. And how do you expect to gain a substantial userbase with an ecosystem where the Windows client for the 2nd most popular blockchain (Ethereum) has been so buggy that I (a software engineer) couldn’t get it to work and sync the whole chain? And why would building a website on top of that clunky, user-unfriendly database have any benefit over a centralized competitor?

Do we all believe that the huge datacenters with guaranteed power backups, regular hardware and network checks, regular backups and, overall, guaranteed redundancy will somehow be beaten by a few thousand machines hosting software whose sole purpose is guaranteeing integrity? Bitcoin has 10 thousand nodes. Ethereum has 22 thousand nodes. And while these nodes are probably very well GPU-equipped, they aren’t supercomputers. Amazon’s AWS has a million servers. How’s that for a comparison? And why would anyone take 22 thousand non-servers seriously? Or even 220 thousand, if we believe in some inevitable growth.

Don’t get me wrong, the technology is really cool. The way tamper-evident data structures (hash chains) were combined with a consensus algorithm, an overlay network and a financial incentive is really awesome. When you add a distributed execution environment, it gets even cooler. But is it suitable for literally everything? I fail to see how.

I’m sure I’m missing something. The fact that many of those whitepapers sound increasingly like Greek to me might hint that I’m just a dumb developer and those enlightened people are really onto something huge. I guess time will tell.

But I happen to be living in a country that saw a transition to capitalism in the years of my childhood. And there were a lot of scams and ponzi schemes that people believed in. Because they didn’t know how capitalism works, how the market works. I’m seeing some similarities – we have no idea how the digital realm really works, and so a lot of scams are bound to appear, until we as a society learn the basics.

Until then – enjoy your ICO, enjoy your tokens, enjoy your big-player competitor with practically the same business model, only on a worse database.

And I hope that after the smoke of hype and fraud clears, we’ll be able to enjoy the true benefits of the blockchain innovation.

The post Blockchain? It’s All Greek To Me… appeared first on Bozho's tech blog.