Tag Archives: ACTA

In Case You Missed These: AWS Security Blog Posts from June, July, and August

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx3KVD6T490MM47/In-Case-You-Missed-These-AWS-Security-Blog-Posts-from-June-July-and-August

In case you missed any AWS Security Blog posts from June, July, and August, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from a tagging limit increase to recording SSH sessions established through a bastion host.


August 16: Updated Whitepaper Available: AWS Best Practices for DDoS Resiliency
We recently released the 2016 version of the AWS Best Practices for DDoS Resiliency Whitepaper, which can be helpful if you have public-facing endpoints that might attract unwanted distributed denial of service (DDoS) activity.

August 15: Now Organize Your AWS Resources by Using up to 50 Tags per Resource
Tagging AWS resources simplifies the way you organize and discover resources, allocate costs, and control resource access across services. Many of you have told us that as the number of applications, teams, and projects running on AWS increases, you need more than 10 tags per resource. Based on this feedback, we now support up to 50 tags per resource. You do not need to take additional action—you can begin applying as many as 50 tags per resource today.

August 11: New! Import Your Own Keys into AWS Key Management Service
Today, we are happy to announce the launch of the new import key feature that enables you to import keys from your own key management infrastructure (KMI) into AWS Key Management Service (KMS). After you have exported keys from your existing systems and imported them into KMS, you can use them in all KMS-integrated AWS services and custom applications.

August 2: Customer Update: Amazon Web Services and the EU-US Privacy Shield
Recently, the European Commission and the US Government agreed on a new framework called the EU-US Privacy Shield, and on July 12, the European Commission formally adopted it. AWS welcomes this new framework for transatlantic data flow. As the EU-US Privacy Shield replaces Safe Harbor, we understand many of our customers have questions about what this means for them. The security of our customers’ data is our number one priority, so I wanted to take a few moments to explain what this all means.

August 2: How to Remove Single Points of Failure by Using a High-Availability Partition Group in Your AWS CloudHSM Environment
In this post, I will walk you through steps to remove single points of failure in your AWS CloudHSM environment by setting up a high-availability (HA) partition group. Single points of failure occur when a single CloudHSM device fails in a non-HA configuration, which can result in the permanent loss of keys and data. The HA partition group, however, allows for one or more CloudHSM devices to fail, while still keeping your environment operational.


July 28: Enable Your Federated Users to Work in the AWS Management Console for up to 12 Hours
AWS Identity and Access Management (IAM) supports identity federation, which enables external identities, such as users in your corporate directory, to sign in to the AWS Management Console via single sign-on (SSO). Now with a small configuration change, your AWS administrators can allow your federated users to work in the AWS Management Console for up to 12 hours, instead of having to reauthenticate every 60 minutes. In addition, administrators can now revoke active federated user sessions. In this blog post, I will show how to configure the console session duration for two common federation use cases: using Security Assertion Markup Language (SAML) 2.0 and using a custom federation broker that leverages the sts:AssumeRole* APIs (see this downloadable sample of a federation proxy). I will wrap up this post with a walkthrough of the new session revocation process.
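For the SAML 2.0 case, the requested duration travels in the SAML assertion itself. A minimal sketch of the relevant attribute (the 43200-second value corresponds to the 12-hour maximum; how you emit this attribute depends on your IdP's configuration):

```xml
<!-- SAML assertion attribute requesting a 12-hour console session -->
<Attribute Name="https://aws.amazon.com/SAML/Attributes/SessionDuration">
  <AttributeValue>43200</AttributeValue>
</Attribute>
```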

July 28: Amazon Cognito Your User Pools is Now Generally Available
Amazon Cognito makes it easy for developers to add sign-up, sign-in, and enhanced security functionality to mobile and web apps. With Amazon Cognito Your User Pools, you get a simple, fully managed service for creating and maintaining your own user directory that can scale to hundreds of millions of users.

July 27: How to Audit Cross-Account Roles Using AWS CloudTrail and Amazon CloudWatch Events
In this blog post, I will walk through the process of auditing access across AWS accounts by a cross-account role. This process links API calls that assume a role in one account to resource-related API calls in a different account. To develop this process, I will use AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda functions. When complete, the process will provide a full audit chain from end user to resource access across separate AWS accounts.
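As a sketch of the core mechanism, the CloudWatch Events rule in such a setup matches CloudTrail-delivered AssumeRole calls; the event pattern below is a minimal, hypothetical example (the Lambda target and cross-account wiring are omitted):

```python
import json

# Hypothetical CloudWatch Events pattern: match AssumeRole calls recorded
# by CloudTrail so a Lambda target can log who assumed the cross-account
# role, and when.
event_pattern = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["sts.amazonaws.com"],
        "eventName": ["AssumeRole"],
    },
}

print(json.dumps(event_pattern, indent=2))
```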

July 25: AWS Becomes First Cloud Service Provider to Adopt New PCI DSS 3.2
We are happy to announce the availability of the Amazon Web Services PCI DSS 3.2 Compliance Package for the 2016/2017 cycle. AWS is the first cloud service provider (CSP) to successfully complete the assessment against the newly released PCI Data Security Standard (PCI DSS) version 3.2, 18 months in advance of the mandatory February 1, 2018, deadline. The AWS Attestation of Compliance (AOC), available upon request, now features 26 PCI DSS certified services, including the latest additions of Amazon EC2 Container Service (ECS), AWS Config, and AWS WAF (a web application firewall). We at AWS are committed to this international information security and compliance program, and adopting the new standard as early as possible once again demonstrates our commitment to information security as our highest priority. Our customers (and customers of our customers) can operate confidently as they store and process credit card information (and any other sensitive data) in the cloud knowing that AWS products and services are tested against the latest and most mature set of PCI compliance requirements.

July 20: New AWS Compute Blog Post: Help Secure Container-Enabled Applications with IAM Roles for ECS Tasks
Amazon EC2 Container Service (ECS) now allows you to specify an IAM role that can be used by the containers in an ECS task, as a new AWS Compute Blog post explains. 

July 14: New Whitepaper Now Available: The Security Perspective of the AWS Cloud Adoption Framework
Today, AWS released the Security Perspective of the AWS Cloud Adoption Framework (AWS CAF). The AWS CAF provides a framework to help you structure and plan your cloud adoption journey, and build a comprehensive approach to cloud computing throughout the IT lifecycle. The framework provides seven specific areas of focus or Perspectives: business, platform, maturity, people, process, operations, and security.

July 14: New Amazon Inspector Blog Post on the AWS Blog
On the AWS Blog yesterday, Jeff Barr published a new security-related blog post written by AWS Principal Security Engineer Eric Fitzgerald. Here’s the beginning of the post, which is entitled, Scale Your Security Vulnerability Testing with Amazon Inspector:

July 12: How to Use AWS CloudFormation to Automate Your AWS WAF Configuration with Example Rules and Match Conditions
We recently announced AWS CloudFormation support for all current features of AWS WAF. This enables you to leverage CloudFormation templates to configure, customize, and test AWS WAF settings across all your web applications. Using CloudFormation templates can help you reduce the time required to configure AWS WAF. In this blog post, I will show you how to use CloudFormation to automate your AWS WAF configuration with example rules and match conditions.

July 11: How to Restrict Amazon S3 Bucket Access to a Specific IAM Role
In this blog post, I show how you can restrict S3 bucket access to a specific IAM role or user within an account using Conditions instead of with the NotPrincipal element. Even if another user in the same account has an Admin policy or a policy with s3:*, they will be denied if they are not explicitly listed. You can use this approach, for example, to configure a bucket for access by instances within an Auto Scaling group. You can also use this approach to limit access to a bucket with a high-level security need.
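The essence of the approach is a Deny statement whose Condition exempts the role's unique principal ID. A minimal sketch (the role ID, account number, and bucket name below are hypothetical placeholders; in practice you look up the role's unique AROA identifier with the IAM API):

```python
import json

# Hypothetical bucket policy: deny all S3 actions on the bucket to every
# principal except sessions of the named role (matched by its unique ID
# prefix) and the account root user.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
        "Condition": {
            "StringNotLike": {
                "aws:userId": [
                    "AROAEXAMPLEROLEID:*",  # sessions of the allowed role
                    "111122223333",         # the account root user
                ]
            }
        },
    }],
}

print(json.dumps(bucket_policy, indent=2))
```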

July 7: How to Use SAML to Automatically Direct Federated Users to a Specific AWS Management Console Page
In this blog post, I will show you how to create a deep link for federated users via the SAML 2.0 RelayState parameter in Active Directory Federation Services (AD FS). By using a deep link, your users will go directly to the specified console page without additional navigation.

July 6: How to Prevent Uploads of Unencrypted Objects to Amazon S3
In this blog post, I will show you how to create an S3 bucket policy that prevents users from uploading unencrypted objects, unless they are using server-side encryption with S3–managed encryption keys (SSE-S3) or server-side encryption with AWS KMS–managed keys (SSE-KMS).
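Such a policy can be sketched as a pair of Deny statements (bucket name hypothetical): one rejects PutObject requests whose encryption header names the wrong algorithm, the other rejects requests that omit the header entirely.

```python
import json

# Hypothetical bucket policy refusing unencrypted uploads.
deny_unencrypted = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyWrongEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
                }
            },
        },
        {
            "Sid": "DenyMissingEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        },
    ],
}

print(json.dumps(deny_unencrypted, indent=2))
```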


June 30: The Top 20 AWS IAM Documentation Pages so Far This Year
The following 20 pages have been the most viewed AWS Identity and Access Management (IAM) documentation pages so far this year. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research. 

June 29: The Most Viewed AWS Security Blog Posts so Far in 2016
The following 10 posts are the most viewed AWS Security Blog posts that we published during the first six months of this year. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

June 25: AWS Earns Department of Defense Impact Level 4 Provisional Authorization
I am pleased to share that, for our AWS GovCloud (US) Region, AWS has received a Defense Information Systems Agency (DISA) Provisional Authorization (PA) at Impact Level 4 (IL4). This will allow Department of Defense (DoD) agencies to use the AWS Cloud for production workloads with export-controlled data, privacy information, and protected health information as well as other controlled unclassified information. This new authorization continues to demonstrate our advanced work in the public sector space; you might recall AWS was the first cloud service provider to obtain an Impact Level 4 PA in August 2014, paving the way for DoD pilot workloads and applications in the cloud. Additionally, we recently achieved a FedRAMP High provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB), also for AWS GovCloud (US), and today’s announcement allows DoD mission owners to continue to leverage AWS for critical production applications.

June 23: AWS re:Invent 2016 Registration Is Now Open
Register now for the fifth annual AWS re:Invent, the largest gathering of the global cloud computing community. Join us in Las Vegas for opportunities to connect, collaborate, and learn about AWS solutions. This year we are offering all-new technical deep-dives on topics such as security, IoT, serverless computing, and containers. We are also delivering more than 400 sessions, more hands-on labs, bootcamps, and opportunities for one-on-one engagements with AWS experts.

June 23: AWS Achieves FedRAMP High JAB Provisional Authorization
We are pleased to announce that AWS has received a FedRAMP High JAB Provisional Authorization to Operate (P-ATO) from the Joint Authorization Board (JAB) for the AWS GovCloud (US) Region. The new Federal Risk and Authorization Management Program (FedRAMP) High JAB Provisional Authorization is mapped to more than 400 National Institute of Standards and Technology (NIST) security controls. This P-ATO recognizes AWS GovCloud (US) as a secure environment on which to run highly sensitive government workloads, including Personally Identifiable Information (PII), sensitive patient records, financial data, law enforcement data, and other Controlled Unclassified Information (CUI).

June 22: AWS IAM Service Last Accessed Data Now Available for South America (Sao Paulo) and Asia Pacific (Seoul) Regions
In December, AWS IAM released service last accessed data, which helps you identify overly permissive policies attached to an IAM entity (a user, group, or role). Today, we have extended service last accessed data to support two additional regions: South America (Sao Paulo) and Asia Pacific (Seoul). With this release, you can now view the date when an IAM entity last accessed an AWS service in these two regions. You can use this information to identify unnecessary permissions and update policies to remove access to unused services.

June 20: New Twitter Handle Now Live: @AWSSecurityInfo
Today, we launched a new Twitter handle: @AWSSecurityInfo. The purpose of this new handle is to share security bulletins, security whitepapers, compliance news and information, and other AWS security-related and compliance-related information. The scope of this handle is broader than that of @AWSIdentity, which focuses primarily on Security Blog posts. However, feel free to follow both handles!

June 15: Announcing Two New AWS Quick Start Reference Deployments for Compliance
As part of the Professional Services Enterprise Accelerator – Compliance program, AWS has published two new Quick Start reference deployments to assist federal government customers and others who need to meet National Institute of Standards and Technology (NIST) SP 800-53 (Revision 4) security control requirements, including those at the high-impact level. The new Quick Starts are AWS Enterprise Accelerator – Compliance: NIST-based Assurance Frameworks and AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST High-Impact Controls Featuring Trend Micro Deep Security. These Quick Starts address many of the NIST controls at the infrastructure layer. Furthermore, for systems categorized as high impact, AWS has worked with Trend Micro to incorporate its Deep Security product into a Quick Start deployment in order to address many additional high-impact controls at the workload layer (app, data, and operating system). In addition, we have worked with Telos Corporation to populate security control implementation details for each of these Quick Starts into the Xacta product suite for customers who rely upon that suite for governance, risk, and compliance workflows.

June 14: Now Available: Get Even More Details from Service Last Accessed Data
In December, AWS IAM released service last accessed data, which shows the time when an IAM entity (a user, group, or role) last accessed an AWS service. This provided a powerful tool to help you grant least privilege permissions. Starting today, it’s easier to identify where you can reduce permissions based on additional service last accessed data.

June 14: How to Record SSH Sessions Established Through a Bastion Host
A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet. Because of its exposure to potential attack, a bastion host must minimize the chances of penetration. For example, you can use a bastion host to mitigate the risk of allowing SSH connections from an external network to the Linux instances launched in a private subnet of your Amazon Virtual Private Cloud (VPC). In this blog post, I will show you how to leverage a bastion host to record all SSH sessions established with Linux instances. Recording SSH sessions enables auditing and can help in your efforts to comply with regulatory requirements.

June 14: AWS Granted Authority to Operate for Department of Commerce and NOAA
AWS already has a number of federal agencies onboarded to the cloud, including the Department of Energy, the Department of the Interior, and NASA. Today we are pleased to announce the addition of two more ATOs (authority to operate) for the Department of Commerce (DOC) and the National Oceanic and Atmospheric Administration (NOAA). Specifically, the DOC will be utilizing AWS for their Commerce Data Service, and NOAA will be leveraging the cloud for their “Big Data Project.” According to NOAA, the goal of the Big Data Project is to “create a sustainable, market-driven ecosystem that lowers the cost barrier to data publication. This project will create a new economic space for growth and job creation while providing the public far greater access to the data created with its tax dollars.”


June 2: How to Set Up DNS Resolution Between On-Premises Networks and AWS by Using Unbound
In previous AWS Security Blog posts, Drew Dennis covered two options for establishing DNS connectivity between your on-premises networks and your Amazon Virtual Private Cloud (Amazon VPC) environments. His first post explained how to use Simple AD to forward DNS requests originating from on-premises networks to an Amazon Route 53 private hosted zone. His second post showed how you can use Microsoft Active Directory (also provisioned with AWS Directory Service) to provide the same DNS resolution with some additional forwarding capabilities. In this post, I will explain how you can set up DNS resolution between your on-premises DNS with Amazon VPC by using Unbound, an open-source, recursive DNS resolver. This solution is not a managed solution like Microsoft AD and Simple AD, but it does provide the ability to route DNS requests between on-premises environments and an Amazon VPC–provided DNS.
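As a rough illustration of the idea (zone names and addresses here are hypothetical; see the full post for the complete setup), an Unbound instance in the VPC can forward on-premises zones to on-premises resolvers and everything else to the VPC-provided DNS:

```
# /etc/unbound/unbound.conf (sketch)
server:
    interface: 0.0.0.0
    access-control: 10.0.0.0/8 allow

# Queries for the on-premises domain go to the on-premises DNS servers
forward-zone:
    name: "corp.example.com."
    forward-addr: 192.168.10.2

# Everything else goes to the Amazon-provided VPC resolver (VPC base + 2)
forward-zone:
    name: "."
    forward-addr: 10.0.0.2
```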

June 1: How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker
In this blog post, I will show you how to store secrets on Amazon S3, and use AWS IAM roles to grant access to those stored secrets using an example WordPress application deployed as a Docker image using ECS. Using IAM roles means that developers and operations staff do not have the credentials to access secrets. Only the application and staff who are responsible for managing the secrets can access them. The deployment model for ECS ensures that tasks are run on dedicated EC2 instances for the same AWS account and are not shared between customers, which gives sufficient isolation between different container environments.

If you have comments about any of these posts, please add your comments in the "Comments" section of the appropriate post. If you have questions about or issues implementing the solutions in any of these posts, please start a new thread on the AWS IAM forum.

– Craig

I’ve bought some more awful IoT stuff

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/43486.html

I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I’ve bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I’d oblige.

Today we’re going to be talking about the KanKun SP3, a plug that’s been around for a while. The idea here is pretty simple – there’s lots of devices that you’d like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else’s home.

The KanKun has all of these features and a bunch more, although when I say “features” I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn’t work. I connected to the plug’s network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn’t created. Apparently this isn’t permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn’t work, but that’s because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it’s running. I didn’t really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password (“p9z34c”) and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there’s a whole community of people playing with these plugs, and it’s common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that’s a great question and oh good lord do things start getting bad quickly at this point.

I’d grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that’s surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn’t find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That, plus the fact that every payload length was a multiple of 16 bytes, strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and each chunk is encrypted independently with the same key, so the same plaintext block will always result in the same encrypted output. This implied that the underlying plaintexts were substantially similar and that the encryption key was static.
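The telltale repeating-block pattern is easy to reproduce. The sketch below uses a toy deterministic block transform (a keyed hash standing in for AES with a static key – this is only meant to illustrate the ECB property, not to be a cipher): repeated 16-byte plaintext blocks yield repeated ciphertext blocks.

```python
import hashlib

KEY = b"hypothetical-static-key"

def toy_ecb_encrypt(plaintext: bytes) -> bytes:
    """Toy stand-in for AES-ECB: each 16-byte block is transformed
    independently and deterministically, just as ECB mode does."""
    assert len(plaintext) % 16 == 0
    out = b""
    for i in range(0, len(plaintext), 16):
        block = plaintext[i:i + 16]
        out += hashlib.sha256(KEY + block).digest()[:16]
    return out

msg = b"A" * 16 + b"B" * 16 + b"A" * 16  # first and third blocks identical
ct = toy_ecb_encrypt(msg)

# Identical plaintext blocks produce identical ciphertext blocks...
assert ct[0:16] == ct[32:48]
# ...while different blocks differ, so packet structure leaks through.
assert ct[0:16] != ct[16:32]
```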

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ASCII and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device’s IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB – since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn’t have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.
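The round trip can be sketched as follows, with a toy XOR-keystream cipher standing in for the AES layer (the key, MAC address, and message format are all hypothetical – only the shape of the exchange matters):

```python
import hashlib
import random

KEY = b"hypothetical-shared-key"

def keystream(n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(KEY + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_crypt(data: bytes) -> bytes:
    # XOR with a key-derived stream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(len(data))))

# Plug: issue an encrypted challenge containing a random number.
challenge = random.randrange(2 ** 32)
challenge_packet = toy_crypt(f"challenge {challenge}".encode())

# App: decrypt the challenge, build a response, re-encrypt it.
n = int(toy_crypt(challenge_packet).decode().split()[1])
response_packet = toy_crypt(f"response {n} 00:11:22:33:44:55 open".encode())

# Plug: accept the command only if the echoed number matches.
assert toy_crypt(response_packet).decode().split()[1] == str(challenge)
```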

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
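The consequence is easy to model: an unseeded PRNG behaves like one seeded with a constant, so every boot replays the same challenge sequence (the seed value here is an arbitrary stand-in for whatever fixed state rand() starts from):

```python
import random

class Plug:
    """Models a device that calls rand() but never srand():
    the PRNG starts from the same state on every boot."""
    def __init__(self):
        self.rng = random.Random(0)  # fixed "seed" models the missing srand()
    def challenge(self) -> int:
        return self.rng.randrange(2 ** 32)

first_boot = Plug()
second_boot = Plug()
seq1 = [first_boot.challenge() for _ in range(5)]
seq2 = [second_boot.challenge() for _ in range(5)]

# Same challenges after every reboot: recorded responses replay cleanly.
assert seq1 == seq2
```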

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started “wan” rather than “lan”. The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That’s not really a great deal of authentication. The protocol permits a password, but the app doesn’t insist on it – some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn’t take that long and would tell you how many of these devices are out there. If they’re using the default password, that’s enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution – the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn’t seem to be true of the daemon that’s listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that’s a thing. It also downloads firmware updates over http and doesn’t appear to check signatures on them, so there’s the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it’s in China. Sorry, Western Australia.
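The injection pattern described (untrusted configuration data interpolated into a system() call) can be sketched safely with echo standing in for the real configuration command; shlex.quote shows the fix:

```python
import shlex
import subprocess

def configure_wifi_vulnerable(ssid: str) -> str:
    # Mirrors the plug's bug: user input pasted straight into a shell line.
    cmd = f"echo configuring ssid {ssid}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

def configure_wifi_safe(ssid: str) -> str:
    # Quoting the input keeps the semicolon from starting a new command.
    cmd = f"echo configuring ssid {shlex.quote(ssid)}"
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

malicious = "home; echo INJECTED"
# Vulnerable path: the injected echo runs, producing a second output line.
assert len(configure_wifi_vulnerable(malicious).splitlines()) == 2
# Safe path: the whole string is treated as a single literal argument.
assert len(configure_wifi_safe(malicious).splitlines()) == 1
```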

It’s running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn’t give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I’ve wondered is whether it’s not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren’t restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There’s no rate-limiting on the server, so a weak password will be broken pretty quickly. It’s also infringing my copyright, so I’d recommend against it on that point alone.


Announcing Two New AWS Quick Start Reference Deployments for Compliance

Post Syndicated from Lou Vecchioni original https://blogs.aws.amazon.com/security/post/Tx1AAQ3GYTYNVSC/Announcing-Two-New-AWS-Quick-Start-Reference-Deployments-for-Compliance

As part of the Professional Services Enterprise Accelerator – Compliance program, AWS has published two new Quick Start reference deployments to assist federal government customers and others who need to meet National Institute of Standards and Technology (NIST) SP 800-53 (Revision 4) security control requirements, including those at the high-impact level. The new Quick Starts are AWS Enterprise Accelerator – Compliance: NIST-based Assurance Frameworks and AWS Enterprise Accelerator – Compliance: Standardized Architecture for NIST High-Impact Controls Featuring Trend Micro Deep Security. These Quick Starts address many of the NIST controls at the infrastructure layer. Furthermore, for systems categorized as high impact, AWS has worked with Trend Micro to incorporate its Deep Security product into a Quick Start deployment in order to address many additional high-impact controls at the workload layer (app, data, and operating system). In addition, we have worked with Telos Corporation to populate security control implementation details for each of these Quick Starts into the Xacta product suite for customers who rely upon that suite for governance, risk, and compliance workflows.

These two Quick Starts are designed to help various customers, including those who deploy systems that must:

  • Go through a NIST-based assessment and authorization (A&A).
  • Meet NIST SP 800-171 requirements related to Controlled Unclassified Information (CUI).
  • Provide Trusted Internet Connection (TIC) capabilities.
  • Meet Department of Defense (DoD) Cloud Security Requirements Guide (SRG) requirements for levels 4–5.

Each Quick Start builds a recommended architecture which, when deployed as a package, provides a baseline AWS security-related configuration, and for the NIST high-impact Quick Start, includes a Trend Micro Deep Security configuration. The architectures are instantiated by these Quick Starts through sets of nested AWS CloudFormation templates and user data scripts that build an example environment with a two-VPC, multi-tiered web service. The NIST high-impact version launches Trend Micro Deep Security and deploys a single agent with relevant security configurations on all Amazon EC2 instances within the architecture. For more information about Deep Security, go to Defend your AWS workloads.
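The nested-stack pattern itself is simple: a parent template instantiates child templates via AWS::CloudFormation::Stack resources. A minimal hypothetical sketch (the template URLs and logical names are invented for illustration):

```python
import json

# Hypothetical parent template: each child stack is a nested
# AWS::CloudFormation::Stack resource pointing at its own template.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/example-bucket/network.template"
            },
        },
        "ApplicationStack": {
            "Type": "AWS::CloudFormation::Stack",
            "DependsOn": "NetworkStack",  # build the VPCs before the workload
            "Properties": {
                "TemplateURL": "https://s3.amazonaws.com/example-bucket/application.template"
            },
        },
    },
}

print(json.dumps(parent_template, indent=2))
```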

The Quick Starts also include:

  • AWS Identity and Access Management (IAM) resources – Policies, groups, roles, and instance profiles.
  • Amazon S3 buckets – Encrypted web content, logging, and backup.
  • A bastion host for troubleshooting and administration.
  • An encrypted Amazon RDS database instance running in multiple Availability Zones.
  • A logging/monitoring/alerting configuration that makes use of AWS CloudTrail, Amazon CloudWatch, and AWS Config Rules.

The recommended architecture supports a wide variety of AWS best practices (all of which are detailed in the document) that include the use of multiple Availability Zones, isolation using public and private subnets, load balancing, and auto scaling.

Both Quick Start packages include a deployment guide with detailed instructions, and a security controls matrix that describes how NIST SP 800-53 controls are addressed by the deployment. The matrix should be reviewed by your IT security assessors and risk decision makers so that they can understand the extent of the implementation of the controls within the architecture. The security controls matrix also identifies the specific resources within the CloudFormation templates that affect each control, and contains cross-references to:

Though it is impossible to automatically implement all system-specific controls within a baseline architecture, these two Quick Start packages can help you address and document a significant portion of the technical controls (including up to 38% of the NIST high-impact controls, in the case of the NIST high-impact Quick Start variant).

In addition, Telos Corporation has integrated the controls compliance and guidance information from these packages into its Xacta product. This makes it simpler for you to inherit this information for your IT governance, risk, and compliance programs, and to document the differences or changes in your security compliance posture. For users of the Xacta product suite, this integration can reduce the effort required to produce bodies of evidence for A&A activities when leveraging AWS Cloud infrastructure. For more information about Telos and Xacta, see Telos and Amazon Web Services: Accelerating Secure and Compliant Cloud Deployments.

To access these new Quick Starts, see the Compliance, security, and identity management section on the AWS Quick Start Reference Deployments page.

If you have questions about these Quick Starts, please contact the AWS Compliance Accelerator team.

– Lou

MPAA Lobbyist / SOPA Sponsor to Draft Democratic Party Platform

Post Syndicated from Andy original https://torrentfreak.com/mpaa-lobbyist-sopa-sponsor-to-draft-democratic-party-platform-160531/

Last week Hillary Clinton, Bernie Sanders and Democratic National Committee Chair Rep. Debbie Wasserman Schultz chose a panel of individuals to draft the party’s platform.

As previously reported, 15 were selected, with six chosen by Clinton, five chosen by Bernie Sanders and four chosen by Wasserman Schultz. While other publications will certainly pick over the bones of the rest of the committee, one in particular stands out as interesting to TF readers.

Howard L. Berman is an attorney and former U.S. Representative. He’s employed at Covington & Burling as a lobbyist and represents the MPAA on matters including “Intellectual property issues in trade agreements, bilateral investment treaties, copyright, and related legislation.”

It will come as no surprise then that the major studios have been donors throughout Berman’s political career. As shown in the image below, the top five contributors are all major movie companies.


Born in 1941, Berman’s work with the film industry earned him the nickname “the congressman from Hollywood” and over the years he’s been at the root of some of the most heated debates over the protection of intellectual property.

In 2007 and as later confirmed by Wikileaks, Berman was one of the main proponents of ACTA, the Anti-Counterfeiting Trade Agreement.

Just five short years later Berman was at the heart of perhaps the biggest copyright controversy the world has ever seen when he became a co-sponsor of the Stop Online Piracy Act (SOPA).

“The theft of American Intellectual Property not only robs those in the creative chain of adequate compensation, but it also stunts potential for economic growth, cheats our communities out of good paying jobs, and threatens future American innovation,” Berman said in the run-up to SOPA.

While these kinds of soundbites are somewhat common, it’s interesting to note that Berman showed particular aggression towards Google during hearings focusing on SOPA. On November 16, 2011, Berman challenged the search giant over its indexing of The Pirate Bay.

Insisting that there “is no contradiction between intellectual property rights protection and enforcement ensuring freedom of expression on the Internet,” Berman said that Google’s refusal to delist the entire site was unacceptable.

“All right. Well, explain to me this one,” Berman demanded of Google policy counsel Katherine Oyama.

“The Pirate Bay is a notorious pirate site, a fact that its founders proudly proclaim in the name of the site itself. In fact, the site’s operators have been criminally convicted in Europe. And yet… U.S.-Google continues to send U.S. consumers to the site by linking to the site in your search results. Why does Google refuse to de-index the site in your search results?” he said.

Oyama tried to answer, noting that Google invests tens of millions of dollars into the problem. “We have hundreds of people around the world that work on it,” she said. “When it comes to copyright….”

Berman didn’t allow her to finish, repeating his question about delisting the whole site, again and again. Before Berman’s time ran out, Oyama was interrupted several more times while trying to explain that the DMCA requires takedowns of specific links, not entire domains. Instead, Berman suggested that Oyama should “infuse herself” with the notion that Google wanted to stop “digital theft.”

“[T]he DMCA is not doing the job. That is so obvious,” he said. “[Y]ou cannot look at what is going on since the passage of the DMCA and say Congress got it just right. Maintain the status quo.”

These arguments continue today in the “takedown, staydown” debate surrounding the ongoing review of the DMCA, with Hollywood lining up on one side and Google being held responsible for the actions of others on the other. But simply complaining about the DMCA is a little moderate for Berman.

Almost one and a half decades ago in the wake of Napster and before the rise of BitTorrent, Berman had a dream of dealing with peer-to-peer file-sharing by force. In 2002 he proposed the Peer To Peer Piracy Prevention Act, which would have allowed copyright holders to take extraordinary technical measures against file-sharers in order to stop the unauthorized distribution of their content.

H.R.5211 sought to amend Federal copyright law to protect a copyright owner from liability in any criminal or civil action “for impairing, with appropriate technology, the unauthorized distribution, display, performance, or reproduction of his or her copyrighted work on a publicly accessible peer-to-peer file trading network.”

The bill didn’t deal in specifics, but “impairing” was widely believed to be a euphemism for DDoS and poisoning attacks on individual file-sharers in order to make sharing impossible from their computers.

At the time “shared-folder” type sharing apps were still popular so bombarding networks with fake and badly named files would also have been fair game, although distributing viruses and malware were not on the table. Eventually, however, the bill died.

Berman, on the other hand, appears to be very much alive and will be soon helping to draft the Democratic Party platform. On past experience his input might not be too difficult to spot.


Perlin noise

Post Syndicated from Eevee original https://eev.ee/blog/2016/05/29/perlin-noise/

I used Perlin noise for the fog effect and title screen in Under Construction. I tweeted about my efforts to speed it up, and several people replied either confused about how Perlin noise works or not clear on what it actually is.

I admit I only (somewhat) understand Perlin noise in the first place because I’ve implemented it before, for flax, and that took several days of poring over half a dozen clumsy explanations that were more interested in showing off tech demos than actually explaining what was going on. The few helpful resources I found were often wrong, and left me with no real intuitive grasp of how and why it works.

Here’s the post I wish I could’ve read in the first place.

What Perlin noise is

In a casual sense, “noise” is random garbage. Here’s some visual noise.

white noise

This is white noise, which roughly means that all the pixels are random and unrelated to each other. The average of all these pixels should be #808080, a medium gray — it turns out to be #848484, which is pretty close.

Noise is useful for generating random patterns, especially for unpredictable natural phenomena. That image above might be a good starting point for creating, say, a gravel texture.

However, most things aren’t purely random. Smoke and clouds and terrain may look like they have elements of randomness, but they were created by a set of very complex interactions between lots of tiny particles. White noise is defined by having all the particles (or pixels) not depend on each other. To generate something more interesting than gravel, we need a different kind of noise.

That noise is often Perlin noise, which looks like this.

Perlin noise

That hopefully looks familiar, even if only as a Photoshop filter you tried out once. (I created it with GIMP’s Filters → Render → Clouds → Solid Noise.)

The most obvious difference is that it looks cloudy. More technically, it’s continuous — if you zoom in far enough, you’ll always see a smooth gradient. There are no jarring transitions from black to white. That makes it work surprisingly well when you want something “random”, but not, you know… too random. Here’s a quick sky, made from the exact same image.

Clouds derived from Perlin noise

All I did was add a layer of solid blue underneath; use Levels to chop off the darkest parts of the noise; and use what was left as a transparent white on top. It could be better, but it’s not bad for about ten seconds of work. It even tiles seamlessly!

Perlin noise from scratch

You can make Perlin noise in any number of dimensions. The above image is of course 2-D, but it’s much easier to explain in 1-D, where there’s only a single input and a single output. That makes it easy to illustrate with a regular old graph.

First, at every integer x = 0, 1, 2, 3, …, choose a random number between -1 and 1. This will be the slope of a line at that point, which I’ve drawn in light blue.

It doesn’t really matter how many points you choose. Each segment — the space between two tick marks — works pretty much the same way. As long as you can remember all the slopes you picked, you can extend this line as far as you want in either direction. These sloped lines are all you need to make Perlin noise.

Any given x lies between two tick marks, and thus between two sloped lines. Call the slopes of those lines a and b. The function for a straight line can be written as m(x – x₀), where m is the slope and x₀ is the x-value where the line crosses the x-axis.

Armed with this knowledge, it’s easy to find out where those two sloped lines cross a given x. For simplicity, I’ll assume the tick mark on the left is at x = 0; that produces values of ax and b(x – 1) (because the rightmost line crosses the axis at 1, not 0).

I’ve drawn a bunch of example points in orange. You can see how they follow the two sloped lines on either side.

The question now becomes: how can these pairs of points combine into a smooth curve? The most obvious thing is to average them, but a quick look at the graph shows that that won’t work. The “average” of two lines is just a line drawn halfway between them, and the average line for each segment wouldn’t even touch its neighbor.

We need something that treats the left end as more important when we’re closer to the left end, and the right end as more important when we’re closer to the right end. The simplest way to do this is linear interpolation (sometimes abbreviated as “lerp”).

Linear interpolation is pretty simple: if you’re t of the way between two extremes A and B, then the linear interpolation is A + t(B – A) or, equivalently, A(1 – t) + Bt. For t = 0, you get A; for t = 1, you get B; for t = ½, you get ½(A + B), the average.
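In code, that’s a one-liner (plain Python; the function name is mine):

```python
def lerp(a, b, t):
    """Linear interpolation: t=0 gives a, t=1 gives b, t=0.5 gives the average."""
    return a + t * (b - a)
```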

Let’s give that a try. I’ve marked the interpolated points in red.

Not a bad start. It’s not quite Perlin noise yet; there’s one little problem, though it may not be immediately obvious from just these points. I can make it more visible, if you’ll forgive a brief tangent.

Each of these curves is a parabola, given by (a – b)(x – x²). I wanted to be able to draw this exactly, rather than approximate it with some points, which meant figuring out how to convert it to a Bézier curve.

Bézier curves are the same curves you see used for path drawing in vector graphics editors like Inkscape, Illustrator, or Flash. (Or in the SVG vector format, which I used to make all these illustrations.) You choose start and end points, and the program provides an extra handle attached to each point. Dragging the handles around changes the shape of the curve. Those are cubic Bézier curves, so called because the actual function contains an x³ term, but you can have a Bézier curve of any order.

As I looked into the actual math behind Bézier curves, I learned a few interesting things:

  1. SVG supports quadratic Bézier curves! That’s convenient, since I have a quadratic function.

  2. Bézier curves are defined by repeated linear interpolation! In fact, there’s such a thing as a linear Bézier “curve”, which is linear interpolation — a straight line between two chosen points. A quadratic curve has three points; you interpolate between the first/second and second/third, then interpolate the results together. Cubic has four points and goes one step further, and so on. Fascinating.

  3. The parabolas formed by the red dots are very easy to draw with Bézier curves — in fact, Perlin noise bears a striking similarity to Bézier curves! It makes sense, in a way. The first step produced values of ax and b(x – 1), which can be written -b(1 – x), and that’s reminiscent of linear interpolation. The second step was a straightforward round of linear interpolation. Two rounds of linear interpolation is how you make a quadratic Bézier curve.

I think I have a geometric interpretation of what happened here.

Pick a segment. It has a “half”-line jutting into it from each end. Mirror them both, creating two half-lines on each end. Add their slopes together to make two new lines, which are each other’s mirror images, because they were both formed from one line plus a reflection of the other. Draw a curve between those two lines.

I’ve illustrated this below. The dotted lines are the mirrored images; the darker blue lines are the summed new lines; the darker blue points are their intersections (and the third handle for a quadratic Bézier curve); and the red arc is the exact curve following the red points. If you know a little calculus, you can confirm that the slope on the left side is a – b and the slope on the right side is b – a, which indeed makes them mirror images.

Hopefully the problem is now more obvious: these curves don’t transition smoothly into each other. There’s a distinct corner at each tick mark. The one at the end, where both curves are pointing downwards, is particularly bad.

Linear interpolation, as it turns out, is not quite enough. Even as we get pretty close to one endpoint, the other endpoint still has a fairly significant influence, which pulls the slope of the curve away from the slope of the original random line. That influence needs to drop away more sharply towards the ends.

Thankfully, someone has already figured this out for us. Before we do the linear interpolation, we can bias t with the smoothstep function. It’s an S-shaped curve that’s similar to the straight diagonal line y = x, but at 0 and 1 it flattens out. Towards the middle, it doesn’t change a lot — ½ just becomes ½ — but it adds a strong bias near the endpoints. Put in 0.9, and you’ll get 0.972.
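The usual smoothstep is the cubic 3t² – 2t³, which is trivial to write down and easy to sanity-check against the numbers above:

```python
def smoothstep(t):
    """S-shaped bias curve: 3t^2 - 2t^3. Flat at 0 and 1, unchanged at 1/2."""
    return t * t * (3 - 2 * t)
```

Put in 0.9 and you do indeed get 0.972; ½ stays ½.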

The result is quartic — t⁴, which SVG’s cubic Béziers can’t exactly trace — so the gray line is a rough approximation. I’ve compensated by adding many more points, which are in black. Those points came out of Ken Perlin’s original noise1 function, by the way; this is true Perlin noise.

The new dots are close to the red quadratic curves, but near the tick marks, they shift to follow the original light blue slopes. The midpoints have exactly the same values, because smoothstep doesn’t change ½.

(The function for each segment is now 2(a – b)x⁴ – (3a – 5b)x³ – 3bx² + ax, rather more of a mouthful. Feel free to differentiate and convince yourself that the slopes at the endpoints are, in fact, exactly a and b.)
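Putting the pieces together, here’s a minimal 1-D sketch (plain Python; it assumes a `slopes` list holding one random value in [-1, 1] per integer tick mark — the names are mine):

```python
def perlin_1d(x, slopes):
    """1-D Perlin noise; slopes[i] is the random slope chosen at x = i."""
    x0 = int(x)                # left tick mark
    t = x - x0                 # position within the segment, 0..1
    a, b = slopes[x0], slopes[x0 + 1]
    left = a * t               # value of the left sloped line at x
    right = b * (t - 1)        # value of the right sloped line at x
    s = t * t * (3 - 2 * t)    # smoothstep bias
    return left + s * (right - left)
```

Note that at every tick mark the output is exactly zero, which matches the graphs: both sloped lines cross the axis right at their own tick.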

If you want to get a better handle on how this feels, here’s that same graph, but live! Click and drag to mess with the slopes.


You may have noticed that these valleys and hills, while smooth, don’t look much like the cloudy noise I advertised in the beginning.

In a shocking twist, the cloudiness isn’t actually part of Perlin noise. It’s a clever thing you can do on top of Perlin noise.

Create another Perlin curve (or reuse the same one), but double the resolution — so there’s a randomly-chosen line at x = 0, ½, 1, …. Then halve the output value, and add it on top of the first curve. You get something like this.

The entire graph has been scaled down to half size, but extended to the full range of the first graph.

You can repeat this however many times you want, making each graph half the size of the previous one. Each separate graph is called an octave (from music, where one octave has twice the frequency of the previous), and the results look nice and jittery after four or five octaves.
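The octave-stacking step can be sketched as a tiny wrapper around any 1-D noise function (names are mine; `noise` is assumed to be something like the `perlin_1d` idea above):

```python
def fractal(x, noise, octaves=4):
    """Sum several octaves of a 1-D noise function: each octave doubles
    the frequency and halves the amplitude of the previous one."""
    total = 0.0
    for i in range(octaves):
        freq = 2 ** i
        total += noise(x * freq) / freq
    return total
```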

Extending to 2-D

The idea is the same, but instead of a line with some slopes on it, the input is a grid. Also, to extend the idea of “slope” into two dimensions, each grid point has a vector.

I shortened the arrows so they’d actually fit in the image and not overlap each other, but these are intended to be unit vectors — that is, every arrow actually has a length of 1, which means it only carries information about direction and not distance. I’ll get into actually generating these later.

It looks similar, but there’s a crucial difference. Before, the input was horizontal — the position on the x-axis — and the output was vertical. Here, the input is both directions — where are we?

A rough algorithm for what we did before might look like this:

  1. Find the distance to the nearest point to the left, and multiply by that point’s slope.
  2. Do the same for the point to the right.
  3. Interpolate those two values together.

This needs a couple major changes to work in two dimensions. Each point now has four neighboring grid points, and there still needs to be some kind of distance multiplication.

The solution is to use dot products. If you have two vectors (a, b) and (c, d), their dot product is the sum of the pairwise products: ac + bd. In the above diagram, each surrounding grid point now has two vectors attached to it: the random one, and one pointing at the chosen point. The distance multiplication can just be the dot product of those two vectors.

But, ah, why? Good question.

The geometric explanation of the dot product is that it’s the product of the lengths of the two vectors, times the cosine of the angle between them. That has never in my life clarified anything, and it’s impossible to make a diagram out of, but let’s think about this for a moment.

All of the random vectors are unit vectors, so their lengths are 1. That contributes nothing to the product. So what’s left is the (scalar) distance from a chosen point to the grid point, times the cosine of the angle between these two vectors.

Cosine tells you how small an angle is — or in this case, how close together two vectors are. If they’re pointing the same direction, then the cosine is 1. As they move farther apart, the cosine gets smaller: at right angles it becomes zero (and the dot product is always zero), and if they’re pointing in opposite directions then the cosine falls to -1.

To visualize what this means, I plotted only the dot product between a point and its nearest grid point. This is the equivalent of the orange dots from the 1-D graph.

That’s pretty interesting. Remember, this is a kind of top-down view of a 3-D graph. The x- and y-coordinates are the input, the point of our choosing, and the z-coordinate is the output. It’s usually drawn with a shade of gray, as I’ve done here, but you can also picture it as a depth. White points are coming out of the screen towards you, and black points are deeper into the screen.

In that sense, each grid point has a sloped plane stuck to it, versus the sloped lines we had with 1-D noise. The vector points towards the white end, the end that sticks out of the screen towards you. For points closer to the grid point, the dot product is close to zero, just like the 1-D graph; for points further away, the result is usually more extreme.

You may not be surprised to learn at this point that the random vectors are usually referred to as gradients.

Now, each point has four dot products, which need to be combined together somehow. Linear interpolation only works for exactly two values, so instead, we have to interpolate them in pairs. One round of interpolation will get us down to two values, then another round will produce a single value, which is the output.

I tried to make a diagram showing an intermediate state, to help demonstrate how the multiple gradients combine, but I couldn’t quite figure out how to make it work sensibly.

Instead, please accept this interactive Perlin noise grid, where you can once again drag the arrows around to see how it affects the resulting pattern.
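The whole 2-D computation — four dot products, then two rounds of interpolation — fits in a short sketch (plain Python; it assumes a `gradients` dict mapping each integer grid point to a unit vector, and the names are mine):

```python
def perlin_2d(x, y, gradients):
    """2-D Perlin noise; gradients maps each integer grid point (gx, gy)
    to the random unit vector chosen there."""
    def smoothstep(t):
        return t * t * (3 - 2 * t)

    def lerp(a, b, t):
        return a + t * (b - a)

    x0, y0 = int(x), int(y)
    # Dot product of each corner's gradient with the offset from that
    # corner to the chosen point.
    dots = {}
    for gx in (x0, x0 + 1):
        for gy in (y0, y0 + 1):
            gvx, gvy = gradients[(gx, gy)]
            dots[(gx, gy)] = gvx * (x - gx) + gvy * (y - gy)

    sx, sy = smoothstep(x - x0), smoothstep(y - y0)
    # Collapse the horizontal axis first, then the vertical one.
    top = lerp(dots[(x0, y0)], dots[(x0 + 1, y0)], sx)
    bottom = lerp(dots[(x0, y0 + 1)], dots[(x0 + 1, y0 + 1)], sx)
    return lerp(top, bottom, sy)
```

As in 1-D, the output at any grid point is exactly zero, since the offset vector there has length zero.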

3-D and beyond

From here, it’s not too hard to extend the idea to any number of dimensions. Pick more unit vectors in the right number of dimensions, calculate more dot products, and interpolate them all together.

One point to keep in mind: each round of interpolation should “collapse” an axis.

Consider 2-D, which requires dot products for each of the four neighboring points: top-left, top-right, bottom-left, bottom-right. There are two dimensions, so two rounds of interpolation are needed.

You might interpolate top-left with top-right and bottom-left with bottom-right; that would “collapse” the horizontal axis and leave only two values, top and bottom. Or you might do it the other way, top-left with bottom-left and top-right with bottom-right, collapsing the vertical axis and leaving left and right. Either is fine.

What you definitely can’t do is interpolate top-right with bottom-left and top-left with bottom-right. You need to know how much to interpolate by, which means knowing how far between points you are. Interpolating horizontally is fine, because we know our horizontal distance from that grid point. Interpolating vertically is fine, because we know our vertical distance from that grid point. Interpolating… diagonally? What does that mean? What’s our value of t? And how on Earth do we interpolate the two resulting values?

This is a little tricky to keep track of in three or more dimensions, so keep an eye out.


The idea behind Perlin noise is pretty simple, and you can modify almost any of it for various effects.

Picking the gradients

I know of three different ways to choose the gradients.

The most obvious way is to pick n random numbers from -1 to 1 (for n dimensions), use the Pythagorean theorem to get the length of that vector, then divide all the numbers by the length. Voilà, you have a unit vector. This is what Ken Perlin’s original code did.

That’ll work, but it’ll produce more vectors pointing diagonally than vectors pointing along an axis, for the same reason that rolling two dice produces 7 more than anything else. You really want a random angle, not a random coordinate.

For two dimensions, that’s easy: pick a random angle! Roll a single number between 0 and 2π. Call it θ. Your vector is (cos θ, sin θ). Done.

For three dimensions… uh.

Well, I looked this up on MathWorld and found a solution that I believe works in any dimension. It works exactly the same as before: pick n random numbers and scale down to length 1. The difference is that the numbers have to be normally distributed — that is, picked from a bell curve. In Python, there’s random.gauss() for this; in C, well, I think you can fake it by picking a lot of plain random numbers and adding them together. Again, same idea as rolling multiple dice.

This doesn’t work so well for one dimension, where dividing by the length will always give you 1 or -1, and your noise will look terrible. In that case, always just pick a single random number from -1 to 1.
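A sketch of the normal-distribution method (Python has `random.gauss` built in; the retry guard is my own defensive touch for the vanishingly rare near-zero draw):

```python
import math
import random

def random_unit_vector(n):
    """A uniformly-directed random unit vector in n dimensions, made by
    normalizing n normally-distributed coordinates."""
    while True:
        v = [random.gauss(0, 1) for _ in range(n)]
        length = math.sqrt(sum(c * c for c in v))
        if length > 1e-9:
            return [c / length for c in v]
```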

The third method, as proposed in Ken Perlin’s paper “Improving Noise”, is to not choose random gradients at all! Instead, use a fixed set of vectors, and pick from among them randomly.

Why would you do this? Part of the reason was to address clumping, which I suspect was partly caused by that diagonal bias. But another reason was…


I’ve neglected to mention a key point of Ken Perlin’s original implementation. It didn’t quite create a set of random gradients for every grid point; it created a set of random gradients of a fixed size, then used a scheme to pick one of those gradients given any possible grid point. It kept memory use low and was designed to be fast, but still allowed points to be arbitrarily high or low.

The “improved” proposal does away with the random gradients entirely, substituting (for the 3-D case) a set of 12 gradients where one coordinate is 0 and the other two are either -1 or 1 (padded to 16 in the paper so one can be picked with cheap bit operations). These aren’t unit vectors any more, but the dot products are much easier to compute. The randomness is provided by the scheme for picking a gradient given a grid point. For large swaths of noise, it turns out that this works just as well.

Neither optimization is necessary to have Perlin noise, but you might want to look into them if you’re generating a lot of noise in realtime.


The choice of smoothstep is somewhat arbitrary; you could use any function that flattens out towards 0 and 1.

“Improving Noise” proposes an alternative in smootherstep, which is 6x⁵ – 15x⁴ + 10x³. It’s possible to make higher-order polynomials the same way, though of course they get increasingly time-consuming to evaluate.

There’s no reason you couldn’t also adapt, say, a sine curve: ½sin(π(x – ½)) + ½. Not necessarily practical, and almost certainly much slower than a few multiplications, but it’s perfectly valid.

Comparison of the three curves

Looks pretty similar to me. smootherstep has some harsher extremes, which is interesting.


There’s nothing special about the way I did octaves above; it’s just a simple kind of fractal. As long as you keep adding more detailed noise with a smaller range, you’ll probably get something interesting.

This talk by Ken Perlin has some examples, such as taking the absolute value of each octave before adding them together to make a turbulent effect, or taking the sine to create a swirling marbled effect.

Throw math at it and see what happens. That’s what math is for!


Make the grid of gradients tile, and the resulting noise will tile. Super easy.

Simplex noise

In higher dimensions, even finding the noise for a single point requires a lot of math. Even in 3-D, there are eight surrounding points: eight dot products, seven linear interpolations, three smoothsteps.

Simplex noise is a variation that uses triangles rather than squares. In 3-D, that’s a tetrahedron, which is only four points rather than eight. In 4-D it’s five points rather than sixteen.

I’ve never needed so much noise so fast that I’ve had reason to implement this, but if you’re generating noise in 4-D or 5-D (!), it might be helpful.

Some properties

The value of Perlin noise at every grid point, in any number of dimensions, is zero.

Perlin noise is continuous — there are no abrupt changes. Even if you use lots and lots of octaves, no matter how jittery it may look, if you zoom in enough, you’ll still have a smooth curve.

Contrary to popular belief — one espoused even by Ken Perlin! — the output range of Perlin noise is not -1 to 1. It’s ±½√n, where n is the number of dimensions. So a 1-D plane can never extend beyond -½ or ½ (which I bet you could prove if you thought about it), and the familiar 2-D noise is trapped within ±½√2 ≈ ±0.707 (which is also provable by extending the same thought a little further). If you’re using Perlin noise and expecting to span a specific range, make sure to take this into account.

Octaves will change the output range, of course. One octave will increase the range by 50%; two will increase it by 75%; three, by 87.5%; and so on.
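The growth is just a geometric series, which is quick to sanity-check (the function name is mine):

```python
def range_multiplier(extra_octaves):
    """How much the output range grows: each added octave contributes half
    the amplitude of the previous one, i.e. 1 + 1/2 + 1/4 + ..."""
    return sum(0.5 ** i for i in range(extra_octaves + 1))
```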

The output of Perlin noise is not evenly distributed, not by a long shot. Here’s a histogram of some single-octave 2-D Perlin noise generated by GIMP:

I have no idea what this distribution is; it’s sure not a bell curve. Notice in particular that the top and bottom ⅛ of values are virtually nonexistent. If you play with the live 2-D demo, you can probably figure out why that happens: you only get a bright white if all four neighboring arrows are pointing towards the center of the square, and only get black if all four arrows are pointing away. Both of those cases are relatively unlikely.

If you want a little more weighting on the edges, you could try feeding the noise through a function that biases towards its endpoints… like, say, smoothstep. Just be sure you normalize to [0, 1] first.

Surprisingly, adding octaves doesn’t change the distribution all that much, assuming you scale back down to the same range.

There are quite a lot of applications for Perlin noise. I’ve seen wood grain faked by throwing in some modulus. I’ve used it to make a path through a roguelike forest. You can get easy clouds just by cutting it off at some arbitrary value. Under Construction‘s smoke is Perlin noise, where positive values are smoke and negative values are clear.

Because it’s continuous, you can use one axis as time, and get noise that also changes smoothly over time. Here’s a loop of some noise, where each frame is a 2-D slice of a (tiling) 3-D block of noise:

Give me some code already

Right, right, yes, of course. Here’s a Python implementation in a gist. I don’t know if I can justify turning it into a module and also I’m lazy.

Some cool features up in here:

  • Unlimited range
  • Unlimited dimensions
  • Built-in octave support
  • Built-in support for tiling
  • Lots of comments, some of which might even be correct
  • Probably pretty slow

Here’s the code that created the above animation:

from perlin import PerlinNoiseFactory
import PIL.Image

size = 200
res = 40
frames = 20
frameres = 5
space_range = size//res
frame_range = frames//frameres

pnf = PerlinNoiseFactory(3, octaves=4, tile=(space_range, space_range, frame_range))

for t in range(frames):
    img = PIL.Image.new('L', (size, size))
    for x in range(size):
        for y in range(size):
            n = pnf(x/res, y/res, t/frameres)
            img.putpixel((x, y), int((n + 1) / 2 * 255 + 0.5))

    # Write each frame out so they can be assembled into the animation.
    img.save('frame{:03d}.png'.format(t))

Ran in 1m40s with PyPy 3.

Federal Appeals Court Decision in Oracle v. Google

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2014/05/10/oracle-google.html

[ Update on 2014-05-13: If you’re more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it’s always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool, which defends the freedom of the Free Software, functions properly, and actually works — in the real world — the way it should.

As I’ve written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It’s commonly called a hack on copyright: turning the copyright system, which is canonically used to restrict users’ rights, into a system of justice for the equality of users.

However, it’s this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, but that means the greater the negative force, the more powerful the positive force. So, as I read yesterday the Federal Circuit Appeals Court’s decision in Oracle v. Google, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.
I stated clearly after Alsup’s NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn’t assigned the general job of considering whether APIs are copyrightable. Their job is to figure out if the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that’s what the Federal Circuit Court attempted to do here, and while IMO they too erred regarding a factual issue, I don’t think their decision is wholly useless nor categorically incorrect.

Their decision is worth reading in full. I’d also urge anyone who wants to opine on this decision to actually read the whole thing (which so rarely happens in these situations). I bet most pundits out there opining already didn’t read the whole thing. I read the decision as soon as it was announced, and I didn’t get this post up until early Saturday morning, because it took that long to read the opinion in detail, go back to other related texts, verify some details, and then write down my analysis. So, please, go ahead, read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don’t fall for that self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers, as the technical facts are highly pertinent.)

You’ve read the decision now? Good. Now, I’ll tell you what I think in
detail. (As always, my opinions on this are my own, TINLA, and these are
my personal thoughts on the question.)

The most interesting thing, IMO,
about this decision is that the Court focused on a fact from trial that
clearly has more nuance than it realized. Specifically, the Court claims
many times in this decision that Google conceded that it copied the
declaring code used in the 37 packages verbatim (pg 12 of the Appeals
decision, for example).

I suspect the Court imagined the situation too simply: that there was a
huge body of source code text, and that Google engineers sat there, simply
cutting and pasting from Oracle’s code right into their own code for each
of the 7,000 or so lines of function declarations. However, I’ve chatted
with some people (including Mark J. Wielaard) who are much more deeply
embedded in the Free Software Java world than I am, and they pointed out
that it’s highly unlikely anyone did a blatant cut-and-paste job to
implement Java’s core library API, for various reasons. I thus suspect
that Google didn’t do it that way either.

So, how did the Appeals Court come to this erroneous conclusion? On page
27 of its decision, it writes: “Google conceded that it copied it
verbatim. Indeed, the district court specifically instructed the jury that
‘Google agrees that it uses the same names and declarations’ in
Android. Charge to the Jury at 10.” So, I reread page 10 of the final
charge to the jury. It actually says something much more verbose and
nuanced. I’ve pasted together below all the parts where Alsup’s jury
charge mentions this issue (emphasis mine):

Google denies infringing any such copyrighted material … Google agrees
that the structure, sequence and organization of the 37 accused API packages
in Android is substantially the same as the structure, sequence and
organization of the corresponding 37 API packages in Java. …
The copyrighted Java platform has more than 37 API packages and so
does the accused Android platform. As for the 37 API packages that overlap,
Google agrees that it uses the same names and declarations but contends that
its line-by-line implementations are different … Google agrees that
the structure, sequence and organization of the 37 accused API packages in
Android is substantially the same as the structure, sequence and
organization of the corresponding 37 API packages in Java. Google states,
however, that the elements it has used are not infringing …
With respect to the API documentation, Oracle contends Google copied
the English-language comments in the registered copyrighted work and moved
them over to the documentation for the 37 API packages in Android. Google
agrees that there are similarities in the wording but, pointing to differences as
well, denies that its documentation is a copy. Google further asserts that the
similarities are largely the result of the fact that each API carries out the same
functions in both systems.

Thus, in the original trial, Google did not admit to copying any of
Oracle’s text, documentation, or code (other than the rangeCheck thing,
which is moot on the API copyrightability issue). Rather, Google said two
separate things: (a) it did not copy any material (other than rangeCheck),
and (b) the names and declarations are the same not because Google copied
those names and declarations from Oracle’s own work, but because they
perform the same functions. In other words, Google made various arguments
about why those names and declarations look the same for reasons other
than “mundane cut-and-paste copying from Oracle’s copyrighted works”.

For us programmers, this is of course a distinction without any
difference. Frankly, when we programmers look at this situation, we make
many obvious logical leaps at once. Specifically, we all think APIs
in the abstract can’t possibly be copyrightable (since that’s absurd), and
we work backwards from there with some quick thinking that goes something
like this: it doesn’t make sense for APIs to be copyrightable because if
you explain to me in enough detail what the API has to do, such that I
have sufficient information to implement it, my declarations of the
functions of that API are necessarily going to be quite similar to
yours — so much so that they’ll be nearly indistinguishable from what
those function declarations might look like if I had cut and pasted them.
So, the fact is, if we both sit down separately to implement the same API,
we’re likely going to produce two works that look similar. However, that
doesn’t mean I copied your work. And, besides, it makes no sense for APIs,
as a general concept, to be copyrightable, so why are we discussing this
at all?
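
To make that intuition concrete, here is a toy Python sketch. Everything in it (the class names, the function, the specification) is hypothetical and invented purely for illustration, not taken from the case: two implementations written independently against the same documented specification end up with identical declarations, even though neither copied the other’s code.

```python
import inspect

# A shared, documented API spec (hypothetical):
#   maximum(a, b) -> the larger of a and b

class ImplA:
    # One programmer's independent implementation.
    @staticmethod
    def maximum(a, b):
        return a if a >= b else b

class ImplB:
    # Another programmer's independent implementation,
    # written without ever seeing ImplA.
    @staticmethod
    def maximum(a, b):
        low, high = sorted((a, b))
        return high

# The spec forces the declarations to be identical...
assert inspect.signature(ImplA.maximum) == inspect.signature(ImplB.maximum)

# ...even though the implementing code differs and nobody copied anything.
print(ImplA.maximum(3, 7), ImplB.maximum(3, 7))  # → 7 7
```

The point survives translation to Java or any other language: the specification dictates the declaration, so similarity of declarations alone cannot distinguish copying from independent implementation.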

But this is reasoning a programmer can love and the Courts hate. The
Courts want to take a set of laws the legislature passed, some precedents
their system gave them, and a specific set of facts, and then see what
happens when the law is applied to those facts. Juries, in turn, have the
job of finding which facts are accurate and which aren’t, and then coming
to a verdict upon receiving instructions about the law from the judge.

And that’s right where the confusion began in this case, IMO. The
original jury, to start with, likely had trouble distinguishing three
distinct things: the general concept of an API, the specification of the
API, and the implementation of an API. Plus, they were told by the judge
to assume APIs were copyrightable anyway. Then, it got more confusing
when they looked at two implementations of an API, parts of which looked
similar for purely mundane technical reasons, and assumed (incorrectly)
that textual copying from one file to another was the only way to get to
that same result. Meanwhile, the jury was likely further confused by the
fact that Google argued various affirmative defenses against copyright
infringement in the alternative.

So, what happened with the Appeals Court? The Appeals Court, of course,
has no reason to believe the jury’s finding of fact is wrong, and it’s
simply not the Appeals Court’s job to replace the original jury, but to
analyze the matters of law decided by the lower court. That’s why I’m
admittedly troubled and downright confused that the ruling from the
Appeals Court seems to conflate the issue of literal copying of text with
similarities in independently developed text. That is a factual issue in
any given case, but that question of fact is the central nuance of API
copyrightability, and it seems the Appeals Court glossed over it. The
Appeals Court simply fails to distinguish between literal cut-and-paste
copying from a given API’s implementation and the serendipitous
similarities that are likely to occur when two implementations support the
same API.

But that error isn’t the interesting part. Of course, this error is a
fundamentally incorrect assumption by the Appeals Court, and as such its
primary rulings are effectively conclusions based on a hypothetical fact
pattern rather than the actual fact pattern in this case. However, after
poring over the decision for hours, it’s the only error I found in the
appeals ruling. Thus, setting that fundamental error aside, the ruling has
some good parts. For example, I’m rather impressed and swayed by the
argument that the lower court misapplied the merger doctrine because it
analyzed the situation based on the options available to Google, rather
than those available to Sun/Oracle. To quote:

We further find that the district court erred in focusing its merger analysis
on the options available to Google at the time of copying. It is
well-established that copyrightability and the scope of protectable activity
are to be evaluated at the time of creation, not at the time of infringement.
… The focus is, therefore, on the options that were available to
Sun/Oracle at the time it created the API packages.

Of course, cropping up again in that analysis is that same darned
confusion the Court had with regard to the copying of this declaring code.
The ruling goes on to say: “But, as the court acknowledged, nothing
prevented Google from writing its own declaring code, along with its own
implementing code, to achieve the same result.”

To go back to my earlier point, Google likely did write its own
declaring code, and that code ended up looking the same as Oracle’s,
because there was no other way to implement the same API.

In the end, Mark J. Wielaard put it best when he read the decision,
pointing out to me that the Appeals Court seemed almost angry that the jury
hung on the fair use question. It reads to me, too, like the Appeals Court
is slyly saying: the right affirmative defense for Google here is fair
use, and a new jury really needs to sit down and look at it.

My conclusion is that this just isn’t a decision about the
copyrightability of APIs in the general sense. The question the Court
would need to consider to actually settle that question would be: “If we
believe an API itself isn’t copyrightable, but its implementation is, how
do we figure out when copyright infringement has occurred when there are
multiple implementations of the same API floating around, which of course
have declarations that look similar?” But the Court did not consider that
fundamental question, because it assumed (incorrectly) that there was
textual cut-and-paste copying. The decision here, in my view, is about a
more narrow, hypothetical question that the Court decided to ask itself
instead: “If someone textually copies parts of your API implementation,
are merger, scènes à faire, and de minimis affirmative defenses likely to
succeed?” In this hypothetical scenario, the Appeals Court claims that
such defenses will rarely help you, but a fair use defense might.

However, on this point, in my copyleft-defender role, I don’t mind this
decision very much. The one thing this decision clearly seems to declare
is: “if there is even a modicum of evidence that direct textual copying
occurred, then the alleged infringer must clear an extremely high bar of
affirmative defense to show infringement didn’t occur”. In most GPL
violation cases, the facts aren’t nuanced: there is always clearly an
intention to incorporate and distribute large textual parts of the GPL’d
code (i.e., not just a few function declarations). As such, this decision
is probably good for copyleft, since on its narrowest reading it upholds
the idea that if you go mixing in other copyrighted stuff, via copying and
distribution, then it will be difficult to show that no copyright
infringement occurred.
OTOH, I suspect that most pundits are going to look at this in an overly
contrasted way: the NDCA said APIs aren’t copyrightable, and the Appeals
Court said they are. That’s not what happened here, and if you look at the
situation that way, you’re making the same kinds of oversimplifications
that the Appeals Court seems to have erroneously made.

The most positive outcome here is that a new jury can now narrowly
consider the question of fair use as it relates to serendipitous
similarity of API function declaration code across multiple
implementations. I suspect a fresh jury focused on that narrow question
will do a much better job. The previous jury had so many complex issues
before it that I suspect those issues were easily conflated. (Recall that
the previous jury considered patent questions as well.) I’ve found that
people who haven’t spent their lives training (as programmers and lawyers
have) to delineate complex matters and separate truly unrelated issues do
a poor job of it. Thus, I suspect the jury won’t hang the second time if
it’s considering only the fair use question.

Finally, with regard to this ruling, I suspect it won’t become
immediate, frequently cited precedent. The case is remanded, so a new jury
will first sit down and consider the fair use question. If that jury finds
fair use and thus no infringement, Oracle’s next appeal will be quite weak,
and the Appeals Court likely won’t reexamine the question in any detail.
In that outcome, very little changes overall: we’ll have certainty that
APIs aren’t copyrightable, as long as any textual copying that occurs
during reimplementation can easily be called fair use. By contrast, if the
new jury rejects Google’s fair use defense, I suspect Google will have to
appeal all the way to SCOTUS. It’s thus going to be at least two years
before anything definitive is decided, and the big winners will be wealthy
litigation attorneys — as usual.

0: This is of course true for any sufficiently simple programming task. I
used to be a high-school computer science teacher. Frankly, while I was
twice successful in detecting student plagiarism, it was pretty easy to
get false positives sometimes. And certainly I had plenty of student
programmers who wrote their function declarations the same way for the
same job! And no, those weren’t the students who plagiarized.

ACTA is dead, long live ACTA?

Post Syndicated from Selene original https://nookofselene.wordpress.com/2012/07/05/acta-in-bulgaria-again/

Yesterday, those of us who went out in the winter to protest against ACTA officially got what we wanted: the European Parliament rejected ratification of the so-called Anti-Counterfeiting Trade Agreement.
This happened after hundreds of thousands of Europeans filled the squares of Europe and protested against ACTA and against the so-called “fight against piracy,” in whose name an occupation of the Internet was being prepared, not by the Occupy Wall Street movement, but by the same corporations that have already occupied nearly everything offline and are hurrying to seize the online space as well.
We were among those Europeans. On February 11, a march of many thousands chanted “NO to ACTA” through the streets of Sofia and filled the entire square in front of Parliament, forcing those in power to promise that they would “adopt the agreement with reservations,” namely without the part about monitoring the Internet. That was utterly insufficient, because the agreement could only be adopted or rejected as a whole (not to mention that there were thoroughly rotten provisions even among those not touching the Internet), but, note well, they made a commitment not to carry out that monitoring of the Internet.
Now that the European Parliament has put an end to the entire agreement, we should be able to breathe a sigh of relief, right? Alas, no.
Because the draft of Bulgaria’s new Penal Code provides for new draconian measures. The minimum sentence for so-called “piracy” is one year of imprisonment or probation, and the sentence for the typical case rises from 5 to 6 years, making it a serious crime, since it carries more than 5 years of imprisonment. This means that “piracy” would in some cases be punished more severely than crimes such as rape or negligent homicide. Administrative penalties for “minor” cases are also dropped, replaced directly with imprisonment or probation.
I will quote the specific articles from the draft of the new Penal Code. You can read the whole draft here.

Use of another’s intellectual property
Art. 195. (1) Whoever unlawfully records or uses in any way an object under Art. 193 without the consent of the holder of the respective right shall be punished by imprisonment of up to five years and a fine.
(2) Whoever unlawfully holds carriers of an object under Art. 193 in large amounts or in large quantities, or holds a matrix for the reproduction of such carriers, shall be punished by imprisonment of two to five years and a fine of no less than two thousand leva.
(3) When the object under para. 2 is in especially large amounts, the punishment is imprisonment of two to eight years and a fine of three thousand to fifteen thousand leva, and the court may also impose confiscation of up to one half of the offender’s property.
(4) When the act under paras. 1 to 3 has caused significant harmful consequences, the punishment is:
1. in the cases under para. 1 or 2: imprisonment of one to six years and a fine of three thousand to ten thousand leva;
2. in the cases under para. 3: imprisonment of three to ten years, a fine of five thousand to twenty thousand leva, and confiscation of part or all of the offender’s property.
(5) When the act under para. 1 or 2 constitutes a minor case, the punishment is imprisonment of up to one year or probation.
(6) The object of the crime under paras. 2 and 3 shall be forfeited to the state and destroyed.

Conclusion 1: ACTA has been officially rejected by the entire European Community, but our rulers have decided to make a Bulgarian ACTA of their own.
Conclusion 2 (nothing new): the word of those governing Bulgaria cannot be trusted.
Conclusion 3: in the name of the interests of a few, excessive punishments and excessive measures are being created to restrict the freedom of information, and freedom on the Internet in general, for everyone else. Because, as we know from experience, the fight against piracy never stops at defending the rights of (possibly and hypothetically) harmed creators. It usually degenerates quickly into the perfect weapon against freedom of speech. Even before this Penal Code takes effect, such cases already exist, just without punishment for the unfortunate person who dared to voice criticism.
Conclusion 4: if we want to stop this madness, we will have to protest again, for the very same thing. I have the feeling they are counting on our short memory, distraction, and fatigue. That they expect us to get fed up, shrug, and say, “Do whatever you want, to hell with you!” But I do not think that will happen.
And you?

Everyone in USA: Comment against ACTA today!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/02/15/acta.html

In the USA, the deadline for comments on ACTA
is today (Tuesday 15 February 2011) at 17:00 US/Eastern.
It’s absolutely imperative that every USA citizen submit a comment on
this. The Free Software Foundation has details on how to do so.

ACTA is a dangerous international agreement that would establish
additional criminal penalties, promulgate DMCA/EUCD-like legislation
around the world, and otherwise extend copyright law into places it
should not go. Copyright law is already much stronger than
anyone needs.

On a meta-point, it’s extremely important that USA citizens participate
in comment processes like this. The reason that things like ACTA can
happen in the USA is because most of the citizens don’t pay attention.
By way of hyperbolic fantasy, imagine if every citizen of the
USA wrote a letter today to Mr. McCoy about ACTA. It’d be a news story
on all the major news networks tonight, and would probably be in the
headlines in print/online news stories tomorrow. Our whole country
would suddenly be debating whether or not we should have criminal
penalties for copying TV shows, and whether breaking a DVD’s DRM should
be illegal.

Obviously, that fantasy won’t happen, but getting from where we are to
that wonderful fantasy is actually linear; each person who
writes to Mr. McCoy today makes a difference! Please take 15 minutes
out of your day today and do so. It’s the least you can do on this issue.
The Free Software Foundation has a sample letter you can use if you don’t
have time to write your own. I wrote my own, giving some of my unique
perspective, which I include below.

The automated system on regulations.gov assigned the comment below the
tracking number 80bef9a1 (cool, it’s in hex! 🙂).

Stanford K. McCoy
Assistant U.S. Trade Representative for Intellectual Property and Innovation
Office of the United States Trade Representative
600 17th St NW
Washington, DC 20006

Re: ACTA Public Comments (Docket no. USTR-2010-0014)

Dear Mr. McCoy:

I am a USA citizen writing to urge that the USA not sign
ACTA. Copyright law already reaches too far. ACTA would extend
problematic, overly-broad copyright rules around the world and would
increase the already inappropriate criminal penalties for copyright
infringement here in the USA.

Both individually and as an agent of my employer, I am regularly involved
in copyright enforcement efforts to defend the Free Software license
called the GNU General Public License (GPL). I therefore think my
perspective can be uniquely contrasted with other copyright holders who
support ACTA.

Specifically, when engaging in copyright enforcement for the GPL, we treat
it as purely a civil issue, not a criminal one. We have been successful
in defending the rights of software authors in this regard without the
need for criminal penalties for the rampant copyright infringement that we
often encounter.

I realize that many powerful corporate copyright holders wish to see
criminal penalties for copyright infringement expanded. As someone who
has worked in the area of copyright enforcement regularly for 12 years, I
see absolutely no reason that any copyright infringement of any kind ever
should be considered a criminal matter. Copyright holders who believe
their rights have been infringed have the full power of civil law to
defend their rights. Using the power of government to impose criminal
penalties for copyright infringement is an inappropriate use of government
to interfere in civil disputes between its citizens.

Finally, ACTA would introduce new barriers for those of us trying to
change our copyright law here in the USA. The USA should neither impose
its desired copyright regime on other countries, nor should the USA bind
itself in international agreements on an issue where its citizens are in
great disagreement about correct policy.

Thank you for considering my opinion, and please do not allow the USA to
sign ACTA.

Bradley M. Kuhn

Dear Lazyweb!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/mexico-lamp.html

Let’s see how well Lazyweb works for me!

One of the nicest types of lamps I know is depicted in this photo:

mexico lamp

This lamp is built from a number (16 or so; it’s difficult to count) of
identical shapes which are put together (a mano) in a very simple,
mathematical fashion. No glue or anything else is needed to make it a very
robust object. The lamp looks a little bit like certain Julia fractals;
its geometrical structure is just beautiful. Every mathematical mind will
enjoy it.

This particular specimen was bought from a street dealer in Mexico City,
and is made of thin plastic sheets. I saw the same model made from paper
at a market near Barcelona this summer (during GUADEC). Unfortunately, I
didn’t seize the chance to buy one back then, and now I am regretting it!

I’ve been trying to find this model in German and US shops for the last
few months (Christmas is approaching fast!) but couldn’t find a single
specimen. I wonder who designed this ingenious lamp and who produces it.
It looks like a Scandinavian design to me, but that’s just an uneducated
guess.

If you have any information about this specific lamp model, or could even
provide me with a pointer on where to buy or how to order these lamps
in/from Germany, please leave a comment on this blog story, or write me an
email at mzynzcr (at) 0pointer (dot) de! Thank you very much!

Fractals with Python

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/mandelbrot.html

It’s impressive how easy it is to draw fractals with Python. Using the ubercool Python Imaging Library and Python’s native complex number support, you can code an elaborate and easy-to-understand fractal generator in less than 50 lines of code:

from PIL import Image, ImageDraw
import colorsys

dimensions = (800, 800)
scale = 1.0 / (dimensions[0] / 3)
center = (2.2, 1.5)   # Use this for the Mandelbrot set
#center = (1.5, 1.5)  # Use this for the Julia set
iterate_max = 100
colors_max = 50

img = Image.new("RGB", dimensions)
d = ImageDraw.Draw(img)

# Calculate a tolerable palette
palette = [0] * colors_max
for i in range(colors_max):
    f = 1 - abs((float(i) / colors_max - 1) ** 15)
    r, g, b = colorsys.hsv_to_rgb(.66 + f / 3, 1 - f / 2, f)
    palette[i] = (int(r * 255), int(g * 255), int(b * 255))

# Calculate the Mandelbrot sequence for the point c with start value z
def iterate_mandelbrot(c, z=0):
    for n in range(iterate_max + 1):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

# Draw our image
for y in range(dimensions[1]):
    for x in range(dimensions[0]):
        c = complex(x * scale - center[0], y * scale - center[1])

        n = iterate_mandelbrot(c)  # Use this for the Mandelbrot set
        #n = iterate_mandelbrot(complex(0.3, 0.6), c)  # Use this for the Julia set

        if n is None:
            v = 1
        else:
            v = n / 100.0

        d.point((x, y), fill=palette[int(v * (colors_max - 1))])

del d
img.save("fractal.png")
Some example pictures:

[images: Julia set and Mandelbrot set renderings]