Ubuntu 18.10 (Cosmic Cuttlefish) released

Post Syndicated from jake original https://lwn.net/Articles/768835/rss

Ubuntu has announced the release of its latest version, 18.10 (“Cosmic Cuttlefish”). It brings many updated packages and is available in desktop and server versions; multiple flavors were released as well. More information can be found in the release notes. “The Ubuntu kernel has been updated to the 4.18 based Linux kernel,
our default toolchain has moved to gcc 8.2 with glibc 2.28, and we’ve
also updated to openssl 1.1.1 and gnutls 3.6.4 with TLS1.3 support.

Ubuntu Desktop 18.10 brings a fresh look with the community-driven
Yaru theme replacing our long-serving Ambiance and Radiance themes. We
are shipping the latest GNOME 3.30, Firefox 63, LibreOffice 6.1.2, and
many others.

Ubuntu Server 18.10 includes the Rocky release of OpenStack, the
clustering-enabled LXD 3.0, new network configuration via netplan.io,
and iteration on the next-generation fast server installer. Ubuntu Server
brings major updates to industry standard packages available on private
clouds, public clouds, containers or bare metal in your datacentre.”

PostgreSQL 11 released

Post Syndicated from corbet original https://lwn.net/Articles/768822/rss

The PostgreSQL 11 release is out. “PostgreSQL 11 provides users with improvements to overall performance of
the database system, with specific enhancements associated with very
large databases and high computational workloads. Further, PostgreSQL 11
makes significant improvements to the table partitioning system, adds
support for stored procedures capable of transaction management,
improves query parallelism and adds parallelized data definition
capabilities, and introduces just-in-time (JIT) compilation for
accelerating the execution of expressions in queries.
” See this article for a detailed overview of what
is in this release.

[$] Making the GPL more scary

Post Syndicated from corbet original https://lwn.net/Articles/768670/rss

For some years now, one has not had to look far to find articles
proclaiming the demise of the GNU General Public License. That license, we
are told, is too frightening for many businesses, which prefer to use
software under the far weaker permissive class of license. But there is a
business model that is based on the allegedly scary nature of
the GPL, and there are those who would like to make it more lucrative; the
only problem is that the GPL isn’t quite scary enough yet.

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/768776/rss

Security updates have been issued by Arch Linux (chromium, libssh, and net-snmp), Debian (libssh and xen), Fedora (audiofile), openSUSE (axis, GraphicsMagick, ImageMagick, kernel, libssh, samba, and texlive), Oracle (java-1.8.0-openjdk), Red Hat (java-1.8.0-openjdk, rh-nodejs6-nodejs, and rh-nodejs8-nodejs), SUSE (binutils and fuse), and Ubuntu (paramiko).

Introducing the Raspberry Pi TV HAT

Post Syndicated from Roger Thornton original https://www.raspberrypi.org/blog/raspberry-pi-tv-hat/

Today we are excited to launch a new add-on board for your Raspberry Pi: the Raspberry Pi TV HAT.

A photograph of a Raspberry Pi TV HAT with aerial lead connected Oct 2018

The TV HAT connects to the 40-pin GPIO header and to a suitable antenna, allowing your Raspberry Pi to receive DVB-T2 television broadcasts.

A photograph of a Raspberry Pi Zero W with TV HAT connected Oct 2018

Watch TV with your Raspberry Pi

With the board, you can receive and view television on a Raspberry Pi, or you can use your Pi as a server to stream television over a network to other devices. The TV HAT works with all 40-pin GPIO Raspberry Pi boards when running as a server. If you want to watch TV on the Pi itself, we recommend using a Pi 2, 3, or 3B+, as you may need more processing power for this.

A photograph of a Raspberry Pi 3 Model B+ with TV HAT connected Oct 2018

Stream television over your network

Viewing television is not restricted to Raspberry Pi computers: with a TV HAT connected to your network, you can view streams on any network-connected device. That includes other computers, mobile phones, and tablets. You can find instructions for setting up your TV HAT in our step-by-step guide.


New HAT form factor

The Raspberry Pi TV HAT follows a new form factor of HAT (Hardware Attached on Top), which we are also announcing today. The TV HAT is a half-size HAT that matches the outline of Raspberry Pi Zero boards. A new HAT spec is available now. No features have changed electrically – this is a purely mechanical change.

Raspberry Pi TV HAT mechanical drawing Oct 2018

A mechanical drawing of a Raspberry Pi TV HAT, exemplifying the spec of the new HAT form factor.

The TV HAT has three bolt holes; we omitted the fourth so that the HAT can be placed on a large-size Pi without obstructing the display connector.

The board comes with a set of mechanical spacers, a 40-way header, and an aerial adaptor.

A photograph of a Raspberry Pi TV HAT Oct 2018


Digital Video Broadcast (DVB) is a widely adopted standard for transmitting broadcast television; see countries that have adopted the DVB standard here.

Initially, we will be offering the TV HAT in Europe only. Compliance work is already underway to open other DVB-T2 regions. If you purchase a TV HAT, you must have the appropriate license or approval to receive broadcast television. You can find a list of licenses for Europe here. If in doubt, please contact your local licensing body.

The Raspberry Pi TV HAT opens up some fantastic opportunities for people looking to embed a TV receiver into their networks. Head over to the TV HAT product page to find out where to get hold of yours. We can’t wait to see what you use it for!


[$] A new direction for i965

Post Syndicated from jake original https://lwn.net/Articles/768410/rss

Graphical applications are always pushing the limits of what the
hardware can do, and recent developments in the graphics world have
caused Intel to rethink its 3D graphics driver. In particular, the
lower CPU overhead that the Vulkan driver on Intel hardware can provide
is becoming more attractive for OpenGL as well. At the 2018 X.Org
Developers Conference, Kenneth Graunke talked about an experimental
re-architecting of the i965 driver using Gallium3D, a development that
came as something of a surprise to many, including him.

How to create and manage users within AWS Single Sign-On

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-create-and-manage-users-within-aws-sso/

AWS Single Sign-On (AWS SSO) is a cloud service that allows you to grant your users access to AWS resources, such as Amazon EC2 instances, across multiple AWS accounts. By default, AWS SSO now provides a directory that you can use to create users, organize them in groups, and set permissions across those groups. You can also grant the users that you create in AWS SSO permissions to applications such as Salesforce, Box, and Office 365. AWS SSO and its directory are available at no additional cost to you.

A directory is a key building block that allows you to manage the users to whom you want to grant access to AWS resources and applications. AWS Identity and Access Management (IAM) provides a way to create users that can be used to access AWS resources within one AWS account. However, many businesses prefer an approach that enables users to sign in once with a single credential and access multiple AWS accounts and applications. You can now create your users centrally in AWS SSO and manage user access to all your AWS accounts and applications. Your users sign in to a user portal with a single set of credentials configured in AWS SSO, allowing them to access all of their assigned accounts and applications in a single place.

Note: If you manage your users in a Microsoft Active Directory (Microsoft AD) directory, AWS SSO already provides you with an option to connect to a Microsoft AD directory. By connecting your Microsoft AD directory once with AWS SSO, you can assign permissions for AWS accounts and applications directly to your users by easily looking up users and groups from your Microsoft AD directory. Your users can then use their existing Microsoft AD credentials to sign into the AWS SSO user portal and access their assigned accounts and applications in a single place. Customers who manage their users in an existing Lightweight Directory Access Protocol (LDAP) directory or through a cloud identity provider such as Microsoft Azure AD can continue to use IAM federation to enable their users’ access to AWS resources.

How to create users and groups in AWS SSO

You can create users in AWS SSO by configuring their email address and name. When you create a user, AWS SSO sends an email to the user by default so that they can set their own password. Your user will use their email address and a password they configure in AWS SSO to sign into the user portal and access all of their assigned accounts and applications in a single place.

You can also add the users that you create in AWS SSO to groups you create in AWS SSO. In addition, you can create permissions sets that define permitted actions on an AWS resource, and assign them to your users and groups. For example, you can grant the DevOps group permissions to your production AWS accounts. When you add users to the DevOps group, they get access to your production AWS accounts automatically.

In this post, I will show you how to create users and groups in AWS SSO, how to create permission sets, how to assign your groups and users to permission sets and AWS accounts, and how your users can sign into the AWS SSO user portal to access AWS accounts. To learn more about how to grant users that you create in AWS SSO permissions to business applications such as Office 365 and Salesforce, see Manage SSO to Your Applications.

Walk-through prerequisites

For this walk-through, I assume the following:

  • You use AWS Organizations to manage multiple AWS accounts.
  • You have already enabled AWS SSO.

To illustrate how to add users in AWS SSO and how to grant permissions to multiple AWS accounts, imagine that you’re the IT manager for a company, Example.com, that wants to make it easy for its users to access resources in multiple AWS accounts. Example.com has five AWS accounts: a master account (called MasterAcct), two developer accounts (DevAccount1 and DevAccount2), and two production accounts (ProdAccount1 and ProdAccount2). Example.com uses AWS Organizations to manage these accounts and has already enabled AWS SSO.

Example.com has two developers, Martha and Richard, who need full access to Amazon EC2 and Amazon S3 in the developer accounts (DevAccount1 and DevAccount2) and read-only access to EC2 and S3 resources in the production accounts (ProdAccount1 and ProdAccount2).

The following diagram illustrates how you can grant Martha and Richard permissions to the developer and production accounts in four steps:

  1. Add users and groups in AWS SSO: Add users Martha and Richard in AWS SSO by configuring their names and email addresses. Add a group called Developers in AWS SSO and add Martha and Richard to the Developers group.
  2. Create permission sets: Create two permission sets. In the first permission set, include policies that give full access to Amazon EC2 and Amazon S3. In the second permission set, include policies that give read-only access to Amazon EC2 and Amazon S3.
  3. Assign groups to accounts and permission sets: Assign the Developers group to your developer accounts and assign the permission set that gives full access to Amazon EC2 and Amazon S3. Assign the Developers group to your production accounts, too, and assign the permission set that gives read-only access to Amazon EC2 and Amazon S3. Martha and Richard now have full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access in the production accounts.
  4. Users sign into the User Portal to access accounts: Martha and Richard receive email from AWS to set their passwords with AWS SSO. Martha and Richard can now sign into the AWS SSO User Portal using their email addresses and the passwords they set with AWS SSO, allowing them to access their assigned AWS accounts.
Figure 1: Architecture diagram
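The inheritance described in steps 1 through 4 can be modeled with a few lines of Python. This is purely an illustration of how group assignments resolve to per-account permissions, using the example names from above; it is not an AWS API:

```python
# Group membership and group -> (account, permission set) assignments,
# mirroring the Example.com scenario.
groups = {"Developers": {"Martha", "Richard"}}
assignments = [
    ("Developers", "DevAccount1", "EC2AndS3FullAccess"),
    ("Developers", "DevAccount2", "EC2AndS3FullAccess"),
    ("Developers", "ProdAccount1", "EC2AndS3ReadAccess"),
    ("Developers", "ProdAccount2", "EC2AndS3ReadAccess"),
]

def user_access(user):
    """Return {account: permission set} for every group the user belongs to."""
    return {
        account: permission_set
        for group, account, permission_set in assignments
        if user in groups.get(group, set())
    }

martha = user_access("Martha")
```

Because Martha is in the Developers group, she inherits all four account assignments; a user in no group resolves to an empty mapping.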

Step 1: Add users and groups in AWS SSO

To add users in AWS SSO, navigate to the AWS SSO Console. Then, follow the steps below to add Martha as a user, to create a group called Developers, and to add Martha to the Developers group in AWS SSO.

  1. In the AWS SSO Dashboard, choose Manage your directory to navigate to the Directory tab.
    Figure 2: Navigating to the “Manage your directory” page

  2. By default, AWS SSO provides you a directory that you can use to manage users and groups in AWS SSO. To add a user in AWS SSO, choose Add user. If you previously connected a Microsoft AD directory with AWS SSO, you can switch to using the directory that AWS SSO now provides by default by following the steps in Change Directory.
    Figure 3: Adding new users to your directory

  3. On the Add User page, enter an email address, first name, and last name for the user, then create a display name. In this example, you’re adding “Martha Rivera” as a user. For the password, choose Send an email to the user with password instructions. This allows users to set their own passwords.

    Optionally, you can also set a mobile phone number and add additional user attributes.

    Figure 4: Adding user details

  4. Next, you’re ready to add the user to groups. First, you need to create a group. Later, in Step 3, you can grant your group permissions to an AWS account so that any users added to the group will inherit the group’s permissions automatically. In this example, you will create a group called Developers and add Martha to the group. To do so, from the Add user to groups page, choose Create group.
    Figure 5: Creating a new group

  5. In the Create group window, title your group by filling out the Group name field. For this example, enter Developers. Optionally, you can also enter a description of the group in the Description field. Choose Create to create the group.
    Figure 6: Adding a name and description to your new group

  6. On the Add users to group page, check the box next to the group you just created, and then choose Add user. Following this process will allow you to add Martha to the Developers group.
    Figure 7: Adding a user to your new group

You’ve successfully created the user Martha and added her to the Developers group. You can repeat sub-steps 2, 3, and 6 above to create more users and add them to the group. This is the process you should follow to create the user Richard and add him to the Developers group.

Next, you’ll grant the Developers group permissions to AWS resources within multiple AWS accounts. To follow along, you’ll first need to create permission sets.

Step 2: Create permission sets

To grant user permissions to AWS resources, you must create permission sets. A permission set is a collection of administrator-defined policies that AWS SSO uses to determine a user’s permissions for any given AWS account. Permission sets can contain either AWS managed policies or custom policies that are stored in AWS SSO. Policies contain statements that represent individual access controls (allow or deny) for various tasks. This determines what tasks users can or cannot perform within the AWS account. To learn more about permission sets, see Permission Sets.

For this use case, you’ll create two permission sets: 1) EC2AndS3FullAccess, which has the AmazonEC2FullAccess and AmazonS3FullAccess managed policies attached, and 2) EC2AndS3ReadAccess, which has the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies attached. Later, in Step 3, you can assign groups to these permission sets and AWS accounts, so that your users have access to these resources. To learn more about creating permission sets with different levels of access, see Create Permission Set.

Follow the steps below to create permission sets:

  1. Navigate to the AWS SSO Console and choose AWS accounts in the left-hand navigation menu.
  2. Switch to the Permission sets tab on the AWS Accounts page, and then choose Create permissions set.
    Figure 8: Creating a permission set

  3. On the Create new permissions set page, choose Create a custom permission set. To learn more about choosing between an existing job function policy and a custom permission set, see Create Permission Set.
    Figure 9: Customizing a permission set

  4. Enter EC2AndS3FullAccess in the Name field and choose Attach AWS managed policies. Then choose AmazonEC2FullAccess and AmazonS3FullAccess. Choose Create to create the permission set.
    Figure 10: Attaching AWS managed policies to your permission set

You’ve successfully created a permission set. You can use the steps above to create another permission set, called EC2AndS3ReadAccess, by attaching the AmazonEC2ReadOnlyAccess and AmazonS3ReadOnlyAccess managed policies. Now you’re ready to assign your groups to accounts and permission sets.

Step 3: Assign groups to accounts and permission sets

In this step, you’ll assign your Developers group full access to Amazon EC2 and Amazon S3 in the developer accounts and read-only access to these resources in the production accounts. To do so, you’ll assign the Developers group to the EC2AndS3FullAccess permission set and to the two developer accounts (DevAccount1 and DevAccount2). Similarly, you’ll assign the Developers group to the EC2AndS3ReadAccess permission set and to the production AWS accounts (ProdAccount1 and ProdAccount2).

Follow the steps below to assign the Developers group to the EC2AndS3FullAccess permission set and developer accounts (DevAccount1 and DevAccount2). To learn more about how to manage access to your AWS accounts, see Manage SSO to Your AWS Accounts.

  1. Navigate to the AWS SSO Console and choose AWS Accounts in the left-hand navigation menu.
  2. Switch to the AWS organization tab and choose the accounts to which you want to assign your group. For this example, select accounts DevAccount1 and DevAccount2 from the list of AWS accounts. Next, choose Assign users.
    Figure 11: Assigning users to your accounts

  3. On the Select users and groups page, type the name of the group you want to add into the search box and choose Search. For this example, you will be looking for the group called Developers. Check the box next to the correct group and choose Next: Permission Sets.
    Figure 12: Setting permissions for the “Developers” group

  4. On the Select permissions sets page, select the permission sets that you want to assign to your group. For this use case, you’ll select the EC2AndS3FullAccess permission set. Then choose Finish.
    Figure 13: Choosing permission sets

You’ve successfully granted users in the Developers group access to accounts DevAccount1 and DevAccount2, with full access to Amazon EC2 and Amazon S3.

You can follow the same steps above to grant users in the Developers group access to accounts ProdAccount1 and ProdAccount2 with the permissions in the EC2AndS3ReadAccess permission set. This will grant the users in the Developers group read-only access to Amazon EC2 and Amazon S3 in the production accounts.

Figure 14: Notification of successful account configuration

Step 4: Users sign into User Portal to access accounts

Your users can now sign into the AWS SSO User Portal to manage resources in their assigned AWS accounts. The user portal provides your users with single sign-on access to all their assigned accounts and business applications. From the user portal, your users can sign into multiple AWS accounts by choosing the AWS account icon in the portal and selecting the account that they want to access.

You can follow the steps below to see how Martha signs into the user portal to access her assigned AWS accounts.

  1. When you added Martha as a user in Step 1, you selected the option Send an email to the user with password instructions. AWS SSO sent instructions to set a password to Martha at the email address that you configured when creating the user. This is the email that Martha received:
    Figure 15: AWS SSO password configuration email

  2. To set her password, Martha will select Accept invitation in the email that she received from AWS SSO. Selecting Accept invitation will take Martha to a page where she can set her password. After Martha sets her password, she can navigate to the User Portal.
    Figure 16: User Portal sign-in

  3. In the User Portal, Martha can select the AWS Account icon to view all the AWS accounts to which she has permissions.
    Figure 17: View of AWS Account icon from User Portal

  4. Martha can now see the developer and production accounts that you granted her permissions to in previous steps. For each account, she can also see the list of roles that she can assume within the account. For example, for DevAccount1 and DevAccount2, Martha can assume the EC2AndS3FullAccess role that gives her full access to manage Amazon EC2 and Amazon S3. Similarly, for ProdAccount1 and ProdAccount2, Martha can assume the EC2AndS3ReadAccess role that gives her read-only access to Amazon EC2 and Amazon S3. Martha can select accounts and choose Management Console next to the role she wants to assume, letting her sign into the AWS Management Console to manage AWS resources. To switch to a different account, Martha can navigate to the User Portal and select a different account. From the User Portal, Martha can also get temporary security credentials for short-term access to resources in an AWS account using AWS Command Line Interface (CLI). To learn more, see How to Get Credentials of an IAM Role for Use with CLI Access to an AWS Account.
    Figure 18: Switching accounts from the User Portal

  5. Martha bookmarks the user portal URL in her browser so that she can quickly access the user portal the next time she wants to access AWS accounts.


By default, AWS now provides you with a directory that you can use to manage users and groups within AWS SSO and to grant user permissions to resources in multiple AWS accounts and business applications. In this blog post, I showed you how to manage users and groups within AWS SSO and grant them permissions to multiple AWS accounts. I also showed how your users sign into the user portal to access their assigned AWS accounts.

If you have feedback or questions about AWS SSO, start a new thread on the AWS SSO forum or contact AWS Support. If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Vijay Sharma

Vijay is a Senior Product Manager with AWS Identity.

Measuring service chargeback in Amazon ECS

Post Syndicated from Anuneet Kumar original https://aws.amazon.com/blogs/compute/measuring-service-chargeback-in-amazon-ecs/

Contributed by Subhrangshu Kumar Sarkar, Sr. Technical Account Manager, and Shiva Kumar Subramanian, Sr. Technical Account Manager

Amazon Elastic Container Service (ECS) users have been asking us for a way to allocate cost to the deployed services in a shared Amazon ECS cluster. This blog post can help customers think through different techniques for allocating the costs incurred by running Amazon ECS services to owners such as specific teams or individual users. The post dives into one technique that gives customers a granular way to allocate costs to Amazon ECS service owners.

Amazon ECS pricing models

Amazon ECS has two pricing models. In the Amazon EC2 launch type model, you pay for the AWS resources (e.g., Amazon EC2 instances or Amazon EBS volumes) that you create to store and run your application. Right now, it’s difficult to calculate the aggregate cost of an Amazon ECS service that consists of multiple tasks. In the AWS Fargate launch type model, you pay for vCPU and memory resources that your containerized application requests. Although the user knows the cost that the tasks incur, there is no out-of-box way to associate that cost to a service.

Possible solutions

There are two possible solutions to this problem.

A. Billing based on the usage of container instances in a partitioned cluster.

One solution for service chargeback is to associate specific container instances with respective teams or customers. Then use task placement constraints to restrict the services that they deploy to only those container instances. The following image shows how this solution works.

Here, user A is allowed to deploy services only on the blue container instances, and user B only on the green ones. Both users can be charged based on the AWS resources they use, for example, the EC2 instances and the ALB.

This solution is useful when you don’t want to host services from different teams or users on the same set of container instances. However, even though the Amazon ECS cluster is shared, the end users are still charged for the Amazon EC2 instances and other AWS assets that they use, rather than for the exact vCPU and memory resources that their services consume. The disadvantage to this approach is that you could provision excess capacity for your users and end up wasting resources. You also need to use placement constraints in all of your task definitions.

B. Billing based on resource usage at the task level.

Another solution could be to develop a mechanism to let the Amazon ECS cluster owners calculate the aggregate cost of an Amazon ECS service that consists of multiple tasks. The solution would have a metering mechanism and a chargeback measurement. When deployed for Amazon EC2 launch type tasks, the metering mechanism tracks the vCPU and memory that Amazon ECS reserves in the tasks’ lifetime. Then, with the chargeback measurement, the cluster owner can associate a cost with these tasks based on the cost incurred by the container instances that they’re running on. The following image shows how this solution works.

Here, unlike the previous solution, both users can use all the container instances of the ECS cluster.

With this solution, customers can start using a shared Amazon ECS cluster to deploy their tasks on any of the container instances. After the solution has been deployed, the cost for a service can be calculated at any point in time, using the cluster and the service name as input parameters.

With Fargate tasks, the vCPU and memory usage details are already available in vCPU-hours and GB-hours, respectively. The chargeback measurement in the solution aggregates the CPU and memory reservation of all the tasks that ever ran as part of a service. It associates a cost to this aggregated CPU and memory reservation by multiplying it with Fargate’s per-vCPU-per-hour and per-GB-per-hour cost, respectively.
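A minimal sketch of that aggregation in Python, assuming per-vCPU-hour and per-GB-hour unit prices (the values below are placeholders for illustration; look up the real Fargate rates for your region in the AWS Price List API):

```python
# Hypothetical Fargate unit prices; replace with the real
# per-vCPU-hour and per-GB-hour rates for your region.
PRICE_PER_VCPU_HOUR = 0.04048
PRICE_PER_GB_HOUR = 0.004445

def fargate_service_cost(tasks):
    """Aggregate cost for a service.

    tasks: iterable of (vcpus, memory_gb, runtime_hours) tuples,
    one per task that ever ran as part of the service.
    """
    total = 0.0
    for vcpus, mem_gb, hours in tasks:
        total += vcpus * hours * PRICE_PER_VCPU_HOUR   # vCPU-hours cost
        total += mem_gb * hours * PRICE_PER_GB_HOUR    # GB-hours cost
    return total

# Two tasks, each reserving 0.25 vCPU and 0.5 GB, running 10 hours:
cost = fargate_service_cost([(0.25, 0.5, 10), (0.25, 0.5, 10)])
```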

This solution has the following considerations:

  • Amazon EC2 pricing: For the base price of the container instance, we’re considering the On-Demand price.
  • Platform costs: Common costs for the cluster (the Amazon EBS volume that the containers are launched from, Amazon ECR, etc.) are treated as the platform cost for all of the services running on the cluster.
  • Networking cost: When you’re using bridge or host networking, there is no mechanism to divide costs among different tasks that are launched on the container instance.
  • Elastic Load Balancing or Application Load Balancer costs: If services sit behind multiple target groups of an Application Load Balancer, there is no direct way of dividing costs per target group.

Solution components

The solution has two components: a metering mechanism and a chargeback measurement.

The metering mechanism consists of the following parts:

  • CloudWatch Events rule
  • Lambda function
  • DynamoDB table

The chargeback measurement consists of the following parts:

  • Python script
  • AWS Price List Service API

Metering mechanism

The following image shows the architecture of the solution’s metering mechanism.

To deploy the metering mechanism, the user needs to do the following.

  1. Create a CloudWatch Events rule to trigger a Lambda function on an Amazon ECS task state change event. Typically, a task state change event is generated by a call to the StartTask, RunTask, or StopTask API operations, or when an Amazon ECS service scheduler starts or stops a task.
  2. Create a DynamoDB table that the Lambda function can update.
  3. Every time the Lambda function is invoked, it updates the DynamoDB table with details of the Amazon ECS task.

With the first run of the metering mechanism, it takes stock of all running Amazon ECS tasks across all services across all clusters. This data resides in DynamoDB from then on, and the solution’s chargeback measurement uses it.
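At its core, such a Lambda function only needs to pull a few fields out of the task state change event and write them to the table. A simplified sketch is below; the field names follow the ECS task state change event format, but the record shape itself is an assumption for illustration, not the exact schema used by the script on GitHub:

```python
def extract_task_record(event):
    """Build a DynamoDB-ready item from an ECS task state change event."""
    detail = event["detail"]
    return {
        "taskArn": detail["taskArn"],
        "clusterArn": detail["clusterArn"],
        "lastStatus": detail["lastStatus"],
        "cpu": detail.get("cpu"),         # CPU units reserved, as a string
        "memory": detail.get("memory"),   # MiB reserved, as a string
        "startedAt": detail.get("startedAt"),
        "stoppedAt": detail.get("stoppedAt"),  # absent while task runs
    }

# In the real Lambda handler, the resulting item would be written with
# boto3's put_item/update_item against the ECSTaskStatus table.
sample_event = {
    "detail-type": "ECS Task State Change",
    "detail": {
        "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/abc123",
        "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/default",
        "lastStatus": "RUNNING",
        "cpu": "256",
        "memory": "512",
        "startedAt": "2018-10-18T12:00:00Z",
    },
}
record = extract_task_record(sample_event)
```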

Chargeback measurement

The following image shows the architecture of the chargeback measurement.

When you need to find the cost associated with a service, run the ecs-chargeback Python script with the cluster and service names as parameters. This script performs the following actions.

  1. Find all the tasks that have ever run or are currently running as part of the service.
  2. For each task, calculate the uptime.
  3. For each task, find the container instance type (for Amazon EC2 type tasks).
  4. Find what percentage of the host’s compute or memory resources the task has reserved. If there is no task-level CPU reservation for Amazon EC2 launch type tasks, a CPU reservation of 128 CPU shares (0.125 vCPUs) is assumed. In Amazon EC2 launch type tasks, you have to specify memory reservation at the task or container level during creation of the task definition.
  5. Associate that percentage with a cost.
  6. (Optional) Use the following parameters:
    • Duration: By default, the script shows the service cost for its complete uptime. You can use the duration parameter to get the cost for a particular month, the month to date, or the last n days.
    • Weight: This parameter is a weighted fraction that you can use to disproportionately divide the instance cost between vCPU and memory. By default, this value is 0.5.

The vCPU and memory costs are calculated using the following formulas:

  • Task vCPU cost = (task vCPU reservation/total vCPUs in the instance) * (cost of the instance) * (vCPU/memory weight) * task run time in seconds
  • Task memory cost = (task memory reservation/total memory in the instance) * (cost of the instance) * (1- vCPU/memory weight) * task run time in seconds
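As a sketch, those two formulas translate directly into Python. The formulas multiply by run time in seconds, so the instance cost must be a per-second price; the example instance below (2 vCPUs, 4 GiB, $0.0116/hour) is hypothetical, and the 0.5 default weight matches the script's default described above:

```python
def task_cost(task_vcpus, task_mem_gib, instance_vcpus, instance_mem_gib,
              instance_price_per_sec, runtime_sec, weight=0.5):
    """Split an instance's cost between a task's vCPU and memory reservations.

    weight is the fraction of the instance price attributed to vCPU;
    (1 - weight) is attributed to memory.
    """
    vcpu_cost = (task_vcpus / instance_vcpus) * instance_price_per_sec \
        * weight * runtime_sec
    mem_cost = (task_mem_gib / instance_mem_gib) * instance_price_per_sec \
        * (1 - weight) * runtime_sec
    return vcpu_cost + mem_cost

# Example: a task reserving 0.5 vCPU (512 CPU shares / 1024) and 1 GiB
# on a 2-vCPU, 4-GiB instance priced at $0.0116/hour, running one hour.
price_per_sec = 0.0116 / 3600
cost = task_cost(0.5, 1, 2, 4, price_per_sec, 3600)
```

A task reserving the whole instance for an hour recovers the full hourly price, which is a quick sanity check on the formulas.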

Solution deployment and cost measurement

Here are the steps to deploy the solution in your AWS account and then calculate the service chargeback.

Metering mechanism

1. Create a DynamoDB table named ECSTaskStatus to capture details of an ECS task state change CloudWatch event.

Primary partition key: taskArn. Type: string.

Provision RCUs or WCUs depending on your Amazon ECS usage.

For the rest, keep the default values.

aws dynamodb create-table --table-name ECSTaskStatus \
--attribute-definitions AttributeName=taskArn,AttributeType=S \
--key-schema AttributeName=taskArn,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=10,WriteCapacityUnits=20

2. Create an IAM policy named LambdaECSTaskStatusPolicy that allows the Lambda function to make the following API calls. Create a local copy of the policy document LambdaECSTaskStatusPolicy.JSON from GitHub.

  • ecs: DescribeContainerInstances
  • dynamodb: BatchGetItem, BatchWriteItem, PutItem, GetItem, and UpdateItem
  • logs: CreateLogGroup, CreateLogStream, and PutLogEvents

aws iam create-policy --policy-name LambdaECSTaskStatusPolicy \
--policy-document file://LambdaECSTaskStatusPolicy.JSON

3. Create an IAM role named LambdaECSTaskStatusRole and attach the policy to the role. Replace <Policy ARN> with the Amazon Resource Name (ARN) of the IAM policy.

aws iam create-role --role-name LambdaECSTaskStatusRole \
--assume-role-policy-document \
'{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": {"Service": "lambda.amazonaws.com"}, "Action": "sts:AssumeRole"}}'

aws iam attach-role-policy --policy-arn <Policy ARN> --role-name LambdaECSTaskStatusRole

4. Create a Lambda function named ecsTaskStatus that PUTs or UPDATEs the Amazon ECS task details to the ECSTaskStatus DynamoDB table. This function has the following details:

  • Runtime: Python 3.6
  • Memory setting: 128 MB
  • Timeout: 3 seconds
  • Execution role: LambdaECSTaskStatusRole
  • Code: ecsTaskStatus.py. Use the inline code editor on the Lambda console to author the function.
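The core of such a function is small: pull the task fields out of the event's detail section and upsert them into the table. The sketch below is illustrative only (the real ecsTaskStatus.py is on GitHub, and its item layout may differ); the table parameter is injectable so the logic can be exercised without AWS.

```python
def extract_task_item(event):
    """Flatten the fields of interest from an ECS Task State Change
    event into a DynamoDB item keyed on taskArn."""
    detail = event["detail"]
    return {
        "taskArn": detail["taskArn"],
        "clusterArn": detail["clusterArn"],
        "lastStatus": detail["lastStatus"],
        # Fargate tasks have no container instance; store a placeholder.
        "containerInstanceArn": detail.get("containerInstanceArn", "NA"),
        "updatedAt": detail["updatedAt"],
    }

def lambda_handler(event, context, table=None):
    # In real use, pass boto3.resource("dynamodb").Table("ECSTaskStatus").
    # put_item overwrites any existing item with the same taskArn, which
    # gives the PUT-or-UPDATE behavior described above.
    item = extract_task_item(event)
    if table is not None:
        table.put_item(Item=item)
    return item
```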


5. Create a CloudWatch Events rule for Amazon ECS task state change events and configure the Lambda function as the target. The function puts or updates items in the ECSTaskStatus DynamoDB table with every Amazon ECS task’s details.

a. Create the CloudWatch Events rule.

aws events put-rule --name ECSTaskStatusRule \
--event-pattern '{"source": ["aws.ecs"], "detail-type": ["ECS Task State Change"], "detail": {"lastStatus": ["RUNNING", "STOPPED"]}}'

b. Add the Lambda function as a target to the CloudWatch Events rule. Replace <Lambda ARN> with the ARN of the Lambda function that you created in step 4.

aws events put-targets --rule ECSTaskStatusRule --targets "Id"="1","Arn"="<Lambda ARN>"

c. Add permissions for CloudWatch Events to invoke Lambda. Replace <CW Events Rule ARN> with the ARN of the CloudWatch Events rule that you created in step 5a.

aws lambda add-permission --function-name ecsTaskStatus \
--action 'lambda:InvokeFunction' --statement-id "LambdaAddPermission" \
--principal events.amazonaws.com --source-arn <CW Events Rule ARN>

The solution invokes the Lambda function only when an Amazon ECS task state change event occurs. Therefore, when the solution is deployed, no event is raised for current running tasks, and task details aren’t populated into the DynamoDB table. If you want to meter current running tasks, you can run the script ecsTaskStatus-FirstRun.py after creation of the DynamoDB table. This populates all running tasks’ details into the DynamoDB table. The script is idempotent.

ecsTaskStatus-FirstRun.py --region eu-west-1

Chargeback measurement

To find the cost for running a service, run the Python script ecs-chargeback, which has the following usage and arguments.

./ecs-chargeback -h
usage: ecs-chargeback [-h] --region REGION --cluster CLUSTER --service SERVICE
                      [--weight WEIGHT] [-v]
                      [--month MONTH | --days DAYS | --hours HOURS]

optional arguments:
  -h, --help            show this help message and exit
  --region REGION, -r REGION
                        AWS Region in which Amazon ECS service is running.
  --cluster CLUSTER, -c CLUSTER
                        ClusterARN in which Amazon ECS service is running.
  --service SERVICE, -s SERVICE
                        Name of the AWS ECS service for which cost has to be
                        calculated.
  --weight WEIGHT, -w WEIGHT
                        Floating point value that defines CPU:Memory Cost
                        Ratio to be used for dividing EC2 pricing
  -v, --verbose
  --month MONTH, -M MONTH
                        Show charges for a service for a particular month
  --days DAYS, -D DAYS  Show charges for a service for last N days
  --hours HOURS, -H HOURS
                        Show charges for a service for last N hours


To calculate the cost that a service incurs with Amazon EC2 launch type tasks, run the script as follows.

./ecs-chargeback -r eu-west-1 -c ecs-chargeback -s nginxsvc

The following is sample output of running this script.

# ECS Region  : eu-west-1, ECS Service Name: nginxsvc
# ECS Cluster : arn:aws:ecs:eu-west-1:675410410211:cluster/ecs-chargeback
# Amazon ECS Service Cost           : 26.547270 USD
#             (Launch Type : EC2)
#         EC2 vCPU Usage Cost       : 21.237816 USD
#         EC2 Memory Usage Cost     : 5.309454 USD

To get the chargeback for Fargate launch type tasks, run the script as follows.

./ecs-chargeback -r eu-west-1 -c ecs-chargeback -s fargatesvc

The following is sample output of this script.

# ECS Region  : eu-west-1, ECS Service Name: fargatesvc
# ECS Cluster : arn:aws:ecs:eu-west-1:675410410211:cluster/ecs-chargeback
# Amazon ECS Service Cost           : 118.653359 USD
#             (Launch Type : FARGATE)
#         Fargate vCPU Usage Cost   : 78.998157 USD
#         Fargate Memory Usage Cost : 39.655201 USD


This solution can help Amazon ECS users track and allocate costs for their deployed workloads. It might also help them save some costs by letting them share an Amazon ECS cluster among multiple users or teams. We welcome your comments and questions below. Please reach out to us if you would like to contribute to the solution.

Advanced analytics with table calculations in Amazon QuickSight

Post Syndicated from Sahitya Pandiri original https://aws.amazon.com/blogs/big-data/advanced-analytics-with-table-calculations-in-amazon-quicksight/

Amazon QuickSight recently launched table calculations, which enable you to perform complex calculations on your data to derive meaningful insights. In this blog post, we go through examples of applying these calculations to a sample sales data set so that you can start using these for your own needs.

You can find the sample data set used here.

What are table calculations?

By using a table calculation in Amazon QuickSight, you can derive metrics such as period-over-period trends. You can also create calculations within a specified window to compute metrics within that window, or benchmark against a fixed window calculation. Also, you can perform all these tasks at custom levels of detail. For example, you can compute year-over-year increase in sales within industries, or the percentage contribution of a particular industry within a state. You can also compute cumulative month-over-month sales within a year, how an industry ranks in sales within a state, and more.

You can compute these metrics using a combination of functions. These functions include runningSum, percentOfTotal, and percentDifference, plus a handful of base partition functions. The base partition functions that you can use for this case include sum, avg, count, distinct_count, rank, and denseRank. They also include minOver and maxOver, which can calculate minimum and maximum over partitions.

Partition functions

Before you apply these calculations, look at the following brief introduction to partition functions. A partition specifies the dimension that defines the window within which a calculation is performed.

As an example, let’s calculate the average sales within each industry across segments. We start by adding industry, segment, and sales to the visual. Adding a regular calculated field avg(sales) to the table gives the average of each segment within the industry, but not the average across each industry. To achieve this, create a calculated field using the avgOver calculation.

avgOver(aggregated measure, [partition by attribute, ...])

The aggregated measure here refers to the calculation to perform on the measure when it’s grouped by the dimensions in the visual. This calculation occurs before an average is applied within each industry partition.

Average by industry = avgOver(sum(sales), [industry])

Similarly, you can calculate the sum of sales, minimum and maximum of sales, and count of segments within each industry by using the sumOver, minOver, maxOver, and countOver functions, respectively.
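The windowing behavior of these functions can be illustrated outside QuickSight. This plain-Python sketch reproduces the semantics of avgOver (conceptual only, not QuickSight's implementation): each row receives the average of the measure over all rows that share its partition value.

```python
from collections import defaultdict

def avg_over(rows, measure, partition):
    """One value per row: the average of `measure` across all rows
    sharing the same `partition` value (the window)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[partition]].append(row[measure])
    return [sum(groups[row[partition]]) / len(groups[row[partition]])
            for row in rows]

rows = [
    {"industry": "Tech",    "segment": "SMB",        "sales": 100},
    {"industry": "Tech",    "segment": "Enterprise", "sales": 300},
    {"industry": "Finance", "segment": "SMB",        "sales": 50},
]
avg_over(rows, "sales", "industry")  # [200.0, 200.0, 50.0]
```

sumOver, minOver, maxOver, and countOver differ only in the aggregate applied to each group.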

Benchmark vs. actual sales

Let’s take another use case and see how each industry within a state performs when benchmarked against the average sales in the state.

To achieve this, add state, industry, and sales to a table visual and sort by the state. To calculate the benchmark, create a calculated field with the avgOver function partitioned by the State dimension.

avgOver(aggregated measure, [partition by attribute, ...])

State average = avgOver(sum(Sales), [ship_state])

Given that we added state, industry, and sales to the table, sum(sales) calculates the total sales of an industry within a state. To determine the variance of this value from the benchmark, simply create another calculated field.

Actual vs. benchmark = sum(sales) – State average

As with the calculations preceding, you can derive the percentage of sales within an industry compared to the total sales within the state by using percentOfTotal calculations.

Running Sum, Difference, and Percent Difference

The following use cases illustrate several more of these functions.

Use case 1: As a sales analyst, I want to create a report that shows cumulative sales by month within each industry from the beginning till the end of each calendar year.

To derive the cumulative monthly sales by industry, I need industry, date, and sales represented in a table chart. After adding the date field, I change the aggregation to month (as shown following).

I add a new calculated field to the analysis using the runningSum function. The runningSum function has the following syntax.

runningSum(aggregated measure, [sort attribute ASC/DESC, ...], [partition by attribute, ...])

The aggregated measure here refers to the aggregation that we want when grouping by the dimensions included in the visual. The sort attribute refers to the attribute that rows are sorted by before the running sum is computed. As mentioned earlier, the partition attributes specify the dimensions within which the running sum is contained, for each value of those dimensions.

In this use case, the aggregate measure that we want to measure is sum(sales), sorted by date and partitioned by industry and year. Plugging in these attributes, we arrive at the formula following.

runningSum(sum(sales),[Date ASC],[industry, truncDate("YYYY",Date)])

The square brackets around the sort-field and partition lists are mandatory. We then add this calculated field to the visual and sort by industry. Without the partition on year, runningSum would calculate the cumulative sum across all months, from the start of 2016 through the end of 2017.
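To make the sort-then-partition behavior concrete, here is the semantics of runningSum reproduced in plain Python (an illustration of the concept, not QuickSight's implementation):

```python
def running_sum(rows, measure, sort_key, partition_keys):
    """Cumulative sum of `measure` within each partition, accumulated
    in `sort_key` order; each row keeps its original position."""
    partitions = {}
    for i, row in enumerate(rows):
        key = tuple(row[k] for k in partition_keys)
        partitions.setdefault(key, []).append(i)
    out = {}
    for idxs in partitions.values():
        total = 0
        # Accumulate within the partition in sort order.
        for i in sorted(idxs, key=lambda j: rows[j][sort_key]):
            total += rows[i][measure]
            out[i] = total
    return [out[i] for i in range(len(rows))]

rows = [
    {"industry": "Tech",   "month": 1, "sales": 10},
    {"industry": "Tech",   "month": 2, "sales": 20},
    {"industry": "Retail", "month": 1, "sales": 5},
    {"industry": "Tech",   "month": 3, "sales": 30},
]
running_sum(rows, "sales", "month", ["industry"])  # [10, 30, 5, 60]
```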

You can also represent cumulative monthly sales by using line charts and other chart types. In a line chart, the slope of the lines shows the rate at which the industry is growing in a year. For example, the growth of the tech industry seems slow in 2016 but picked up rapidly in 2017.

You can also represent the total sales and cumulative sales in a combo chart and filter by the industry.

Use case 2: Let’s now calculate the percentage increase in sales month-over-month per industry within a calendar year. We can achieve this by using the percentDifference function. This function calculates percent variance in a metric compared to the previous or following metric, sorted and partitioned by the set of specified dimensions.

percentDifference(aggregated measure, [sort attribute ASC/DESC, ...], -1 or 1, [partition by attribute, ...])

In this formula, the -1 or 1 value indicates whether the difference should be calculated on the preceding or succeeding values respectively. Plugging in the required fields, we arrive at the formula following.

percentDifference(sum(sales),[Date ASC],-1,[industry, truncDate("YYYY",Date)])

If you want only the difference, use the difference function.

difference(aggregated measure, [sort attribute ASC/DESC, ...], lookup index -1 or 1, [partition by attribute, ...])
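The lookup-index argument is easiest to see in isolation. This sketch mirrors percentDifference over a single, already-sorted partition (conceptual only): -1 compares each value with its predecessor, 1 with its successor, and rows with no neighbor get no value.

```python
def percent_difference(values, direction=-1):
    """Percent change of each value relative to the neighbor at offset
    `direction` in an already-sorted partition (-1: previous, 1: next)."""
    out = []
    for i, v in enumerate(values):
        j = i + direction
        if 0 <= j < len(values):
            out.append((v - values[j]) / values[j] * 100)
        else:
            out.append(None)  # no neighbor at the partition edge
    return out

percent_difference([100, 120, 90])  # [None, 20.0, -25.0]
```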


Table calculations are available in both Standard and Enterprise editions, in all supported AWS Regions. For more information, see the Amazon QuickSight documentation.


About the Author

Sahitya Pandiri is a technical program manager with Amazon Web Services. Sahitya has been in product/program management for five years and has built multiple products in the retail, healthcare, and analytics spaces. She enjoys problem solving and leveraging technology to simplify processes.




Pakistan causes YouTube outage for two-thirds of world (ABC)

Post Syndicated from corbet original https://lwn.net/Articles/768655/rss

ABC News has the story on why YouTube went down; it’s a good example of just how robust the Internet is (or isn’t) anymore. “An Internet expert explained that Sunday’s problems arose when a Pakistani telecommunications company accidentally identified itself to Internet computers as the world’s fastest route to YouTube. But instead of serving up videos of skateboarding dogs, it sent the traffic into oblivion.

On Friday, the Pakistan Telecommunication Authority ordered 70 Internet service providers to block access to YouTube.com, because of anti-Islamic movies on the video-sharing site, which is owned by Google.”

Sharing Vespa (Open Source Big Data Serving Engine) at the SF Big Analytics Meetup

Post Syndicated from amberwilsonla original https://yahooeng.tumblr.com/post/179150583591


By Jon Bratseth, Distinguished Architect, Oath

I had the wonderful opportunity to present Vespa at the SF Big Analytics Meetup on September 26th, hosted by Amplitude. Several members of the Vespa team (Kim, Frode and Kristian) also attended. We all enjoyed meeting with members of the Big Analytics community to discuss how Vespa could be helpful for their companies. Thank you to Chester Chen, T.J. Bay, and Jin Hao Wan for planning the meetup, and here’s our presentation, in case you missed it (slides are also available here):

Largely developed by Yahoo engineers, Vespa is our big data processing and serving engine, available as open source on GitHub. It’s in use by many products, such as Yahoo News, Yahoo Sports, Yahoo Finance and Oath Ads Platforms. 

Vespa use is growing even more rapidly; since it is open source under a permissive Apache license, Vespa can power other external third-party apps as well. 

A great example is Zedge, which uses Vespa for search and recommender systems to support content discovery for personalization of mobile phones (Android, iOS, and Web). Zedge uses Vespa in production to serve millions of monthly active users.

Visit https://vespa.ai/ to learn more and download the code. We encourage code contributions and welcome opportunities to collaborate.

[$] Secure key handling using the TPM

Post Syndicated from jake original https://lwn.net/Articles/768419/rss

Trusted Computing has not had the best reputation over the years — Richard Stallman dubbing it “Treacherous Computing” probably hasn’t helped — though those fears of taking away users’ control of their computers have not proven to be founded, at least yet. But the Trusted Platform Module, or TPM, inside your computer can do more than just potentially enable lockdown. In our second report from Kernel Recipes 2018, we look at a talk from James Bottomley about how the TPM works, how to talk to it, and how he’s using it to improve his key handling.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/768617/rss

Security updates have been issued by CentOS (tomcat), Debian (asterisk, graphicsmagick, and libpdfbox-java), openSUSE (apache2 and git), Oracle (tomcat), Red Hat (kernel and Satellite 6.4), Slackware (libssh), SUSE (binutils, ImageMagick, and libssh), and Ubuntu (clamav, libssh, moin, and paramiko).

Four Year Old libssh Bug Leaves Servers Wide Open

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/10/four-year-old-libssh-bug-leaves-servers-wide-open/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Four Year Old libssh Bug Leaves Servers Wide Open

A fairly serious four-year-old libssh bug has left servers vulnerable to remote compromise. Fortunately, the attack surface isn’t that big, as neither OpenSSH nor GitHub’s implementation is affected.

The bug is in the not so widely used libSSH library, not to be confused with libssh2 or OpenSSH – which are very widely used.

There’s a four-year-old bug in the Secure Shell implementation known as libssh that makes it trivial for just about anyone to gain unfettered administrative control of a vulnerable server.

Read the rest of Four Year Old libssh Bug Leaves Servers Wide Open now! Only available at Darknet.

HackSpace magazine 12: build your first rocket!

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/hackspace-magazine-12-build-your-first-rocket/

Move over, Elon Musk — there’s a new rocket maverick in town: YOU!


Step inside the UK rocketry scene, build and launch a rocket, design your own, and discover the open-source rocket programmes around the world! In issue 12, we go behind the scenes at a top-secret launch site in the English Midlands to have a go at our own rocket launch, find the most welcoming bunch of people we’ve ever met, and learn about centre of gravity, centre of pressure, acceleration, thrust, and a load of other terms that make us feel like NASA scientists.

Meet the Maker: Josef Prusa

In makerception news, we meet the maker who makes makers, Josef Prusa, aka Mr 3D Printing, and we find out what’s next for his open-source hardware empire.

Open Science Hardware

There are more than seven billion people on the planet, and 90-odd percent of them are locked out of the pursuit of science. Fishing, climate change, agriculture: it all needs data, and we’re just not collecting as much as we should. Global Open Science Hardware is working to change that by using open, shared tech — read all about it in issue 12!

And there’s more…

As always, the new issue is packed with projects: make a way-home machine to let your family know exactly when you’ll walk through the front door; build an Alexa-powered wheel of fortune to remove the burden of making your own decisions; and pay homage to Indiana Jones and the chilled monkey brains in Temple of Doom with a capacitive touch haunted monkey skull (no monkeys were harmed in the making of this issue). All that, plus steampunk lighting, LEDs, drills, the world’s biggest selfie machine, and more, just for you. So go forth and make something!

Get your copy of HackSpace magazine

If you like the sound of this month’s content, you can find HackSpace magazine in WHSmith, Tesco, Sainsbury’s, and independent newsagents in the UK from tomorrow. If you live in the US, check out your local Barnes & Noble, Fry’s, or Micro Center next week. We’re also shipping to stores in Australia, Hong Kong, Canada, Singapore, Belgium, and Brazil, so be sure to ask your local newsagent whether they’ll be getting HackSpace magazine. And if you’d rather try before you buy, you can always download the free PDF now.

Subscribe now

“Subscribe now” may not be subtle as a marketing message, but we really think you should. You’ll get the magazine early, plus a lovely physical paper copy, which has a really good battery life.

Oh, and twelve-month print subscribers get an Adafruit Circuit Playground Express loaded with inputs and sensors and ready for your next project. Tempted?

The post HackSpace magazine 12: build your first rocket! appeared first on Raspberry Pi.

Toward Community-Oriented, Public & Transparent Copyleft Policy Planning

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2018/10/16/mongodb-copyleft-drafting.html

[ A similar version
was crossposted
on Conservancy’s blog
. ]

More than 15 years ago, Free, Libre, and Open Source Software (FLOSS)
community activists successfully argued that licensing proliferation was a
serious threat to the viability of FLOSS. We convinced companies to end
the era of
“vanity” licenses. Different charities — from the Open Source Initiative (OSI) to
the Free Software Foundation (FSF) to the Apache Software Foundation — all agreed we were better
off with fewer FLOSS licenses. We de-facto instituted what my colleague
Richard Fontana once called the “Rule of Three” —
assuring that any potential FLOSS license should be met with suspicion
unless (a) the OSI declares that it meets their Open Source Definition,
(b) the FSF declares that it meets their Free Software Definition, and (c)
the Debian Project declares that it meets their Debian Free Software
Guidelines. The work of those organizations reduced license proliferation
from a radioactive threat to safe background noise. Everyone thought the
problem was solved. Pointless license drafting had become a rare practice,
and updated versions of established licenses were handled with public engagement
and close discussion with the OSI and other license evaluation experts.

Sadly, the age of
license proliferation has returned. It’s harder to stop this time, because
this isn’t merely about corporate vanity licenses. Companies now have complex FLOSS policy
agendas, and those agendas are not to guarantee software
freedom for all. While it is annoying that our community must again confront an
old threat, we are fortunate the problem is not hidden: companies proposing
their own licenses are now straightforward about their new FLOSS licenses’ purposes: to maximize profits.

Such licenses are now common, but they seem like FLOSS licenses only to the most casual of readers.
We’ve succeeded in convincing everyone to “check the OSI license
list before you buy”. We can therefore easily dismiss licenses like Common
Clause merely
by stating they are non-free/non-open-source
and urging the community to
avoid them. But, the next stage of tactics have begun, and they are
harder to combat. What happens when for-profit companies promulgate their
own hyper-aggressive (quasi-)copyleft licenses that seek to pursue the key
policy goal of “selling proprietary licenses” over
“defending software freedom”? We’re about to find out,
because, yesterday,
MongoDB declared themselves the arbiter of what “strong copyleft” means.

Understanding MongoDB’s Business Model

To understand the policy threat inherent in MongoDB’s so-called
“Server Side Public License, Version 1”
, one must first understand the
fundamental business model for MongoDB and companies like them. These
companies use copyleft for profit-making rather than freedom-protecting. First, they require full control (either via ©AA or CLA) of
all copyrights in the work, and second, they offer two independent lines of
licensing. Publicly, they provide the software under the strongest
copyleft license available. Privately, the same (or secretly improved)
versions of the software are available under fully proprietary terms. In
theory, this could be
merely selling exceptions: a benign manner of funding more Free Software code —
giving the proprietary option only to those who request it. In practice
— in all examples that have been even mildly successful (such as
MongoDB and MySQL) — this mechanism serves as a warped proprietary
licensing shake-down: “Gee, it looks like you’re violating the
copyleft license. That’s a shame. I guess you just need to abandon the
copyleft version and buy a proprietary license from us to get yourself out
of this jam, since we don’t plan to reinstate any lost rights and
permissions under the copyleft license.” In other words, this
structure grants exclusive and dictatorial power to a for-profit company as
the arbiter of copyleft compliance. Indeed, we have never seen any of
these companies follow or endorse the Principles of
Community-Oriented GPL Enforcement
. While it has made me unpopular with some, I still make no apologies that I have since 2004
consistently criticized this “proprietary relicensing” business
model as “nefarious”, once I started hearing regular reports that MySQL AB (now
Oracle) asserts GPL violations against compliant uses merely to scare
users into becoming “customers”. Other companies,
including MongoDB, have since emulated this activity.

Why Seek Even Stronger Copyleft?

The GNU Affero General Public License (AGPL) has done a wonderful job defending the software freedom of
community-developed projects
like Mastodon
and Mediagoblin.
So, we should answer with skepticism
a solitary
for-profit company coming
forward to claim
that “Affero GPL has not resulted in sufficient
legal incentives for some of the largest users of infrastructure software
… to participate in the community. Many open source developers are
struggling with a similar reality”. If the last sentence were on
Wikipedia, I’d edit it to add a Citation Needed tag, as I know
of no multi-copyright-held or charity-based AGPL’d project
that has “struggled with this reality”. In fact, it’s only a
“reality” for those that engage in proprietary relicensing.
Eliot Horowitz, co-founder of MongoDB and promulgator of their new license, neglects to mention that.

The most glaring problem with this license, which Horowitz admits in his OSI license-review list post, is that there was no community drafting process. Instead, a for-profit company, whose primary goal is to
use copyleft as a weapon against the software-sharing community for the purpose of converting that “community” into paying
customers, published this license as a fait accompli without prior public discussion of the license text.

If this action were an isolated incident by one company, ignoring it is surely the best response. Indeed,
I urged everyone to simply ignore the Commons Clause. Now, we see
a repackaging of the Commons Clause into a copyleft-like box (with reuse of Commons Clause’s text
such as “whose value derives, entirely or substantially, from the functionality of the Software”). Since
both licenses were drafted in secret, we cannot know if the reuse of text was simply because the same lawyer was
employed to write both, or if MongoDB has joined a broader and more significant industry-wide strategy to replace
existing FLOSS licensing with alternatives that favor businesses over individuals.

The Community Creation Process Matters

Admittedly, the history of copyleft has been one of slowly evolving
community-orientation. GPLv1 and GPLv2 were drafted in private, too, by
Richard Stallman and FSF’s (then) law firm lawyer, Jerry Cohen. However, from
the start, the license steward was not Stallman himself, nor the law firm,
but the FSF, a 501(c)(3) charity dedicated to
serve the public good. As such, the FSF made substantial efforts in the
GPLv3 process to reorient the drafting of copyleft licenses as a public
policy and legislative process. Like all legislative processes, GPLv3 was
not ideal — and I was even personally miffed to be relegated to the
oft-ignored “GPLv3 Discussion Committee D” — but the GPLv3 process was
undoubtedly a step forward in FLOSS community license drafting.
The Mozilla Corporation made efforts for community collaboration in redrafting
the MPL, and specifically included the OSI and the FSF (arbiters of the
Open Source Definition and Free Software Definition (respectively)) in
MPL’s drafting deliberations. The modern acceptable standard is a leap rather
than a step forward: a fully public, transparent drafting process with a fully
public draft repository, as the copyleft-next project
has done
. I think we should now meet with utmost suspicion any license
that does not use copyleft-next’s approach of “running licensing drafting
as a Free Software project”.

I was admittedly skeptical of that approach at first. What I have seen
six years since Richard Fontana started copyleft-next is that, simply put,
the key people who are impacted most fundamentally by a software
license are most likely to be
aware of, and engage in, a process if it is fully public, community-oriented,
and uses community tools, like Git.

Like legislation, the policies outlined in copyleft licenses impact the
general public, so the general public should be welcomed to the
drafting. At Conservancy, we don’t draft our own
licenses, so our contracts with
software developers and agreements with member projects state that the
licenses be both “OSI-approved Open Source” and
“FSF-approved GPL-compatible Free Software”. However, you can
imagine that Conservancy has a serious vested interest in what licenses are
ultimately approved by the OSI and the FSF. Indeed, with so much money
flowing to software developers bound by those licenses, our very charitable
mission could be at stake if OSI and the FSF began approving proprietary
licenses as Open, Free, and/or GPL-compatible. I want to therefore see
license stewards work, as Mozilla did, to make the vetting process easier,
not harder, for these organizations.

A community drafting process allows everyone to vet the license text early and often,
to investigate the community and industry impact of the license, and to probe the license drafter’s intent through the acceptance and rejection of proposed modified text (ideally through a DVCS). With for-profit actors seeking to
gain policy control of fundamental questions such as “what is strong
copyleft?”, we must demand full drafting transparency and frank public discussion.

The Challenge Licensing Arbiters Face

OSI, FSF, and Debian have a huge challenge before them. Historically, the
FSF was the only organization who sought to push the boundary of strong
copyleft. (Full disclosure: I created the Affero clause while working for
the FSF in 2002, inspired by Henry Poole’s useful and timely demands for a true network
services copyleft.) Yet, the Affero clause was itself controversial. Many complained that it changed the fundamental rules of
copyleft. While “triggered only on distribution, not
modification” was a fundamental rule of the regular GPL, we
as a community — over time and much public debate — decided the Affero clause is a legitimate copyleft, and AGPL was
declared Open Source by OSI and DFSG-free by Debian.

That debate was obviously framed by the FSF. The FSF, due
to public pressure, compromised by leaving the AGPL as an indefinite
fork of the GPL (i.e., the FSF did not include the Affero clause in the plain GPL). While I
personally lobbied (from GPLv3 Discussion Committee D and elsewhere) for the merger
of AGPL and GPL during the GPLv3 drafting process, I respect the decision
of the FSF, which was informed not by my one voice,
but the voices of the entire community.

Furthermore, the FSF is a charity, chartered to serve the public good
and the advancement of software freedom for users and developers. MongoDB
is a for-profit company, chartered to serve the wallets of its owners.
While MongoDB (like any other company) should be welcomed on equal footing
to individuals, charities, and trade-associations to the debate about the
future of copyleft, we should not accept their active framing of that
debate. Submitting this license to OSI for approval without any public
community discussion, and without any discussion whatsoever with the key
charities in the community, is unacceptable. The OSI should now adopt a new requirement for license approval — namely, that licenses without a community-oriented drafting
process should be rejected for the meta-reason of “non-transparent
drafting”, regardless of their actual text. This will have the added
benefit of forcing future license drafters to come to OSI, on their public mailing
lists, before the license is finalized. That will save OSI the painstaking
work of walking back bad license drafts, which has in recent years consumed
much expert time by OSI’s volunteers.

Welcoming All To Public Discussion

Earlier this year, Conservancy announced plans to host and organize
the first annual CopyleftConf.
Conservancy decided to do this because Conservancy seeks to create a truly
open, friendly, and welcoming forum for discussion about the past and future of copyleft as
a strategy for defending software freedom. We had no idea when
Karen and I first mentioned the possibility of running CopyleftConf (during
the Organizers’ Panel at the end of the Legal and Policy DevRoom at FOSDEM
2018 in February 2018) that multiple companies would come forward and seek
to control the microphone on the future of copyleft. Now that MongoDB has
done so, I'm very glad that the conference was already organized and on the
calendar.

Despite my criticisms of MongoDB, I welcome Eliot Horowitz, Heather Meeker (the law firm lawyer who drafted MongoDB’s new license and the Commons Clause), or anyone else who was involved in the
creation of MongoDB’s new license to submit a talk.
Conservancy will be announcing soon the independent group of copyleft
experts (and critics!) who will make up the Program Committee and will
independently evaluate the submissions. Even if a talk is rejected, I
welcome rejected proposers to attend and speak about their views in the hallway track and
the breakout sessions.

One of the most important principles in copyleft policy that our community
has learned is that commercial, non-commercial, and individual actors
should have equal footing with regard to rights assured by the copyleft
licenses themselves. There is no debate about that; we all agree that
copyleft codebases become meeting places for hobbyists, companies, charities,
and trade associations to work together toward common goals and in harmony
and software freedom. With this blog post, I call on everyone to continue
on the long road to applying that same principle to the meta-level of how
these licenses are drafted and how they are enforced. While we have done some work recently on the latter, not
enough has been done on the former. MongoDB’s actions today give us an
opportunity to begin that work anew.

[0] While Conservancy does
not draft any main FLOSS license texts, Conservancy does help
with the drafting of additional permissions
upon the request of our
member projects. Note that additional permissions (sometimes called license
exceptions) grant permission to engage in activities that the main license
would otherwise prohibit. As such, by default, additional permissions can
only make a copyleft license weaker, never stronger.
