Tag Archives: tracker

Court of Justice of the EU: on access to The Pirate Bay

Post Syndicated from nellyo original https://nellyo.wordpress.com/2017/06/15/the-pirate-bay-5/

Yesterday the Court of Justice of the EU published its judgment in Case C‑610/15, Stichting Brein v Ziggo BV and XS4ALL Internet BV.

The judgment concerns the operation of, and access to, The Pirate Bay.

The dispute

9 Ziggo and XS4ALL are internet access providers. A significant number of their subscribers use the online sharing platform TPB, an indexer of BitTorrent files. BitTorrent is a protocol through which users (called “peers”) can share files. The essential feature of BitTorrent is that the files to be shared are divided into small segments, which removes the need for a central server to store those files and thus lightens the load on individual servers during the sharing process. In order to share files, users must first download specific software called a “BitTorrent client”, which is not offered by the online sharing platform TPB. That BitTorrent client is software which allows torrent files to be created.

10      Users (called “seeders”) who wish to make a file on their computer available to other users (called “leechers”) must create a torrent file using their BitTorrent client. Torrent files refer to a central server (called a “tracker”) which identifies the users available to share a particular torrent file and the media file behind it. Those torrent files are uploaded (upload) by the seeders to an online sharing platform such as TPB, which then indexes them so that they can be found by the users of that platform and so that the works to which the torrent files refer can be downloaded (download) onto those users’ computers in separate segments through their BitTorrent client.

11      Magnet links are often used instead of torrent files. Those links identify the content of a torrent and refer to it by means of a digital fingerprint.

12      The great majority of the torrent files offered on the online sharing platform TPB refer to copyright-protected works, without the rightholders having authorised the administrators or the users of that platform to carry out the acts of sharing in question.

13      Stichting Brein’s principal claim in the proceedings before the national court is that Ziggo and XS4ALL be ordered to block the domain names and IP addresses of the online sharing platform TPB, in order to prevent the services of those internet access providers from being used to infringe the copyright and related rights of the rightholders whose interests Stichting Brein protects.

The questions referred

In those circumstances, the Hoge Raad der Nederlanden (Supreme Court of the Netherlands) decided to stay the proceedings and to refer the following questions to the Court for a preliminary ruling:

“1)      Is there a communication to the public within the meaning of Article 3(1) of Directive 2001/29 by the operator of a website, if no protected works are available on that website, but a system exists […] by means of which metadata on protected works present on the users’ computers is indexed and categorised for users, so that the users can trace, upload and download the protected works on that basis?

2)      If Question 1 is answered in the negative:

Do Article 8(3) of Directive 2001/29 and Article 11 of Directive 2004/48 offer any scope for obtaining an injunction against an intermediary as referred to in those provisions, if that intermediary facilitates the infringing acts of third parties in the way referred to in Question 1?”

We already have the Opinion of Advocate General Szpunar, according to which

the fact that the operator of a website indexes files containing copyright-protected works which are offered for sharing on a peer-to-peer network, and provides a search engine that allows those files to be located, constitutes a communication to the public within the meaning of Article 3(1) of Directive 2001/29, where the operator is aware that a work has been made available on the network without the consent of the copyright holders but does not take action to block access to that work.

The judgment

The concept of “communication to the public” includes two cumulative criteria, namely an “act of communication” of a work and the communication of that work to a “public” (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 29 and the case-law cited). In assessing whether a user makes an act of communication to the public within the meaning of Article 3(1) of Directive 2001/29, several complementary criteria must be taken into account, which are not autonomous and are interdependent:

  • the indispensable role played by the user and the deliberate nature of its intervention. The user makes an act of communication when it intervenes, in full knowledge of the consequences of its action, to give its customers access to a protected work, particularly where, in the absence of that intervention, those customers would not in principle be able to enjoy the broadcast work (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 31 and the case-law cited);
  • the concept of “public” refers to an indeterminate number of potential recipients and, moreover, implies a fairly large number of persons (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 32 and the case-law cited);
  • a protected work must be communicated using specific technical means, different from those previously used, or, failing that, to a “new public”, that is to say, a public that was not already taken into account by the copyright holders when they authorised the initial communication of their work to the public (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 33 and the case-law cited);
  • whether the communication to the public is of a profit-making nature (judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 34 and the case-law cited).

42 In the present case, it is apparent from the order for reference that a significant number of Ziggo’s and XS4ALL’s subscribers have downloaded media files via the online sharing platform TPB. As is also clear from the observations submitted to the Court, that platform is used by a considerable number of persons, the TPB administrators reporting several tens of millions of “users” on their online sharing platform. In that regard, the communication at issue in the main proceedings covers, at the very least, all of that platform’s users. Those users can access the protected works shared via the platform at any time and simultaneously. The communication is therefore aimed at an indeterminate number of potential recipients and involves a large number of persons (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 45 and the case-law cited).

43      It follows that, by a communication such as that at issue in the main proceedings, protected works are indeed communicated to a “public” within the meaning of Article 3(1) of Directive 2001/29.

44      Moreover, as regards the question whether those works are communicated to a “new” public within the meaning of the case-law cited in paragraph 28 of the present judgment, it should be noted that, in its judgment of 13 February 2014, Svensson and Others (C‑466/12, EU:C:2014:76, paragraphs 24 and 31), and in its order of 21 October 2014, BestWater International (C‑348/13, EU:C:2014:2315, paragraph 14), the Court held that such a public is one which the copyright holders did not have in mind when they authorised the initial communication.

45      In the present case, it is apparent from the observations submitted to the Court that, on the one hand, the administrators of the online sharing platform TPB knew that the platform, which they make available to users and which they manage, provides access to works published without the authorisation of the rightholders and, on the other hand, that those same administrators expressly state, on the blogs and forums of that platform, their aim of making protected works available to users, and encourage those users to make copies of the works. In any event, it is clear from the order for reference that the administrators of the TPB online platform could not have been unaware that the platform provides access to works published without the rightholders’ authorisation, given the fact, expressly highlighted by the referring court, that a large proportion of the torrent files on the online sharing platform TPB refer to works published without the rightholders’ authorisation. In those circumstances, it must be held that there is a communication to a “new public” (see, to that effect, judgment of 26 April 2017, Stichting Brein, C‑527/15, EU:C:2017:300, paragraph 50).

46      Furthermore, there can be no dispute that the making available and management of an online sharing platform, such as that at issue in the main proceedings, is carried out with the purpose of obtaining a profit, since, as is clear from the observations submitted to the Court, that platform generates considerable advertising revenue.

47      Consequently, it must be held that the making available and management of an online sharing platform, such as that at issue in the main proceedings, constitutes a “communication to the public” within the meaning of Article 3(1) of Directive 2001/29.

48      In the light of all the foregoing considerations, the answer to the first question is that the concept of “communication to the public” must be interpreted as covering the making available and management, on the internet, of a sharing platform which, by means of indexation of metadata relating to protected works and the provision of a search engine, allows users of that platform to locate those works and to share them in the context of a peer-to-peer network.

The prevailing commentary is that the judgment strengthens the position of organisations seeking blocking orders.

Filed under: Digital, EU Law, Media Law Tagged: Court of the EU

[$] The state of bugs.python.org

Post Syndicated from jake original https://lwn.net/Articles/723513/rss

In a brief session at the 2017 Python Language Summit, Maciej Szulik gave an update on the state and plans for bugs.python.org (bpo). It is the Roundup-based bug tracker for Python; moving to GitHub has not changed that. He described the work that two Google Summer of Code (GSoC) students have done to improve the bug tracker.

New – USASpending.gov on an Amazon RDS Snapshot

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-usaspending-gov-on-an-amazon-rds-snapshot/

My colleague Jed Sundwall runs the AWS Public Datasets program. He wrote the guest post below to tell you about an important new dataset that is available as an Amazon RDS Snapshot. In the post, Jed introduces the dataset and shows you how to create an Amazon RDS DB Instance from the snapshot.

Jeff;


I am very excited to announce that, starting today, the entire public USAspending.gov database is available for anyone to copy via Amazon Relational Database Service (RDS). USAspending.gov covers all spending by the federal government, including contracts, grants, loans, employee salaries, and more. The data is available via a PostgreSQL snapshot, which provides bulk access to the entire USAspending.gov database, and is updated nightly. At this time, the database includes all USAspending.gov data for the second quarter of fiscal year 2017, and data going back to the year 2000 will be added over the summer. You can learn more about the database and how to access it on its AWS Public Dataset landing page.

Through the AWS Public Datasets program, we work with AWS customers to experiment with ways that the cloud can make data more accessible to more people. Most of our AWS Public Datasets are made available through Amazon S3 because of its tremendous flexibility and ability to scale to serve any volume of any kind of data files. What’s exciting about the USAspending.gov database is that it provides a great example of how Amazon RDS can be used to share an entire relational database quickly and easily. Typically, sharing a relational database requires extract, transform, and load (ETL) processes that demand redundant storage capacity, time for data transfer, and often scripts to migrate your database schema from one database engine to another. ETL processes can be so intimidating and cumbersome that they’re effectively impossible for many people to carry out.

By making their data available as a public Amazon RDS snapshot, the team at USAspending.gov has made it easy for anyone to get a copy of their entire production database for their own use within minutes. This will be useful for researchers and businesses who want to work with real data about all US Government spending and quickly combine it with their own data or other data resources.

Deploying the USASpending.gov Database Using the AWS Management Console
Let’s go through the steps involved in deploying the database in your AWS account using the AWS Management Console.

  1. Sign in to the AWS Management Console and select the US East (N. Virginia) region in the menu bar.
  2. Open the Amazon RDS Console and choose Snapshots in the navigation pane.
  3. In the filter for the search bar, select All Public Snapshots and search for 515495268755.
  4. Select the snapshot named arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db.
  5. Select Snapshot Actions -> Restore Snapshot. Select an instance size, and enter the other details, then click on Restore DB Instance.
  6. You will see that a DB Instance is being created from the snapshot, within your AWS account.
  7. After a few minutes, the status of the instance will change to Available.
  8. You can see the endpoint for your database on the main page along with other useful info.

Deploying the USASpending.gov Database Using the AWS CLI
You can also install the AWS Command Line Interface (CLI) and use it to create a DB Instance from the snapshot. Here’s a sample command:

$ aws rds restore-db-instance-from-db-snapshot --db-instance-identifier my-test-db-cli \
  --db-snapshot-identifier arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db \
  --region us-east-1

This will give you an ARN (Amazon Resource Name) that you can use to reference the DB Instance. For example:

$ aws rds describe-db-instances \
  --db-instance-identifier arn:aws:rds:us-east-1:917192695859:db:my-test-db-cli

This command will display the Endpoint.Address that you use to connect to the database.
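
If you prefer to script this step, here is a minimal sketch of the same restore-and-describe flow using boto3, the AWS SDK for Python. The instance identifier, instance class, and polling interval are illustrative choices, not part of the original walkthrough:

import time
import boto3

# The public snapshot lives in us-east-1, so the client must target that region.
rds = boto3.client("rds", region_name="us-east-1")

# Restore a new DB instance from the public USAspending.gov snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="my-test-db-boto3",   # illustrative name
    DBSnapshotIdentifier="arn:aws:rds:us-east-1:515495268755:snapshot:usaspending-db",
    DBInstanceClass="db.m4.large",             # pick a size that suits your workload
)

# Wait until the instance is available, then print the endpoint to connect to.
while True:
    db = rds.describe_db_instances(DBInstanceIdentifier="my-test-db-boto3")["DBInstances"][0]
    if db["DBInstanceStatus"] == "available":
        print(db["Endpoint"]["Address"])
        break
    time.sleep(30)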

Connecting to the DB Instance
After following the AWS Management Console or AWS CLI instructions above, you will have access to the full USAspending.gov database within this Amazon RDS DB instance, and you can connect to it using any PostgreSQL client using the following credentials:

  • Username: root
  • Password: password
  • Database: data_store_api

If you use psql, you can access the database using this command:

$ psql -h my-endpoint.rds.amazonaws.com -U root -d data_store_api

You should change the database password after you log in:

ALTER USER "root" WITH ENCRYPTED PASSWORD '{new password}';

If you can’t connect to your instance but think you should be able to, you may need to check your VPC Security Groups and make sure inbound and outbound traffic on the port (usually 5432) is allowed from your IP address.

Exploring the Data
The USAspending.gov data is very rich, so it will be hard to do it justice in this blog post, but hopefully these queries will give you an idea of what’s possible. To learn about the contents of the database, please review the USAspending.gov Data Dictionary.

The following query will return the total amount of money the government is obligated to pay for contracts awarded by NASA that include “Mars” or “Martian” in the description of the award:

select sum(total_obligation) from awards, subtier_agency 
  where (awards.description like '% MARTIAN %' OR awards.description like '% MARS %') 
  AND subtier_agency.name = 'National Aeronautics and Space Administration';

As I write this, the result I get for this query is $55,411,025.42. Note that the database is updated nightly and will include more historical data in the coming months, so you may get a different result if you run this query.

Now, here’s the same query, but looking for awards with “Jupiter” or “Jovian” in the description:

select sum(total_obligation) from awards, subtier_agency
  where (awards.description like '%JUPITER%' OR awards.description like '%JOVIAN%') 
  AND subtier_agency.name = 'National Aeronautics and Space Administration';

The result I get is $14,766,392.96.
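
If you would rather run these queries from Python than from psql, here is a minimal sketch using the psycopg2 driver. The host name is the same placeholder endpoint used in the psql example above, and the credentials are the defaults listed earlier; substitute your own instance’s Endpoint.Address:

import psycopg2

# Connect with the default credentials; replace the host with your instance's endpoint.
conn = psycopg2.connect(
    host="my-endpoint.rds.amazonaws.com",
    user="root",
    password="password",
    dbname="data_store_api",
)

with conn, conn.cursor() as cur:
    # The same NASA "Mars"/"Martian" query shown above.
    cur.execute("""
        select sum(total_obligation) from awards, subtier_agency
          where (awards.description like '% MARTIAN %' OR awards.description like '% MARS %')
          AND subtier_agency.name = 'National Aeronautics and Space Administration';
    """)
    print(cur.fetchone()[0])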

Questions & Comments
I’m looking forward to seeing what people can do with this data. If you have any questions about the data, please create an issue on the USAspending.gov API’s issue tracker on GitHub.

— Jed

Hacker dumps, magnet links, and you

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/hacker-dumps-magnet-links-and-you.html

In an excellent post pointing out that Wikileaks deserves none of the credit given to them for the #MacronLeaks, the author erroneously stated that after Archive.org took down the files, Wikileaks provided links to a second archive. This is not true. Instead, Wikileaks simply pointed to what’s known as “magnet links” of the first archive. Understanding magnet links is critical to understanding all these links and dumps, so I thought I’d describe them.

The tl;dr version is this: anything published via BitTorrent has a matching “magnet link” address, and the contents can still be reached via magnet links when the original publisher goes away.

In this case, the leaker uploaded the files to “archive.org”, a popular Internet archiving resource. This website allows you to download files either directly, which is slow, or peer-to-peer using BitTorrent, which is fast. As you know, BitTorrent works by all the downloaders exchanging pieces with each other, rather than getting them from the server. I give you a piece you don’t have, in exchange for a piece I don’t have.

BitTorrent, though, still requires a “torrent” (a ~30k file that lists all the pieces) and a “tracker” (here, http://bt1.archive.org:6969/announce) that keeps a list of all the peers so they can find each other. The tracker also makes sure that every piece is available from at least one peer.

When “archive.org” realized what was happening, they deleted the leaked files and the torrent, and stopped tracking the swarm.

However, BitTorrent has another feature called “magnet links”. This is simply the “hash” of the “torrent” file contents, which looks something like “06724742e86176c0ec82e294d299fba4aa28901a”. (This isn’t a hash of the entire file, but just the important parts, such as the filenames and sizes).
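
To make that concrete, here is a rough sketch of how the hash in a magnet link can be computed from a torrent file. It assumes the third-party bencodepy package for reading BitTorrent’s “bencoded” metadata, and uses the torrent file named later in this post purely as an example:

import hashlib
import bencodepy  # third-party parser for BitTorrent's bencode format

# Read the torrent description file (any .torrent file will do).
with open("langanerch_archive.torrent", "rb") as f:
    meta = bencodepy.decode(f.read())

# The magnet hash ("btih") is the SHA-1 of the bencoded "info" dictionary --
# the part holding the file names, sizes, and piece hashes, not the payload itself.
info_hash = hashlib.sha1(bencodepy.encode(meta[b"info"])).hexdigest()

print("magnet:?xt=urn:btih:" + info_hash)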

Along with downloading files, BitTorrent software on your computer also participates in a “distributed hash” network. When using a torrent file to download, your BitTorrent software will still tell other random BitTorrent clients about the hash. Knowledge of this hash thus spreads throughout the BitTorrent world. It’s only 20 bytes in size, so the average BitTorrent client can keep track of millions of such hashes while consuming very little memory or bandwidth.

If somebody decides they want to download the BitTorrent with that hash, they broadcast that request throughout this “distributed hash” network until they find one or more people with the full torrent. They then get the torrent description file from them, and also a list of peers in the “swarm” who are downloading the file.

Thus, when the original torrent description file, the tracker, and the original copy go away, you can still locate the swarm of downloaders through this hash. As long as all the individual pieces exist in the swarm, you can still successfully download the original file.

In this case, one of the leaked documents was a 2.3 gigabyte file called “langannerch.rar”. The torrent description file, called “langanerch_archive.torrent”, is 26 kilobytes in size. The hash (magnet link) is 20 bytes in size, written “magnet:?xt=urn:btih:06724742e86176c0ec82e294d299fba4aa28901a”. If you’ve got BitTorrent software installed and click on the link, you’ll join the swarm and start downloading the file, even though the original torrent/tracker/files have gone away.

According to my BitTorrent client, there are currently 108 people in the swarm downloading this file world-wide. I’m currently connected to 11 of them. Most of them appear to be located in France.

Looking at the General tab, I see that “availability” is 2.95. That means there exist 2.95 complete copies of the download. In other words, if there are 20 pieces, it means that for one of the pieces in the swarm, only 2 people have it. This is dangerously small — if those two people leave the network, then a complete copy of the dump will no longer exist in the swarm, and it’ll be impossible to download it all.
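
For anyone curious how a client arrives at a number like 2.95, a common way to compute it is: the minimum number of peers holding any single piece, plus the fraction of pieces held by more than that minimum. A toy sketch with made-up piece counts:

# Hypothetical counts of how many connected peers hold each of 20 pieces.
piece_counts = [2] + [3] * 19

complete_copies = min(piece_counts)  # copies that can definitely be assembled
extra = sum(c > complete_copies for c in piece_counts) / len(piece_counts)

print(complete_copies + extra)  # 2.95: one piece is held by only 2 peers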

Such dumps can remain popular enough for years after the original tracker/torrent has disappeared, but at some point, a critical piece disappears, and it becomes impossible for anybody to download more than 99.95%, with everyone in the swarm waiting for that last piece. If you read this blogpost 6 months from now, you are likely to see 10 people in the swarm, all stuck at 99.95% complete.

Conclusion


The upshot of this is that it’s hard to censor BitTorrent, because all torrents also exist as magnet links. It took only a couple of hours for Archive.org to take down the tracker/torrents/files, but once complete downloads were out in the swarm, all anybody needed was the hash of the original torrent to create a magnet link to the data. Those magnet links had already been published by many people. The Wikileaks tweet that linked to them was fairly late, all things considered; other people had already published them.

AWS Achieves FedRAMP Authorization for New Services in the AWS GovCloud (US) Region

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/aws-achieves-fedramp-authorization-for-a-wide-array-of-services/

Today, we’re pleased to announce an array of AWS services that are available in the AWS GovCloud (US) Region and have achieved Federal Risk and Authorization Management Program (FedRAMP) High authorizations. The FedRAMP Joint Authorization Board (JAB) has issued Provisional Authority to Operate (P-ATO) approvals, which are effective immediately. If you are a federal or commercial customer, you can use these services to process and store your critical workloads in the AWS GovCloud (US) Region’s authorization boundary with data up to the high impact level.

The services newly available in the AWS GovCloud (US) Region include database, storage, data warehouse, security, and configuration automation solutions that will help you increase your ability to manage data in the cloud. For example, with AWS CloudFormation, you can deploy AWS resources by automating configuration processes. AWS Key Management Service (KMS) enables you to create and control the encryption keys used to secure your data. Amazon Redshift enables you to analyze all your data cost effectively with your existing business intelligence tools, while automating common administrative tasks for managing, monitoring, and scaling your data warehouse.

Our federal and commercial customers can now leverage our FedRAMP P-ATO to access the following services:

  • CloudFormation – CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. You can use sample templates in CloudFormation, or create your own templates to describe the AWS resources and any associated dependencies or run-time parameters required to run your application.
  • Amazon DynamoDB – Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit-millisecond latency at any scale. It is a fully managed cloud database and supports both document and key-value store models.
  • Amazon EMR – Amazon EMR provides a managed Hadoop framework that makes it efficient and cost effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR, and interact with data in other AWS data stores such as Amazon S3 and DynamoDB.
  • Amazon Glacier – Amazon Glacier is a secure, durable, and low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant savings compared to on-premises solutions.
  • KMS – KMS is a managed service that makes it easier for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys. KMS is integrated with other AWS services to help you protect the data you store with these services. For example, KMS is integrated with CloudTrail to provide you with logs of all key usage and help you meet your regulatory and compliance needs.
  • Redshift – Redshift is a fast, fully managed, petabyte-scale data warehouse that makes it simple and cost effective to analyze all your data by using your existing business intelligence tools.
  • Amazon Simple Notification Service (SNS) – Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or “fan out” messages to large numbers of recipients. SNS makes it simple and cost effective to send push notifications to mobile device users and email recipients or even send messages to other distributed services.
  • Amazon Simple Queue Service (SQS) – Amazon SQS is a fully managed message queuing service for reliably communicating among distributed software components and microservices—at any scale. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be always available.
  • Amazon Simple Workflow Service (SWF) – Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. SWF is a fully managed state tracker and task coordinator in the cloud.

AWS works closely with the FedRAMP Program Management Office (PMO), National Institute of Standards and Technology (NIST), and other federal regulatory and compliance bodies to ensure that we provide you with the cutting-edge technology you need in a secure and compliant fashion. We are working with our authorizing officials to continue to expand the scope of our authorized services, and we are fully committed to ensuring that AWS GovCloud (US) continues to offer government customers the most comprehensive mix of functionality and security.

– Chad

De-Anonymizing Browser History Using Social-Network Data

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/de-anonymizing_1.html

Interesting research: “De-anonymizing Web Browsing Data with Social Networks“:

Abstract: Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show — theoretically, via simulation, and through experiments on real user data — that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is — to our knowledge — the largest scale demonstrated de-anonymization to date.
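
The core idea is simple enough to sketch. The toy code below scores each candidate profile by the log-likelihood of the observed history, assuming links that appeared in a profile’s feed are visited with higher probability than links that did not, and picks the maximum-likelihood candidate. The probabilities, profiles, and history are made up for illustration; this is not the paper’s actual model:

import math

P_IN_FEED = 0.3       # assumed chance of visiting a link that was in the user's feed
P_NOT_IN_FEED = 0.01  # assumed chance of visiting a link that was not

def log_likelihood(history, feed):
    return sum(math.log(P_IN_FEED if link in feed else P_NOT_IN_FEED) for link in history)

candidate_feeds = {
    "@alice": {"example.com/a", "example.com/b", "example.com/c"},
    "@bob": {"example.com/c", "example.com/d"},
}
history = ["example.com/a", "example.com/c"]

# De-anonymization guess: the profile whose feed best explains the history.
print(max(candidate_feeds, key=lambda u: log_likelihood(history, candidate_feeds[u])))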

Weekly roundup: Strawberry jam

Post Syndicated from Eevee original https://eev.ee/dev/2017/02/05/weekly-roundup-strawberry-jam/

Onwards!

  • flora: I wrote another twine for Flora.

  • music: I spent a day with MilkyTracker and managed to produce a remixed title theme for Isaac’s Descent HD! It’s not entirely terrible.

  • isaac: I redid all the sound effects, slapped together a title screen, fixed a ton of bugs, added a trivial ending, and finally released a demo on my $4 Patreon tier! It’s just a port of the original Isaac’s Descent levels, so nothing particularly new or exciting, but I’m happy to see it in a more polished form. And now that everything basically works, I’m free to experiment with puzzle mechanics.

  • neon: I released NEON PHASE 1.2, with a few bugfixes, most notably making the game not be glitchy as all hell when playing under fairly high framerates.

  • patreon: I wrote the monthly prologue, which is just so amazing, you can’t even imagine.

  • games: And then the latter half of the week went towards Strawberry Jam, my month-long horny game jam. I’m churning out art for a puzzle-platformer (surprise), while Mel is doing something more story-driven (surprise), and we’re both helping each other out. Rate of progress seems okay so far, but I do have to put together two games in only a month, and I’ve still got a couple other looming obligations as well. Fingers crossed.

    I’m tweeting bits of progress in a thread on my secret porn account.

Next few weeks will largely be, well, work on video games!

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder -- or at least a DVR like yours -- knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices -- and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and -- like everything else we’ve just listed -- it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and -- increasingly -- in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things -- or, more precisely, cyber-physical systems -- autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems -- and solutions -- of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have largely centered around confidentiality. We’re concerned about our data and who has access to it -- the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion on cybersecurity a year.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability -- one unsecured avenue for attack -- and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. It also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********
In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars -- literally -- in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********
Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate -- although that’s often a big part of it -- and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking more at the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, and both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation -- both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults -- and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Governments have the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley -- the mistrust of governments by tech companies and the mistrust of tech companies by governments -- is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. That career path has to remain attractive even though people in this field won't make as much as they would at a high-tech start-up. The security of our computerized and networked future -- meaning the security of ourselves, families, homes, businesses, and communities -- depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled -- or killed -- by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Let’s Encrypt 2016 In Review

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2017/01/06/le-2016-in-review.html

Our first full year as a live CA was an exciting one. I’m incredibly proud of what our team and community accomplished during 2016. I’d like to share some thoughts about how we’ve changed, what we’ve accomplished, and what we’ve learned.

At the start of 2016, Let’s Encrypt certificates had been available to the public for less than a month and we were supporting approximately 240,000 active (unexpired) certificates. That seemed like a lot at the time! Now we’re frequently issuing that many new certificates in a single day while supporting more than 20,000,000 active certificates in total. We’ve issued more than a million certificates in a single day a few times recently. We’re currently serving an average of 6,700 OCSP responses per second. We’ve done a lot of optimization work, we’ve had to add some hardware, and there have been some long nights for our staff, but we’ve been able to keep up and we’re ready for another year of strong growth.

Let's Encrypt certificate issuance statistics.

We added a number of new features during the past year, including support for the ACME DNS challenge, ECDSA signing, IPv6, and Internationalized Domain Names.

When 2016 started, our root certificate had not been accepted into any major root programs. Today we’ve been accepted into the Mozilla, Apple, and Google root programs. We’re close to announcing acceptance into another major root program. These are major steps towards being able to operate as an independent CA. You can read more about why here: https://letsencrypt.org/2016/08/05/le-root-to-be-trusted-by-mozilla.html

The ACME protocol for issuing and managing certificates is at the heart of how Let’s Encrypt works. Having a well-defined and heavily audited specification developed in public on a standards track has been a major contributor to our growth and the growth of our client ecosystem. Great progress was made in 2016 towards standardizing ACME in the IETF ACME working group. We’re hoping for a final document around the end of Q2 2017, and we’ll announce plans for implementation of the updated protocol around that time as well.
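
To make that a little more concrete, here is a rough sketch of the HTTP-01 key-authorization computation described in the ACME drafts. It is an illustration only, not code from Let's Encrypt or Certbot, and it assumes an RSA account key already represented as a JWK:

```typescript
import { createHash } from "crypto";

// Base64url encoding without padding, as ACME requires.
function b64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// RFC 7638 JWK thumbprint: SHA-256 over the canonical JSON of the public key.
// For an RSA key the required members are e, kty, n, in lexicographic order.
function jwkThumbprint(jwk: { e: string; kty: string; n: string }): string {
  const canonical = JSON.stringify({ e: jwk.e, kty: jwk.kty, n: jwk.n });
  return b64url(createHash("sha256").update(canonical).digest());
}

// The client proves control of a domain by serving this string at
// http://<domain>/.well-known/acme-challenge/<token>.
function keyAuthorization(
  token: string,
  accountJwk: { e: string; kty: string; n: string }
): string {
  return `${token}.${jwkThumbprint(accountJwk)}`;
}
```

Real clients layer account registration, order management, and signed JWS requests on top of this; the sketch only shows the challenge-response piece.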

Supporting the kind of growth we saw in 2016 meant adding staff, and during the past year Internet Security Research Group (ISRG), the non-profit entity behind Let’s Encrypt, went from four full-time employees to nine. We’re still a pretty small crew given that we’re now one of the largest CAs in the world (if not the largest), but it works because of our intense focus on automation, the fact that we’ve been able to hire great people, and because of the incredible support we receive from the Let’s Encrypt community.

Let’s Encrypt exists in order to help create a 100% encrypted Web. Our own metrics can be interesting, but they’re only really meaningful in terms of the impact they have on progress towards a more secure and privacy-respecting Web. The metric we use to track progress towards that goal is the percentage of page loads using HTTPS, as seen by browsers. According to Firefox Telemetry, the Web has gone from approximately 39% of page loads using HTTPS each day to just about 49% during the past year. We’re incredibly close to a Web that is more encrypted than not. We’re proud to have been a big part of that, but we can’t take credit for all of it. Many people and organizations around the globe have come to realize that we need to invest in a more secure and privacy-respecting Web, and have taken steps to secure their own sites as well as their customers’. Thank you to everyone who has advocated for HTTPS this year, or helped to make it easier for people to make the switch.

We learned some lessons this year. When we had service interruptions, they were usually related to managing the rapidly growing database backing our CA. Also, while most of our code had proper tests, some small pieces didn’t, and that led to incidents that shouldn’t have happened. That said, I’m proud of the way we handle incidents promptly, including quick and transparent public disclosure.

We also learned a lot about our client ecosystem. At the beginning of 2016, ISRG / Let’s Encrypt provided client software called letsencrypt. We’ve always known that we would never be able to produce software that would work for every Web server/stack, but we felt that we needed to offer a client that would work well for a large number of people and that could act as a reference client. By March of 2016, earlier than we had foreseen, it had become clear that our community was up to the task of creating a wide range of quality clients, and that our energy would be better spent fostering that community than producing our own client. That’s when we made the decision to hand off development of our client to the Electronic Frontier Foundation (EFF). EFF renamed the client to Certbot and has been doing an excellent job maintaining and improving it as one of many client options.

As exciting as 2016 was for Let’s Encrypt and encryption on the Web, 2017 seems set to be an even more incredible year. Much of the infrastructure and many of the plans necessary for a 100% encrypted Web came into being or solidified in 2016. More and more hosting providers and CDNs are supporting HTTPS with one click or by default, often without additional fees. It has never been easier for people and organizations running their own sites to find the tools, services, and information they need to move to HTTPS. Browsers are planning to update their user interfaces to better reflect the risks associated with non-secure connections.

We’d like to thank our community, including our sponsors, for making everything we did this past year possible. Please consider getting involved or making a donation, and if your company or organization would like to sponsor Let’s Encrypt please email us at [email protected].

Security advisories for Monday

Post Syndicated from ris original http://lwn.net/Articles/710472/rss

Arch Linux has updated curl (two vulnerabilities) and libwmf (multiple vulnerabilities).

Debian has updated libgd2 (denial of service) and libphp-phpmailer (code execution).

Debian-LTS has updated hdf5 (multiple vulnerabilities), hplip (man-in-the-middle attack from 2015), kernel (multiple vulnerabilities), libphp-phpmailer (code execution), pgpdump (denial of service), postgresql-common (file overwrites), python-crypto (denial of service), and shutter (code execution from 2015).

Fedora has updated curl (F24: buffer overflow), cxf (F25: two vulnerabilities), game-music-emu (F24: multiple vulnerabilities), libbsd (F25; F24: denial of service), libpng (F25: NULL dereference bug), mingw-openjpeg2 (F25; F24: multiple vulnerabilities), openjpeg2 (F24: two vulnerabilities), php-zendframework-zend-mail (F25; F24: parameter injection), springframework (F25: directory traversal), tor (F25; F24: denial of service), xen (F24: three vulnerabilities), and zookeeper (F25; F24: buffer overflow).

Gentoo has updated bash (code execution), busybox (denial of service), chicken (multiple vulnerabilities going back to 2013), cyassl (multiple vulnerabilities from 2014), e2fsprogs (code execution from 2015), hdf5 (multiple vulnerabilities), icinga (privilege escalation), libarchive (multiple vulnerabilities, some from 2015), libjpeg-turbo (code execution), libotr (code execution), lzo (code execution from 2014), mariadb (multiple unspecified vulnerabilities), memcached (code execution), musl (code execution), mutt (denial of service from 2014), openfire (multiple vulnerabilities from 2015), openvswitch (code execution), pillow (multiple vulnerabilities, two from 2014), w3m (multiple vulnerabilities), xdg-utils (command execution from 2014), and xen (multiple vulnerabilities).

Mageia has updated mcabber (roster push attack) and tracker (denial of service).

openSUSE has updated firefox (13.1: multiple vulnerabilities), gd (42.2, 42.1: stack overflow), GNU Health (42.2: two vulnerabilities), roundcubemail (13.1: cross-site scripting), kernel (42.1: information leak), thunderbird (42.2, 42.1, 13.2; SPH for SLE12: multiple vulnerabilities), and xen (42.2; 42.1; 13.2: multiple vulnerabilities).

Red Hat has updated ipa (RHEL7: two vulnerabilities) and rh-nodejs4-nodejs and rh-nodejs4-http-parser (RHSCL: multiple vulnerabilities).

Slackware has updated libpng (NULL dereference bug), thunderbird (code execution), and seamonkey (multiple vulnerabilities).

SUSE has updated gstreamer-plugins-good (SLE12-SP2: multiple vulnerabilities) and kernel (SLERTE12-SP1: multiple vulnerabilities).

Security updates for Friday

Post Syndicated from jake original http://lwn.net/Articles/710355/rss

Debian has updated dcmtk (code execution from 2015).

Debian-LTS has updated curl (code execution) and libxi (regression in previous update).

Fedora has updated js-jquery (F24: cross-site scripting), js-jquery1 (F25; F24: cross-site scripting), smack (F25: TLS bypass), and tracker (F24: adding sandboxing).

Gentoo has updated mod_wsgi (privilege escalation from 2014).

Mageia has updated game-music-emu (multiple vulnerabilities), gstreamer1.0-plugins-good (multiple vulnerabilities), hdf5 (multiple vulnerabilities), kernel, kmod (three vulnerabilities), libgsf (denial of service), openjpeg2 (multiple vulnerabilities), roundcubemail (code execution), and samba (authentication bypass).

openSUSE has updated irc-otr (42.2: information disclosure).

Slackware has updated python (two vulnerabilities) and samba (three vulnerabilities).

SUSE has updated gstreamer-plugins-bad (SLE12: multiple vulnerabilities) and gstreamer-plugins-good (SLE12: multiple vulnerabilities).

Weekly roundup: A model week

Post Syndicated from Eevee original https://eev.ee/dev/2016/12/19/weekly-roundup-a-model-week/

A couple solid days each went towards different things, with pretty good progress for all of them.

  • isaac: Did a little bit of cleanup. Added some raycasting to the “physics engine”, so the laser eye now actually works correctly — it fires a ray along the laser’s path and looks for the first solid obstacle. (In the PICO-8 version, it just checked map tiles straight down until it found something other than empty space.) A rough sketch of this kind of tile-grid raycast appears after this list.

  • art: Spent a few days on some inked reward drawings for Mel’s Kickstarter.

  • music: Studied how Cave Story’s music is put together in exquisite detail. It helps that the music was all written in tracker style, and the whole soundtrack is available in XM format, which trackers are happy to open. The result of this effort was a song that’s not entirely terrible, which isn’t bad considering I’ve only spent a week or two combined on learning anything about music.

  • veekun: I, uh, spent a couple days on actually beating the game, so now I don’t have to worry about spoilers.

    I also resurrected my ancient WebGL model viewer, taught it to play back animations, improved the outlining, and added a bunch of controls. Work continues on this, but I’d really like to have interactive models available on veekun.
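
Since the roundup only gestures at how the laser raycast works, here is a minimal, hypothetical sketch of the idea (the real code lives in Lua/LÖVE, so the names and structure here are invented for illustration): step along the ray in tile-sized increments and stop at the first solid tile.

```typescript
type Grid = boolean[][]; // grid[y][x] === true means the tile is solid

interface Vec2 {
  x: number;
  y: number;
}

// Step from `origin` along `dir` (assumed to be roughly one tile per step)
// and return the first solid tile hit, or null if the ray leaves the map.
function castRay(grid: Grid, origin: Vec2, dir: Vec2, maxSteps = 256): Vec2 | null {
  let { x, y } = origin;
  for (let step = 0; step < maxSteps; step++) {
    const tx = Math.floor(x);
    const ty = Math.floor(y);
    if (ty < 0 || ty >= grid.length || tx < 0 || tx >= grid[0].length) {
      return null; // the ray left the map without hitting anything
    }
    if (grid[ty][tx]) {
      return { x: tx, y: ty }; // first solid obstacle along the laser's path
    }
    x += dir.x;
    y += dir.y;
  }
  return null;
}
```

A production version would step more carefully (for example with DDA-style grid traversal) so that diagonal rays can't skip past thin obstacles.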

I’m mostly charging ahead on veekun stuff right now, and probably for the next few days.

Isaac plods ever closer to matching the feature set of the PICO-8 game.

Christmas Special: The MagPi 52 is out now!

Post Syndicated from Lucy Hattersley original https://www.raspberrypi.org/blog/magpi-christmas-special/

The MagPi Christmas Special is out now.

For the festive season, the official magazine of the Raspberry Pi community is having a maker special. This edition is packed with fun festive projects!

The MagPi issue 52 cover

The MagPi issue 52

Click here to download the MagPi Christmas Special

Here are just some of the fun projects inside this festive issue:

  • Magazine tree: turn the special cover into a Christmas tree, using LED lights to create a shiny, blinky display
  • DIY decorations: bling out your tree with NeoPixels and code
  • Santa tracker: follow Santa Claus around the world with a Raspberry Pi
  • Christmas crackers: the best low-cost presents for makers and hackers
  • Yuletide game: build Sliders, a fab block-sliding game with a festive feel.

Sliders

A Christmas game from the MagPi No.52

Inside the MagPi Christmas special

If you’re a bit Grinchy when it comes to Christmas, there’s plenty of non-festive fun to be found too:

  • Learn to use VNC Viewer
  • Find out how to build a sunrise alarm clock
  • Read our in-depth guide to Amiga emulation
  • Discover the joys of parallel computing

There’s also a huge amount of community news this month. The MagPi has an exclusive feature on Pioneers, our new programme for 12- to 15-year-olds, and news about Astro Pi winning the Arthur Clarke Award.

The Pioneers

The MagPi outlines our new Pioneers programme in detail

After that, we see some of the most stylish projects ever. Inside are the beautiful Sisyphus table (a moving work of art), a facial recognition door lock, and a working loom controlled by a Raspberry Pi.

The MagPi 52 Sisyphus Project Focus

The MagPi interviews the maker of this amazing Sisyphus table

If that wasn’t enough, we also have a big feature on adding sensors to your robots. These can be used to build a battle-bot ready for the upcoming Pi Wars challenge.

The MagPi team wishes you all a merry Christmas! You can grab The MagPi 52 in stores today: it’s in WHSmith, Tesco, Sainsbury’s, and Asda in the UK, and it will be in Micro Center and selected Barnes & Noble stores when it comes to the US. You can also buy the print edition online from our store, and it’s available digitally on our Android and iOS app.

Get a free Pi Zero
Want to make sure you never miss an issue? Subscribe today and get a Pi Zero bundle featuring the new, camera-enabled Pi Zero, and a cable bundle that includes the camera adapter.

If you subscribe to The MagPi before 7 December 2016, you will get a free Pi Zero in time for Christmas.

The post Christmas Special: The MagPi 52 is out now! appeared first on Raspberry Pi.

New – HIPAA Eligibility for AWS Snowball

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-hipaa-eligibility-for-aws-snowball/

Many of the tools and technologies now in use at your local doctor, dentist, hospital, or other healthcare provider generate massive amounts of sensitive digital data. Other prolific data generators include genomic sequencers and any number of activity and fitness trackers. We all want to benefit from the insights that can be produced by this “data tsunami,” but we also want to be confident that it will be stored in a protected fashion and processed in a responsible manner.

In the United States, protection of healthcare data is governed by HIPAA (the Health Insurance Portability and Accountability Act). Because many AWS customers would like to store and process sensitive health care data on the cloud, we have worked to make multiple AWS services HIPAA-eligible; this means that the services can be used to process Protected Health Information (PHI) and to build applications that are HIPAA-compliant (read HIPAA in the Cloud to learn more about what Cleveland Clinic, Orion Health, Eliza, Philips, and other AWS customers are doing).

Last year I introduced you to AWS Import/Export Snowball. This is an AWS-owned storage appliance that you can use to move large amounts of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request a Snowball from the AWS Management Console, connect it to your network when it arrives, copy your data to it, and then send it back to us so that we can copy the data to the AWS storage service of your choice. Snowball encrypts your data using keys that you specify and control.
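
For anyone who would rather script this than click through the console, a Snowball job can also be requested through the Snowball API. The sketch below is a rough illustration using the AWS SDK for JavaScript v3 Snowball client; the bucket, role, key ARNs, and address ID are placeholders, and the exact parameters should be checked against the current CreateJob documentation rather than taken from here.

```typescript
import { SnowballClient, CreateJobCommand } from "@aws-sdk/client-snowball";

const client = new SnowballClient({ region: "us-east-1" });

// Request an import job: AWS ships a Snowball, you copy data onto it, and
// the data lands in the named S3 bucket once the appliance is returned.
async function requestImportJob(): Promise<void> {
  const result = await client.send(
    new CreateJobCommand({
      JobType: "IMPORT",
      Resources: {
        S3Resources: [{ BucketArn: "arn:aws:s3:::example-import-bucket" }], // placeholder
      },
      RoleARN: "arn:aws:iam::123456789012:role/example-snowball-role",       // placeholder
      KmsKeyARN: "arn:aws:kms:us-east-1:123456789012:key/example-key-id",    // placeholder
      AddressId: "ADID-example",            // placeholder, from a prior CreateAddress call
      ShippingOption: "SECOND_DAY",
      Description: "One-time import of on-premises data",
    })
  );
  console.log("Snowball job created:", result.JobId);
}
```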

Today, I am happy to announce that we are adding Snowball to the list of HIPAA-eligible services, joining Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), Amazon Elastic Block Store (EBS), Elastic Load Balancing, Amazon EMR, Amazon Glacier, Amazon Relational Database Service (RDS) (MySQL and Oracle), Amazon Redshift, and Amazon Simple Storage Service (S3). This brings the total number of eligible services to 10 and represents our commitment to make the AWS Cloud a safe, secure, and reliable destination for PHI and many other types of sensitive data. If you already have a Business Associate Agreement (BAA) with AWS, you can begin using Snowball to transfer data into your HIPAA accounts immediately.

With Snowball now on the list of HIPAA-eligible services, AWS customers in the Healthcare and Life Sciences space can quickly move on-premises data to Snowball and then process it using any of the services that I just mentioned. For example, they can use the new HDFS Import feature to migrate an existing on-premises Hadoop cluster to the cloud and analyze it using a scalable EMR cluster. They can also move existing petabyte-scale data (medical images, patient records, and the like) to AWS and store it in S3 or Glacier, both already HIPAA-eligible.  These services are proven, easy to use, and offer high data durability at low cost.

Jeff

 

Regulation of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/regulation_of_t.html

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, was as much a failure of market and policy as it was of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider -- a company named Dyn -- was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers -- possibly millions -- of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam -- or thermostat, or refrigerator -- with nice features at a good price. Even after they were recruited into this botnet, they still work fine -- you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers ­— that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex -- and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter -- it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Firefox Removing Battery Status API

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/firefox_removin.html

Firefox is removing the battery status API, citing privacy concerns. Here’s the paper that described those concerns:

Abstract. We highlight privacy risks associated with the HTML5 Battery Status API. We put special focus on its implementation in the Firefox browser. Our study shows that websites can discover the capacity of users’ batteries by exploiting the high precision readouts provided by Firefox on Linux. The capacity of the battery, as well as its level, expose a fingerprintable surface that can be used to track web users in short time intervals. Our analysis shows that the risk is much higher for old or used batteries with reduced capacities, as the battery capacity may potentially serve as a tracking identifier. The fingerprintable surface of the API could be drastically reduced without any loss in the API’s functionality by reducing the precision of the readings. We propose minor modifications to Battery Status API and its implementation in the Firefox browser to address the privacy issues presented in the study. Our bug report for Firefox was accepted and a fix is deployed.
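
For context, the readouts in question come from the web-facing Battery Status API. The sketch below is a simplification of what the paper describes, not code from it; it assumes a browser that implements navigator.getBattery() and shows both the fingerprintable readout and the kind of rounding mitigation the authors propose.

```typescript
// Sketch only: a simplified illustration of the fingerprinting surface,
// not code from the paper. The call is typed loosely because not every
// TypeScript DOM lib includes the Battery Status API.
async function batteryReadout(): Promise<string> {
  const battery = await (navigator as any).getBattery();
  // On the affected Firefox/Linux builds, `level` carried far more precision
  // than the two significant digits the spec intended, so this triple was
  // nearly unique for a given (often old or degraded) battery over a short
  // time window, letting sites re-identify a user across contexts.
  return [battery.level, battery.chargingTime, battery.dischargingTime].join("|");
}

// The proposed mitigation: report coarse values only.
function coarseLevel(level: number): number {
  return Math.round(level * 100) / 100; // e.g. 0.5723419 becomes 0.57
}
```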

W3C is updating the spec. Here’s a battery tracker found in the wild.

Security advisories for Wednesday

Post Syndicated from ris original http://lwn.net/Articles/703323/rss

CentOS has updated kernel (C7: stack corruption), tomcat (C7: multiple vulnerabilities), and tomcat6 (C6: multiple vulnerabilities).

Debian has updated ghostscript (multiple vulnerabilities).

Fedora has updated ca-certificates (F24: certificate update), nsd (F24: denial of service), and openssl (F23: multiple vulnerabilities).

Gentoo has updated bind (multiple vulnerabilities).

Mageia has updated libgd (denial of service), openssl (multiple vulnerabilities), and python-twisted-web (HTTP proxy redirect).

openSUSE has updated kde-cli-tools5 (SPH for SLE12; Leap42.1, 13.2: code execution), nodejs (Leap42.1, 13.2: multiple vulnerabilities), and xen (Leap42.1; 13.2: multiple vulnerabilities).

Scientific Linux has updated kernel (SL7: stack corruption), tomcat (SL7: multiple vulnerabilities), and tomcat6 (SL6: multiple vulnerabilities).

SUSE has updated ghostscript-library (SLE12-SP1; SLE11-SP2,3,4: multiple vulnerabilities) and xen (SLE11-SP4: multiple vulnerabilities).

Ubuntu has updated kdepimlibs (12.04: HTML injection) and tracker (16.04: denial of service).

Weekly roundup: Bashing my head against a wall

Post Syndicated from Eevee original https://eev.ee/dev/2016/09/11/weekly-roundup-bashing-my-head-against-a-wall/

September is continuing with the three big things in particular.

  • gamedev: I spent far too much time just trying to get collision detection working how I want in LÖVE. Seriously, four or five solid days. I guess I learned some things, which I will probably write about soon, but I also can’t help but feel like I wasted a good chunk of time. At least I’ll never have to do this again, right? Ha, ha.

  • music: I found a tracker I kinda like and tinkered with it and LMMS a bit. I’m terribly unconfident about this and don’t even know where to start, so it’s kinda slow going. On the other hand, I finally grasped the basics of Western music theory, so that’s nice.

  • blog: I started on, uh, four different posts. I’m good for the month on topics, at least.

  • art: I doodled a bunch while catching up on podcasts. Also I drew a new avatar, which I hadn’t done since… June? Yikes. This one was painted, too; all the previous ones had separate lineart.

  • veekun: I taught my code to dump all of Gen I at once, cleaned up a bunch of text handling, and successfully extracted flavor text.

I’m not thrilled about the time lost to platformer physics, but oh well. I’m a tad burned out on gamedev after that, so I think I’ll dedicate the next couple days to writing and maybe catching up on veekun.

Weekly roundup: HD Remix

Post Syndicated from Eevee original https://eev.ee/dev/2016/09/05/weekly-roundup-hd-remix/

September is continuing with the three big things in particular.

I spoiled most of this last week.

  • gamedev: I did Ludum Dare 36! I made a little PICO-8 game called Isaac’s Descent in 48 hours.

    Also, I started working on Isaac’s Descent HD Remix, a fancier port in LÖVE. In fact that’s pretty much all I did all week, including pixeling a new walk sequence for Isaac and porting all the maps to Tiled and some other stuff. The last couple days in particular went down the collision drain, ugh. Good book fodder, though.

  • blog: I published a timeline of my progress on my Ludum Dare game.

  • book: Oh, yeah, I’m writing a book. I wrote a ton of notes for a LÖVE chapter, based on what I did so far for Isaac HD.

  • music: Fought with a few trackers, with limited success, in an effort to find a grown-up equivalent to the PICO-8’s music tools. Renoise looks interesting and I’ll play with it more, but I don’t know if I want to plop down any real cash for something I’m only just starting to do. I might end up just using LMMS.

No big surprises. I still need to get back to work on veekun, but I’m a bit distracted with LÖVE at the moment, oops. And of course I have another four posts to write this month, somehow…