How to Use AWS Config to Help with Required HIPAA Audit Controls: Part 4 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/Tx27GJDUUTHKRRJ/How-to-Use-AWS-Config-to-Help-with-Required-HIPAA-Audit-Controls-Part-4-of-the-A

In my previous posts in this series, I explained how to get started with the DevSecOps environment for HIPAA that is depicted in the following architecture diagram. In my second post in this series, I explained how to set up AWS Service Catalog (#4 in the following diagram) to give developers a way to launch healthcare web servers and release source code without administrator intervention. In my third post in this series, I advised healthcare security administrators about defining AWS CloudFormation templates (#1 in the diagram) for infrastructure that must comply with the AWS Business Associate Agreement (BAA).

In today’s final post of this series, I am going to complete the explanation of the DevSecOps architecture depicted in the preceding diagram by highlighting ways you can use AWS Config (#9 in the diagram) to help meet audit controls required by HIPAA. Config is a fully managed service that provides you with an AWS resource inventory, configuration history, and configuration change notifications. This Config output, along with other audit trails, gives you the types of information you can use to meet your HIPAA auditing obligations. 

Auditing and monitoring are essential to HIPAA security. Auditing controls are a Technical Safeguard that must be addressed through the use of technical controls by anyone who wishes to store, process, or transmit electronic patient data. However, because the HIPAA law and regulations contain no standard implementation specifications for audit controls, you have flexibility in how you address them, and AWS Config enables you to use the cloud to protect the cloud.

Because Config currently targets only AWS infrastructure configuration changes, it is unlikely that Config alone will be able to meet all of the audit control requirements laid out in 45 CFR 164.312, the section of the HIPAA regulations that covers technical safeguards such as audit controls. However, Config is a cloud-native auditing service that you should evaluate as an alternative to traditional on-premises compliance tools and procedures.

The audit controls standard found in 164.312(b) of the HIPAA regulations says: “Implement hardware, software, and/or procedural mechanisms that record and examine activity in information systems that contain or use electronic protected health information.” Config helps you meet this standard because it monitors the activity of both running and deleted AWS resources across time. In a DevSecOps environment in which developers have the power to turn infrastructure on and off in a self-service manner, a cloud-native monitoring tool such as Config helps ensure that you can meet your auditing requirements. Understanding what a configuration looked like and who had access to it at a point in the past is something you will need to do in a typical HIPAA audit, and Config provides this functionality.

For more about the topic of auditing HIPAA infrastructure in the cloud, the AWS re:Invent 2015 session, Architecting for HIPAA Compliance on AWS, gives additional pointers. To supplement the monitoring provided by Config, review and evaluate the easily deployable monitoring software found in the AWS Marketplace.

Get started with AWS Config

From the AWS Management Console, under Management Tools:

Click Config.

If this is your first time using Config, click Get started.

From the Set up AWS Config page, choose the types of resources that you want to track.

Config is designed to track the interaction among various AWS services. At the time of this post, you can choose to track resources in AWS Identity and Access Management (IAM), Amazon EC2–related services (such as Amazon Elastic Block Store, elastic network interfaces, and virtual private cloud [VPC]), and AWS CloudTrail.

All the information collected across these services is normalized into a standard format so that auditors and your compliance team do not need to understand the underlying details of how to audit each AWS service. They can simply review the Config console to ensure that your healthcare privacy standards are being met.

Because the infrastructure described in this post is designed for storing protected health information (PHI), I select the check box next to All resources, as shown in the following screenshot. By choosing this option, I ensure not only that all resources currently available for tracking are included, but also that new resource types are added to my tracking automatically as Config support for them is released.

Also, be sure to select the Include global resources check box if you would like to use Config to record and govern your IAM resource configurations.
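
If you prefer to script this setup, the same choices can be expressed through the Config API. The following is a minimal sketch using the AWS SDK for Python (boto3); the role ARN is a placeholder for the Config service role, not something created in this walkthrough.

import boto3

config = boto3.client("config")

# Record all supported resource types, including global ones such as IAM.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/config-role",  # placeholder
        "recordingGroup": {
            "allSupported": True,
            "includeGlobalResourceTypes": True,
        },
    }
)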

Specify where the configuration history file should be stored

Amazon S3 buckets have global naming, which makes it possible to aggregate the configuration history files across regions or send the files to a separate AWS account with limited privileges. The same consolidation can be configured for Amazon Simple Notification Service (SNS) topics, if you want to programmatically extend the information coming from Config or be immediately alerted of compliance risks.

For this example, I create a new bucket in my account, turn off the Amazon SNS topic notifications (as shown in the following screenshot), and click Continue.
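
The equivalent API call is put_delivery_channel. Here is a hedged boto3 sketch; the bucket name is hypothetical, and omitting the SNS topic keeps notifications off, as in this example.

import boto3

config = boto3.client("config")

# Deliver configuration history files to S3. Leaving out snsTopicARN
# means no SNS notifications are sent.
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "my-config-history-bucket",  # hypothetical bucket
    }
)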

On the next page, create a new IAM role in your AWS account so that the Config service has the ability to read your infrastructure’s information. You can review the permissions that will be associated with this IAM role by clicking the arrow next to View Policy Document.

After you have verified the policy, click Allow. You should now be taken to the Resource inventory page. On the right side of the page, you should see that Recording is on and that an inventory of your infrastructure is being taken. When the Taking inventory label (shown in the following image) is no longer visible, you can start reviewing your healthcare infrastructure.
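
You can also confirm the same Recording is on state programmatically; a small sketch, again assuming a recorder named default:

import boto3

config = boto3.client("config")

# Start recording and confirm the recorder state.
config.start_configuration_recorder(ConfigurationRecorderName="default")
status = config.describe_configuration_recorder_status()
for recorder in status["ConfigurationRecordersStatus"]:
    print(recorder["name"], "recording:", recorder["recording"])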

Review your healthcare server

For the rest of this post, I use Config to review the healthcare web server that I created with AWS Service Catalog in How to Use AWS Service Catalog for Code Deployments: Part 2 of the Automating HIPAA Compliance Series.

From the Resource inventory page, you can search based on types of resources, such as IAM user, network access control list (ACL), VPC, and instance. A resource tag is a way to categorize AWS resources, and you can search by those tags in Config. Because I used CloudFormation to enforce tagging, I can quickly find the types of resources I am interested in by searching for these tags.

As an example of why this is useful, consider employee turnover. Most healthcare organizations need processes and procedures to deal with employee turnover in a regulated environment. Because our CloudFormation template forced developers to populate a tag with their email addresses, you can easily use Config to find all the resources an employee was using if they decide to leave the organization (or at any other time).

Search on the Resource inventory page for the employee’s email address along with the tag, InstanceOwnerEmail, and then click Look up, as shown in the following screenshot.
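
The same lookup can be scripted against the EC2 API; a sketch, with a hypothetical email address:

import boto3

ec2 = boto3.client("ec2")

# Find all instances tagged with a departing employee's email address.
response = ec2.describe_instances(
    Filters=[{"Name": "tag:InstanceOwnerEmail", "Values": ["jdoe@mycompany.com"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])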

Click the link under Resource identifier to see the Config timeline, which shows the most recent configuration recorded for the instance as well as previously recorded configurations. The timeline shows not only the configuration details of the instance itself, but also its relationships to other AWS services and an easy-to-interpret Changes section. This section gives your auditing and compliance teams the ability to quickly review and interpret changes from a single interface, without needing to understand the underlying AWS services in detail or jump between multiple AWS service pages.

Clicking View Details, as shown in the following image, will produce a JSON representation of the configuration, which you may consider including as evidence in the event of an audit.

The details contained in this JSON text will help you understand the structure of the configuration objects passed to AWS Lambda, which you interact with when writing your own Config rules. I discuss this in more detail later in this blog post.
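
If you want to pull that same JSON programmatically, for example to archive it as audit evidence, Config exposes it through get_resource_config_history. A minimal sketch with a placeholder instance ID:

import boto3

config = boto3.client("config")

# Fetch the recorded configuration items for one instance.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",  # placeholder
)
for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])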

Let’s walk through a quick example of one of the many ways an auditor or administrator might use Config. Say there was an emergency production issue that required an administrator to add SSH access to production web servers temporarily so that he or she could log in and manually install a software patch. The patches were installed and SSH access was then revoked from all the security groups, except for one instance’s security group, which was mistakenly forgotten. In Config, the compliance team is able to review the last change to any resource type by reviewing the Config timeline (as shown in the following screenshot) and clicking Change to verify exactly what was changed.

It is clear from the following screenshot that the opening of SSH on port 22 was the last change captured, so we need to close the port on this security group to block remote access to this server.
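
Closing the forgotten port can itself be scripted; a sketch in which the group ID and CIDR range are assumptions:

import boto3

ec2 = boto3.client("ec2")

# Revoke the SSH ingress rule that was mistakenly left open.
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder
    IpProtocol="tcp",
    FromPort=22,
    ToPort=22,
    CidrIp="0.0.0.0/0",
)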

Extend healthcare-specific compliance with Config Rules

Though the SSH configuration I just walked through provided context about how Config works, in a healthcare environment we would ideally want to automate this process. This is what AWS Config Rules can do for us.

Config Rules is a powerful rule system that can target resources and evaluate them when they are created or changed, or on a periodic basis (hourly, daily, and so forth).

Let’s look at how we could have used Config Rules to identify the same improperly opened SSH port discussed previously in this post.

At the time of this post, AWS Config Rules is available only in the US East (N. Virginia) Region, so to follow along, be sure you have the AWS Management Console set to that region. From the same Config service that we have been using, click Rules in the left pane and then click Add Rule.

You can choose from the available managed rules. One of those rules, restricted-common-ports, fits our use case. In the Trigger section, I modify this rule to limit it to only those security groups I have tagged as PROD, as shown in the following screenshot.

I then override the default ports of this rule under Rule parameters and specify my own port, 22.

Click Save and you will be taken back to the Rules page to have the rule run on your infrastructure. While the rule is running, you will see an Evaluating status, as shown in the following image.
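
The same managed rule can be created through the API. In this sketch, the rule name and tag values are mine; RESTRICTED_INCOMING_TRAFFIC is the source identifier behind restricted-common-ports:

import boto3
import json

config = boto3.client("config")

# Flag PROD-tagged security groups that leave port 22 open.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "prod-no-ssh",  # hypothetical name
        "Scope": {"TagKey": "Environment", "TagValue": "PROD"},
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "RESTRICTED_INCOMING_TRAFFIC",
        },
        "InputParameters": json.dumps({"blockedPort1": "22"}),
    }
)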

When I return to my Resource inventory by clicking Resources in the left pane, I again search for all of my PROD environment resources. However, with AWS Config Rules, I can quickly find which resources are noncompliant with the rule I just created. The following screenshot shows the Resource type and Resource identifier of the resource that is noncompliant with this rule.
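
The noncompliant resources can also be listed programmatically; a sketch against the rule created in the earlier sketch:

import boto3

config = boto3.client("config")

# List resources that failed the rule.
details = config.get_compliance_details_by_config_rule(
    ConfigRuleName="prod-no-ssh",  # name from the sketch above
    ComplianceTypes=["NON_COMPLIANT"],
)
for result in details["EvaluationResults"]:
    qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceType"], qualifier["ResourceId"])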

In addition to this SSH production check, for a regulated healthcare environment you should consider implementing all of the managed AWS Config rules to ensure your AWS infrastructure is meeting basic compliance requirements set by your organization. A few examples are:

Use the encrypted-volumes rule to ensure that volumes tagged as PHI="Yes" are encrypted (see the sketch after this list).

Ensure that you are always logging API activity by using the cloudtrail-enabled rule.

Ensure you do not have orphaned Elastic IP addresses with eip-attached.

Verify that all development machines can only be accessed with SSH from the development VPC by changing the defaults in restricted-ssh.

Use required-tags to ensure that you have the information you need for healthcare audits.

Ensure that only PROD resources that are hardened for exposure to the public Internet are in a VPC that has an Internet gateway attached by taking advantage of the managed rule ec2-instances-in-vpc.
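
For instance, the first rule in the list above could be deployed like this; the rule name is mine, and ENCRYPTED_VOLUMES is the managed rule's source identifier:

import boto3

config = boto3.client("config")

# Check that every volume tagged PHI="Yes" is encrypted.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "phi-volumes-encrypted",  # hypothetical name
        "Scope": {"TagKey": "PHI", "TagValue": "Yes"},
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)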

Create your own healthcare rules with Lambda

The managed rules just discussed give you a jump-start on meeting some of the minimum compliance requirements shared across many compliance frameworks, and they can be configured quickly so that basic checks run in an automated manner.

However, for deep visibility into your healthcare-compliant architecture, you might want to consider developing your own custom rules to help meet your HIPAA obligations. As a trivial, yet important, example of something you might want to check to be sure you are staying compliant with the AWS Business Associate Agreement, you could create a custom AWS Config rule to check that all of your EC2 instances are set to dedicated tenancy. This can be done by creating a new rule as shown previously in this post, except this time click Add custom rule at the top of the Config Rules page.

You are then taken to the custom rule page where you name your rule and then click Create AWS Lambda function (as shown in the following screenshot) to be taken to Lambda.

On the landing page to which you are taken (see following screenshot), choose a predefined blueprint with the name config-rule-change-triggered, which provides a sample function that is triggered when AWS resource configurations change.

Within the code blueprint provided, customize the evaluateCompliance function by changing the line

if ('AWS::EC2::Instance' !== configurationItem.resourceType)

to

if ("dedicated" === configurationItem.configuration.placement.tenancy)

This will change the function to return COMPLIANT if the EC2 instance is dedicated tenancy instead of returning COMPLIANT if the resource type is simply an EC2 instance, as shown in the following screenshot.

After you have modified the Lambda function, create a role that has permission to interact with Config. By default, Lambda will suggest that you create an AWS Config role. You can follow the default advice in the AWS console to create a role that contains the appropriate permissions.

After you have created the new role, click Next. On the next page, review the Lambda function you are about to create, and then click Create function. Now that you have created the function, copy the function’s Amazon Resource Name (ARN) from the Lambda page and return to your Config Rules setup page. Paste the ARN of the Lambda function you just created into the AWS Lambda function ARN* box.

From the Trigger options, choose Configuration changes under Trigger type, because this is the Lambda blueprint that you used. Set the Scope of changes to whichever resources you would like this rule to evaluate. In this sample, I will apply the rule to All changes.
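
If you script this step instead, the same wiring is a single put_config_rule call; a sketch with a placeholder Lambda ARN (Config must also be granted permission to invoke the function):

import boto3

config = boto3.client("config")

# Wire the custom Lambda function to a Config rule that fires on
# configuration changes.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-dedicated-tenancy",  # hypothetical name
        "Source": {
            "Owner": "CUSTOM_LAMBDA",
            "SourceIdentifier": "arn:aws:lambda:us-east-1:123456789012:function:dedicatedTenancyCheck",  # placeholder
            "SourceDetails": [
                {
                    "EventSource": "aws.config",
                    "MessageType": "ConfigurationItemChangeNotification",
                }
            ],
        },
    }
)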

After a few minutes, this rule will evaluate your infrastructure, and you can use the rule to easily audit your infrastructure to display the EC2 instances that are Compliant (in this case, that are using dedicated tenancy), as shown in the following screenshot.

For more details about working with Config Rules, see the AWS Config Developer Guide to learn how to develop your own rules.

In addition to digging deeper into the documentation, you may also want to explore the AWS Config Partners who have developed Config rules that you can simply take and use for your own AWS infrastructure. For companies that have HIPAA expertise and are interested in partnering with AWS to develop HIPAA-specific Config rules, feel free to email me or leave a comment in the “Comments” section below to discuss more.

Conclusion

In this blog post, I have completed my explanation of a DevSecOps architecture for the healthcare sector by looking at AWS Config and Config Rules. I hope you have learned how compliance and auditing teams can use Config Rules to track the rapid, self-service changes developers make to cloud infrastructure, and how you can extend Config with customized compliance rules that give auditing and compliance groups deep visibility into a developer-centric AWS environment.

– Chris

Sofia’s Polluted Air

Post Syndicated from Боян Юруков original http://feedproxy.google.com/~r/yurukov-blog/~3/0VgfnzmblsM/

Exactly one month has passed since I started pulling real-time data on air pollution in Sofia. In the meantime, I have published several charts with excerpts from these data. An example is the set from January 24, which looks at several indicators over a span of four days:

The problem with these charts, and with many other comments on the topic, is that they do not take into account the exact definition of the 50 µg/m3 limit. For the air to be considered polluted, the average amount of particulate matter (PM10) over 24 hours must exceed 50 micrograms per cubic meter. Almost every time you read in the media or on Facebook (sometimes including my own profile) that the air in Sofia is 4 or 5 times above the limit, it means someone opened the municipality's portal and looked at the pollution in the last hour. That, however, is wrong.
To get the true exceedances, we have to average the values over every 24-hour period. That is exactly what I did today, and I reached the following conclusions about particulate pollution in the period January 22 – February 21. I took the average across all stations in the capital, but I exclude the measurements from Копитото for obvious reasons.

The PM10 level was above the limit 48% of the time
15% of the time it was 2 times above the limit, 6% – 3 times, 4% – 4 times
In total, for almost a full day of this month the pollution was 5 times above the limit
The average pollution for the entire period was 62 µg/m3

In the following chart you can see all periods in which the 24-hour averages exceeded the limit. The vertical scale shows how many times the limit was exceeded.

I repeat: these figures are not for individual hours but for whole 24-hour periods. When several periods overlap, I merge them. For example, between February 2 at 00:00 and February 6 at 03:00 the averaged values are above the limit. However, I count only the time between 12:00 on the 2nd and 15:00 on the 5th, because those are the midpoints of the corresponding 24-hour periods. At their beginnings and ends, the hourly averages drop well below 50 micrograms, and showing the whole period of the averaged 24-hour exceedance would be misleading.
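
As a minimal sketch of this 24-hour-averaging methodology (in Python with pandas; the file and column names are hypothetical), each window's mean is assigned to the hour at its midpoint:

import pandas as pd

# Hypothetical input: hourly PM10 readings, one column per station.
df = pd.read_csv("sofia_pm10.csv", parse_dates=["time"], index_col="time")

# Average the stations, excluding the Kopitoto station, then take a
# centered 24-sample rolling mean so each value describes the 24-hour
# window whose midpoint is that hour.
stations = [c for c in df.columns if c != "Kopitoto"]
avg = df[stations].mean(axis=1)
daily = avg.rolling(window=24, center=True, min_periods=24).mean()

# Share of window midpoints above the 50 µg/m3 limit.
print((daily > 50).mean())
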
At the same time, it is interesting to note that if we count by individual hours, the hours exceeding 50 µg/m3 make up only 43.5% of the time. If we count the whole periods, rather than using the midpoint convention I described above, we get not 48% but 57%. In other words, with the correct methodology it turns out that the air was polluted an even larger share of the time (clarification in the comments). Misinterpretation of the data is precisely the explanation that the Executive Environment Agency (ИАОС) regularly gives when commenting on media publications. Apparently, counting by periods leads to even worse conclusions.
Unfortunately, these data are not available in an open format. On the ИАОС page in the government's open data portal, only tables with the exceedances are published, not the hourly values of all stations and parameters. With those, we could monitor in real time and verify their calculations. In principle, publishing them would not be a problem, since they are already provided freely to all municipalities. That is exactly how the various charts on the websites of the Sofia, Plovdiv, and Burgas municipalities are generated. As far as I understand, however, the tables are not made public because of concerns about misinterpretation and misunderstanding of the limits established by the EU. By that logic, though, no data on demographics or emigration should be published either, since misinterpretations of those abound. Still, these datasets appear in the cabinet's new open data plan, and the hope is that we will start receiving them soon.
For Sofia specifically, I download the data through automated analysis of the charts on their website. This carries a risk of errors, but by my calculations they are below 0.1%, which would not distort the results shown here. You can download and analyze my data from the last month yourself; it includes all parameters of the stations in Sofia.
You can read more about dirty air and its health and economic effects in Dnevnik, WHO, and the Washington Post. For industrial pollution, you will find an interactive chart and explanations in my article from 2013.


Introducing On-Demand Pipeline Execution in AWS Data Pipeline

Post Syndicated from Marc Beitchman original https://blogs.aws.amazon.com/bigdata/post/Tx37EJ2IDFXITB2/Introducing-On-Demand-Pipeline-Execution-in-AWS-Data-Pipeline

Marc Beitchman is a Software Development Engineer in the AWS Database Services team

It is now possible to trigger activation of pipelines in AWS Data Pipeline using the new on-demand schedule type. You can access this functionality through the existing AWS Data Pipeline activation API. On-demand schedules make it easy to integrate pipelines in AWS Data Pipeline with other AWS services and with on-premises orchestration engines.

For example, you can build AWS Lambda functions that activate an AWS Data Pipeline execution in response to Amazon CloudWatch cron expression events or Amazon S3 event notifications. You can also invoke the AWS Data Pipeline activation API directly from the AWS CLI and SDKs.

To get started, create a new pipeline and use the default object to specify the property "scheduleType": "ondemand". Setting this parameter enables on-demand activation of the pipeline.

Note: Activating a running on-demand pipeline cancels the current run and starts a new one. Check the state of the currently running pipeline if you do not want activation to cancel it.

Below is a simple example of a default object configured for on-demand activation.

{
  "id": "Default",
  "scheduleType": "ondemand"
}
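
As a hedged sketch of the Lambda integration mentioned above (using the AWS SDK for Python; the pipeline ID is a placeholder, and the state check is optional), an S3-triggered function could activate the pipeline like this:

import boto3

datapipeline = boto3.client("datapipeline")
PIPELINE_ID = "df-EXAMPLE123"  # placeholder pipeline ID


def handler(event, context):
    """Activate the on-demand pipeline, e.g., from an S3 event notification."""
    # Activating a running on-demand pipeline cancels the current run,
    # so check the pipeline state first if that matters to you.
    desc = datapipeline.describe_pipelines(pipelineIds=[PIPELINE_ID])
    fields = desc["pipelineDescriptionList"][0]["fields"]
    state = next(
        (f["stringValue"] for f in fields if f["key"] == "@pipelineState"), None
    )
    if state == "RUNNING":
        return {"activated": False, "state": state}
    datapipeline.activate_pipeline(pipelineId=PIPELINE_ID)
    return {"activated": True}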

The screenshot below shows an on-demand pipeline with two Hadoop activities. The pipeline has been run three times.

Check out our samples in the AWS Data Pipeline samples GitHub repository. These samples show you how to create an AWS Lambda function that triggers an on-demand pipeline activation in response to ObjectCreated (new file) events in Amazon S3, and how to trigger an on-demand pipeline activation in response to Amazon CloudWatch cron expression events.

If you have questions or suggestions, please leave a comment below.

—————————-

Related:

How Coursera Manages Large-Scale ETL using AWS Data Pipeline and Dataduct

 

Looking to learn more about Big Data or Streaming Data? Check out our Big Data and Streaming data educational pages.

 


Freedom, the US Government, and why Apple are still bad

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/39999.html

The US Government is attempting to force Apple to build a signed image that can be flashed onto an iPhone used by one of the San Bernardino shooters. To their credit, Apple have pushed back against this – there’s an explanation of why doing so would be dangerous here. But what’s noteworthy is that Apple are arguing that they shouldn’t do this, not that they can’t do this – Apple (and many other phone manufacturers) have designed their phones such that they can replace the firmware with anything they want.In order to prevent unauthorised firmware being installed on a device, Apple (and most other vendors) verify that any firmware updates are signed with a trusted key. The FBI don’t have access to Apple’s firmware signing keys, and as a result they’re unable to simply replace the software themselves. That’s why they’re asking Apple to build a new firmware image, sign it with their private key and provide it to the FBI.But what do we mean by “unauthorised firmware”? In this case, it’s “not authorised by Apple” – Apple can sign whatever they want, and your iPhone will happily accept that update. As owner of the device, there’s no way for you to reconfigure it such that it will accept your updates. And, perhaps worse, there’s no way to reconfigure it such that it will reject Apple’s.I’ve previously written about how it’s possible to reconfigure a subset of Android devices so that they trust your images and nobody else’s. Any attempt to update the phone using the Google-provided image will fail – instead, they must be re-signed using the keys that were installed in the device. No matter what legal mechanisms were used against them, Google would be unable to produce a signed firmware image that could be installed on the device without your consent. The mechanism I proposed is complicated and annoying, but this could be integrated into the standard vendor update process such that you simply type a password to unlock a key for re-signing.Why’s this important? Sure, in this case the government is attempting to obtain the contents of a phone that belonged to an actual terrorist. But not all cases governments bring will be as legitimate, and not all manufacturers are Apple. Governments will request that manufacturers build new firmware that allows them to monitor the behaviour of activists. They’ll attempt to obtain signing keys and use them directly to build backdoors that let them obtain messages sent to journalists. They’ll be able to reflash phones to plant evidence to discredit opposition politicians.We can’t rely on Apple to fight every case – if it becomes politically or financially expedient for them to do so, they may well change their policy. And we can’t rely on the US government only seeking to obtain this kind of backdoor in clear-cut cases – there’s a risk that these techniques will be used against innocent people. The only way for Apple (and all other phone manufacturers) to protect users is to allow users to remove Apple’s validation keys and substitute their own. If Apple genuinely value user privacy over Apple’s control of a device, it shouldn’t be a difficult decision to make.comment count unavailable comments

Register for and Attend This March 2 Webinar—Using AWS WAF and Lambda for Automatic Protection

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx273MQOP5UGJWO/Register-for-and-Attend-This-March-2-Webinar-Using-AWS-WAF-and-Lambda-for-Automa

As part of the AWS Webinar Series, AWS will present Using AWS WAF and Lambda for Automatic Protection on Wednesday, March 2. This webinar will start at 10:00 A.M. and end at 11:00 A.M. Pacific Time (UTC-8).

AWS WAF Software Development Manager Nathan Dye will share AWS Lambda scripts that you can use to automate security with AWS WAF and write dynamic rules that can prevent HTTP floods, protect against badly behaving IPs, and maintain IP reputation lists. You can also learn how Brazilian retailer Magazine Luiza leveraged AWS WAF and Lambda to protect its site and run an operationally smooth Black Friday.

You will:

Learn how to use AWS WAF and Lambda together to automate security responses.

Get the Lambda scripts and AWS CloudFormation templates that prevent HTTP floods, automatically block badly behaving IPs and bots, and allow you to import and maintain publicly available IP reputation lists.

Gain an understanding of strategies for protecting your web applications using AWS WAF, Amazon CloudFront, and Lambda.

The webinar is free, but space is limited and registration is required. Register today.

– Craig

Band-Aids over Basics: Anti-Drone Bill Revisions Compound Earlier Missteps

Post Syndicated from Elizabeth Wharton original http://blog.erratasec.com/2016/02/band-aids-over-basics-anti-drone-bill.html

Glossing over fundamental legislation flaws in favor of quick fixes only serves lawyers and lobbyists. In this guest post, friend of Errata Elizabeth Wharton (@lawyerliz) highlights the importance of fixing the underlying technology concepts as Georgia’s anti-drone legislation continues to miss the mark and kill innovation.

by Elizabeth Wharton

Georgia’s proposed anti-drone legislation, HB 779, remains on a collision course to crush key economic drivers and technology innovations within the state. Draft revisions ignore all of the legislation’s flawed technical building blocks in favor of a series of peripheral provision modifications (in some cases removing entire safe harbor carve-outs), making a bad piece of legislation worse for Georgia’s film, research, and aviation technology industries. Only the lawyers and lobbyists hired to challenge and defend the resulting lawsuits benefit from this legislative approach. Georgia should scrap this piecemeal, awkward legislation in favor of a commission of industry experts to craft a policy foundation for unmanned aircraft systems within Georgia.

Band-aid technology policy approaches skip over the technical issues and instead focus on superficial revisions. Whether a company is prohibited from flying over a railroad track in addition to a road becomes a moot point when the definition of “image” results in banning all non-government contracted flights (regardless of whether they are business or recreational). As outlined in my earlier post, failing to address the legislation’s underlying mechanics is gambling with Georgia’s economic growth and educational research. In bill-making as in software development, patch the vulnerabilities and fix the kinks before the product goes live. Glossy packaging cannot cover for a product that fails to deliver. The HB 779-alpha version blindly followed a legislative course charted in a handful of other states, built on a misunderstanding of the basic technology used in unmanned systems and as used in the broader interconnected “internet of things.” Minor definitional revisions and tweaks within substitute HB 779 (HB 779S), such as excluding replicas of weapons like those on a Star Wars–themed drone and exempting military research, do not address the legislation’s core technical flaws.

When an image is more than an image, it costs Georgia billions of economic dollars.

Defined terms in HB 779S must be tailored to fit the technology surroundings and underlying issues to avoid a complete shutdown. With the latest round of HB 779S revisions, capturing any images (as broadly defined) via unmanned systems is barred unless the commercial use meets a shorter and stricter list of exceptions. Gone is even the ability to obtain permission from the individual or property owner whose “image” was captured. Prohibiting the capture of every signal and every transmitted or received data point under the definition of “image” means that flying drones for fun, to inspect utilities and infrastructure, for filming movies, and in farming operations is all grounded. The broad “image” definition combined with zero exemptions for hobbyists, including first person view (FPV) racing enthusiasts, effectively bans all indoor and outdoor recreational flights. Until legislators address core definitional concepts, entire uses of unmanned systems and their safety and cost-saving benefits are grounded within Georgia.

When to follow in order to lead.

A new bill introduced in Georgia’s Senate sharply pivots away from HB 779S’ haphazard approach, looking instead to industry experts and experienced stakeholders to direct Georgia’s unmanned systems policies. Senate Bill 325 would create a commission comprised of appointed representatives from government, law enforcement, the unmanned systems technology industry, and the aviation industry to craft recommendations and guidance. SB 325 takes a page from states that are acing the technology test. Similar to industry commission efforts underway in Alaska, Hawaii, Illinois, and Virginia, SB 325 refocuses attention onto creating a workable policy framework built on a technical foundation instead of reactions to the latest viral drone video.

Sometimes you have to know when to fold, when to walk away, and when to run. Georgia should run from the ill-conceived hodge-podge of superficial revisions to HB 779S in favor of specialized technical recommendations. Get the basics squared away before grinding entire industries and economic drivers to a complete stop.

[Update: HB 779S has downgraded the Millennium Falcon from a weaponized drone felony to a possible civil action depending on any collected RF data.]

Elizabeth is a business and policy attorney specializing in information security and unmanned systems. While Elizabeth is an attorney, nothing in this post is intended as legal advice. If you need legal advice, get your own lawyer.


How to Translate HIPAA Controls to AWS CloudFormation Templates: Part 3 of the Automating HIPAA Compliance Series

Post Syndicated from Chris Crosbie original https://blogs.aws.amazon.com/security/post/Tx2X8A35ONJYE2V/How-to-Translate-HIPAA-Controls-to-AWS-CloudFormation-Templates-Part-3-of-the-Au

In my previous post, I walked through the setup of a DevSecOps environment that gives healthcare developers the ability to launch their own healthcare web server. At the heart of the architecture is AWS CloudFormation, which uses a JSON representation of your architecture to let security administrators provision AWS resources according to the compliance standards they define. In today’s post, I share a Top 10 List of CloudFormation code snippets that you can consider when mapping the requirements of the AWS Business Associate Agreement (BAA) to CloudFormation templates.

The example CloudFormation template I use in today’s post is the same template I used in my previous post to define a healthcare product in AWS Service Catalog. The template creates a healthcare web server that complies with many of the contractual obligations outlined in the AWS BAA. The template also allows healthcare developers to customize their web server according to the following parameters (a launch sketch follows the list):

FriendlyName – The name with which you tag your server.

CodeCommitRepo – The cloneUrlHttp field for the Git repository that you would like to release on the web server.

Environment – A choice between PROD and TEST. TEST will create a security group with several secure ports open, including SSH, from within a Classless Inter-Domain Routing (CIDR) block range. Choosing PROD will create a security group with HTTPS that is only accessible from the public Internet. (Exposing production web servers directly to the public Internet is not a best practice and is shown for example purposes only).

PHI – Whether you need to store protected health information (PHI) on the server. Choosing YES will create an encrypted EBS volume and attach it to the web server.

WebDirectory – This is the name of your website. For example, DNS-NAME/WebDirectory.

InstanceType – This is the Amazon EC2 instance type on which the code will be deployed. Because the AWS BAA requires PHI to be processed on dedicated instances, the choices here are limited to those EC2 instance types that are offered in dedicated tenancy mode.
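
For illustration, a developer could launch this template with the AWS SDK for Python; the stack name, template URL, and parameter values here are hypothetical, and CAPABILITY_IAM is needed because the template creates IAM resources:

import boto3

cfn = boto3.client("cloudformation")

# Hypothetical values; the parameter keys come from the template above.
cfn.create_stack(
    StackName="healthcare-web-test",
    TemplateURL="https://s3.amazonaws.com/my-bucket/healthcare-web.template",
    Parameters=[
        {"ParameterKey": "FriendlyName", "ParameterValue": "claims-portal"},
        {"ParameterKey": "CodeCommitRepo",
         "ParameterValue": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/claims"},
        {"ParameterKey": "Environment", "ParameterValue": "TEST"},
        {"ParameterKey": "PHI", "ParameterValue": "YES"},
        {"ParameterKey": "WebDirectory", "ParameterValue": "claims"},
        {"ParameterKey": "InstanceType", "ParameterValue": "m4.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)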

I will forgo CloudFormation tutorials in this post because an abundance of material for learning CloudFormation is easily accessible in AWS documentation. Instead, I will jump right in and share the Top 10 List of CloudFormation code snippets. If you are new to CloudFormation, you might find value in first understanding the capabilities it offers. qwikLABS is a great resource for learning AWS technology and offers multiple CloudFormation labs to bring you up to speed quickly, including entry-level labs at no cost.

It’s important to note that the example CloudFormation template from which the following 10 snippets are taken is only an example and does not guarantee HIPAA or AWS BAA compliance. The template is meant as a starting point for developing your own templates that not only help you meet your AWS BAA obligations, but also provide general guidance as you expand beyond a single web server and start to utilize DevSecOps methods for other HIPAA-driven compliance needs.

Without further ado, here is a Top 10 List of CloudFormation compliance snippets that you should consider when building your own CloudFormation templates. In each section, I highlight the code I refer to in the associated description.

1. Set tenancy to dedicated.

To run a web server, you need an EC2 instance on which to install it. This can be accomplished in CloudFormation by adding it as a resource in the template. However, you also want to make sure that the EC2 instance meets your AWS BAA obligations by running in dedicated tenancy mode (in other words, your instance runs on single-tenant hardware).

To enforce this, set the Tenancy property of the EC2 instance to dedicated.

    "EC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
          "Tenancy" : "dedicated",
}}

2. Turn on detailed monitoring.

Detailed monitoring provides data about your EC2 instance over 1-minute periods. You can enable it in CloudFormation by adding the Monitoring property to your EC2Instance resource.

When you turn on detailed monitoring, the data is then available for the instance in AWS Management Console graphs or through the API. Because there is an upcharge for detailed monitoring, you might want to turn this on only in your production environments. Having data each minute could be critical to recognizing failures and triggering responses to these failures.

On the other hand, also turning on detailed monitoring in your development environments could help you diagnose issues and prevent you from inadvertently moving such issues to production.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tenancy" : "dedicated",
"Monitoring": "true",
}}

3. Define security group rules based on environment.

CloudFormation allows you to modify the firewall rules on your EC2 instance based on input parameters given to the template when it runs. This is done with AWS security groups and is very useful when you want to enforce certain compliance measures you define, such as disabling SSH access to production web servers or restricting development web servers from being accessed by the public Internet.

To do this, change security group settings based on whether your instance is targeted at test, QA, or production environments. You can do this by using conditions and the intrinsic Fn::If function. Intrinsic functions help you modify the security groups between environments according to your compliance standards while maintaining consistent infrastructure between environments.

"Conditions" : {
"CreatePRODResources" : {"Fn::Equals" : [{"Ref" : "Environment"}, "PROD"]}
},
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tenancy" : "dedicated",
"Monitoring": "true"
}},
"SecurityGroups": [{
"Fn::If": [
"CreateTESTResources",
{"Ref": "InstanceSecurityGroupTEST"},
{"Ref": "InstanceSecurityGroupPROD"}
]
}],
"InstanceSecurityGroupTEST": {
"Type": "AWS::EC2::SecurityGroup",
"Condition" : "CreateTESTResources",
"Properties": {
"GroupDescription": "Enable access only from secure protocols",
"SecurityGroupIngress": [
{ "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "143", "ToPort" : "143", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "tcp", "FromPort" : "465", "ToPort" : "465", "CidrIp" : "10.0.0.0/24" },
{ "IpProtocol" : "icmp", "FromPort" : "8", "ToPort" : "-1", "CidrIp" : "10.0.0.0/24" }
]
}}

4. Force instance tagging.

EC2 tagging is a common way for auditors and security professionals to understand why EC2 instances were launched and for what purpose. You can require the developer launching the template to enter information that you need for EC2 instance tagging by using CloudFormation parameters.

By using parameter properties such as AllowedValues and MinLength, you can maintain consistent tagging mechanisms by requiring the developer to choose a tag from a predetermined list of options (AllowedValues) or to enter a text value of a minimum length (MinLength).

In the following snippet, I use an AllowedValues list of YES and NO to make the developer tag the instance with information about whether or not the EC2 instance will be used to store PHI. I also use the MinLength to make the developer tag the EC2 instance with their email address so that we know who to contact if there is an issue with the instance.

"Parameters": {
"PHI":
{
"Description": "Will this instance need to store protected health information?",
"Default": "YES",
"Type": "String",
"AllowedValues": [
"YES",
"NO"
]
},
"Environment":
{
"Description": "Please specify the target environment",
"Default": "TEST",
"Type": "String",
"AllowedValues": [
"TEST",
"PROD",
“QA”
]
},
},
"InstanceOwnerEmail":
{
"Description": "Please enter the email address of the developer taking responsblity for this server",
"Default": "@mycompany.com",
"Type": "String"
},
"FriendlyName":
{
"Description": "Please enter a friendly name for the server",
"Type": "String",
"MinLength": 3,
"ConstraintDescription": "Must enter a friendy name for the server that is at least three characters long."
},
"Resources": {
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"Tags":[
{ "Key" : "PHI", "Value" : {"Ref": "PHI"} },
{ "Key" : "Name", "Value" : {"Ref": "FriendlyName"} },
{ "Key" : "Environment", "Value" : {"Ref": "Environment"} },
{ "Key" : "InstanceOwnerEmail", "Value" : {"Ref": "InstanceOwnerEmail"} }
}}

5. Use IAM roles for EC2.

Applications must sign their API requests with AWS credentials. IAM roles are designed so that applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. In this example, I give the EC2 instance permission to perform a Git clone from our AWS CodeCommit repositories and push log data to Amazon CloudWatch.

"Resources": {

"HealthcareWebRole":
{
"Type": "AWS::IAM::Role",
"Properties":
{
"AssumeRolePolicyDocument":
{
"Version" : "2012-10-17",
"Statement":
[ {
"Effect": "Allow",
"Principal":
{
"Service": [ "ec2.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"Path": "/",
"ManagedPolicyArns": ["arn:aws:iam::aws:policy/AWSCodeCommitReadOnly", "arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs"]
}
},
"HealthcareWebInstanceProfile":
{
"Type": "AWS::IAM::InstanceProfile",
"Properties":
{
"Path": "/",
"Roles": [ { "Ref": "HealthcareWebRole" } ]
}
},
"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
IamInstanceProfile": {"Ref": "HealthcareWebInstanceProfile"}
}}

6. Add encrypted storage if you need to store PHI.

Applications that need to store PHI must encrypt the data at rest to meet the AWS BAA requirements. Amazon EBS encryption is one way to do this. The highlighted portion of the following snippet will add an encrypted EBS volume if the developer answers YES to the question, “Will this instance need to store protected health information?”

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"BlockDeviceMappings" : [
{
"DeviceName" : "/dev/sdm",
"Ebs" : {
"VolumeType" : "io1",
"Iops" : "200",
"DeleteOnTermination" : "false",
"VolumeSize" : "10",
"Encrypted": {
"Fn::If" : [
"ContainsPHI",
"true",
"false"
]
}
}
},
{
"DeviceName" : "/dev/sdk",
"NoDevice" : {}
}
]}

7. Turn on CloudWatch Logs.

On each instance, install the AWS CloudWatch Logs agent, which lets you use CloudWatch Logs to monitor, store, and access the log files from your EC2 instances. You can then retrieve the associated log data from a centralized logging repository that can be segregated from the application development team.

After you turn on the CloudWatch Logs agent, logs from /var/log/messages are sent to CloudWatch by default. This file stores valuable nondebug, noncritical messages and should be considered the general system activity log, a good starting point for ensuring that you have the highest level of audit logging. However, you will most likely want to modify the /etc/awslogs/awslogs.conf file to add log locations if you choose to use this service in a HIPAA environment.

For example, you may want to add authentication logs (/var/log/auth.log) and set up alerting in CloudWatch to notify an administrator if repeated unauthorized access attempts are made against your server.

The following snippet will start the CloudWatch Logs agent and make sure it gets turned on during each startup.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
"service awslogs startn",
"chkconfig awslogs onn"
]
}}

8. Install Amazon Inspector.

Amazon Inspector is an automated security assessment service (currently offered in preview mode) that can help improve the security and compliance of applications deployed on AWS. Amazon Inspector allows you to run assessments for common best practices, vulnerabilities, and exposures, and these findings can then be mapped to your own HIPAA control frameworks. Amazon Inspector makes it easier to validate that applications are adhering to your defined standards, and it helps you manage security issues proactively before a critical event such as a breach occurs.

Amazon Inspector requires an agent-based client to be installed on the EC2 instance. However, this installation can be performed by using a CloudFormation template. In the CloudFormation template used for this blog post, the Amazon Inspector installation is intentionally missing because Amazon Inspector in preview mode is available only in a different region than CodeCommit. However, if you would like to install it while in preview mode, you can use the following snippet.         

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
“curl -O https://s3-us-west-2.amazonaws.com/inspector.agent.us-west-2/latest/install”
“sudo bash install”
]
}}

9. Configure SSL for encryption in flight.

As detailed in the AWS BAA, you must encrypt all PHI in flight. For production healthcare applications, open only those ports in the EC2 security group that are used in secure protocols.

The following snippet provides an example of UserData pulling down self-signed certificates from a publicly available Amazon S3 site. Although there may be situations in which you have deemed self-signed certificates acceptable, a more secure approach is to store the certificates in a private S3 bucket and give the EC2 role permission to download the certificates and configuration.

Important: The certificates in the following code snippet are provided for demonstration purposes only and should never be used for any type of security or compliance purpose.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[
"",
[
"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.crt -P /etc/pki/tls/certsn",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.key -P /etc/pki/tls/private/n",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/fakehipaa.csr -P /etc/pki/tls/private/n",

"wget https://s3.amazonaws.com/awsiammedia/public/sample/hippa-compliance/aws-service-catalog/code-deployments/ssl.conf -P /etc/httpd/conf.d/ssl.confn",

"service httpd startn",

"chkconfig httpd onn"
]
}}

10. Clone from AWS CodeCommit.

So far the snippets in this post have focused on getting infrastructure secured in accordance with your compliance standards. However, you also need a process for automated code deployments. A variety of tools and techniques are available for automating code deployments, but in the following snippet I demonstrate an automated code deployment using an EC2 role and CodeCommit. This combination requires you to set the system-wide Git preferences by modifying the /etc/gitconfig file.

In the following snippet, after the authorized connection to CodeCommit is established, Git clones the repository provided by the developer into the default root folder of an Apache web server. However, this example could easily be extended to look for developer makefiles or to have an extra step that calls shell scripts that are written by the developer but maintained in CodeCommit.

"EC2Instance": {
"Type": "AWS::EC2::Instance",
"Properties": {
"UserData" :
{ "Fn::Base64" :
{ "Fn::Join" :
[

"git config –system credential.https://git-codecommit.us-east-1.amazonaws.com.helper ‘!aws –profile default codecommit credential-helper [email protected]’n",

"git config –system credential.https://git-codecommit.us-east-1.amazonaws.com.UseHttpPath truen",
"aws configure set region us-east-1n",

"cd /var/www/htmln",

"git clone ", {"Ref": "CodeCommitRepo"}, " ", {"Ref": "WebDirectory"}, " n",

[
]
}}

Conclusion

I hope that these 10 code snippets give you a head start to develop your own CloudFormation compliance templates. I encourage you to build on the template I provided to learn more about how CloudFormation works as you take steps to achieve your own DevSecOps architecture.

– Chris

The disingenuous question (FBIvApple)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/02/the-disingenuous-question-fbivapple.html

I need more than 140 characters to respond to this tweet:

“If you were a crime victim and key evidence was on suspect’s phone, would you want govt to search phone w/ warrant?” — Orin Kerr (@OrinKerr) February 22, 2016

It’s an invalid question to ask. Firstly, it asks for the emotional answer, not the logical answer. Secondly, it covers only half the debate: the half in which the FBI is on your side, not against you.

The emotional question is like ISIS kidnappings. Logically, we know that the ransom money will fund ISIS’s murderous campaign, killing others. Logically, we know that paying this ransom just encourages more kidnappings of other people — that if we stuck to a policy of never paying ransoms, then ISIS would stop kidnapping people.

If it were my loved ones at stake, of course I’d do anything to get them back alive and healthy, including pay a ransom. But at the same time, logically, I’d vote for laws to stop people paying ransoms. In other words, I’d vote for laws that I would then happily break should the situation ever apply to me.

Thus, the following question has no meaning in a policy debate over paying ransoms: “If it was your loved one at stake, would you pay the ransom?” Even those who say “no” are being disingenuous. It’s easy to say because they aren’t in danger of the situation ever happening to them. Most would change their answer to “yes” if it became real.

The second reason the original question is invalid is that it ignores why we have warrants in the first place. Unlimited police power is a bad thing. What you need is a counterbalancing question.

For example, in 2007 (before iPhones became popular) the FBI showed up at my business and threatened me in order to keep something quiet. Specifically, I was to give a talk at a conference on how, contrary to what the company “TippingPoint” claimed, it was easy to decrypt their “signature” files. That company convinced the FBI that it was important to “national security” that I keep such information quiet. So the FBI came to our offices, first asked politely, and then started threatening me, in order to keep the information quiet.

So, in such situations, should the FBI be able to get a warrant and search my phone? Note that a warrant would be easy to get, as the company TippingPoint suggested that I was also trying to blackmail them (demanding money to stay quiet). It was a lie (they kept offering to bribe us to keep quiet, and we kept telling them “under no circumstances”), but it’s enough to get a warrant in order to go fishing for something else to hang us by.

“If FBI threatened you to keep quiet about something, should they be able to search your phone w/ warrant? @OrinKerr” — Rob Graham (@ErrataRob) February 22, 2016

To most people this is a less meaningful question. Most people are sheep, believing that as long as they don’t stick their heads up above the herd, they are in no danger of getting their heads lopped off. But even if it’s not your head in danger, don’t you want to protect those who do raise their heads?

Rather than a “Going Dark” problem, ours is one of “Going Light”. We all now carry a GPS tracking device in our pocket that contains a microphone and video camera. We are quickly putting a microphone (and sometimes a camera) in every room of our houses, with devices like smart TVs and Amazon’s Echo. License plate readers line the roads, and face recognition (as well as video cameras) is deployed everywhere crowds gather. All our credit card transactions are slurped up by the government, as is our phone metadata (even more so since the so-called USA FREEDOM Act).

The question is whether “warrant upon probable cause” is sufficient protection for the Going Light problem, or whether we need more limits.

We activists think more limits are needed. The first limits are the ones requiring no special laws. Encryption is basic math; the effort necessary to stop encryption would require a police state worse than that created by the War on Drugs. The government should not be able to conscript programmers to create new technology on its behalf, as in the current Apple-v-FBI case.

The War on Drugs and the War on Terror have made a police state out of America. We jail 10 times more people, per capita, than other free nations (more than virtually any other nation). Law enforcement steals more through “civil asset forfeiture” than burglars do. We can no longer travel without showing our papers at numerous checkpoints. We can no longer communicate or use credit cards without a record going to a government-controlled database.

Yes, this police state works in our favor when we are the ones who have been the victim of a crime. But on the whole, we are now more in danger from the police state than we are from crime itself.

BTW, @orinkerr is awesome. He asks the question because he honestly wants to know the answer, not because he’s slyly arguing the point. He brings up the question because so many others mention it. I’m using his as the example only because it’s the one that’s handy, and I’m too lazy to hunt down a different one. Update: as he points out.

Case 224: Unsupported Accusations

Post Syndicated from The Codeless Code original http://thecodelesscode.com/case/224

While passing by the temple’s Support Desk, the nun
Hwídah heard of strange behavior in a certain
application. Since she had been appointed by master
Banzen to assist with production issues, the nun
dutifully described the symptoms to the application’s senior
monk:

“Occasionally a user will return to a record they had
previously edited, only to discover that some information is
missing,” said Hwídah. “The behavior is not repeatable, and
the users confess that they may be imagining things.”

“I have heard these reports,” said the senior monk. “There is
no bug in the code that I can see, nor can we reproduce the
problem in a lower environment.”

“Still, it may be prudent to investigate further,” said the
nun.

The monk sighed. “We are all exceedingly busy. Only a few
users have reported this issue, and even they doubt
themselves. So far, all are content to simply re-enter the
‘missing’ information and continue about their business.
Can you offer me one shred of evidence that this is anything
more than user error?”

The nun shook her head, bowed, and departed.

- - -

That night, the senior monk was awoken from his sleep by a
squeaking under his bed, of the sort a mouse might make.
This sound continued throughout the night—sometimes in
one place, sometimes another, presumably as the intruder
wandered about in search of food. A sandal flung in the
direction of the sound resulted in immediate quiet, but
eventually the squeaking would begin again in a different
part of the room.

“This is doubtless some lesson that the meddlesome Hwídah
wishes to teach me,” he complained to his fellows the next
day, dark circles under his eyes. “Yet I will not be
bullied into chasing nonexistent bugs. If the nun is so
annoyed by the squeaking of our users, let her deal with
it!”

The monk set mousetraps in the corners and equipped himself
with a pair of earplugs. Thus he passed the next night, and
the night after, though his sleep was less restful than he
would have liked.

On the seventh night, the exhausted monk turned off the
light and fell hard upon his bed. There was a loud CRACK
and the monk found himself tumbling through space. With a
CRASH he bounced off his mattress and rolled onto a cold
stone floor. His bed had, apparently, fallen through the
floor into the basement.

Perched high on a ladder—just outside the gaping hole in
the basement’s wooden ceiling—was the nun Hwídah, her
face lit only by a single candle hanging nearby. She
descended and dropped an old brace-and-bit hand drill into
the monk’s lap. Then she crouched down next to his ear.

“If you don’t understand it, it’s dangerous,” whispered the
nun.

The bigger video setup at OpenFest 2015

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3290

(I wrote this a few months ago and completely forgot to post it.)
This year we put together a considerably more serious setup for the video recording. For some background, you can see the diagrams for the “Bulgaria” hall, the chamber hall, and the “Music” studio.
(or the much more artistically drawn diagrams by Guru for the “Bulgaria” hall and the chamber hall)
Our requirements this year were considerably higher than in previous years:
– the ability to show either the camera or the speaker’s signal directly, in all halls;
– for the two larger halls, the ability to film the person asking a question from the audience;
– again for the two larger halls, control from the video mixer over what goes out to the projector;
– a recording resolution of at least 720p;
– a Full HD (1080p) stream from the “Bulgaria” hall;
– stereo recording and stereo sound from the “Bulgaria” hall (which went unused, because the talk that was expected to need them was cancelled at the last moment);
– a recording of what happened on stage (the performing musicians) in the “Bulgaria” hall;
– a WebM stream (which was dropped for lack of time and encoding capacity).
All of our setups share the following ideas:
– All the audio is collected in one audio mixer, which feeds both the hall’s PA system and one of the cameras (which we mark as the primary camera). This is done to be certain that audio and video stay synchronized (otherwise you can end up with a frame or two of audio/video desynchronization, which is quite annoying);
– The primary camera held a memory card, onto which a backup recording was made;
– In the “Bulgaria” hall the intercom used to communicate with the camera operators also went through the mixer;
– In each hall we generally placed one camera for a close shot (filming the speaker), one for a wide shot, and one facing the audience (i.e., toward the people asking questions);
– All the video sources – cameras, the speaker’s laptop, and so on – are configured to output the same resolution at the same refresh rate and are fed into a single video mixer (an ATEM of some kind) over SDI or HDMI (for the nearer sources). To that end the cameras are set to the same mode, and a scaler is placed in front of the speaker’s laptop so that it can accept the various inputs and fold the signal into the right format.
– The video mixer outputs one signal for recording (mostly to an Atomos Ninja or something similar), one for restreaming, and, in two of the halls, one to the projector.
– The stream was sent to a single server, to which re-encoders attached in order to produce the streams at the different quality levels.
The “Bulgaria” hall ran at 1080i60 (i.e., 1920×1080, 60 Hz), and the other two halls at 720p60 (1280×720, 60 Hz). The refresh rate is chosen so that no flicker appears when a camera sees the screen (which was a serious problem in last year’s recordings). The resolution was dictated by the fact that in the two smaller halls the scaler could output 1080p or 720p, while the video mixers do not accept 1080p as input.
This year the setup was made much easier by several things we built during the preparation:
– reels with wound cables that can easily be rolled out (and rolled back up);
– a ready-made dolly for a camera, so that it can be moved relatively easily;
– a tested and simple intercom to the cameras.
Unfortunately we ran into quite a few problems during setup and had to stay on site until about 1:30 AM to sort them out. Much of our equipment arrived after 9, there was a concert in the chamber hall, we repeatedly had power problems with different phases on adjacent outlets, and there were several complicated jobs we had to do (including climbing ladders and installing equipment on the ceiling in one of the halls).

On smart people, a stupid nation, and reality

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=1914

More than 15 years ago I witnessed the following scene:

It was May 24th, and I was walking along Vitosha Boulevard past the park in front of the NDK when I suddenly noticed a television crew ahead of me. A girl with a microphone and a boy with a camera were stopping passers-by:

– Good afternoon! What would you say about the Bulgarian people?
– Good afternoon! Tell us something about the Bulgarian people!

Literally three meters in front of me they stopped an old man. Tall, thin, with a suit, glasses, and a cane.

– Good afternoon! What do you think of the Bulgarian people?

The old man leaned on his cane with both hands, looked at them over his glasses, and declared in a drawling, elderly voice:

– Bulgaria is a weak country!

– But sir, we are asking what you think of the Bulgarian people!

– Bulgaria is a VERY weak country!

– Sir, we are not asking about the country! We are asking about the people!

– Bulgaria is a MIND-BOGGLINGLY weak country! – even more drawn out and raspy. The bystanders were already chuckling at the obvious senility. I confess, so was I.

– But sir, we are asking about the people! Not the country! The people, tell us what you think of the people!

– Well, children, that is exactly what I keep telling you – what I think of the people. Just a little more roundabout. More politely, so to speak. Fit for television…

Every face around instantly went stiff. Mine too. It stung to the bone, and yet there was not a word you could say back… The old man, it seemed, was no senile fool at all.

… A lot of time has passed since then. I never stopped wondering how this works. We are supposedly not such stupid people – or at least back then we were less stupid than we are now. How, then, can we be such a hopelessly dumb nation? The contradiction is… mind-boggling.

Yesterday an acquaintance showed me the answer. Very visually and very simply. As you would expect from a psychologist.

We had sat down at his place for fifteen minutes of pleasant chat, and we inadvertently got onto the subject. When I mentioned the contradiction that torments me, he only smiled. He got up, pulled a smallish box out of a cabinet, and opened it.

– Which of these cards has a monster drawn on it?

In the box were some thirty cards with pictures on them. Landscapes, drawn in a characteristic style – nothing but large patches of color. Even so, the subjects were perfectly recognizable – here a meadow with sky above it, there a waterfall on the left and a cliff on the right, there a clearing with a forest behind it… I examined them very carefully, several times. I turned them upside down. I held them at an angle to the light…

– I don’t see a monster on any of them.

– Really? It’s quite simple. – He began arranging the cards like pieces of a puzzle. When he finished, on the floor between us lay a picture of something that must have come straight out of a child’s nightmare.

– Do you see what a horrible whole such beautiful elements can produce? It is the same with a nation. Even if every person individually is smart, the whole can be astonishingly stupid… And the reverse is possible too – many uneducated and stupid people can make up a smart and decent nation. We, however, are the first case.

… I sit and wonder. In the years since the incident with the television crew, the people around me have grown mind-bogglingly stupid. The collections here, here and here are laughably poor and meager against the background of reality, even just its medical part – and the rest is no better. I expect that very soon some spiritual heir of Todor Kolev will start singing “When Will We Catch Up with the Central Africans”. And, as once with the Americans, everyone will think, “Even if they ran toward us, we still wouldn’t catch them”… Is there anything we can do to save our people from self-destruction at every possible level, from the personal to the national? Can we do it at all?

… In January 1990 my neighbors asked me for help. They had a relative who had fled to the States back in the seventies. They had received a letter from him – he had an “Internet address” (e-mail), and letters to that could not be stopped at the post office. They knew I dabbled in such things – could I not help?

They were wonderful people, so I agreed with pleasure. I took the envelope, carefully plastered with stamps, opened it, went to the place where I used the Net (to this day no one has released me from the promise not to say where), and typed the letter in by hand. The next day there was a reply – I copied it by hand from the screen and took it to my neighbors. They were stunned – how could it get to the States and back in only three days?! They had some doubts, since the handwriting did not look like their relative’s, but the content convinced them it was him.

The letters began circulating every week. Along the way I got to know the relative and we became friends. I started corresponding with him myself. And one day I asked him whether he intended to return to Bulgaria, now that Bai Tosho was no longer in power. The answer scalded me – I remember it to this day:

Boy, you have not understood the most important thing. We who flee Bulgaria are not fleeing Bai Tosho. We are fleeing you, the ones who stay in it… You seem a decent lad. I hope you understand this quickly, so that you can save yourself too.

Twenty-five years and more have passed since then, and I still have not fully understood it. That it is the naked truth – it is; I cannot close my eyes to the facts. But I do not want to accept it. I know so many wonderful, decent, genuine people here in Bulgaria! Yes, there used to be twice as many, and half of them are now scattered around the world with no intention of returning. But even so, plenty remain. People who deserve care, and support, and help. And that we somehow manage to wrench our people out of the mind-boggling stupidity, vulgarity, selfishness, gutlessness, pettiness… let me not go on, or I will want to cry.

I understand how hard it is – not to say impossible. How, while a handful of us gather the nation’s virtues speck by speck, a gang of criminals who hold the real power scatter them and trample them into the mud with bucket-wheel excavators, deliberately and purposefully. And the fattened herd does not care; it grunts and tramples them down the rest of the way… And yet I cannot and will not accept what my eyes see. Least of all the thought that there is nothing I can do.

That is why lately I often scan the Bulgarian Net, looking for something positive. For signs that someone is fighting to restore intellect to whomever they can, however they can. If you have noticed any, drop a link. It will probably be useful to more than just me.

Kuhn’s Paradox

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/02/19/kuhns-paradox.html

I’ve been making the following social observation frequently in my talks
and presentations for the last two years. While I suppose it’s rather
forward of me to do so, I’ve decided to name this principle:

Kuhn’s Paradox

For some time now, this paradoxical principle appears to hold: each
day, more lines of freely licensed code exist than ever before in human
history; yet, it also becomes increasingly more difficult each day
for users to successfully avoid proprietary software while completing their
necessary work on a computer.

Kuhn’s View On Motivations & Causes of Kuhn’s Paradox

I believe this paradox is primarily driven by the cooption of software
freedom by companies that ostensibly support Open Source, but have the
(now extremely popular) “open source almost everything” philosophy.

For certain areas of software endeavor, companies dedicate enormous
resources toward the authorship of new Free Software for particular narrow
tasks. Often, these core systems provide underpinnings and fuel the growth
of proprietary systems built on top of them. An obvious example here is
OpenStack: a fully Free Software platform, but most deployments of
OpenStack add proprietary features not available from a pure upstream
OpenStack installation.

Meanwhile, in other areas, projects struggle for meager resources to
compete with the largest proprietary behemoths. Large user-facing,
server-based applications of the Service as a Software Substitute variety,
along with massive social media sites like Twitter and Facebook that
actively work against federated social network systems, are the two
classes of most difficult culprits on this point. Even worse, most
traditional web sites have now become a mix of mundane content (i.e.,
HTML) and proprietary Javascript programs, which are installed on-demand
into the users’ browser all day long, even while most of those servers run
a primarily Free Software operating system.

Finally, much (possibly a majority) of computer use in industrialized
society is via hand-held mobile devices (usually inaccurately described
as “mobile phones”). While some of these devices have Free Software
operating systems (i.e., Android/Linux), nearly all the applications for
all of these devices are proprietary software.

The explosion of for-profit interest in “Open Source” over the
last decade has led us to this paradoxical problem, which increases daily
— because the gap between “software under a license that respects my
rights to copy, share, and modify” and “software that’s
essential for my daily activities” grows linearly wider with each
sunset.

I propose herein no panacea; I wish I had one to offer. However, I
believe the problem is exacerbated by our community’s tendency to ignore
this paradox, and its pace even accelerates due to many developers’ belief
that having a job writing any old Free Software replaces the need for
volunteer labor to author more strategic code that advances software
freedom.

Linksvayer’s View On Motivations & Causes of Kuhn’s Paradox

Linksvayer agrees the paradox is observable, but disagrees with me
regarding the primary motivations and causes. Linksvayer claims the
following are the primary motivations and causes of Kuhn’s paradox:

1. Software is becoming harder to avoid.

2. Proprietary vendors outcompete relatively decentralized free
software efforts to put software in the hands of people.

The latter may be increasing or decreasing. But even if the latter is
decreasing, the former trumps it.

Note the competition includes competition to control policy,
particularly public policy. Unfortunately most Free Software activists
appear to be focused on individual (thus dwarfish) heroism and insider
politics rather than collective action.

I rewrote Linksvayer’s text slightly from a comment made to this blog post
to include it in the main text, as I find his arguments regarding causes
as plausible as mine.

As an apologia (in case Linksvayer means that I myself spend too much
time on insider politics): I believe that the cooption I discussed above
means that the seemingly broad base of support we could use for the
collective action Linksvayer recommends is actually tiny. In other words, most
people involved with Free Software development now are not Free Software
activists. (Compare it to 20 years ago, when rarely did you find a Free
Software developer who wasn’t also a Free Software activist.) Therefore,
one central part of my insider politics work is to recruit moderate Open
Source enthusiasts to become radical Free Software activists.

About McAfee’s claim he could unlock iPhone

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/02/about-mcafees-claim-he-could-unlock.html

So John McAfee has claimed he could unlock the terrorist’s iPhone. Is there any truth to this?

http://www.businessinsider.com/john-mcafee-ill-decrypt-san-bernardino-phone-for-free-2016-2

No, of course this is bogus. If McAfee could do it, he’d already have done it. In other words, if it were possible, he’d just say “we’ve unlocked an iPhone 5c running iOS 9 by exploiting {LTE baseband, USB stack, WiFi stack, etc.}, and we can therefore do the same thing for the terrorist’s phone”. Otherwise, it’s just bluster, because everyone knows the FBI won’t let McAfee near the phone in question without proof he could actually accomplish the task.

There’s a lot of bluster in the hacking community like this. There is a big difference between those who have done, and those who claim they could do.

I suggest the LTE baseband, USB stack, and WiFi stack because that’s how I’d attack the phone. WiFi these days is pretty well tested, so that’s the least likely, but LTE and USB should be wide open. I wouldn’t do anything to help the FBI, though. The corrupt FBI goes around threatening security researchers like me, trampling on our rights, so they’ve burned a lot of bridges with precisely the people who could help them in such situations.

I would assume the NSA already has an LTE baseband exploit for Apple phones. If they don’t, then what else are they wasting their tax dollars on? However, the NSA hates the FBI (and rightly so: the FBI are a bunch of corrupt fucktards), so I don’t see them wanting to help the FBI in any way. Indeed, the entire point of the USA FREEDOM Act was to wrest control of the phone metadata from the NSA and give it to the FBI, so the NSA is particularly hating the FBI right now.

Yahoo Hosts The Streaming Video Alliance’s Quarterly Member Meeting

Post Syndicated from yahoo original https://yahooeng.tumblr.com/post/139612658811

Yesterday, Yahoo hosted the Streaming Video Alliance’s quarterly member meeting, where over 70 executives from across the streaming video landscape convened to advance discussions on a broad range of streaming video topics and reach agreements on best practices, policy, and proposed standards. Ron Jacoby, VP of Engineering, Yahoo Video & TV Applications, and P.P.S. Narayan, VP of Engineering, Yahoo Video, delivered the morning’s featured keynotes.

Ron kicked off by discussing the challenges and complexities behind building a strong streaming video experience, which he rooted in the rapidly changing consumption patterns of today’s audiences. Millennials, who now represent over 30% of the US population, consume 283% more media via the internet than non-millennial age groups – a vast change from how their parents watched TV. This reflects a dramatic shift in how TV is being consumed, one that is accelerating in key demographics. Additionally, the 18-24 year old demographic saw a 37% decline in traditional TV viewing. Ron attributed the shift to the pervasiveness of online video content, from premium video services to social media, across portable media devices, including laptops, tablets, smartphones, etc.

“In order for the industry to succeed in the face of these trends, it needs to look at content and delivery differently,” said Jacoby. “Investments in live streaming and innovation in video protocols and delivery are necessary.”

P.P.S. Narayan followed with a presentation about the technical opportunities and challenges in video streaming. Echoing Ron’s statements, PPSN said, “Folks are moving away from TV… and watching video across different social and OTT platforms. Gone are the days of sitting in the same room with everyone watching the same show.”

He added, “The shift in consumer behavior and consumption patterns is leading to the disaggregation of content – providers are taking content from TV and cable, and making it accessible on multiple platforms, such as phones, tablets, and connected devices. Services like Hulu, HBO Go, and MLB.TV have invested heavily in this, which is a clear indication that they, and the rest of the industry, are serious about embracing this consumer shift.”

This move is indicative of bigger technology shifts, which begs the question: can the quality of the video be as good as what we see on TV? Almost. PPSN explained that when Yahoo hosted the first-ever NFL live stream, the technological considerations he and his team had to account for included resolution, bandwidth, encoding, ads, and latency.

He also talked about next-generation immersive experiences, including time-based immersion, made possible by cloud DVRs and live scrubbing; space-based immersion, with VR and 360-degree videos; and people-based immersion, evidenced by the sharing of content on social media. Additionally, he covered how the disaggregation of content, without a “TV Guide,” is leading to gaps in content discovery and personalization. The Yahoo Video Guide is one example of addressing users’ growing need to discover and consume relevant and contextual content.

PPSN concluded by expressing the importance of groups like the SVA, which are critical to working together as an industry and moving the ball forward in streaming video.

Canonical, Ubuntu and why I seem so upset about them all the time

Post Syndicated from Matthew Garrett original http://mjg59.dreamwidth.org/39913.html

I had no access to the internet for most of my childhood. Nobody in my area knew anything about programming. I learned a great deal from a number of CDs that included free software source code and archives of project mailing lists. When I got to university, I learned even more by being able to develop a Debian-based OS for use in our computer facilities. That gave me the experience and knowledge that I needed to become involved in Debian development, which in turn gave me the background required to be able to help Ubuntu become the first free software operating system to work out of the box on modern laptops. From there, I’ve been able to build my career around developing free software.

Ubuntu can be translated as “I am who I am because of who we all are”. I am who I am because people made the choice to release their software under licenses that permitted examination, modification and redistribution. I am who I am because I was able to participate in communities that took advantage of those freedoms to produce new and better software. I am who I am because when my priorities differed from those of existing communities, it was still possible for me to benefit from their work and for them to benefit from mine.

Free software doesn’t mean that the software is entirely free of restrictions. While a core aspect is the right to distribute modified versions of code, it has never been fundamental to free software that you be able to do so while still claiming that the code is the original version. Various approaches have been taken to make it possible for users to distinguish modified versions, ranging from simply including license terms that require modified versions be marked as such, to licenses that require that you change the name of the package if you modify it. However, what’s probably the most effective approach has been to apply trademark law to the problem. Mozilla’s trademark policy is an example of this – if you modify the code in ways that aren’t approved by Mozilla, you aren’t entitled to use the trademarks.

A requirement that you avoid use of trademarks in an infringing way is reasonable. Mozilla products include support for building with branding disabled, which makes it very straightforward for a user to build a modified version of Firefox that can be redistributed without any trademark issues. Red Hat have a similar policy for Fedora and RHEL[1] – you simply replace the packages that contain the branding and you’re done.

Canonical’s IP policy around Ubuntu is fundamentally different. While Mozilla make it clear that you simply no longer have a right to use the trademarks under trademark law, Canonical appear to require that you remove all trademarks entirely even if using them wouldn’t be a violation of trademark law. While Mozilla restrict the redistribution of modified binaries that include their trademarks, Canonical insist that you rebuild everything even if the package doesn’t contain any trademarks. And while Mozilla give you a single build option that creates binaries that conform with their trademark requirements, Canonical will refuse to tell you what you have to do.

When asked about this at SCALE earlier this year, Mark Shuttleworth claimed that Ubuntu’s policy was consistent with that of other projects. This is inaccurate. Nobody else requires that you rebuild every package before you can redistribute it in a modified distribution – such a restriction is a violation of freedom 2 of the Free Software Definition, and as a result the binary distributions of Ubuntu are not free software. Nobody else refuses to discuss whether you’re required to remove non-infringing trademarks in order to be able to redistribute. Nobody else responds to offers to make it easier for users to produce non-infringing derivatives with a flat refusal.

Mark claims that I’m only raising this issue because I work for a competitor and wish to harm Canonical. Nothing could be further from the truth. I began discussing this before working for my current employers – my previous employers had no meaningful market overlap with Canonical at all. The reason I care is because I care about free software. I care about people being able to derive new and interesting things from existing code. I care about a small team of people being able to take Ubuntu and make something better in the same way that Ubuntu did with Debian. I care about ensuring that users receive the freedom to do this without having to jump through a significant number of hoops in the process. Ubuntu has been a spectacularly successful vehicle for getting free software into the hands of users. Mark’s generosity in funding this experiment has undoubtedly made the world a better place. Canonical employs a large number of talented developers writing high quality software, many of whom I’m fortunate enough to be able to call friends. And Canonical are squandering that by restricting the rights of their users and alienating the free software community.

I want others to be who they are because of my work and the work of all the others like me. Anything that makes that more difficult saddens me, and so I do what I can to fix it. I criticise Canonical’s policies in the hope that we, as a community, can convince Canonical to agree that this kind of artificial barrier to modification hurts us more than it helps them. In many ways, Canonical remain one of our best hopes for broadening the reach of free software, and this is why it’s unfortunate that they do so in a way that makes it more difficult for people to have the same experiences that I did.

[1] While it’s easy to turn a trademark infringing version of RHEL into a non-infringing one, Red Hat don’t provide publicly available binary packages for RHEL. If you get hold of them somehow you’re entitled to redistribute them freely, but Red Hat’s subscriber agreement indicates that if you do this as a Red Hat customer you will lose access to further binary updates – a provision that I find utterly repugnant. Its inclusion reduces my respect for Red Hat and my enthusiasm for working with them, and given the official Red Hat support for CentOS it appears to make no sense whatsoever. Red Hat should drop it.
