Tag Archives: dfa

Securing Your Cryptocurrency

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-up-your-cryptocurrency/

In our blog post on Tuesday, Cryptocurrency Security Challenges, we wrote about the two primary challenges faced by anyone interested in safely and profitably participating in the cryptocurrency economy: 1) making sure you’re dealing with reputable and ethical companies and services, and 2) keeping your cryptocurrency holdings safe and secure.

In this post, we’re going to focus on how to make sure you don’t lose any of your cryptocurrency holdings through accident, theft, or carelessness. You do that by backing up the keys needed to sell or trade your currencies.

$34 Billion in Lost Value

Of the 16.4 million bitcoins said to be in circulation in the middle of 2017, close to 3.8 million may have been lost because their owners are no longer able to claim their holdings. Based on today’s valuation, that could total as much as $34 billion in lost value. And that’s just bitcoins. There are now over 1,500 different cryptocurrencies, and we don’t know how many of those have been misplaced or lost.



Now that some cryptocurrencies have reached (at least for now) staggering heights in value, it’s likely that owners will be more careful in keeping track of the keys needed to use their cryptocurrencies. For the ones already lost, however, the owners have been separated from their currencies just as surely as if they had thrown Benjamin Franklins and Grover Clevelands over the railing of a ship.

The Basics of Securing Your Cryptocurrencies

In our previous post, we reviewed how cryptocurrency keys work, and the common ways owners can keep track of them. A cryptocurrency owner needs two keys to use their currencies: a public key, which can be shared with others and is used to receive currency, and a private key, which must be kept secure and is used to spend or trade currency.
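To make that relationship concrete, here is a minimal sketch using the third-party ecdsa Python package. The curve, message, and hex printing are illustrative choices only; real wallets layer address derivation, checksums, and careful key management on top of this.

# Minimal sketch of the public/private key relationship, using the
# third-party `ecdsa` package (pip install ecdsa). The curve and message
# are illustrative; real wallets add address encoding and key derivation.
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)   # keep this secret
public_key = private_key.get_verifying_key()         # safe to share

# Spending or trading requires a signature only the private key can produce;
# anyone holding the public key can verify it.
message = b"send 0.1 BTC to address X"
signature = private_key.sign(message)
assert public_key.verify(signature, message)

print("private key:", private_key.to_string().hex())
print("public key: ", public_key.to_string().hex())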

Many wallets and applications allow the user to require extra security to access them, such as a password, or iris, face, or thumb print scan. If one of these options is available in your wallets, take advantage of it. Beyond that, it’s essential to back up your wallet, either using the backup feature built into some applications and wallets, or manually backing up the data used by the wallet. When backing up, it’s a good idea to back up the entire wallet, as some wallets require additional private data to operate that might not be apparent.

No matter which backup method you use, it is important to back up often and to have multiple backups, preferably in different locations. As with any valuable data, a 3-2-1 backup strategy is good to follow, which ensures that you’ll have a good backup copy if anything goes wrong with one or more copies of your data.

One more caveat: don’t reuse passwords. This applies to all of your accounts, but is especially important for something as critical as your finances. Don’t ever use the same password for more than one account. If security is breached on one of your accounts, someone could connect your name or ID with other accounts and attempt to use the same password there, as well. Consider using a password manager such as LastPass or 1Password, which make creating and using complex and unique passwords easy no matter where you’re trying to sign in.

Approaches to Backing Up Your Cryptocurrency Keys

There are numerous ways to be sure your keys are backed up. Let’s take them one by one.

1. Automatic backups using a backup program

If you’re using a wallet program on your computer, for example, Bitcoin Core, it will store your keys, along with other information, in a file. For Bitcoin Core, that file is wallet.dat. Other currencies will use the same or a different file name and some give you the option to select a name for the wallet file.

To back up the wallet.dat or other wallet file, you might need to tell your backup program to explicitly back up that file. Users of Backblaze Backup don’t have to worry about configuring this, since by default, Backblaze Backup will back up all data files. You should determine where your particular cryptocurrency, wallet, or application stores your keys, and make sure the necessary file(s) are backed up if your backup program requires you to select which files are included in the backup.

Backblaze B2 is an option for those interested in low-cost and high security cloud storage of their cryptocurrency keys. Backblaze B2 supports 2-factor verification for account access, works with a number of apps that support automatic backups with encryption, error-recovery, and versioning, and offers an API and command-line interface (CLI), as well. The first 10GB of storage is free, which could be all one needs to store encrypted cryptocurrency keys.

2. Backing up by exporting keys to a file

Many apps and wallets let you export your keys to a file. Once exported, your keys can be stored on a local drive, USB thumb drive, DAS, NAS, or in the cloud with any cloud storage or sync service you wish. Encrypting the file is strongly encouraged — more on that later. If you use 1Password, LastPass, or another secure notes program, you could also store your keys there.

3. Backing up by saving a mnemonic recovery seed

A mnemonic phrase, mnemonic recovery phrase, or mnemonic seed is a list of words that stores all the information needed to recover a cryptocurrency wallet. Many wallets will have the option to generate a mnemonic backup phrase, which can be written down on paper. If the user’s computer no longer works or their hard drive becomes corrupted, they can download the same wallet software again and use the mnemonic recovery phrase to restore their keys.

The phrase can be used by anyone to recover the keys, so it must be kept safe. Mnemonic phrases are an excellent way of backing up and storing cryptocurrency keys, and so they are used by almost all wallets.

A mnemonic recovery seed is represented by a group of easy to remember words. For example:

eye female unfair moon genius pipe nuclear width dizzy forum cricket know expire purse laptop scale identify cube pause crucial day cigar noise receive

The above words represent the following seed:

0a5b25e1dab6039d22cd57469744499863962daba9d2844243fec9c0313c1448d1a0b2cd9e230a78775556f9b514a8be45802c2808efd449a20234e9262dfa69

These words have certain properties:

  • The first four letters are enough to unambiguously identify the word.
  • Similar words are avoided (such as: build and built).

Bitcoin and most other cryptocurrencies, such as Litecoin and Ethereum, use mnemonic seeds that are 12 to 24 words long. Other currencies might use seeds of different lengths.
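If you are curious how a phrase and its binary seed relate, the third-party mnemonic Python package (a reference implementation of the BIP-39 scheme many wallets follow) can generate one. The word count and empty passphrase below are example choices, not recommendations.

# Sketch of how a BIP-39 style mnemonic maps to a binary seed, using the
# third-party `mnemonic` package (pip install mnemonic). Word count and
# (empty) passphrase are example choices, not recommendations.
from mnemonic import Mnemonic

mnemo = Mnemonic("english")
phrase = mnemo.generate(strength=256)        # 256 bits of entropy -> 24 words
seed = mnemo.to_seed(phrase, passphrase="")  # the seed a wallet derives its keys from

print(phrase)
print(seed.hex())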

4. Physical backups — Paper, Metal

Some cryptocurrency holders believe that their backup, or even all their cryptocurrency account information, should be stored entirely separately from the internet to avoid any risk of their information being compromised through hacks, exploits, or leaks. This type of storage is called “cold storage.” One method of cold storage involves printing the keys onto a piece of paper and then erasing any record of the keys from all computer systems. The keys can be entered into a program from the paper when needed, or scanned from a QR code printed on the paper.

Printed public and private keys

Some who go to extremes suggest separating the mnemonic needed to access an account into individual pieces of paper and storing those pieces in different locations in the home or office, or even different geographical locations. Some say this is a bad idea since it could be possible to reconstruct the mnemonic from one or more pieces. How diligent you wish to be in protecting these codes is up to you.

Mnemonic recovery phrase booklet

There’s another option that could make you the envy of your friends. That’s the Cryptosteel wallet, a stainless steel case that comes with more than 250 stainless steel letter tiles engraved on each side. Codes and passwords are assembled manually from the supplied part-randomized set of tiles. Users are able to store up to 96 characters’ worth of confidential information. Cryptosteel claims the wallet is fireproof, waterproof, and shock-proof.

Cryptosteel cold wallet

Of course, if you leave your Cryptosteel wallet in the pocket of a pair of ripped jeans that gets thrown out by the housekeeper, as happened to the character Russ Hanneman on the TV show Silicon Valley in last Sunday’s episode, then you’re out of luck. That fictional billionaire investor lost a USB drive with $300 million in cryptocoins. Let’s hope that doesn’t happen to you.

Encryption & Security

Whether you store your keys on your computer, an external disk, a USB drive, DAS, NAS, or in the cloud, you want to make sure that no one else can use those keys. The best way to handle that is to encrypt the backup.

With Backblaze Backup for Windows and Macintosh, your backups are encrypted in transmission to the cloud and on the backup server. Users have the option to add an additional level of security by adding a Personal Encryption Key (PEK), which secures their private key. Your cryptocurrency backup files are secure in the cloud. Using our web or mobile interface, previous versions of files can be accessed, as well.

Our object storage cloud offering, Backblaze B2, can be used with a variety of applications for Windows, Macintosh, and Linux. With B2, cryptocurrency users can choose whichever method of encryption they wish to use on their local computers and then upload their encrypted currency keys to the cloud. Depending on the client used, versioning and life-cycle rules can be applied to the stored files.

Other backup programs and systems provide some or all of these capabilities, as well. If you are backing up to a local drive, it is a good idea to encrypt the local backup, which is an option in some backup programs.
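If your backup tool doesn’t handle encryption for you, encrypting the file yourself before it ever leaves your machine is straightforward. Here is a minimal sketch using the Python cryptography package; the file names are placeholders, and storing the Fernet key somewhere safe and separate from the backup is the part that needs real care.

# Minimal sketch: encrypt a wallet backup locally before copying it to a
# drive or cloud bucket, using the third-party `cryptography` package
# (pip install cryptography). File names are placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # store this key safely, separate from the backup
fernet = Fernet(key)

with open("wallet.dat", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("wallet.dat.enc", "wb") as f:
    f.write(ciphertext)

# To restore later: plaintext = Fernet(key).decrypt(ciphertext)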

Address Security

Some experts recommend using a different address for each cryptocurrency transaction. The address is not the same as your wallet, so you are not creating a new wallet, just a new identifier for people sending you cryptocurrency. Creating a new address is usually as easy as clicking a button in the wallet.

One of the chief advantages of using a different address for each transaction is anonymity. Each time you use an address, you put more information into the public ledger (blockchain) about where the currency came from or where it went. Over time, reusing the same address could allow someone to map your relationships, transactions, and incoming funds. The more you use that address, the more information someone can learn about you. For more on this topic, refer to Address reuse.

Note that a downside of using a paper wallet with a single key pair (a type-0 non-deterministic wallet) is that it has the vulnerabilities listed above. Each transaction using that paper wallet will add to the public record of transactions associated with that address. Newer wallets, i.e., “deterministic” wallets or those using mnemonic code words, support multiple addresses and are now recommended.

There are other approaches to keeping your cryptocurrency transaction secure. Here are a couple of them.

Multi-signature

Multi-signature refers to requiring more than one key to authorize a transaction, much like requiring more than one key to open a safe. It is generally used to divide up responsibility for possession of cryptocurrency. Standard transactions could be called “single-signature transactions” because transfers require only one signature — from the owner of the private key associated with the currency address (public key). Some wallets and apps can be configured to require more than one signature, which means that a group of people, businesses, or other entities all must agree to trade in the cryptocurrencies.
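To make the idea concrete, here is a toy sketch of an M-of-N check. It is not how any blockchain actually implements multi-signature scripts; the keys, threshold, and message are invented purely for illustration.

# Toy illustration of an M-of-N ("multi-signature") authorization check.
# This is not a real blockchain script; it only shows the idea that a
# transaction needs several independent approvals before it is accepted.
from ecdsa import SigningKey, SECP256k1, BadSignatureError

REQUIRED = 2                                               # a 2-of-3 policy
signers = [SigningKey.generate(curve=SECP256k1) for _ in range(3)]
public_keys = [sk.get_verifying_key() for sk in signers]

tx = b"move 5 BTC from the shared account"
signatures = [signers[0].sign(tx), signers[2].sign(tx)]    # only two parties sign

def count_valid(sigs, keys, message):
    valid = 0
    for sig in sigs:
        for key in keys:
            try:
                if key.verify(sig, message):
                    valid += 1
                    break
            except BadSignatureError:
                continue
    return valid

print("authorized:", count_valid(signatures, public_keys, tx) >= REQUIRED)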

Deep Cold Storage

Deep cold storage ensures the entire transaction process happens in an offline environment. There are typically three elements to deep cold storage.

First, the wallet and private key are generated offline, and the signing of transactions happens on a system not connected to the internet in any manner. This ensures it’s never exposed to a potentially compromised system or connection.

Second, details are secured with encryption to ensure that even if the wallet file ends up in the wrong hands, the information is protected.

Third, storage of the encrypted wallet file or paper wallet is generally at a location or facility that has restricted access, such as a safety deposit box at a bank.

Deep cold storage is used to safeguard a large individual cryptocurrency portfolio held for the long term, or for trustees holding cryptocurrency on behalf of others, and is possibly the safest method to ensure a crypto investment remains secure.

Keep Your Software Up to Date

You should always make sure that you are using the latest version of your app or wallet software, which includes important stability and security fixes. Installing updates for all other software on your computer or mobile device is also important to keep your wallet environment safer.

One Last Thing: Think About Your Testament

Your cryptocurrency funds can be lost forever if you don’t have a backup plan for your peers and family. If the location of your wallets or your passwords is not known by anyone when you are gone, there is no hope that your funds will ever be recovered. Taking a bit of time on these matters can make a huge difference.

To the Moon*

Are you comfortable with how you’re managing and backing up your cryptocurrency wallets and keys? Do you have a suggestion for keeping your cryptocurrencies safe that we missed above? Please let us know in the comments.


*To the Moon — Crypto slang for a currency that reaches an optimistic price projection.

The post Securing Your Cryptocurrency appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Russia Blocks 50 VPNs & Anonymizers in Telegram Crackdown, Viber Next

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-50-vpns-anonymizers-in-telegram-crackdown-viber-next-180504/

Any entity operating an encrypted messaging service in Russia needs to register with local authorities. They must also hand over their encryption keys when requested to do so, so that users can be monitored.

Messaging giant Telegram refused to give in to Russian pressure. Founder Pavel Durov said that he would not compromise the privacy of Telegram’s 200m monthly users, despite losing a lawsuit against the Federal Security Service which compelled him to do so. In response, telecoms watchdog Roskomnadzor filed a lawsuit to degrade Telegram via web-blocking.

After a Moscow court gave the go-ahead for Telegram to be banned in Russia last month, chaos broke out. ISPs around the country tried to block the service, which was using Amazon and Google to provide connectivity. Millions of IP addresses belonging to both companies were blocked and countless other companies and individuals had their services blocked too.

But despite the Russian carpet-bombing of Telegram, the service steadfastly remained online. People had problems accessing the service at times, of course, but their determination coupled with that of Telegram and other facilitators largely kept communications flowing.

Part of the huge counter-offensive was mounted by various VPN and anonymizer services that allowed people to bypass ISP blocks. However, they too have found themselves in trouble, with Russian authorities blocking them for facilitating access to Telegram. In an announcement Thursday, the telecoms watchdog revealed the scale of the crackdown.

The Deputy Head of Roskomnadzor told TASS that dozens of VPNs and similar services had been blocked, while hinting at yet more to come.

“Fifty for the time being,” Subbotin said.

With VPN providers taking a hit on behalf of Telegram, there could be yet more chaos looming on the horizon. It’s feared that other encrypted services, which have also failed to hand over their keys to the FSB, could be targeted next.

Ministry of Communications chief Nikolai Nikiforov told reporters this week that if Viber doesn’t fall into line, it could suffer the same fate as Telegram.

“This is a matter for the Federal Security Service, because the authority with regard to such specific issues in the execution of the order for the provision of encryption keys is the authority of the FSB,” Nikiforov said.

“If they have problems with the provision of encryption keys, they can also apply to the court and obtain a similar court decision,” the minister said, responding to questions about the Japanese-owned, Luxembourg-based communications app.

With plenty of chaos apparent online, there are also reports of problems from within Roskomnadzor itself. For the past several days, rumors have been circulating in Russian media that Roskomnadzor chief Alexander Zharov has resigned, perhaps in response to the huge over-blocking that took place when Telegram was targeted.

When questioned by reporters this week, Ministry of Communications chief Nikolai Nikiforov refused to provide any further information, stating that such a matter would be for the prime minister to handle.

“I would not like to comment on this. If the chairman of the government takes this decision, I recall that the heads of services are appointed by the decision of the prime minister and personnel decisions are never commented on,” he said.

Whether Prime Minister Dmitry Medvedev will make a statement is yet to be seen, but this week his office has been dealing with a blocking – or rather unblocking – controversy of its own.

In a public post on Facebook May 1, Duma deputy Natalya Kostenko revealed that she was having problems due to the Telegram blockades.

“Dear friends, do not write to me on Telegram, I’m not getting your messages. Use other channels to contact me,” Kostenko wrote.

In response, Dmitry Medvedev’s press secretary, Natalia Timakova, told her colleague to circumvent the blockade so that she could access Telegram once again.

“Use a VPN! It’s simple. And it works almost all the time,” Timakova wrote.

Until those get blocked too, of course…..

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

No, Ray Ozzie hasn’t solved crypto backdoors

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/04/no-ray-ozzie-hasnt-solved-crypto.html

According to this Wired article, Ray Ozzie may have a solution to the crypto backdoor problem. No, he hasn’t. He’s only solving the part we already know how to solve. He’s deliberately ignoring the stuff we don’t know how to solve. We know how to make backdoors, we just don’t know how to secure them.

The vault doesn’t scale

Yes, Apple has a vault where they’ve successfully protected important keys. No, it doesn’t mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.

A good analogy to Ozzie’s solution is LetsEncrypt for getting SSL certificates for your website, which is fairly scalable, using a private key locked in a vault for signing hundreds of thousands of certificates. That this scales seems to validate Ozzie’s proposal.

But at the same time, LetsEncrypt is easily subverted. LetsEncrypt uses DNS to verify your identity. But spoofing DNS is easy, as was shown in the recent BGP attack against a cryptocurrency service. Attackers can create fraudulent SSL certificates with enough effort. We’ve got other protections against this, such as discovering and revoking the bad SSL certificate, so while damaging, it’s not catastrophic.

But with Ozzie’s scheme, equivalent attacks would be catastrophic, as it would lead to unlocking the phone and stealing all of somebody’s secrets.

In particular, consider what would happen if LetsEncrypt’s signing key were stolen (as Matthew Green points out). The consequence is that this would be detected and mass revocations would occur. If Ozzie’s master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works — but then his scheme includes none of the many protections necessary to make SSL work.

What I’m trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down — quickly. We have so much experience with failure at scale that we can judge Ozzie’s scheme as woefully incomplete. It’s not even up to the standard of SSL, and we have a long list of SSL problems.

Cryptography is about people more than math

We have a mathematically pure encryption algorithm called the “One Time Pad”. It can’t ever be broken, provably so with mathematics.

It’s also perfectly useless, as it’s not something humans can use. That’s why we use AES, which is vastly less secure (anything you encrypt today can probably be decrypted in 100 years). AES can be used by humans whereas One Time Pads cannot be. (I learned the fallacy of One Time Pads on my grandfather’s knee — he was a WW II codebreaker who broke German messages whose senders tried to futz with One Time Pads.)
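For what it’s worth, the entire algorithm fits in a few lines; the sketch below only shows that the math is trivial and that generating, sharing, and never reusing the key is the hard part.

# The whole "One Time Pad": XOR the message with a truly random key that is
# as long as the message and never reused. Provably secure in theory, and
# impractical because distributing and protecting such keys is the hard part.
import secrets

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # must be random, secret, and used once

ciphertext = bytes(m ^ k for m, k in zip(message, pad))
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))

assert recovered == message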

The same is true with Ozzie’s scheme. It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don’t know how to secure is the human element.

How do we know the law enforcement person is who they say they are? How do we know the “trusted Apple employee” can’t be bribed? How can the law enforcement agent communicate securely with the Apple employee?

You think these things are theoretical, but they aren’t. Consider financial transactions. It used to be common that you could just email your bank/broker to wire funds into an account for such things as buying a house. Hackers have subverted that, intercepting messages, changing account numbers, and stealing millions. Most banks/brokers require additional verification before doing such transfers.

Let me repeat: Ozzie has only solved the part we already know how to solve. He hasn’t addressed these issues that confound us.

We still can’t secure security, much less secure backdoors

We already know how to decrypt iPhones: just wait a year or two for somebody to discover a vulnerability. The FBI claims it’s “going dark”, but that’s only for timely decryption of phones. If they are willing to wait a year or two, a vulnerability will eventually be found that allows decryption.

That’s what’s happened with the “GrayKey” device that’s been all over the news lately. Apple is fixing it so that it won’t work on new phones, but it works on old phones.

Ozzie’s solution is based on the assumption that iPhones are already secure against things like GrayKey. Like his assumption “if Apple already has a vault for private keys, then we have such vaults for backdoor keys”, Ozzie is saying “if Apple already had secure hardware/software to secure the phone, then we can use the same stuff to secure the backdoors”. But we don’t really have secure vaults and we don’t really have secure hardware/software to secure the phone.

Again, to stress this point, Ozzie is solving the part we already know how to solve, but ignoring the stuff we don’t know how to solve. His solution is insecure for the same reason phones are already insecure.

Locked phones aren’t the problem

Phones are general purpose computers. That means anybody can install an encryption app on the phone regardless of whatever other security the phone might provide. The police are powerless to stop this. Even if they make such encryption a crime, criminals will still use encryption.

That leads to a strange situation in which the only data the FBI will be able to decrypt is that of people who believe they are innocent. Those who know they are guilty will install encryption apps like Signal that have no backdoors.

In the past this was rare, as people found learning new apps a barrier. These days, apps like Signal are so easy even drug dealers can figure out how to use them.

We know how to get Apple to give us a backdoor: just pass a law forcing them to. It may look like Ozzie’s scheme, or it may be something more secure designed by Apple’s engineers. Sure, it will weaken security on the phone for everyone, but those who truly care will just install Signal. But again we are back to the problem that Ozzie is solving the problem we know how to solve while ignoring the much larger problem, that of preventing people from installing their own encryption.

The FBI isn’t necessarily the problem

Ozzie phrases his solution in terms of U.S. law enforcement. Well, what about Europe? What about Russia? What about China? What about North Korea?

Technology is borderless. A solution in the United States that allows “legitimate” law enforcement requests will inevitably be used by repressive states for what we believe would be “illegitimate” law enforcement requests.

Ozzie sees himself as the hero helping law enforcement protect 300 million American citizens. He doesn’t see himself what he really is, the villain helping oppress 1.4 billion Chinese, 144 million Russians, and another couple billion living in oppressive governments around the world.

Conclusion

Ozzie pretends the problem is political, that he’s created a solution that appeases both sides. He hasn’t. He’s solved the problem we already know how to solve. He’s ignored all the problems we struggle with, the problems we claim make secure backdoors essentially impossible. I’ve listed some in this post, but there are many more. Any famous person can create a solution that convinces fawning editors at Wired Magazine, but if Ozzie wants to move forward he’s going to have to work harder to appease doubting cryptographers.

Barbot 4: the bartending Grandfather clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/barbot-4/

Meet Barbot 4, the drink-dispensing Grandfather clock who knows when it’s time to party.

Barbot 4. Grandfather Time (first video of cocktail robot)

The first introduction to my latest barbot – this time made inside a grandfather clock. There is another video where I explain a bit about how it works, and am happy to give more explanations. https://youtu.be/hdxV_KKH5MA This can make cocktails with up to 4 spirits, and 4 mixers, and is controlled by voice, keyboard input, or a gui, depending which is easiest.

Barbot 4

Robert Prest’s Barbot 4 is a beverage dispenser loaded into an old Grandfather clock. There’s space in the back for your favourite spirits and mixers, and a Raspberry Pi controls servo motors that release the required measures of your favourite cocktail ingredients, according to preset recipes.
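Robert’s own control code isn’t reproduced here, but the pouring logic on the Pi might look roughly like this gpiozero sketch; the GPIO pins, pour times, and recipe are invented for illustration rather than taken from his build.

# Hypothetical sketch of a Pi-driven pour, in the spirit of Barbot 4.
# GPIO pin numbers, pour durations, and the recipe are assumptions,
# not Robert's actual values.
from time import sleep
from gpiozero import Servo

DISPENSERS = {"gin": Servo(17), "tonic": Servo(18)}            # pins assumed
RECIPES = {"gin and tonic": [("gin", 2.0), ("tonic", 4.0)]}    # seconds per pour

def pour(drink):
    for ingredient, seconds in RECIPES[drink]:
        valve = DISPENSERS[ingredient]
        valve.max()        # open the dispenser
        sleep(seconds)
        valve.min()        # close it again
        sleep(0.5)         # let the line settle before the next ingredient

pour("gin and tonic")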

Barbot 4 Raspberry Pi drink-dispensing robot

The clock can hold four mixers and four spirits, and a human supervisor records these using Drinkydoodad, a friendly touchscreen interface. With information about its available ingredients and a library of recipes, Barbot 4 can create your chosen drink. Patrons control the system either with voice commands or with the touchscreen UI.

Barbot 4 Raspberry Pi drink-dispensing robot

Robert has experimented with various components as this project has progressed. He has switched out peristaltic pumps in order to increase the flow of liquid, and adjusted the motors so that they can handle carbonated beverages. In the video, he highlights other quirks he hopes to address, like the fact that drinks tend to splash during pouring.

Barbot 4 Raspberry Pi drink-dispensing robot

As well as a Raspberry Pi, the build uses Arduinos. These control the light show, which can be adjusted according to your party-time lighting preferences.

An explanation of the build accompanies Robert’s second video. We’re hoping he’ll also release more details of Barbot 3, his suitcase-sized, portable Barbot, and of Doom Shot Bot, a bottle topper that pours a shot every time you die in the game DoomZ.

Automated bartending

Barbot 4 isn’t the first cocktail-dispensing Raspberry Pi bartender we’ve seen, though we have to admit that fitting it into a grandfather clock definitely makes it one of the quirkiest.

If you’ve built a similar project using a Raspberry Pi, we’d love to see it. Share your project in the comments, or tell us what drinks you’d ask Barbot to mix if you had your own at home.

The post Barbot 4: the bartending Grandfather clock appeared first on Raspberry Pi.

Digitising film reels with Pi Film Capture

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/digitising-reels-pi-film-capture/

Joe Herman’s Pi Film Capture project combines old projectors and a stepper motor with a Raspberry Pi and a Raspberry Pi Camera Module, to transform his grandfather’s 8- and 16-mm home movies into glorious digital films.

We chatted to him about his Pi Film Capture build at Maker Faire New York 2016:

Film to Digital Conversion at Maker Faire New York 2016

Uploaded by Raspberry Pi on 2017-08-25.

What inspired Pi Film Capture?

Joe’s grandfather, Leo Willmott, loved recording home movies of his family of eight children and their grandchildren. He passed away when Joe was five, but in 2013 Joe found a way to connect with his legacy: while moving house, a family member uncovered a box of more than a hundred of Leo’s film reels. These covered decades of family history, and some dated back as far as 1939.

Super 8 film reels

Kodachrome film reels of the type Leo used

This provided an unexpected opportunity for Leo’s family to restore some of their shared history. Joe immediately made plans to digitise the material, knowing that the members of his extensive family tree would provide an eager audience.

Building Pi Film Capture

After a failed attempt with a DSLR camera, Joe realised he couldn’t simply re-film the movies — instead, he would have to capture each frame individually. He combined a Raspberry Pi with an old Super 8 projector, and set about rigging up something to do just that.

He went through numerous stages of prototyping, and his final hardware setup works very well. A NEMA 17 stepper motor moves the film reel forward in the projector. A magnetic reed switch triggers the Camera Module each time the reel moves on to the next frame. Joe hacked the Camera Module so that it has a different focal distance, and he also added a magnifying lens. Moreover, he realised it would be useful to have a diffuser to ‘smooth’ some of the faults in the aged film reel material. To do this, he mounted “a bit of translucent white plastic from an old ceiling fixture” parallel with the film.
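Joe’s actual scripts are on GitHub (linked further down); purely as a rough idea of what the advance-and-capture cycle involves, it might look something like the sketch below, where the GPIO pins, stepping details, and frame count are assumptions.

# Rough sketch of an advance-and-capture loop (not Joe's actual code).
# GPIO pin numbers, stepper driver wiring, and the frame count are assumptions.
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

STEP_PIN, REED_PIN = 20, 21
GPIO.setmode(GPIO.BCM)
GPIO.setup(STEP_PIN, GPIO.OUT)
GPIO.setup(REED_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # reed switch to ground

camera = PiCamera()

def pulse_step():
    GPIO.output(STEP_PIN, GPIO.HIGH)
    GPIO.output(STEP_PIN, GPIO.LOW)
    time.sleep(0.001)

def advance_one_frame():
    # With the pull-up, a closed reed switch reads LOW. Step off the current
    # frame, then keep stepping until the switch closes on the next one.
    while GPIO.input(REED_PIN) == GPIO.LOW:
        pulse_step()
    while GPIO.input(REED_PIN) == GPIO.HIGH:
        pulse_step()

for frame in range(3600):                    # roughly one 50 ft Super 8 reel
    camera.capture("frame_{:05d}.jpg".format(frame))
    advance_one_frame()

GPIO.cleanup()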

Pi Film Capture device by Joe Herman

Joe’s 16-mm projector, with embedded Raspberry Pi hardware

Software solutions

In addition to capturing every single frame (sometimes with multiple exposure settings), Joe found that he needed intensive post-processing to restore some of the films. He settled on sending the images from the Pi to a more powerful Linux machine. To process the raw data, he wrote Python scripts that tie together several open-source software packages. For example, to deal with the varying quality of the film reels more easily, Joe implemented a GUI (written with the help of PyQt), which he uses to change the capture parameters. This was a demanding job, as he was relatively new to using these tools.

Top half of GUI for Pi Film Capture Joe Herman

The top half of Joe’s GUI, because the whole thing is really long and really thin and would have looked weird on the blog…

If a frame is particularly damaged, Joe can capture multiple instances of the image at different settings. These are then merged to achieve a good-quality image using OpenCV functionality. Joe uses FFmpeg to stitch the captured images back together into a film. Some of his grandfather’s reels were badly degraded, but luckily Joe found scripts written by other people to perform advanced digital restoration of film with AviSynth. He provides code he has written for the project on his GitHub account.
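His scripts cover far more than this, but the flavour of the exposure fusion and the final stitch might look roughly like the following; the file names, frame rate, and codec settings are placeholders, not Joe’s settings.

# Rough flavour of the post-processing stages described above: fuse several
# exposures of one frame with OpenCV, then stitch the numbered frames into a
# film with FFmpeg. File names, frame rate, and codec flags are placeholders.
import subprocess
import cv2

# Exposure fusion (Mertens) of multiple captures of the same damaged frame
exposures = [cv2.imread(p) for p in ("frame_dark.jpg", "frame_mid.jpg", "frame_bright.jpg")]
fused = cv2.createMergeMertens().process(exposures)     # float image, roughly in [0, 1]
cv2.imwrite("frame_00001.png", (fused * 255).clip(0, 255).astype("uint8"))

# Assemble the processed frames into a video
subprocess.run([
    "ffmpeg", "-framerate", "18",
    "-i", "frame_%05d.png",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "restored_film.mp4",
], check=True)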

For an account of the project in his own words, check out Joe’s guest post on the IEEE Spectrum website. He also described some of the issues he encountered, and how he resolved them, in The MagPi.

What does Pi Film Capture deliver?

Joe provides videos related to Pi Film Capture on two sites: on his YouTube channel, you’ll find videos in which he has documented the build process of his digitising project. Final results of the project live on Joe’s Vimeo channel, where so far he has uploaded 55 digitised home videos.

m093a: Tom Herman Wedding, Detroit 8/10/63

Shot on 8mm by Leo Willmott, captured and restored by Joe Herman (Not a Wozniak film, but placed in that folder b/c it may be of interest to Hermans)

We’re beyond pleased that our tech is part of this amazing project, helping to reconnect the entire Herman/Willmott clan with their past. And it was great to be able to catch up with Joe, and talk about his build at Maker Faire last year!

Maker Faire New York 2017

We’ll be at Maker Faire New York again on the 23-24 September, and we can’t wait to see the amazing makes the Raspberry Pi community will be presenting there!

Are you going to be at MFNY to show off your awesome Pi-powered project? Tweet us, so we can meet up, check it out and share your achievements!

The post Digitising film reels with Pi Film Capture appeared first on Raspberry Pi.

AWS Earns Department of Defense Impact Level 5 Provisional Authorization

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/aws-earns-department-of-defense-impact-level-5-provisional-authorization/

AWS GovCloud (US) Region image

The Defense Information Systems Agency (DISA) has granted the AWS GovCloud (US) Region an Impact Level 5 (IL5) Department of Defense (DoD) Cloud Computing Security Requirements Guide (CC SRG) Provisional Authorization (PA) for six core services. This means that AWS’s DoD customers and partners can now deploy workloads for Controlled Unclassified Information (CUI) exceeding IL4 and for unclassified National Security Systems (NSS).

We have supported sensitive Defense community workloads in the cloud for more than four years, and this latest IL5 authorization is complementary to our FedRAMP High Provisional Authorization that covers 18 services in the AWS GovCloud (US) Region. Our customers now have the flexibility to deploy any range of IL 2, 4, or 5 workloads by leveraging AWS’s services, attestations, and certifications. For example, when the US Air Force needed compute scale to support the Next Generation GPS Operational Control System Program, they turned to AWS.

In partnership with a certified Third Party Assessment Organization (3PAO), an independent validation was conducted to assess both our technical and nontechnical security controls to confirm that they meet the DoD’s stringent CC SRG standards for IL5 workloads. Effective immediately, customers can begin leveraging the IL5 authorization for the following six services in the AWS GovCloud (US) Region:

AWS has been a long-standing industry partner with DoD, federal-agency customers, and private-sector customers to enhance cloud security and policy. We continue to collaborate on the DoD CC SRG, the Defense Federal Acquisition Regulation Supplement (DFARS), and other government requirements to ensure that policy makers enact policies to support next-generation security capabilities.

In an effort to reduce the authorization burden on our DoD customers, we’ve worked with DISA to port our assessment results into a format that can be easily ingested by the Enterprise Mission Assurance Support Service (eMASS) system. Additionally, we undertook a separate effort to empower our industry partners and customers to efficiently solve their compliance, governance, and audit challenges by launching the AWS Customer Compliance Center, a portal providing a breadth of AWS-specific compliance and regulatory information.

We look forward to providing sustained cloud security and compliance support at scale for our DoD customers and adding additional services within the IL5 authorization boundary. See AWS Services in Scope by Compliance Program for updates. To request access to AWS’s DoD security and authorization documentation, contact AWS Sales and Business Development. For a list of frequently asked questions related to AWS DoD SRG compliance, see the AWS DoD SRG page.

To learn more about the announcement in this post, tune in for the AWS Automating DoD SRG Impact Level 5 Compliance in AWS GovCloud (US) webinar on October 11, 2017, at 11:00 A.M. Pacific Time.

– Chris Gile, Senior Manager, AWS Public Sector Risk & Compliance

 

 

Announcing the Winners of the AWS Chatbot Challenge – Conversational, Intelligent Chatbots using Amazon Lex and AWS Lambda

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/announcing-the-winners-of-the-aws-chatbot-challenge-conversational-intelligent-chatbots-using-amazon-lex-and-aws-lambda/

A couple of months ago on the blog, I announced the AWS Chatbot Challenge in conjunction with Slack. The AWS Chatbot Challenge was an opportunity to build a unique chatbot that helped to solve a problem or that would add value for its prospective users. The mission was to build a conversational, natural language chatbot using Amazon Lex and leverage Lex’s integration with AWS Lambda to execute logic or data processing on the backend.

I know that you all have been waiting as anxiously as I have to hear who won the AWS Chatbot Challenge. Well, wait no longer: the winners of the AWS Chatbot Challenge have been decided.

May I have the Envelope Please? (The Trumpets sound)

The winners of the AWS Chatbot Challenge are:

  • First Place: BuildFax Counts by Joe Emison
  • Second Place: Hubsy by Andrew Riess, Andrew Puch, and John Wetzel
  • Third Place: PFMBot by Benny Leong and his team from MoneyLion.
  • Large Organization Winner: ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang

 

Diving into the Winning Chatbot Projects

Let’s take a walkthrough of the details for each of the winning projects to get a view of what made these chatbots distinctive, as well as to learn more about the technologies used to implement each chatbot solution.

 

BuildFax Counts by Joe Emison

The BuildFax Counts bot was created as a real solution for the BuildFax company, to decrease the amount of time it takes the sales and marketing teams to get answers about permits, or about properties whose permits meet certain criteria.

BuildFax, a company co-founded by bot developer Joe Emison, has the only national database of building permits, which updates data from approximately half of the United States on a monthly basis. In order to accommodate the many requests that come in from the sales and marketing team regarding permit information, BuildFax has a technical sales support team that fulfills requests sent to a ticketing system by manually writing SQL queries that run across the shards of the BuildFax databases. Since the internal sales support team receives a large number of requests and the queries must be set up manually, it can take several days for the sales and marketing teams to receive an answer.

The BuildFax Counts chatbot solves this problem by taking the permit inquiry that would normally be sent into a ticket from the sales and marketing team, as input from Slack to the chatbot. Once the inquiry is submitted into Slack, a query executes and the inquiry results are returned immediately.

Joe built this solution by first creating a nightly export of the data in their BuildFax MySQL RDS database to CSV files stored in Amazon S3. From the exported CSV files, an Amazon Athena table was created in order to run quick and efficient queries on the data. He then used Amazon Lex to create a bot to handle the common questions and criteria that the sales and marketing teams might ask when seeking data from the BuildFax database, modeling the language used in the BuildFax ticketing system. He added several different sample utterances and slot types, both custom and Lex-provided, in order to correctly parse every question and criteria combination that could be received from an inquiry. Using Lambda, Joe created a JavaScript Lambda function that receives information from the Lex intent and uses it to build a SQL statement that runs against the aforementioned Athena database using the AWS SDK for JavaScript in Node.js, returning the inquiry count result and the SQL statement used.
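Joe’s fulfillment function is written in JavaScript; purely to illustrate the Lex-to-Athena flow, a Python/boto3 equivalent of the core call might look like the sketch below, where the database, results bucket, slot name, and reply text are all invented.

# Illustration only: a Lex fulfillment handler that runs an Athena query and
# replies with the count. Joe's real function is JavaScript; the database,
# results bucket, and slot name here are invented for the sketch.
import time
import boto3

athena = boto3.client("athena")

def lambda_handler(event, context):
    state = event["currentIntent"]["slots"].get("State") or "TX"   # slot name assumed
    query = "SELECT COUNT(*) AS permits FROM buildfax.permits WHERE state = '{}'".format(state)

    qid = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "buildfax"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )["QueryExecutionId"]

    # Poll until the query finishes (a sketch; real code should handle failures)
    while athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
        time.sleep(1)

    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    count = rows[1]["Data"][0]["VarCharValue"]                     # row 0 is the header

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText",
                        "content": "{} permits match your criteria.".format(count)},
        }
    }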

The BuildFax Counts bot is used today by the BuildFax sales and marketing team to get immediate answers to inquiries that previously took up to a week to receive.

Not only is the BuildFax Counts bot our 1st place winner and a wonderful solution, but its creator, Joe Emison, is a great guy. Joe has opted to donate his prize (the $5,000 cash, the $2,500 in AWS Credits, and one re:Invent ticket) to the Black Girls Code organization. I must say, you rock, Joe, for helping these kids get access and exposure to technology.

 

Hubsy by Andrew Riess, Andrew Puch, and John Wetzel

The Hubsy bot was created to redefine and personalize the way users traditionally manage their HubSpot account. HubSpot is a SaaS system providing marketing, sales, and CRM software. Hubsy allows users of HubSpot to create and log engagements with customers, provide sales teams with deal status, and retrieve client contact information quickly. Hubsy uses Amazon Lex’s conversational interface to execute commands from the HubSpot API so that users can gain insights, store and retrieve data, and manage tasks directly from Facebook, Slack, or Alexa.

In order to implement the Hubsy chatbot, Andrew and the team used AWS Lambda to create a Node.js Lambda function that parses the user’s request and calls the HubSpot API, which either fulfills the initial request or returns to the user asking for more information. Terraform was used to automatically set up and update Lambda, CloudWatch Logs, and IAM profiles. Amazon Lex was used to build the conversational piece of the bot, which defines the utterances that a person on a sales team would likely say when seeking information from HubSpot. To integrate with Alexa, the Amazon Alexa skill builder was used to create an Alexa skill, which was tested on an Echo Dot. CloudWatch Logs are used to capture the Lambda function’s log output in order to debug different parts of the Lex intents. To validate the code before the Terraform deployment, ESLint was additionally used to ensure the code was linted and proper development standards were followed.

 

PFMBot by Benny Leong and his team from MoneyLion

PFMBot, the Personal Finance Management Bot, is a bot to be used with the MoneyLion finance group, which offers customers online financial products: loans, credit monitoring, and a free credit score service to improve the financial health of their customers. Once a user signs up for an account on the MoneyLion app or website, the user has the option to link their bank accounts with the MoneyLion APIs. Once the bank account is linked to the APIs, the user will be able to log in to their MoneyLion account and start a conversation with PFMBot based on their bank account information.

The PFMBot UI has a web interface built using JavaScript. The chatbot was created using Amazon Lex to build utterances based on the possible inquiries about the user’s MoneyLion bank account. PFMBot uses the Lex built-in AMAZON slots, parses and converts the values from those slots, and passes them to AWS Lambda. The AWS Lambda functions interacting with Amazon Lex are Java-based Lambda functions which call MoneyLion’s Java-based internal APIs running on Spring Boot. These APIs obtain account data and related bank account information from the MoneyLion MySQL database.

 

ADP Payroll Innovation Bot by Eric Liu, Jiaxing Yan, and Fan Yang

ADP PI (Payroll Innovation) bot is designed to help employees of ADP customers easily review their own payroll details and compare different payroll data by just asking the bot for results. The ADP PI Bot additionally offers issue reporting functionality for employees to report payroll issues and aids HR managers in quickly receiving and organizing any reported payroll issues.

The ADP Payroll Innovation bot is an ecosystem for the ADP payroll consisting of two chatbots, which includes ADP PI Bot for external clients (employees and HR managers), and ADP PI DevOps Bot for internal ADP DevOps team.


The architecture of the ADP PI DevOps bot is different from that of the ADP PI bot, as it is deployed internally to ADP. The ADP PI DevOps bot allows input from both Slack and Alexa. When input comes in from Slack, Slack sends the request to Lex for it to process the utterance. Lex then calls the Lambda backend, which obtains ADP data from the ADP VPC running within Amazon VPC. When input comes in from Alexa, a Lambda function is called that also obtains data from the ADP VPC running on AWS.

In the architecture of the ADP PI bot, users enter requests and/or report issues via Slack. When requests or issues are entered via Slack, the Slack APIs communicate via Amazon API Gateway with AWS Lambda. The Lambda function either writes data into one of the Amazon DynamoDB tables used for recording issues, or sends the request to Lex. When issues are recorded, DynamoDB integrates with Trello to keep HR managers abreast of escalated issues. Once the request data is sent from Lambda to Lex, Lex processes the utterance and calls another Lambda function that integrates with the ADP API, pulling ADP data from within the ADP VPC, which runs on Amazon Virtual Private Cloud (VPC).

Python and Node.js were the chosen languages for the development of the bots.

The ADP PI bot ecosystem has the following functional groupings:

Employee Functionality

  • Summarize Payrolls
  • Compare Payrolls
  • Escalate Issues
  • Evolve PI Bot

HR Manager Functionality

  • Bot Management
  • Audit and Feedback

DevOps Functionality

  • Reduce call volume in service centers (ADP PI Bot)
  • Track issues and generate reports (ADP PI Bot)
  • Monitor jobs for various environments (ADP PI DevOps Bot)
  • View job dashboards (ADP PI DevOps Bot)
  • Query job details (ADP PI DevOps Bot)

 

Summary

Let’s wish all the winners of the AWS Chatbot Challenge hearty congratulations on their excellent projects.

You can review more details on the winning projects, as well as all of the submissions to the AWS Chatbot Challenge, at: https://awschatbot2017.devpost.com/submissions. If you are curious about the details of the Chatbot Challenge contest, including resources, rules, prizes, and judges, you can review the original challenge website here: https://awschatbot2017.devpost.com/.

Hopefully, you are just as inspired as I am to build your own chatbot using Lex and Lambda. For more information, take a look at the Amazon Lex developer guide or the AWS AI blog post on Building Better Bots Using Amazon Lex (Part 1).

Chat with you soon!

Tara

AWS GovCloud (US) Heads East – New Region in the Works for 2018

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-heads-east-new-region-in-the-works-for-2018/

AWS GovCloud (US) gives AWS customers a place to host sensitive data and regulated workloads in the AWS Cloud. The first AWS GovCloud (US) Region was launched in 2011 and is located on the west coast of the US.

I’m happy to announce that we are working on a second Region that we expect to open in 2018. The upcoming AWS GovCloud (US-East) Region will provide customers with added redundancy, data durability, and resiliency, and will also provide additional options for disaster recovery.

Like the existing region, which we now call AWS GovCloud (US-West), the new region will be isolated and meet top US government compliance requirements including International Traffic in Arms Regulations (ITAR), NIST standards, Federal Risk and Authorization Management Program (FedRAMP) Moderate and High, Department of Defense Impact Levels 2-4, DFARs, IRS1075, and Criminal Justice Information Services (CJIS) requirements. Visit the GovCloud (US) page to learn more about the compliance regimes that we support.

Government agencies and the IT contractors that serve them were early adopters of AWS GovCloud (US), as were companies in regulated industries. These organizations are able to enjoy the flexibility and cost-effectiveness of the public cloud while benefiting from the isolation and data protection offered by a region designed and built to meet their regulatory needs and to help them meet their compliance requirements. Here’s a small sample from our customer base:

Federal (US) Government: Department of Veterans Affairs, General Services Administration 18F (Digital Services Delivery), NASA JPL, Defense Digital Service, United States Air Force, United States Department of Justice.

Regulated Industries: CSRA, Talen Energy, Cobham Electronics.

SaaS and Solution Providers: FIGmd, Blackboard, Splunk, GitHub, Motorola.

Federal, state, and local agencies that want to move their existing applications to the AWS Cloud can take advantage of the AWS Cloud Adoption Framework (CAF) offered by AWS Professional Services.

Jeff;

 

 

Hello World – a new magazine for educators

Post Syndicated from Philip Colligan original https://www.raspberrypi.org/blog/hello-world-new-magazine-for-educators/

Today, the Raspberry Pi Foundation is launching a new, free resource for educators.

Hello World – a new magazine for educators

Hello World is a magazine about computing and digital making written by educators, for educators. With three issues each year, it contains 100 pages filled with news, features, teaching resources, reviews, research and much more. It is designed to be cross-curricular and useful to all kinds of educators, from classroom teachers to librarians.

While it includes lots of great examples of how educators are using Raspberry Pi computers in education, it is device- and platform-neutral.

Community building

As with everything we do at the Raspberry Pi Foundation, Hello World is about community building. Our goal is to provide a resource that will help educators connect, share great practice, and learn from each other.

Hello World is a collaboration between the Raspberry Pi Foundation and Computing at School, the grass-roots organisation of computing teachers that’s part of the British Computing Society. The magazine builds on the fantastic legacy of Switched On, which it replaces as the official magazine for the Computing at School community.

We’re thrilled that many of the contributors to Switched On have agreed to continue writing for Hello World. They’re joined by educators and researchers from across the globe, as well as the team behind the amazing MagPi, the official Raspberry Pi magazine, who are producing Hello World.

print (“Hello, World!”)

Hello World is available free, forever, for everyone online as a downloadable PDF. The content is written to be internationally relevant, and includes features on the most interesting developments and best practices from around the world.

The very first issue of Hello World, the magazine about computing and digital making for educators

Thanks to the very generous support of our sponsors BT, we are also offering the magazine in a beautiful print version, delivered for free to the homes of serving educators in the UK.

Papert’s legacy 

This first issue is dedicated to Seymour Papert, in many ways the godfather of computing education. Papert was the creator of the Logo programming language and the author of some of the most important research on the role of computers in education. It will come as no surprise that his legacy has a big influence on our work at the Raspberry Pi Foundation, not least because one of our co-founders, Jack Lang, did a summer internship with Papert.

Seymour Papert

Seymour Papert with one of his computer games at the MIT Media Lab
Credit: Steve Liss/The Life Images Collection/Getty Images

Inside you’ll find articles exploring Papert’s influence on how we think about learning, on the rise of the maker movement, and on the software that is used to teach computing today from Scratch to Greenfoot.

Get involved

We will publish three issues of Hello World a year, timed to coincide with the start of the school terms here in the UK. We’d love to hear your feedback on this first issue, and please let us know what you’d like to see covered in future issues too.

The magazine is by educators, for educators. So if you have experience, insights or practical examples that you can share, get in touch: [email protected].

The post Hello World – a new magazine for educators appeared first on Raspberry Pi.

Harry Potter and the Real-life Weasley Clock

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/harry-potter-real-life-weasley-clock/

Pat Peters (such a wonderful Marvel-sounding name) recently shared his take on the Weasley Clock, a device that hangs on the wall of The Burrow, the rickety home inhabited by the Weasley family in the Harry Potter series.

Mrs. Weasley glanced at the grandfather clock in the corner. Harry liked this clock. It was completely useless if you wanted to know the time, but otherwise very informative. It had nine golden hands, and each of them was engraved with one of the Weasley family’s names. There were no numerals around the face, but descriptions of where each family member might be. “Home,” “school,” and “work” were there, but there was also “traveling,” “lost,” “hospital,” “prison,” and, in the position where the number twelve would be on a normal clock, “mortal peril.”

The clock in the movie has misplaced “mortal peril”, but aside from that it looks a lot like what we’d imagined from the books.

There’s a reason why more and more Harry Potter-themed builds are appearing online. The small size of devices such as the Raspberry Pi and Arduino allows a digital ‘brain’ to live within an ordinary object, giving you control over it that you could easily confuse with magic…if you allow yourself to believe in such things.

So with last week’s Real-life Daily Prophet doing so well, it’s only right to share another Harry Potter-inspired project.

Harry Potter Weasley Clock

The clock serves not to tell the time but, rather, to indicate the location of Molly, Arthur and the horde of Weasley children. And using the OwnTracks GPS app for smartphones, Pat’s clock does exactly the same thing.

Pat Peters Weasley Clock Raspberry Pi

Pat has posted the entire build on instructables, allowing every budding witch and wizard (and possibly a curious Muggle or two) the chance to build their own Weasley Clock.

This location clock works through a Raspberry Pi that subscribes to an MQTT broker that our phones publish events to. Our phones (running the OwnTracks GPS app) send a message to the broker anytime we cross into or out of one of our waypoints that we have set up in OwnTracks, which then triggers the Raspberry Pi to run a servo that moves the clock hand to show our location.
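Pat’s full code is on Instructables; the core of a setup like this might look something like the sketch below, where the broker address, topics, waypoint names, GPIO pin, and servo positions are all assumptions rather than Pat’s values.

# Sketch of the idea, not Pat's actual code: listen for OwnTracks transition
# events over MQTT and swing a servo to the matching clock position. The
# broker address, topic, waypoint names, and servo values are assumptions.
import json
import paho.mqtt.client as mqtt      # paho-mqtt 1.x style client below
from gpiozero import Servo

hand = Servo(17)                                   # one clock hand, GPIO pin assumed
POSITIONS = {"home": -1.0, "work": 0.0, "traveling": 1.0}

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    # OwnTracks publishes "transition" events when a phone enters or leaves a waypoint
    if payload.get("_type") == "transition" and payload.get("event") == "enter":
        waypoint = payload.get("desc", "").lower()
        if waypoint in POSITIONS:
            hand.value = POSITIONS[waypoint]       # move the hand to that position

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)               # MQTT broker address assumed
client.subscribe("owntracks/#")
client.loop_forever()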

There are no words for how much we love this. Here at Pi Towers we definitely have a soft spot for Harry Potter-themed builds, so make sure to share your own with us in the comments below, or across our social media channels on Facebook, Twitter, Instagram, YouTube and G+.

The post Harry Potter and the Real-life Weasley Clock appeared first on Raspberry Pi.

Monitor Cluster State with Amazon ECS Event Stream

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/monitor-cluster-state-with-amazon-ecs-event-stream/

Thanks to my colleague Jay Allen for this great blog on how to use the ECS Event stream for operational tasks.

—-

In the past, in order to obtain updates on the state of a running Amazon ECS cluster, customers have had to rely on periodically polling the state of container instances and tasks using the AWS CLI or an SDK. With the new Amazon ECS event stream feature, it is now possible to retrieve near real-time, event-driven updates on the state of your Amazon ECS tasks and container instances. Events are delivered through Amazon CloudWatch Events, and can be routed to any valid CloudWatch Events target, such as an AWS Lambda function or an Amazon SNS topic.

In this post, I show you how to create a simple serverless architecture that captures, processes, and stores event stream updates. You first create a Lambda function that scans all incoming events to determine if there is an error related to any running tasks (for example, if a scheduled task failed to start); if so, the function immediately sends an SNS notification. Your function then stores the entire message as a document inside of an Elasticsearch cluster using Amazon Elasticsearch Service, where you and your development team can use the Kibana interface to monitor the state of your cluster and search for diagnostic information in response to issues reported by users.

Understanding the structure of event stream events

An ECS event stream sends two types of event notifications:

  • Task state change notifications, which ECS fires when a task starts or stops
  • Container instance state change notifications, which ECS fires when the resource utilization or reservation for an instance changes

A single event may result in ECS sending multiple notifications of both types. For example, if a new task starts, ECS first sends a task state change notification to signal that the task is starting, followed by a notification when the task has started (or has failed to start); additionally, ECS fires container instance state change notifications when the utilization of the instance on which ECS launches the task changes.

Event stream events are sent using CloudWatch Events, which structures events as JSON messages divided into two sections: the envelope and the payload. The detail section of each event contains the payload data, and the structure of the payload is specific to the event being fired. The following example shows the JSON representation of a container state change event. Notice that the properties at the top level of the JSON document describe event properties, such as the event name and the time the event occurred, while the detail section contains the information about the task and container instance that triggered the event.

The following JSON depicts an ECS task state change event signifying that the essential container for a task running on an ECS cluster has exited, and thus the task has been stopped on the ECS cluster:

{
  "version": "0",
  "id": "8f07966c-b005-4a0f-9ee9-63d2c41448b3",
  "detail-type": "ECS Task State Change",
  "source": "aws.ecs",
  "account": "244698725403",
  "time": "2016-10-17T20:29:14Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328"
  ],
  "detail": {
    "clusterArn": "arn:aws:ecs:us-east-1:123456789012:cluster/eventStreamTestCluster",
    "containerInstanceArn": "arn:aws:ecs:us-east-1:123456789012:container-instance/f813de39-e42c-4a27-be3c-f32ebb79a5dd",
    "containers": [
      {
        "containerArn": "arn:aws:ecs:us-east-1:123456789012:container/4b5f2b75-7d74-4625-8dc8-f14230a6ae7e",
        "exitCode": 1,
        "lastStatus": "STOPPED",
        "name": "web",
        "networkBindings": [
          {
            "bindIP": "0.0.0.0",
            "containerPort": 80,
            "hostPort": 80,
            "protocol": "tcp"
          }
        ],
        "taskArn": "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328"
      }
    ],
    "createdAt": "2016-10-17T20:28:53.671Z",
    "desiredStatus": "STOPPED",
    "lastStatus": "STOPPED",
    "overrides": {
      "containerOverrides": [
        {
          "name": "web"
        }
      ]
    },
    "startedAt": "2016-10-17T20:29:14.179Z",
    "stoppedAt": "2016-10-17T20:29:14.332Z",
    "stoppedReason": "Essential container in task exited",
    "updatedAt": "2016-10-17T20:29:14.332Z",
    "taskArn": "arn:aws:ecs:us-east-1:123456789012:task/cdf83842-a918-482b-908b-857e667ce328",
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/wpunconfiguredfail:1",
    "version": 3
  }
}

Setting up an Elasticsearch cluster

Before you dive into the code for handling events, set up your Elasticsearch cluster. On the console, choose Elasticsearch Service, Create a New Domain. In Elasticsearch domain name, type elasticsearch-ecs-events, then choose Next.

For Step 2: Configure cluster, accept all of the defaults by choosing Next.

For Step 3: Set up access policy, choose Next. This page lets you establish a resource-based policy for accessing your cluster; to allow access to the cluster’s actions, use an identity-based policy associated with your Lambda function.

Finally, on the Review page, choose Confirm and create. This starts spinning up your cluster.
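
If you prefer to script the domain creation instead of clicking through the console, a rough boto3 sketch is shown below; the Elasticsearch version, instance type, and storage values are illustrative and not necessarily the console defaults.

import boto3

es = boto3.client('es')

# Create a small domain for the demo; production domains would pin an
# access policy and sizing explicitly.
response = es.create_elasticsearch_domain(
    DomainName='elasticsearch-ecs-events',
    ElasticsearchVersion='5.1',  # illustrative; pick any supported version
    ElasticsearchClusterConfig={'InstanceType': 't2.small.elasticsearch',
                                'InstanceCount': 1},
    EBSOptions={'EBSEnabled': True, 'VolumeType': 'gp2', 'VolumeSize': 10}
)
print(response['DomainStatus']['ARN'])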

While your cluster is being created, set up the SNS topic and Lambda function you need to start capturing and issuing notifications about events.

Create an SNS topic

Because your Lambda function emails you when a task fails unexpectedly due to an error condition, you need to set up an Amazon SNS topic to which your Lambda function can write.

In the console, choose SNS, Create Topic. For Topic name, type ECSTaskErrorNotification, and then choose Create topic.

When you’re done, copy the Topic ARN value, and save it to a text editor on your local desktop; you need it to configure permissions for your Lambda function in the next step. Finally, choose Create subscription to subscribe an email address to which you have access, so that you receive these event notifications. Remember to click the link in the confirmation email, or you won’t receive any events.
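
If you would rather script this step, a minimal boto3 sketch follows; the email address is a placeholder, and the subscription stays pending until the confirmation link is clicked.

import boto3

sns = boto3.client('sns')

# Create the topic and subscribe an email address to it
topic_arn = sns.create_topic(Name='ECSTaskErrorNotification')['TopicArn']
sns.subscribe(TopicArn=topic_arn, Protocol='email',
              Endpoint='you@example.com')  # placeholder address
print(topic_arn)  # paste this into the sns_topic variable later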

The eagle-eyed among you may realize that you haven’t given your future Lambda function permission to call your SNS topic. You grant this permission to the Lambda execution role when you create your Lambda function in the following steps.

Handling event stream events in a Lambda function

For the next step, create your Lambda function to capture events. Here’s the code for your function (written in Python 2.7):

import requests
import json
from requests_aws_sign import AWSV4Sign
from boto3 import session, client
from elasticsearch import Elasticsearch, RequestsHttpConnection

es_host = '<insert your own Amazon ElasticSearch endpoint here>'
sns_topic = '<insert your own SNS topic ARN here>'

def lambda_handler(event, context):
    # Establish credentials
    session_var = session.Session()
    credentials = session_var.get_credentials()
    region = session_var.region_name or 'us-east-1'

    # Check to see if this event is a task event and, if so, if it contains
    # information about an event failure. If so, send an SNS notification.
    if "detail-type" not in event:
        raise ValueError("ERROR: event object is not a valid CloudWatch Events event")
    else:
        if event["detail-type"] == "ECS Task State Change":
            detail = event["detail"]
            if detail["lastStatus"] == "STOPPED":
                if detail["stoppedReason"] == "Essential container in task exited":
                  # Send an error status message.
                  sns_client = client('sns')
                  sns_client.publish(
                      TopicArn=sns_topic,
                      Subject="ECS task failure detected for container",
                      Message=json.dumps(detail)
                  )

    # Elasticsearch connection. Amazon ES does not accept unsigned (anonymous)
    # requests here, so every call to the Elasticsearch API must be signed with
    # SigV4. The requests_aws_sign package handles that.
    service = 'es'
    auth=AWSV4Sign(credentials, region, service)
    es_client = Elasticsearch(host=es_host,
                              port=443,
                              connection_class=RequestsHttpConnection,
                              http_auth=auth,
                              use_ssl=True,
                              verify_ssl=True)

    es_client.index(index="ecs-index", doc_type="eventstream", body=event)

Break this down: First, the function inspects the event to see if it is a task change event. If so, it further looks to see if the event is reporting a stopped task, and whether that task stopped because one of its essential containers terminated. If these conditions are true, it sends a notification to the SNS topic that you created earlier.

Second, the function creates an Elasticsearch connection to your Amazon ES instance. The function uses the requests_aws_sign library to implement Sig4 signing because, in order to call Amazon ES, you need to sign all requests with the Sig4 signing process. After the Sig4 signature is generated, the function calls Amazon ES and adds the event to an index for later retrieval and inspection.

To get this code to work, your Lambda function must have permission to perform HTTP POST requests against your Amazon ES instance, and to publish messages to your SNS topic. Configure this by setting up your Lambda function with an execution role that grants the appropriate permission to these resources in your account.

To get started, you need to prepare a ZIP file for the above code that contains both the code and its prerequisites. Create a directory named lambda_eventstream, and save the code above to a file named lambda_function.py. In your favorite text editor, replace the es_host and sns_topic variables with your own Amazon ES endpoint and SNS topic ARN, respectively.

Next, on the command line (Linux, Windows or Mac), change to the directory that you just created, and run the following command for pip (the de facto standard Python installation utility) to download all of the required prerequisites for this code into the directory. You need to ship these dependencies with your code, as they are not pre-installed on the instance that runs your Lambda function.

NOTE: You need to be on a machine with Python and pip already installed. If you are using Python 2.7.9 or greater, pip is installed as part of your standard Python installation. If you are not using Python 2.7.9 or greater, consult the pip page for installation instructions.

pip install requests_aws_sign elasticsearch -t .

Finally, zip all of the contents of this directory into a single zip file. Make sure that lambda_function.py is at the top of the file hierarchy within the zip file, and that it is not contained within another directory. From within the lambda_eventstream directory, you can use the following command on Linux and MacOS systems:

zip lambda-eventstream.zip *

On Windows clients with the 7-Zip utility installed, you can run the following command from PowerShell or, if you’re really so inclined, a command prompt:

7z a -tzip lambda-eventstream.zip *

Now that your function and its dependencies are properly packaged, install and test it. Navigate to the Lambda console, choose Create a Lambda Function, and then on the Select Blueprint page, choose Blank function. Choose Next on the Configure triggers screen; you wire up your function to your ECS event stream in the next section.

On the Configure function page, for Name, enter lambda-eventstream. For Runtime, choose Python 2.7. Under Lambda function code, for Code entry type, choose Upload a .ZIP file, and choose Upload to select the ZIP file that you just created.

Under Lambda function handler and role, for Role, choose Create a custom role. This opens a new window for configuring your policy. For IAM Role, choose Create a New IAM Role, and type a name. Then choose View Policy Document, Edit. Paste in the IAM policy below, making sure to replace every instance of AWSAccountID with your own AWS account ID.

{
"Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":"lambda:InvokeFunction",
         "Resource":"arn:aws:lambda:us-east-1:<AWSAccountID>:function:ecs-events",
         "Principal":{
            "Service":"events.amazonaws.com"
         },
         "Condition":{
            "ArnLike":{
               "AWS:SourceArn":"arn:aws:events:us-east-1:<AWSAccountID>:rule/eventstream-rule"
            }
         },
         "Sid":"TrustCWEToInvokeMyLambdaFunction"
      },
      {
         "Effect":"Allow",
         "Action":"logs:CreateLogGroup",
         "Resource":"arn:aws:logs:us-east-1:<AWSAccountID>:*"
      },
     {
         "Effect":"Allow",
         "Action":[
            "logs:CreateLogStream",
            "logs:PutLogEvents"
         ],
         "Resource":[
            "arn:aws:logs:us-east-1:<AWSAccountID>:log-group:/aws/lambda/ecs-events:*"
         ]
      },
      {
          "Effect": "Allow",
          "Action": [
              "es:ESHttpPost"
          ],
          "Resource": "arn:aws:es:us-east-1:<AWSAccountID>:domain/ecs-events-cluster/*"
      },
      {
            "Effect": "Allow",
            "Action": [
                "sns:Publish"
            ],
            "Resource": "arn:aws:sns:us-east-1:<AWSAccountID>:ECSTaskErrorNotification"        
      }
   ]
}

This policy establishes every permission that your Lambda function requires for execution, including permission to:

  • Create a new CloudWatch Logs log group, and save all outputs from your Lambda function to this group
  • Perform HTTP POST requests against your Elasticsearch cluster
  • Publish messages to your SNS topic

When you’re done, you can test your configuration by scrolling up to the sample event stream message provided earlier in this post, and using it to test your Lambda function in the console. On the dashboard page for your new function, choose Test, and in the Input test event window, enter the JSON-formatted event from earlier.
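
You can also drive the same test from a script rather than the console. A small boto3 sketch follows, assuming you saved the sample event locally as event.json (a file name chosen here purely for illustration).

import boto3

lambda_client = boto3.client('lambda')

# Invoke the function synchronously with the sample ECS task state change event
with open('event.json', 'rb') as f:
    payload = f.read()

response = lambda_client.invoke(
    FunctionName='lambda-eventstream',
    InvocationType='RequestResponse',
    Payload=payload
)
print(response['StatusCode'])      # 200 for a successful synchronous invocation
print(response['Payload'].read())  # whatever the function returned, if anything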

Note that, if you haven’t correctly input your account ID in the correct places in your IAM policy file, you may receive a message along the lines of:

User: arn:aws:sts::123456789012:assumed-role/LambdaEventStreamTake2/awslambda_421_20161017203411268 is not authorized to perform: es:ESHttpPost on resource: ecs-events-cluster.

Edit the policy associated with your Lambda execution role in the IAM console and try again.

Send event stream events to your Lambda function

Almost there! Now with your SNS topic, Elasticsearch cluster, and Lambda function all in place, the only remaining element is to wire up your ECS event stream events and route them to your Lambda function. The CloudWatch Events console offers everything you need to set this up quickly and easily.

From the console, choose CloudWatch, Events. On Step 1: Create Rule, under Event selector, choose Amazon EC2 Container Service. CloudWatch Events enables you to filter by the type of message (task state change or container instance state change), as well as to select a specific cluster from which to receive events. For the purposes of this post, keep the default settings of Any detail type and Any cluster.

Under Targets, choose Lambda function. For Function, choose lambda-eventstream. Behind the scenes, this sends events from your ECS clusters to your Lambda function and also creates the service role required for CloudWatch Events to call your Lambda function.
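
If you need to automate this wiring, a rough boto3 equivalent of the console steps is sketched below. When you script it yourself you must also grant CloudWatch Events permission to invoke the function, something the console does for you; the rule name and statement ID here are illustrative.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Rule that matches every ECS event (any detail type, any cluster)
rule_arn = events.put_rule(
    Name='eventstream-rule',
    EventPattern='{"source": ["aws.ecs"]}'
)['RuleArn']

# Allow CloudWatch Events to invoke the function, then add it as a target
function_arn = lambda_client.get_function_configuration(
    FunctionName='lambda-eventstream')['FunctionArn']
lambda_client.add_permission(
    FunctionName='lambda-eventstream',
    StatementId='TrustCWEToInvokeMyLambdaFunction',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)
events.put_targets(
    Rule='eventstream-rule',
    Targets=[{'Id': 'lambda-eventstream', 'Arn': function_arn}]
)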

Verify your work

Now it’s time to verify that messages sent from your ECS cluster flow through your Lambda function, trigger an SNS message for failed tasks, and are stored in your Elasticsearch cluster for future retrieval. To test this workflow, you can use the following ECS task definition, which attempts to start the official WordPress image without configuring an SQL database for storage:

{
    "taskDefinition": {
        "status": "ACTIVE",
        "family": "wpunconfiguredfail",
        "volumes": [],
        "taskDefinitionArn": "arn:aws:ecs:us-east-1:244698725403:task-definition/wpunconfiguredfail:1",
        "containerDefinitions": [
            {
                "environment": [],
                "name": "web",
                "mountPoints": [],
                "image": "wordpress",
                "cpu": 99,
                "portMappings": [
                    {
                        "protocol": "tcp",
                        "containerPort": 80,
                        "hostPort": 80
                    }
                ],
                "memory": 100,
                "essential": true,
                "volumesFrom": []
            }
        ],
        "revision": 1
    }
}

Create this task definition using either the AWS Management Console or the AWS CLI, and then start a task from this task definition. For more detailed instructions, see Launching a Container Instance.
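
If you want to script this verification step as well, a boto3 sketch of registering the task definition and running it follows; the cluster name is assumed to match the eventStreamTestCluster from the sample event earlier.

import boto3

ecs = boto3.client('ecs')

# Register the intentionally misconfigured WordPress task definition
ecs.register_task_definition(
    family='wpunconfiguredfail',
    containerDefinitions=[{
        'name': 'web',
        'image': 'wordpress',
        'cpu': 99,
        'memory': 100,
        'essential': True,
        'portMappings': [{'containerPort': 80, 'hostPort': 80,
                          'protocol': 'tcp'}]
    }]
)

# Start one copy; it should stop shortly afterwards and emit the failure event
ecs.run_task(cluster='eventStreamTestCluster',
             taskDefinition='wpunconfiguredfail:1', count=1)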

A few minutes after launching this task definition, you should receive an SNS message with the contents of the task state change JSON indicating that the task failed. You can also examine your Elasticsearch cluster in the console by selecting the name of your cluster and choosing Indices, ecs-index. For Count, you should see that you have multiple records stored.

You can also search the messages that have been stored by opening up access to your Kibana endpoint. Kibana provides a host of visualization and search capabilities for data stored in Amazon ES. To open up access to Kibana to your computer, find your computer’s IP address, and then choose Modify access policy for your Elasticsearch cluster. For Set the domain access policy to, choose Allow access to the domain from specific IP(s) and enter your IP address.

(A more robust and scalable solution for securing Kibana is to front it with a proxy. Details on this approach can be found in Karthi Thyagarajan’s post How to Control Access to Your Amazon Elasticsearch Service Domain.)

You should now be able to hit the Kibana endpoint for your cluster, and search for messages stored in your cluster’s indexes.

Conclusion

After you have this basic, serverless architecture set up for consuming ECS cluster-related event notifications, the possibilities are limitless. For example, instead of storing the events in Amazon ES, you could store them in Amazon DynamoDB, and use the resulting tables to build a UI that materializes the current state of your clusters.

You could also use this information to drive container placement and scaling automatically, allowing you to “right-size” your clusters to a very granular level. By delivering cluster state information in near-real time using an event-driven model as opposed to a pull model, the new ECS event stream feature opens up a much wider array of possibilities for monitoring and scaling your container infrastructure.

If you have questions or suggestions, please comment below.

WTF Yahoo/FISA search in kernel?

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/wtf-yahoofisa-search-in-kernel.html

A surprising detail in the Yahoo/FISA email search scandal is that they do it with a kernel module. I thought I’d write up some (rambling) notes.

What the government was searching for

As described in the previous blog post, we’ll assume the government is searching for the following string, and possibly other strings like it, within emails:

### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###

I point this out because it’s a simple search for identifiable strings. It’s not natural language processing. It’s not searching for phrases like “bomb president”.

Also, it’s not AV/spam/childporn processing. Those look at different things. For example, filtering messages containing child porn involves calculating a SHA2 hash of email attachments and looking up the hashes in a table of known bad content (or even more in-depth analysis). This is quite different from searching.

The Kernel vs. User Space

Operating systems have two parts, the kernel and user space. The kernel is the operating system proper (e.g. the “Linux kernel”). The software we run is in user space, such as browsers, word processors, games, web servers, databases, GNU utilities [sic], and so on.

The kernel has raw access to the machine, memory, network devices, graphics cards, and so on. User space has virtual access to these things. The user space is the original “virtual machines”, before kernels got so bloated that we needed a third layer to virtualize them too.

This separation between kernel and user has two main benefits. The first is security, controlling which bit of software has access to what. It means, for example, that one user on the machine can’t access another’s files. The second benefit is stability: if one program crashes, the others continue to run unaffected.

Downside of a Kernel Module

Writing a search program as a kernel module (instead of a user space module) defeats the benefits of user space programs, making the machine less stable and less secure.

Moreover, the sort of thing this module does (parsing emails) has a history of big gaping security flaws. Parsing stuff in the kernel makes cybersecurity experts run away screaming in terror.

On the other hand, people have been doing security stuff (SSL implementations and anti-virus scanning) in the kernel in other situations, so it’s not unprecedented. I mean, it’s still wrong, but it’s been done before.

Upside of a Kernel Module

If doing this as a kernel module (instead of in user space) is so bad, then why does Yahoo do it? It’s probably due to the widely held, but false, belief that putting stuff in the kernel makes it faster.

Everybody knows that kernels are faster, for two reasons. First is that as a program runs, making a system call switches context, from running in user space to running in kernel space. This step is expensive/slow. Kernel modules don’t incur this expense, because code just jumps from one location in the kernel to another. The second performance issue is virtual memory, where reading memory requires an extra step in user space, to translate the virtual memory address to a physical one. Kernel modules access physical memory directly, without this extra step.

But everyone is wrong. Using features like hugepages gets rid of the virtual memory translation cost. There are ways to mitigate the cost of user/kernel transitions, such as moving data in bulk instead of a little bit at a time. Also, CPUs have improved in recent years, dramatically reducing the cost of a kernel/user transition.

The problem we face, though, is inertia. Everyone knows moving modules into the kernel makes things faster. It’s hard getting them to un-learn what they’ve been taught.

Also, following this logic, Yahoo may already have many email handling functions in the kernel. If they’ve already gone down the route of bad design, then they’d have to do this email search as a kernel module as well, to avoid the user/kernel transition cost.

Another possible reason for the kernel-module is that it’s what the programmers knew how to do. That’s especially true if the contractor has experience with other kernel software, such as NSA implants. They might’ve read Phrack magazine on the topic, which might have been their sole education on the subject. [http://phrack.org/issues/61/13.html]

How it was probably done

I don’t know Yahoo’s infrastructure. Presumably they have front-end systems designed to balance the load (and accelerate SSL processing), and back-end systems that do the heavy processing, such as spam and virus checking.

The typical way to do this sort of thing (search) is simply tap into the network traffic, either as a separate computer sniffing (eavesdropping on) the network, or something within the system that taps into the network traffic, such as a netfilter module. Netfilter is the Linux firewall mechanism, and has ways to easily “hook” into specific traffic, either from user space or from a kernel module. There is also a related user space mechanism of hooking network APIs like recv() with a preload shared library.

This traditional mechanism doesn’t work as well anymore. For one thing, incoming email traffic is likely encrypted using SSL (using STARTTLS, for example). For another thing, companies are increasingly encrypting intra-data-center traffic, either with SSL or with hard-coded keys.

Therefore, instead of tapping into network traffic, the code might tap directly into the mail handling software. A good example of this is Sendmail’s milter interface, that allows the easy creation of third-party mail filtering applications, specifically for spam and anti-virus.

But it would be insane to write a milter as a kernel module, since mail handling is done in user space, thus adding unnecessary user/kernel transitions. Consequently, we make the assumption that Yahoo’s intra-data-center traffic is unencrypted, and that for the FISA search, they wrote something like a kernel module with netfilter hooks.

How it should’ve been done

Assuming the above guess is correct, that they used kernel netfilter hooks, there are a few alternatives.

They could do user space netfilter hooks instead, but those do have a performance impact. They require a transition from the kernel to user space, then a second transition back into the kernel. If the system is designed for high performance, this might be a noticeable performance impact. I doubt it, as it’s still small compared to the rest of the computations involved, but it’s the sort of thing that engineers are prejudiced against, even before they measure the performance impact.

A better way of doing it is hooking the libraries. These days, most software uses shared libraries (.so) to make system calls like recv(). You can write your own shared library, and preload it. When the library function is called, you do your own processing, then call the original function.

Hooking the libraries then lets you tap into the network traffic, but without any additional kernel/user transition.

Yet another way is simple changes in the mail handling software that allows custom hooks to be written.

Third party contractors

We’ve been thinking in terms of technical solutions. There is also the problem of politics.

Almost certainly, the solution was developed by outsiders, by defense contractors like Booz-Allen. (I point them out because of the whole Snowden/Martin thing). This restricts your technical options.

You don’t want to give contractors access to your source code. Nor do you want the contractors making custom changes to your source code, such as adding hooks. Therefore, you are looking at external changes, such as hooking the network stack.

The advantage of a netfilter hook in the kernel is that it has the least additional impact on the system. It can be developed and thoroughly tested by Booz-Allen, then delivered to Yahoo!, who can then install it with little effort.

This is my #1 guess why this was a kernel module: it allowed the most separation between Yahoo! and the defense contractor who wrote it. In other words, there is no technical reason for it, only a political one.

Let’s talk search

There are two ways to search things: using an NFA and using a DFA.

An NFA is the normal way of using regex, or grep. It allows complex patterns to be written, but it requires a potentially large amount of CPU power (i.e. it’s slow). It also requires backtracking within a message, which means the entire email must be reassembled before searching can begin.

The DFA alternative instead creates a large table in memory, then does a single pass over a message to search. Because it does only a single pass, without backtracking, the message can be streamed through the search module, without needing to reassemble the message. In theory, anything searched by an NFA can be searched by a DFA, though in practice some unbounded regex expressions require too much memory, so DFAs usually require simpler patterns.

The DFA approach, by the way, is about 4 Gbps per 2.x-GHz Intel x86 server CPU. Because no reassembly is required, it can tap directly into anything above the TCP stack, like netfilter. Or, it can tap below the TCP stack (like libpcap), but would require some logic to re-order/de-duplicate TCP packets, to present the same ordered stream as TCP.

A DFA therefore requires little or no memory beyond its table, since nothing needs to be buffered for reassembly. In contrast, the NFA approach will require more CPU and memory just to reassemble email messages, and the search itself would also be slower.
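
To make the difference concrete, here is a toy single-pattern DFA scanner in Python. The table construction is the classic KMP-style automaton, and the pattern and chunk boundaries are purely illustrative; the point is that a match spanning two TCP segments is found in one pass, without reassembling the message.

# Build a DFA for one fixed byte pattern, then scan a stream of chunks
# with no backtracking and no reassembly buffer.
def build_dfa(pattern, alphabet_size=256):
    pattern = bytearray(pattern)
    m = len(pattern)
    dfa = [[0] * alphabet_size for _ in range(m)]
    dfa[0][pattern[0]] = 1
    x = 0  # restart state
    for j in range(1, m):
        for c in range(alphabet_size):
            dfa[j][c] = dfa[x][c]      # mismatch: fall back like the shorter prefix
        dfa[j][pattern[j]] = j + 1     # match: advance
        x = dfa[x][pattern[j]]
    return dfa

def scan(chunks, pattern):
    dfa, m, state = build_dfa(pattern), len(pattern), 0
    for chunk in chunks:               # chunks arrive in stream order
        for byte in bytearray(chunk):
            state = dfa[state][byte]
            if state == m:
                return True            # pattern seen somewhere in the stream
    return False

needle = b"### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###"
print(scan([b"...headers...### Begin ASRAR El Moj",
            b"ahedeen v2.0 Encrypted Message ###..."], needle))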

The naïve approach to searching is to use NFAs. It’s what most people start out with. The smart approach is to use DFAs. You see that in the evolution of the Snort intrusion detection engine, where they started out using complex NFAs and then over the years switched to the faster DFAs.

You also see it in the network processor market. These are specialized CPUs designed for things like firewalls. They advertise fast regex acceleration, but what they really do is just convert NFAs into something that is mostly a DFA, which you can do on any processor anyway. I have a low opinion of network processors, since what they accelerate are bad decisions. Correctly designed network applications don’t need any special acceleration, except maybe SSL public-key crypto.

So, what the government’s code needs to do is a very lightweight parse of the SMTP protocol in order to extract the from/to email addresses, then a very lightweight search of the message’s content in order to detect whether any of the offending strings appear. When a pattern matches, it reports the addresses it found.
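
Put together, the whole job is roughly this small. The sketch below is illustrative only: it holds the session in one buffer for brevity, whereas real traffic would be streamed through a scanner like the DFA above.

import re

MARKER = b"### Begin ASRAR El Mojahedeen v2.0 Encrypted Message ###"

def inspect_smtp_session(raw_bytes):
    # Lightweight envelope parse: pull the addresses out of MAIL FROM / RCPT TO
    mail_from = re.search(br"MAIL FROM:\s*<([^>]+)>", raw_bytes, re.I)
    rcpt_to = re.findall(br"RCPT TO:\s*<([^>]+)>", raw_bytes, re.I)
    # Lightweight content search: report the addresses only on a hit
    if MARKER in raw_bytes:
        return {"from": mail_from.group(1) if mail_from else None, "to": rcpt_to}
    return None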

Conclusion

I don’t know Yahoo’s system for processing incoming emails. I don’t know the contents of the court order forcing them to do a search, and what needs to be secret. Therefore, I’m only making guesses here.

But they are educated guesses. Nine times out of ten, in situations similar to Yahoo’s, I’m guessing that a “kernel module” would be the most natural solution. It’s how engineers are trained to think, and it would likely be the best fit organizationally. Sure, it really REALLY annoys cybersecurity experts, but nobody cares what we think, so that doesn’t matter.

Raptor WAF – C Based Web Application Firewall

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/-zB_ziXpxec/

Raptor WAF is a Web Application Firewall made in C, using DFA to block SQL Injection, Cross Site Scripting (XSS) and Path Traversal. DFA stands for Deterministic Finite Automaton also known as a Deterministic Finite State Machine. It’s essentially a simple web application firewall made in C, using the KISS principle, making polls using the…

Read the full post at darknet.org.uk

Introducing Application Load Balancer – Unlocking and Optimizing Architectures

Post Syndicated from George Huang original https://aws.amazon.com/blogs/devops/introducing-application-load-balancer-unlocking-and-optimizing-architectures/

This is a guest blog post by Felix Candelario & Benjamin F., AWS Solutions Architects.

This blog post will focus on architectures you can unlock with the recently launched Application Load Balancer and compare them with the implementations that use what we now refer to as the Classic Load Balancer. An Application Load Balancer operates at the application layer and makes routing and load-balancing decisions on application traffic using HTTP and HTTPS.

There are several features to help you unlock new workloads:

  • Content-based routing

    • Allows you to define rules that route traffic to different target groups based on the path of a URL. The target group typically represents a service in a customer’s architecture.
  • Container support

    • Provides the ability to load-balance across multiple ports on the same Amazon EC2 instance. This functionality specifically targets the use of containers and is integrated into Amazon ECS.
  • Application monitoring

    • Allows you to monitor and associate health checks per target group.

Service Segmentation Using Subdomains

Our customers often need to break big, monolithic applications into smaller service-oriented architectures while hosting this functionality under the same domain name.

In the example.com architecture shown here, a customer has decided to segment services such as processing orders, serving images, and processing registrations. Each function represents a discrete collection of instances. Each collection of instances hosts several applications that provide a service.

Using a classic load balancer, the customer has to deploy several load balancers. Each load balancer points to the instances that represent and front the service by using a subdomain.

With the introduction of content-based routing on the new application load balancers, customers can reduce the number of load balancers required to accomplish the segmentation.

Application Load Balancers introduce the concept of rules, targets, and target groups. Rules determine how to route requests. Each rule specifies a target group, a condition, and a priority. An action is taken when the conditions on a rule are matched. Targets are endpoints that can be registered as a member of a target group. Target groups are used to route requests to registered targets as part of the action for a rule. Each target group specifies a protocol and target port. You can define health checks per target group and you can route to multiple target groups from each Application Load Balancer.

A new architecture shown here accomplishes with a single load balancer what previously required three. Here we’ve configured a single Application Load Balancer with three rules.

Let’s walk through the first rule in depth. To configure the Application Load Balancer to route traffic destined to www.example.com/orders/, we must complete five tasks.

  1. Create the Application Load Balancer.
  2. Create a target group.
  3. Register targets with the target group.
  4. Create a listener with the default rule that forwards requests to the default target group.
  5. Create a rule that forwards requests for the /orders/ path to the previously created target group.

To create the Application Load Balancer, we must provide a name for it and a minimum of two subnets.

aws elbv2 create-load-balancer --name example-loadbalancer --subnets "subnet-9de127c4" "subnet-0b1afc20"

To create a target group, we must specify a name, protocol, port, and vpc-id. Based on the preceding figure, we execute the following command to create a target group for the instances that represent the order-processing functionality.

aws elbv2 create-target-group --name order-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

After the target group has been created, we can either add instances manually or through the use of an Auto Scaling group. To add an Auto Scaling group, we use the Auto Scaling group name and the generated target group ARN:

aws autoscaling attach-load-balancer-target-groups --auto-scaling-group-name order_autoscaling_group --target-group-arns "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1"

If we want to manually add instances, we would supply a list of instances and the generated target group ARN to register the instances associated with the order-processing functionality:

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1" --targets Id=i-01cb16f914ec4714c,Port=80

After the instances have been registered with the target group, we create a listener with a default rule that forwards requests to the first target group. For the sake of this example, we’ll assume that the orders target group is the default group:

aws elbv2 create-listener --load-balancer-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1" --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:007038732177:targetgroup/orders-instances/e53f8f9dfaf230c8"

Finally, we create a rule that forwards a request to the target group to which the order instances are registered when the condition of a path-pattern (in this case, ‘/orders/*’) is met:

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example-loadbalancer/6bfa6ad4a2dd7925/6f916335439e2735" --conditions Field=path-pattern,Values='/orders/*' --priority 20 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/order-instances/f249f89ef5899de1"

We repeat this process (with the exception of creating the default listener) for the images and registration functionality.

With this new architecture, we can move away from segmenting functionality based on subdomains and rely on paths. In this way, we preserve the use of a single subdomain, www, throughout the entire user experience. This approach reduces the number of Elastic Load Balancing load balancers required, which results in cost savings. It also reduces the operational overhead required for monitoring and maintaining additional elements in the application architecture.

Important: The move from subdomain segmentation to path segmentation requires you to rewrite code to accommodate the new URLs.

Service Segmentation Using a Proxy Layer

A proxy layer pattern is used when customers want to use a single subdomain, such as www, while still segmenting functionality by grouping back-end servers. The following figure shows a common implementation of this pattern using the popular open source package NGINX.

In this implementation, the subdomain of www.example.com is associated with a top-level external load balancer. This load balancer is configured so that traffic is distributed to a group of instances running NGINX. Each instance running NGINX is configured with rules that direct traffic to one of the three internal load balancers based on the path in the URL.

For example, when a user browses to www.example.com/amazingbrand/, the external Elastic Load Balancing load balancer sends all traffic to the NGINX layer. All three of the NGINX installations are configured in the same way. When one of the NGINX instances receives the request, it parses the URL, matches a location for “/amazing”, and sends traffic to the server represented by the internal load balancer fronting the group of servers providing the Amazing Brand functionality.

It’s important to consider the impact of failed health checks. Should one of the NGINX instances fail health checks generated by the external load balancer, this load balancer will stop sending traffic to that newly marked unhealthy host. In this scenario, all of the discrete groups of functionality would be affected, making troubleshooting and maintenance more complex.

The following figure shows how customers can achieve segmentation while preserving a single subdomain without having to deploy a proxy layer.

In this implementation, both the proxy layer and the internal load balancers can be removed now that we can use the content-based routing associated with the new application load balancers. Using the previously demonstrated rules functionality, we can create three rules that point to different target groups based on different path conditions.

For this implementation, you’ll need to create the application load balancer, create a target group, register targets to the target group, create the listener, and create the rules.

1. Create the application load balancer.

aws elbv2 create-load-balancer --name example2-loadbalancer --subnets "subnet-fc02b18b" "subnet-63029106"

2. Create three target groups.

aws elbv2 create-target-group --name amazing-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

aws elbv2 create-target-group --name stellar-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

aws elbv2 create-target-group --name awesome-instances --protocol HTTP --port 80 --vpc-id vpc-85a268e0

3. Register targets with each target group.

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c" --targets Id=i-072db711f70c36961,Port=80

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/stellar-instances/ef828b873624ba7a" --targets Id=i-08def6cbea7584481,Port=80

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/awesome-instances/116b2df4cd7fcc5c" --targets Id=i-0b9dba5b06321e6fe,Port=80

4. Create a listener with the default rule that forwards requests to the default target group.

aws elbv2 create-listener --load-balancer-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:loadbalancer/app/example2-loadbalancer/a685c68b17dfd091" --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"

5.  Create a listener that forwards requests for each path to each target group. You need to make sure that every priority is unique.

aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/amazingbrand/*' --priority 20 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"


aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/stellarbrand/*' --priority 40 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/stellar-instances/ef828b873624ba7a"


aws elbv2 create-rule --listener-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:listener/app/example2-loadbalancer/a685c68b17dfd091/546af7daf3bd913e" --conditions Field=path-pattern,Values='/awesomebrand/*' --priority 60 --actions Type=forward,TargetGroupArn="arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/awesome-instances/116b2df4cd7fcc5c"

This implementation not only saves you the costs associated with running instances that support a proxy layer and an additional layer of load balancers. It also increases robustness as a result of application monitoring. In the Classic Load Balancer implementation of a proxy pattern, the failure of a single instance hosting NGINX impacts all of the other discrete functionality represented by the grouping of instances. In the application load balancer implementation, health checks are now associated with a single target group only. Failures and performance are now segmented from each other.

Run the following command to verify the health of the registered targets in the Amazing Brands target group:

aws elbv2 describe-target-health --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/amazing-instances/ad4a2174e7cc314c"

If the instances in this target group were marked as unhealthy, you would see the following output:

{
    "TargetHealthDescriptions": [
        {
            "HealthCheckPort": "80",
            "Target": {
                "Id": "i-072db711f70c36961",
                "Port": 80
            },
            "TargetHealth": {
                "State": "unhealthy",
                "Reason": "Target.Timeout",
                "Description": "Request timed out"
            }
        }
    ]
}

Service Segmentation Using Containers

Increasingly, customers are using containers as a way to package and isolate applications. Instead of grouping functionality by instances, customers are providing an even more granular collection of computing resources by using containers.

When you use Classic load balancers, you create a fixed relationship between the load balancer port and the container instance port. For example, it is possible to map the load balancer port 80 to the container instance port 3030 and the load balancer port 4040 to the container instance port 4040. However, it is not possible to map the load balancer port 80 to port 3030 on one container instance and port 4040 on another container instance.

The following figure illustrates this limitation. It also points out a pattern of using a proxy container to represent other containers operating on different ports. Logically, this implementation is similar to the proxy segmentation implementation described earlier.

Figure 5: Classic Load Balancer container-based segmentation

Enhanced container support is one of the major features of the Application Load Balancer. It makes it possible to load-balance across multiple ports on the same EC2 instance. The following figure shows how this capability removes the need to run containers that proxy access to other containers.

To integrate containers, you only need to register the targets in the target group, which the Amazon ECS scheduler handles automatically. The following command configures /cart as illustrated in the preceding figure.

aws elbv2 register-targets --target-group-arn "arn:aws:elasticloadbalancing:us-west-2:007038732177:targetgroup/cart-instances/ad4a2174e7cc314c" --targets Id=i-84ri3a2c6dcd16b9c,Port=90 Id=i-83fc3a2c6dcd16b9c,Port=90 Id=i-qy342a2c6dcd16b9c,Port=100

A/B Testing

A/B testing is a term used for randomized experiments that compare two separate website experiences in order to gather data that is helpful in decision-making. To facilitate this type of testing, you need to redirect a percentage of traffic to the secondary stack.

By using Classic Load Balancers, you can conduct these experiments by grouping the different experiences under separate load balancers. By using Amazon Route 53, you can then leverage a group of weighted resource record sets that point to the CNAMEs provided by the Classic Load Balancer. By modifying the weight of a given record, you can then move a random sampling of customers to a different website experience represented by the instances behind the Classic Load Balancer.
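
For reference, the Route 53 side of that classic approach looks roughly like the boto3 sketch below; the hosted zone ID and the two load balancer CNAMEs are placeholders.

import boto3

route53 = boto3.client('route53')

def weight_www_records(zone_id, a_cname, b_cname, b_weight):
    """Send roughly b_weight% of resolutions to stack B and the rest to stack A."""
    changes = []
    for ident, cname, weight in (('stack-a', a_cname, 100 - b_weight),
                                 ('stack-b', b_cname, b_weight)):
        changes.append({'Action': 'UPSERT', 'ResourceRecordSet': {
            'Name': 'www.example.com.', 'Type': 'CNAME',
            'SetIdentifier': ident, 'Weight': weight, 'TTL': 60,
            'ResourceRecords': [{'Value': cname}]}})
    route53.change_resource_record_sets(
        HostedZoneId=zone_id, ChangeBatch={'Changes': changes})

# e.g. weight_www_records('Z123EXAMPLE', 'stack-a-elb.example.com',
#                         'stack-b-elb.example.com', b_weight=10)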

The introduction of the application load balancer optimizes A/B testing in a couple of ways. In the following figure, you can see the same grouping of instances that represent the two different website experiences (the A and the B experience shown in the preceding figure). The major differences here are one less load balancer, which reduces costs and configuration, and a new mechanism, rules, to control the switch from the A to the B experience.

In this configuration, the logic for redirecting a percentage of traffic must be done at the application level, not the DNS level, by rewriting URLs that point to the B stack instead of the default A stack.

The benefits of this approach are that specific users are targeted based on criteria that the application is aware of (random users, geographies, users’ history or preferences). There is also no need to rely on DNS for redirecting some traffic, so the control of who is directed to stack B is much more fine-grained. This mechanism also allows for a more immediate transitioning of users from the A to the B experience because there is no delay associated with DNS records having to be flushed from user caches.
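
A minimal sketch of that application-level switch, assuming the decision is keyed off a user identifier the application already has, could be as simple as the following; because the bucket is derived from the user ID, a given user always lands on the same experience, which DNS weighting cannot guarantee.

import hashlib

def experience_prefix(user_id, b_percent=10):
    """Deterministically assign a user to the A (default) or B stack."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "/b" if bucket < b_percent else ""

# e.g. link = experience_prefix(current_user_id) + "/orders/checkout"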

Conclusion

The launch of the application load balancer provides significant optimization in segmentation techniques and A/B testing. These two use cases represent only a subset, but they illustrate how you can leverage the new features associated with this launch. Feel free to leave your feedback in the comments.

My Talking Points on Electronic Voting

Post Syndicated from Delian Delchev original http://feedproxy.google.com/~r/delian/~3/hEViReBL6Y0/blog-post_73.html

I am a fan of electronic voting. I don’t consider myself a specialist, but when I see self-proclaimed specialists talking technological and illogical nonsense about electronic voting, I can’t help laughing at them, the chipped pot mocking the cracked one.

And since, ever since I put on the badge, I have been besieged by a bunch of people with opinions against electronic voting, some out of fear, some out of hand-wringing (that it will hurt “our side”), I decided to summarize a few of them and comment on them. The result is this short and incomplete list of my talking points (to poke a little fun at the BSP and the Interior Ministry from the Oresharski era) on the electronic vote:


Why electronic voting and not machine voting?
The name “electronic voting” is poorly chosen and limiting. Both electronic and machine voting are electronic today (in time they may be biological or something else). What we are really talking about is remote voting: using what we know, and what science gives us, to provide a mechanism for secure voting at a distance. That is its main difference from machine voting, which is likewise a misleading name for in-person voting assisted by machines. In this particular case we are talking about a referendum that would allow a discussion to begin on finding a mechanism to give voters the option of secure remote voting. Whether it will exist, and how it would be implemented, is not a question for the referendum; it is a question for the discussion that would eventually follow it. Without a positive result in the referendum, however, our political history shows that such a discussion will not even make it onto the agenda.
No constitutional amendment is needed to allow remote voting. That is an excuse. We have forms of remote voting in this country even today. If the political clique wanted to discuss technologies and problems on the merits, there would be no need for a referendum at all. And conversely, the referendum’s decisions are not binding and mandatory (see the referendum on the Belene nuclear power plant). But they would oblige parliament to debate them, and the public would see who is who.


Why remote voting?
Remote voting makes voting easier. Unlike the usual restrictive measures, such as introducing an age, race, or education qualification, or even compulsory voting, remote voting increases citizens’ access to, and opportunities for, expressing a vote and an opinion. It broadens the democratic base rather than narrowing it.


Remote voting would significantly expand the ability of people working and/or living abroad to vote, and would seriously cut the costs of holding elections both in Bulgaria and abroad.


Remote voting allows people facing difficulties (the elderly, the ill, and others) to vote more freely and more accessibly.

Remote voting would also significantly reduce state expenses (for example, the electronic census significantly cut the cost of conducting the census in Bulgaria), and if it becomes widespread it will allow a reduction in the number of people staffing polling stations, with a corresponding significant cut in state expenses.


Why should we let people who have already chosen to live abroad vote?
It is a mistake to focus only on people living abroad when talking about the right to vote. Electronic and remote voting benefits absolutely every literate citizen in the country. It would significantly raise turnout, especially among the more educated part of the population, and overall.

But even if we look only at the people living abroad, it is beyond dispute that their ability to exercise their right to vote is hindered. For years the Ministry of Foreign Affairs has only managed to organize voting well in Turkey, where more than half of the polling stations opened abroad are located; those stations persist thanks to the good organization of the Bulgarian diaspora in Turkey, inertia from the past, pre-election focus, or legislation that determines where stations are opened automatically.

Remote/electronic voting would also make life easier for the other Bulgarians who have no polling station within 100 (or even 50) km, or who live in countries where local legislation allows stations to be opened only in our limited number of consulates, which significantly restricts our citizens in other countries from exercising their right to vote. Probably between 10 and 30% of voters are abroad or far from their polling stations when elections are held. Introducing remote voting would recover at least half of those currently lost votes and significantly raise turnout. You can easily calculate that the number of people who do not vote simply because they cannot be present in person exceeds the total number of votes received by even the largest party in Bulgaria at an election.

But again, remote voting is not aimed at, and does not concern, only Bulgarians abroad; it would significantly raise turnout, especially among the more educated, the young, and the middle class, exactly the part of the electorate that has been withdrawing from the polls for years.


People who have chosen to live abroad should not have the right to vote!
A citizen’s right to vote is determined by their passport. They must be able to choose the people who, because of the passport assigned to them, decide whether and how much tax they pay and whether they go to war.
Bulgaria loses more than 3 people per 10,000 to emigration (2015 est., World Factbook), almost exclusively young people of economically and politically active, child-bearing age. Over the last 30 years, 2.5 million people have left the country. It is easy to calculate that if the trend holds, in 20 years more than a third of young Bulgarians will live, and will have been born, abroad. More than two thirds of them will hold (only) Bulgarian citizenship because of European Union rules. Bulgaria must strive to keep these people engaged: helping the country, staying connected to it, and taking part in its political life. Remote electronic voting is one of the most important mechanisms by which we can actually preserve Bulgarian identity. It has no alternative. It is time this discussion started here too.


People with two passports should not have the right to vote
This debate has no bearing on electronic or remote voting. Whether or not such people should have the right to vote has nothing to do with how the vote is cast. It is worth noting, though, that these people do have the right to vote under our constitution, and they also carry rights and obligations because they hold a Bulgarian passport, which makes their right to vote a fair one.


Electronic voting will only increase turnout in Turkey
This comment has no bearing on electronic voting. Political or ethnic segregation is not a question of voting technology, whether electronic, remote, or in person.
Besides, segregation should not even be a topic of discussion in a modern democracy.
In this particular case, though, the number of our compatriots voting in Turkey has stayed constant over the years and is even shrinking in absolute terms. The reasons are straightforward: demographic and social. The peak of voting in Turkey was reached long ago, and the number cannot grow; there is simply nowhere for it to grow from. Its share matters only because the number of non-voters inside the country keeps rising, because many Bulgarians in other countries do not vote, and because of the ill-conceived system for distributing fractional mandates and the lack of electoral districts for voters abroad.

Electronic/remote voting aims to start a discussion on creating a mechanism for easier, more widespread, remote voting, which would let other Bulgarians, at home or abroad, vote more easily and thereby raise turnout. It is a measure for increasing turnout. Electronic voting in particular can be one of the mechanisms for engaging our citizens more actively and for reducing the weight of captive votes.

The problem is not that over 80% of “ethnic Turks vote”. That is their right. And no, electronic voting cannot turn those 80% into 120%. But electronic voting can make voting easier for, and win back, part of the 45% of the population who regularly do not vote, and raise turnout. And it is precisely by raising turnout that the distortions caused by low turnout would be resolved.

Electronic/remote voting was rejected by a decision of the Constitutional Court
This claim is false. There is a Constitutional Court decision that defines the constitutional constraints on implementing electronic (and remote) voting (absent constitutional amendments).
As long as a mechanism is created that guarantees the vote is secret and personal, it is entirely permissible. That is also why machine voting (a particular form of electronic voting) is permissible. Separately, the constitution can be amended if society, or the public debate, reaches a consensus that this is needed.


Electronic voting will create opportunities for abuse and fraudulent votes
Abuse and fraudulent votes exist today as well. That is a problem of implementation and procedure, and it should not be the focus of the discussion on the merits. There are sufficiently secure mechanisms for implementing electronic and remote voting that guarantee it is personal and secret (as the constitution requires). If it is implemented badly, it may turn out to be neither personal nor secret, exactly as can happen now with in-person paper voting. But to reject it because there is a risk of it being done badly (a concern that applies just as much to today’s in-person voting) would mean rejecting voting altogether because problems are possible. Or never going outside because something might go wrong. Or never getting into your car because it might break down.
How to implement it securely, with the necessary guarantees for the constitutional requirements, is a matter for the technical discussion. We can hold that technical discussion freely and separately. What we cannot do is hold a discussion about banning voting because there is supposedly a risk.


An electronic signature does not guarantee security equivalent to a passport
Whether or not we have remote electronic voting has nothing to do with any specific technology used to implement it. It can be implemented well with or without an electronic signature.
In practice, however, there is no fundamental technical difference between an electronic signature and a passport. A passport contains information about the person carrying it. The passport itself is issued by an organization you trust, and because of that you trust the information you can read in it. The passport's identifiers and protections (watermark, stamps and so on) show you that it was issued by the trusted organization. Part of the information in the passport identifies the bearer (the photo). The combination of bearer information you can read and compare against the person (the photo) and the trust you place in the issuing organization (via the anti-forgery protections) determines your trust in the bearer's (self-)identification. The same holds for an electronic signature: it contains public information you can check (identifying the bearer), signed by an organization you trust (a CA). The protection is considerably stronger and forgery considerably rarer. In fact, we do not know of even a single case of a forged electronic signature, while we know of quite a few cases of forged passports (in fact rather a lot, more than 1 in 10,000). The electronic signature is clearly far more secure than the passport (a short sketch of how such a check works follows below).
Like a passport, an electronic signature can be stolen, but such theft cannot be done at scale, and revocation is immediate (unlike that of a stolen passport, which takes days or weeks).
But again, we are not discussing here exactly how electronic and remote voting would be built; we are discussing letting citizens use it if a secure technological way is found.
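
Here is the short sketch referred to above of the check a relying party performs: a piece of identifying data and a signature over it are verified against a public key certified by a trusted issuer (the role the passport office plays in the analogy). This is an illustration only, not any specific national eID scheme; the "cryptography" Python package, the RSA/SHA-256 scheme, the file name and the payload are all assumptions made for the example.

    # Illustrative only: verify a signed identity assertion against a trusted
    # issuer's public key. Library, scheme, names and payload are hypothetical.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def verify_assertion(issuer_pub_pem: bytes, payload: bytes, signature: bytes) -> bool:
        """True if `signature` over `payload` checks out against the issuer's key."""
        public_key = serialization.load_pem_public_key(issuer_pub_pem)
        try:
            # RSA with PKCS#1 v1.5 and SHA-256 is just one common choice of scheme.
            public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # Usage sketch (hypothetical file and payload):
    # ok = verify_assertion(open("issuer_pub.pem", "rb").read(), b"citizen-id:...", sig)

The point of the sketch is only that trust flows the same way as with a passport: you trust the issuer, so you trust what the issuer has vouched for.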


Electronic voting will make vote buying easier
Selling one's vote is a personal act and a matter of personal morality. Technology can neither stop it nor, conversely, create the opportunity for it. Electronic voting has no bearing on whether someone sells their vote or not.
But electronic voting will raise turnout significantly, which will significantly shrink the share of the bought vote.
With a good technical implementation, however, electronic voting could include a mechanism that lets the voter cancel or change their vote during election day, making mass vote buying or coercion pointless. Local forms of feudal control over votes (for example the case of Kovachki's miners, or the mayor who counts who votes and for whom in his village) are not a problem that electronic voting will solve or make worse. They are a question of independence, freedom and alternatives for those people. Electronic voting improves their odds, because it makes it considerably harder for the local feudal bosses to track who voted for whom.


The feudal bosses will buy electronic signatures, or collect them and vote on people's behalf
Again, this is not about the voting technology. Using an electronic signature is not mandatory. In any case, anyone who knows how an electronic signature is issued and how it works knows that it cannot be "bought" in someone else's name, and also that it is far easier to catch feudal bosses who voted electronically than when the people they threaten go and cast the vote themselves. Once again, though, the electronic signature will neither solve nor worsen the problem of vote selling or of feudal control over votes. That is a different problem and it is solved differently. But remote voting, by creating more options and possibilities, can give educated people more independence and alternatives. The uneducated, people in hardship and those under feudal control will become fewer, but will not disappear; their problems must be solved in other ways, even though electronic voting gives them more opportunities for freedom.


Electronic voting will let many people vote twice
Quite the opposite: for people who have voted electronically, electronic voting completely removes the possibility that someone votes in their name or in several places. This is one of the most easily solvable problems technologically, one that is not solved in traditional machine-free in-person voting (because the voter lists are processed by hand) and is one of the biggest problems of our current model.
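
As an illustration of why this is among the easiest problems to solve, here is a deliberately naive sketch (hypothetical types and names, not a real voting system design): if every electronic ballot is keyed by an authenticated voter identifier, a second submission simply replaces the first, which both rules out double counting and allows the "change your vote during election day" option mentioned earlier. Authentication, vote secrecy and auditability are deliberately left out of the sketch.

    # Illustrative sketch: at most one counted ballot per authenticated voter;
    # a later submission replaces an earlier one ("last vote counts").
    # Secrecy/anonymisation of the stored ballots is out of scope here.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Ballot:
        choice: str
        cast_at: datetime

    class BallotBox:
        def __init__(self) -> None:
            self._by_voter = {}  # authenticated voter ID -> Ballot

        def cast(self, voter_id: str, choice: str) -> None:
            # Re-casting overwrites the previous ballot, so nobody is counted twice.
            self._by_voter[voter_id] = Ballot(choice, datetime.now())

        def tally(self) -> dict:
            counts = {}
            for ballot in self._by_voter.values():
                counts[ballot.choice] = counts.get(ballot.choice, 0) + 1
            return counts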


Electronic voting is pointless because it is very expensive
This is not true, and besides it has no bearing on whether there should be electronic voting or not. Electronic voting will not replace normal voting; anyone who finds remote voting too expensive will vote in person. That said, electronic voting can be implemented very cheaply both for the voter (free) and for the state. This is not about "spending a pile of money". It is only about giving citizens one more option when they vote, thereby engaging them, raising their political interest and increasing turnout. Let us also not forget that from 2017 newly issued identity cards in Bulgaria will have a chip and will be usable as an electronic signature.


Electronic voting will not be popular
We even have a direct example showing that this claim is false. The same was said and expected about the electronic census of the population, yet it was highly successful. More than half of citizens counted themselves electronically, voluntarily and on their own initiative, in a very short time, and this saved the state a lot of money. Do not underestimate Bulgarians. We have traditions in telecommunications, IT and the Internet. If the procedures are simple and easy, we can expect electronic voting on a scale comparable to the electronic census, which would make elections considerably faster, considerably cheaper and with considerably higher turnout.


People have nothing to eat and you want electronic voting
This is the most pointless and most off-topic comment I have heard.

PulseAudio FUD

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/jeffrey-stedfast.html

Jeffrey Stedfast

Jeffrey Stedfast seems to have made it his new hobby to bash PulseAudio.
In a series of very negative blog postings he flamed my software and hence me
in best NotZed-like fashion. Particularly interesting in this case is the
fact that he apologized to me privately on IRC for this behaviour shortly
after his first posting, when he was criticized on #gnome-hackers,
only to continue flaming and bashing in more blog posts shortly after. Flaming
is very much part of the Free Software community I guess. A lot of people do
it from time to time (including me). But maybe there are better places for
this than Planet Gnome. And maybe doing it for days is not particularly nice.
And maybe flaming sucks in the first place anyway.

Regardless of what I think about Jeffrey and his behaviour on Planet Gnome,
let’s have a look at his trophies, the five “bugs” he posted:

  1. Not directly related to PulseAudio itself. Also, finding errors in code that is related to esd is not exactly the most difficult thing in the world.
  2. The same theme.
  3. Fixed 3 months ago. It is certainly not my fault that this isn’t available in Jeffrey’s distro.
  4. A real, valid bug report. Fixed in git a while back, but not available in any released version. May only be triggered under heavy load or with a bad high-latency scheduler.
  5. A valid bug, but not really in PulseAudio. Mostly caused because the ALSA API and PA API don’t really match 100%.

OK, Jeffrey found a real bug, but I wouldn’t say this is really enough to make all the fuss about. Or is it?

Why PulseAudio?

Jeffrey wrote something about a ‘solution looking for a problem‘ when
speaking of PulseAudio. While that was certainly not a nice thing to say, it
does tell me one thing: I apparently didn’t manage to communicate well
enough why I am doing PulseAudio in the first place. So, why am I doing it then?

  • There’s so much more a good audio system needs to provide than just the
    most basic mixing functionality. Per-application volumes, moving streams
    between devices during playback, positional event sounds (i.e. click on the
    left side of the screen, have the sound event come out through the left
    speakers), secure session-switching support, monitoring of sound playback
    levels, rescuing playback streams to other audio devices on hot unplug,
    automatic hotplug configuration, automatic up/downmixing stereo/surround,
    high-quality resampling, network transparency, sound effects, simultaneous
    output to multiple sound devices are all features PA provides right now, and
    which you don’t get without it. It also provides the infrastructure for
    upcoming features like volume-follows-focus, automatic attenuation of music
    when a VoIP stream is active, UPnP media renderer support, Apple RAOP support,
    mixing/volume adjustments with dynamic range compression, adaptive volume of
    event sounds based on the volume of music streams, jack sensing, switching
    between stereo/surround/spdif during runtime, …
  • And even for the most basic mixing functionality plain ALSA/dmix is not
    really everlasting happiness. Due to the way it works all clients are forced
    to use the same buffering metrics all the time, that means all clients are
    limited in their wakeup/latency settings. You will burn more CPU than
    necessary this way, keep the risk of drop-outs unnecessarily high and still
    not be able to make clients with low-latency requirements happy. ‘Glitch-Free’
    PulseAudio fixes all this (see the short back-of-the-envelope calculation
    after this list). Quite frankly I believe that ‘glitch-free’
    PulseAudio is the single most important killer feature that should be enough
    to convince everyone why PulseAudio is the right thing to do. Maybe people
    actually don’t know that they want this. But they absolutely do, especially
    the embedded people — if used properly it is a must for power-saving during
    audio playback. It’s a pity that you cannot see directly from the user
    interface how awesome this feature is.[1]
  • PulseAudio provides compatibility with a lot of sound systems/APIs that bare ALSA
    or bare OSS don’t provide.
  • And last but not least, I love breaking Jeffrey’s audio. It’s just soo much fun, you really have to try it! 😉
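
Here is the short back-of-the-envelope calculation referred to above. The numbers are invented for the illustration; they are not measurements of dmix or PulseAudio. The point is only that with one shared fragment size every client inherits the same wakeup rate and the same latency floor, while per-client buffering lets a music player sleep behind a large buffer and gives low latency only to the client that actually needs it.

    # Illustrative arithmetic only; figures are invented, not benchmarks.
    RATE = 44100  # frames per second

    def wakeups_per_second(period_frames: int) -> float:
        return RATE / period_frames

    def latency_ms(frames: int) -> float:
        return 1000.0 * frames / RATE

    # Shared metrics (dmix-style): every client gets the same ~23 ms period.
    shared_period = 1024
    print(f"shared: {wakeups_per_second(shared_period):.0f} wakeups/s and "
          f"{latency_ms(shared_period):.0f} ms floor for every client")

    # Per-client metrics (glitch-free-style): each client picks what it needs.
    music_buffer = 2 * RATE  # ~2 s buffered, so the CPU can sleep for long stretches
    voip_period = 441        # ~10 ms, low latency only where it is needed
    print(f"music: {latency_ms(music_buffer):.0f} ms buffered, "
          f"{wakeups_per_second(music_buffer):.1f} wakeups/s")
    print(f"voip: {latency_ms(voip_period):.0f} ms period, "
          f"{wakeups_per_second(voip_period):.0f} wakeups/s")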

If you want to know more about why I think that PulseAudio is an important part of the modern Linux desktop audio stack, please read my slides from FOSS.in 2007.

Misconceptions

Many people (like Jeffrey) wonder why we need software mixing at all if you
have hardware mixing. The thing is, hardware mixing is a thing of the past;
modern soundcards don’t do it anymore. SIMD CPU extensions like SSE were
invented precisely for doing things like mixing in software. Modern sound
cards these days are basically dumbed-down, high-quality DACs. They don’t do
mixing anymore; many modern chips don’t even do volume control anymore.
Remember the days when having a Wavetable chip was a killer feature of a
sound card? Those days are gone; today wavetable synthesis is done almost
exclusively in software — and that’s exactly what happened to hardware mixing
too. And it is good that way. In software it is much easier to do
fancier stuff like DRC, which increases the quality of mixing. And modern CPUs provide
all the necessary SIMD instruction sets to implement this efficiently.
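
As a toy illustration of what mixing in software looks like (NumPy’s vectorized operations standing in for SIMD; this is deliberately naive and is not PulseAudio’s actual mixing code, which also does per-stream volumes, resampling, DRC and more):

    # Naive software mixer for 16-bit PCM; vectorized ops stand in for SIMD.
    import numpy as np

    def mix_streams(streams, volumes):
        """Mix equal-length int16 PCM buffers with per-stream volume factors."""
        acc = np.zeros(len(streams[0]), dtype=np.float32)
        for samples, vol in zip(streams, volumes):
            acc += vol * samples.astype(np.float32)
        # Clip back into the 16-bit range instead of wrapping around.
        return np.clip(acc, -32768, 32767).astype(np.int16)

    # Usage sketch: mix a 440 Hz and a 660 Hz test tone at different volumes.
    t = np.arange(44100) / 44100.0
    a = (10000 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
    b = (10000 * np.sin(2 * np.pi * 660 * t)).astype(np.int16)
    mixed = mix_streams([a, b], [1.0, 0.5])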

Other people believe that JACK would be a better solution for the problem.
This is nonsense. JACK has been designed for a very different purpose. It is
optimized for low latency inter-application communication. It requires
floating point samples, it knows nothing about channel mappings, and it relies on
every client behaving correctly. And so on, and so on. It is a sound server
for audio production. For desktop applications, however, it is not well suited.
For a desktop, saving power is very important, and one application misbehaving
shouldn’t have an effect on other applications’ playback; converting from/to
FP all the time is not going to help battery life either. Please understand
that for the purpose of pro audio you can make completely different
compromises than you can on the desktop. For example, while having
‘glitch-free’ is great for embedded and desktop use, it makes no sense at all
for pro audio and would only hurt performance. So, please stop
bringing up JACK again and again. It’s just not the right tool for desktop
audio, and this opinion is shared by the JACK developers themselves.

Jeffrey thinks that audio mixing is nothing that belongs in userspace, which is
basically what OSS4 tries to do: mixing in kernel space. However, the future
of PCM audio is floating point. Mixing floating-point samples in kernel space is problematic because (at least on Linux) FP in kernel space is a no-no.
Also, the kernel people made clear more than once that maths/decoding/encoding like this
should happen in userspace. Quite honestly, doing the mixing in kernel space
is probably one of the primary reasons why I think that OSS4 is a bad idea.
The fancier your mixing gets (i.e. including resampling, upmixing, downmixing,
DRC, …), the more difficult it becomes to move such complex,
time-intensive code into the kernel.

Not every time your audio breaks is it PulseAudio’s fault alone. For
example, Jeffrey’s original flame was about the low volume that he
experienced when running PA. This is mostly due to the suckish way we
initialize the default volumes of ALSA sound cards. Most distributions have
simple scripts that initialize ALSA sound card volumes to fixed values like
75% of the available range, without understanding what the range or the
controls actually mean. This is a very bad thing to do. Integrated
USB speakers, for example, tend to export the full amplification range via the
mixer controls. 75% for them is incredibly loud. For other hardware (like
apparently Jeffrey’s) it is too low in volume. How to fix this has been
discussed on the ALSA mailing list, but no final solution has been presented
yet. Nonetheless, the fact that the volume was too low, is completely
unrelated to PulseAudio.
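
A small illustrative calculation shows why “75% of the raw range” means something completely different from card to card. The dB ranges below are invented for the example, and real ALSA controls do not even have to map raw values to dB linearly, so treat this purely as a sketch of the problem.

    # Illustrative only: where "75% of the raw mixer range" lands for two
    # invented cards, assuming an (unrealistic) linear raw-to-dB mapping.
    def raw_fraction_to_db(fraction: float, db_min: float, db_max: float) -> float:
        return db_min + fraction * (db_max - db_min)

    # Hypothetical USB speakers exposing their whole amplifier range:
    print(raw_fraction_to_db(0.75, -30.0, 30.0))  # +15 dB -> painfully loud
    # Hypothetical onboard codec with an attenuation-only range:
    print(raw_fraction_to_db(0.75, -64.0, 0.0))   # -16 dB -> rather quiet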

PulseAudio interfaces with lower-level technologies like ALSA on one hand,
and with high-level applications on the other hand. Those systems are not
perfect. Especially closed-source applications tend to do very evil things
with the audio APIs (Flash!) that are very hard to support on virtualized
sound systems such as PulseAudio [2]. However, things are getting better. My
list of issues I found in ALSA is getting shorter. Many applications have
already been fixed.

The reflex “my audio is broken, it must be PulseAudio’s fault” is certainly
easy to come up with, but it is not always right.

Also note that — like many areas in Free Software — development of the
desktop audio stack on Linux is a bit understaffed. AFAIK there are only two
people working on ALSA full-time and only me working on PulseAudio and other
userspace audio infrastructure, assisted by a few others who supply code and patches
from time to time, some more and some less.

More Breakage to Come

I now tried to explain why the audio experience on systems with PulseAudio
might not be as good as some people hoped, but what about the future? To be
frank: the next version of PulseAudio (0.9.11) will break even more things.
The ‘glitch-free’ stuff mentioned above uses quite a few features of the
underlying ALSA infrastructure that apparently no one has been using before —
and which just don’t work properly yet on all drivers. And there are quite a
few drivers around, and I only have a very limited set of hardware to test
with. Already I know that some of the most popular drivers (USB and HDA)
do not work entirely correctly with ‘glitch-free’.

So you ask why I plan to release this code knowing that it will break
things? Well, it works on some hardware/drivers properly, and for the others I
know work-arounds to get things to work. And 0.9.11 has been delayed for too
long already. Also I need testing from a bigger audience. And it is not so
much 0.9.11 that is buggy, it is the code it is based on. ‘Glitch-free’ PA
0.9.11 is going to be part of Fedora 10. Fedora has always been more bleeding
edge than other distributions. Picking 0.9.11 just like that for an
‘LTS’ release might however not be a good idea.

So, please bear with me when I release 0.9.11. Snapshots have already
been available in Rawhide for a while, and hell didn’t freeze over.

The Distributions’ Role in the Game

Some distributions did a better job adopting PulseAudio than others. On the
good side I certainly have to list Mandriva, Debian[3], and
Fedora[4]. OTOH Ubuntu didn’t exactly do a stellar job. They didn’t
do their homework. Adopting PA in a distribution is a fair amount of work,
given that it interfaces with so many different things at so many different
places. The integration with other systems is crucial. The information was all
out there, communicated on the wiki, the mailing lists and on the PA IRC
channel. But if you join and hang around on neither, then you won’t get the
memo. To my surprise, when Ubuntu adopted PulseAudio they moved it into one of their
‘LTS’ releases right away [5]. Which I guess can be called gutsy —
against the background that I work for Red Hat and PulseAudio is not part of RHEL
at this time. I get a lot of flak from Ubuntu users, and I am pretty sure the
vast majority of it is undeserved and not my fault.

Why Jeffrey’s distro of choice (SUSE?) didn’t package pavucontrol 0.9.6,
even though it was released months ago, I don’t know. But there’s certainly no
reason to whine about that to me and bash me for it.

Having said all this — it’s easy to point to other software’s faults or
other people’s failures. So, admitting this, PulseAudio is certainly not
bug-free, far from that. It’s a relatively complex piece of software
(threading, real-time, lock-free, sensitive to timing, …), and every
software has its bugs. In some workloads they might be easier to find than in
others. And I am working on fixing those which are found. I won’t forget any
bug report, but the order and priority in which I work on them is still mostly up to me,
I guess, right? There’s still a lot of work to do in desktop audio, and it will
take some time to get things completely right and complete.

Calls for “audio should just work ™” are often heard. But if you don’t
want to stick with a sound system that was state of the art in the 90’s for
all times, then I fear things *will have* to break from time to time. And
Jeffrey, I have no idea what you are actually hacking on. Some people
mentioned something with Evolution. If that’s true, then quite honestly,
“email should just work”, too, shouldn’t it? Evolution is not exactly
famous for its legendary bug-freeness and stability, or did I miss something?
Maybe you should be the one to start with making things “just work”, especially since
Evolution has been around for much longer already.

Back to Work

Now that I responded to Jeffrey’s FUD I think we all can go back to work
and end this flamefest! I wish people would actually try to understand
things before writing an insulting rant — without the slightest clue — but
with words like “clusterfuck”. I’d like to thank all the people who commented
on Jeffrey’s blog and basically already said what I wrote here
now.

So, now I am off to hack on PulseAudio a bit more — or should
I say, in Jeffrey’s words: on my clusterfuck that is an epic fail and that no desktop user needs?

Footnotes

[1] BTW ‘glitch-free’ is nothing I invented; other OSes have been doing something
like this for quite a while (Vista, Mac OS). On Linux, however, PulseAudio is
the first and only implementation (at least to my knowledge).

[2] In fact, Flash 9 cannot be made to work fully on PulseAudio.
This is because the way Flash tears down its driver backends is racy.
Unfixably racy, from external code. Jeffrey complained about Flash instability
in his second post. This is unfair to PulseAudio, because I cannot fix this.
This is like complaining that X crashes when you use binary-only
fglrx.

[3] By Debian’s standards at least. Since development of Debian is
very distributed, the integration of a system such as PulseAudio is much more
difficult, since it touches so many different packages in the system, which are
kind of the private property of a lot of different maintainers with different
views on things.

[4] I maintain the Fedora stuff myself, so I might be a bit biased on this one… 😉

[5] I guess Ubuntu sees that this was a bit too much too early, too.
At least that’s how I understood my invitation to UDS in Prague. Since that
summit I haven’t heard anything from them anymore, though.