
Bringing Datacenter-Scale Hardware-Software Co-design to the Cloud with FireSim and Amazon EC2 F1 Instances

Post Syndicated from Mia Champion original https://aws.amazon.com/blogs/compute/bringing-datacenter-scale-hardware-software-co-design-to-the-cloud-with-firesim-and-amazon-ec2-f1-instances/

The recent addition of Xilinx FPGAs to AWS Cloud compute offerings is one way that AWS is enabling global growth in the areas of advanced analytics, deep learning, and AI. The customized F1 servers use pooled accelerators, enabling interconnectivity of up to 8 FPGAs, each one with 64 GiB of DDR4 ECC-protected memory and a dedicated PCIe x16 connection. That makes this a powerful engine with the capacity to process advanced analytical applications at scale, significantly faster than CPU-only approaches. For example, AWS commercial partner Edico Genome is able to achieve an approximately 30X speedup in analyzing whole genome sequencing datasets using their DRAGEN platform powered with F1 instances.

While the availability of FPGA F1 compute on-demand provides clear accessibility and cost advantages, many mainstream users still find that the “threshold to entry” for developing or running FPGA-accelerated simulations is too high. Researchers at the UC Berkeley RISE Lab have developed FireSim, an open-source resource powered by Amazon FPGA F1 instances. FireSim lowers that entry bar and makes it easier for everyone to leverage the power of an FPGA-accelerated compute environment. Whether you are part of a small start-up development team or working at a large datacenter scale, hardware-software co-design enables faster time-to-deployment, lower costs, and more predictable performance. We are excited to feature FireSim in this post from Sagar Karandikar and his colleagues at UC Berkeley.

―Mia Champion, Sr. Data Scientist, AWS

Figure 1. Mapping an 8-node FireSim cluster simulation to Amazon EC2 F1

As traditional hardware scaling nears its end, the data centers of tomorrow are trending towards heterogeneity, employing custom hardware accelerators and increasingly high-performance interconnects. Prototyping new hardware at scale has traditionally been either extremely expensive, or very slow. In this post, I introduce FireSim, a new hardware simulation platform under development in the computer architecture research group at UC Berkeley that enables fast, scalable hardware simulation using Amazon EC2 F1 instances.

FireSim benefits both hardware and software developers working on new rack-scale systems: software developers can use the simulated nodes with new hardware features as they would use a real machine, while hardware developers have full control over the hardware being simulated and can run real software stacks while hardware is still under development. In conjunction with this post, we’re releasing the first public demo of FireSim, which lets you deploy your own 8-node simulated cluster on an F1 Instance and run benchmarks against it. This demo simulates a pre-built “vanilla” cluster, but demonstrates FireSim’s high performance and usability.

Why FireSim + F1?

FPGA-accelerated hardware simulation is by no means a new concept. However, previous attempts to use FPGAs for simulation have been fraught with usability, scalability, and cost issues. FireSim takes advantage of EC2 F1 and open-source hardware to address the traditional problems with FPGA-accelerated simulation:
Problem #1: FPGA-based simulations have traditionally been expensive, difficult to deploy, and difficult to reproduce.
FireSim uses public-cloud infrastructure like F1, which means no upfront cost to purchase and deploy FPGAs. Developers and researchers can distribute pre-built AMIs and AFIs, as in this public demo (more details later in this post), to make experiments easy to reproduce. FireSim also automates most of the work involved in deploying an FPGA simulation, essentially enabling one-click conversion from new RTL to deploying on an FPGA cluster.

Problem #2: FPGA-based simulations have traditionally been difficult (and expensive) to scale.
Because FireSim uses F1, users can scale out experiments by spinning up additional EC2 instances, rather than spending hundreds of thousands of dollars on large FPGA clusters.

Problem #3: Finding open hardware to simulate has traditionally been difficult. Finding open hardware that can run real software stacks is even harder.
FireSim simulates RocketChip, an open, silicon-proven, RISC-V-based processor platform, and adds peripherals like a NIC and disk device to build up a realistic system. Processors that implement RISC-V automatically support real operating systems (such as Linux) and even support applications like Apache and Memcached. We provide a custom Buildroot-based FireSim Linux distribution that runs on our simulated nodes and includes many popular developer tools.

Problem #4: Writing hardware in traditional HDLs is time-consuming.
Both FireSim and RocketChip use the Chisel HDL, which brings modern programming paradigms to hardware description languages. Chisel greatly simplifies the process of building large, highly parameterized hardware components.
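To give a flavor of what this looks like, here is a minimal Chisel sketch (Chisel is embedded in Scala). It is purely illustrative and is not code from RocketChip or FireSim; the module name and widths are invented for this example. The point is that an ordinary Scala constructor argument becomes a hardware generator parameter.

import chisel3._

// A width-parameterized adder: the same source elaborates to different
// hardware depending on the Scala-level "width" argument.
class ParamAdder(width: Int) extends Module {
  val io = IO(new Bundle {
    val a   = Input(UInt(width.W))
    val b   = Input(UInt(width.W))
    val sum = Output(UInt((width + 1).W))
  })
  // +& is Chisel's width-expanding add, so the carry bit is preserved.
  io.sum := io.a +& io.b
}

Elaborating new ParamAdder(8) and new ParamAdder(64) yields different RTL from the same description; RocketChip applies the same idea to build large, highly parameterized components.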

How to use FireSim for hardware/software co-design

FireSim drastically improves the process of co-designing hardware and software by acting as a push-button interface for collaboration between hardware developers and systems software developers. The following diagram describes the workflows that hardware and software developers use when working with FireSim.

Figure 2. The FireSim custom hardware development workflow.

The hardware developer’s view:

  1. Write custom RTL for your accelerator, peripheral, or processor modification in a productive language like Chisel.
  2. Run a software simulation of your hardware design in standard gate-level simulation tools for early-stage debugging.
  3. Run FireSim build scripts, which automatically build your simulation, run it through the Vivado toolchain/AWS shell scripts, and publish an AFI.
  4. Deploy your simulation on EC2 F1 using the generated simulation driver and AFI.
  5. Run real software builds released by software developers to benchmark your hardware.

The software developer’s view:

  1. Deploy the AMI/AFI generated by the hardware developer on an F1 instance to simulate a cluster of nodes (or scale out to many F1 nodes for larger simulated core-counts).
  2. Connect to the simulated nodes in the cluster using SSH and boot the Linux distribution included with FireSim. This distribution is easy to customize, and already supports many standard software packages.
  3. Directly prototype your software using the same exact interfaces that the software will see when deployed on the real future system you’re prototyping, with the same performance characteristics as observed from software, even at scale.

FireSim demo v1.0

Figure 3. Cluster topology simulated by FireSim demo v1.0.

This first public demo of FireSim focuses on the aforementioned “software-developer’s view” of the custom hardware development cycle. The demo simulates a cluster of 1 to 8 RocketChip-based nodes, interconnected by a functional network simulation. The simulated nodes work just like “real” machines:  they boot Linux, you can connect to them using SSH, and you can run real applications on top. The nodes can see each other (and the EC2 F1 instance on which they’re deployed) on the network and communicate with one another. While the demo currently simulates a pre-built “vanilla” cluster, the entire hardware configuration of these simulated nodes can be modified after FireSim is open-sourced.

In this post, I walk through bringing up a single-node FireSim simulation for experienced EC2 F1 users. For more detailed instructions for new users and instructions for running a larger 8-node simulation, see FireSim Demo v1.0 on Amazon EC2 F1. Both demos walk you through setting up an instance from a demo AMI/AFI and booting Linux on the simulated nodes. The full demo instructions also walk you through an example workload, running Memcached on the simulated nodes, with YCSB as a load generator to demonstrate network functionality.

Deploying the demo on F1

In this release, we provide pre-built binaries for driving simulation from the host and a pre-built AFI that contains the FPGA infrastructure necessary to simulate a RocketChip-based node.

Starting your F1 instances

First, launch an f1.2xlarge instance using the free FireSim Demo v1.0 product available on the AWS Marketplace. After your instance has booted, log in using the user name centos. On the first login, you should see the message “FireSim network config completed.” This indicates that the tap interfaces and bridge needed to communicate with the simulated nodes have been set up on the EC2 instance.

AMI contents

The AMI contains a variety of tools to help you run simulations and build software for RISC-V systems, including the riscv64 toolchain, a Buildroot-based Linux distribution that runs on the simulated nodes, and the simulation driver program. For more details, see the AMI Contents section on the FireSim website.

Single-node demo

First, you need to flash the FPGA with the FireSim AFI. To do so, run:

[centos@IP_ADDR ~]$ sudo fpga-load-local-image -S 0 -I agfi-00a74c2d615134b21

To start a simulation, run the following at the command line:

[centos@IP_ADDR ~]$ boot-firesim-singlenode

This automatically calls the simulation driver, telling it to load the Linux kernel image and root filesystem for the Linux distro. This produces output similar to the following:

Simulations Started. You can use the UART console of each simulated node by attaching to the following screens:

There is a screen on:

2492.fsim0      (Detached)

1 Socket in /var/run/screen/S-centos.

You could connect to the simulated UART console by attaching to this screen, but for this walkthrough, use SSH to access the node instead.

First, ping the node to make sure it has come online. This is currently required because nodes may get stuck at Linux boot if the NIC does not receive any network traffic. For more information, see Troubleshooting/Errata. The node is always assigned the IP address 192.168.1.10:

[centos@IP_ADDR ~]$ ping 192.168.1.10

This should eventually produce the following output:

PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data.

From 192.168.1.1 icmp_seq=1 Destination Host Unreachable

64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=2017 ms

64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=1018 ms

64 bytes from 192.168.1.10: icmp_seq=3 ttl=64 time=19.0 ms

At this point, you know that the simulated node is online. You can connect to it using SSH with the user name root and password firesim. It is also convenient to make sure that your TERM variable is set correctly. In this case, the simulation expects TERM=linux, so provide that:

[centos@IP_ADDR ~]$ TERM=linux ssh root@192.168.1.10

The authenticity of host '192.168.1.10 (192.168.1.10)' can't be established.

ECDSA key fingerprint is 63:e9:66:d0:5c:06:2c:1d:5c:95:33:c8:36:92:30:49.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '192.168.1.10' (ECDSA) to the list of known hosts.

root@192.168.1.10's password:

#

At this point, you’re connected to the simulated node. Run uname -a as an example. You should see the following output, indicating that you’re connected to a RISC-V system:

# uname -a

Linux buildroot 4.12.0-rc2 #1 Fri Aug 4 03:44:55 UTC 2017 riscv64 GNU/Linux

Now you can run programs on the simulated node, as you would with a real machine. For an example workload (running YCSB against Memcached on the simulated node) or to run a larger 8-node simulation, see the full FireSim Demo v1.0 on Amazon EC2 F1 demo instructions.

Finally, when you are finished, you can shut down the simulated node by running the following command from within it:

# poweroff

You can confirm that the simulation has ended by running screen -ls, which should now report that there are no detached screens.

Future plans

At Berkeley, we’re planning to keep improving the FireSim platform to enable our own research in future data center architectures, like FireBox. The FireSim platform will eventually support more sophisticated processors, custom accelerators (such as Hwacha), network models, and peripherals, in addition to scaling to larger numbers of FPGAs. In the future, we’ll open source the entire platform, including Midas, the tool used to transform RTL into FPGA simulators, allowing users to modify any part of the hardware/software stack. Follow @firesimproject on Twitter to stay tuned to future FireSim updates.

Acknowledgements

FireSim is the joint work of many students and faculty at Berkeley: Sagar Karandikar, Donggyu Kim, Howard Mao, David Biancolin, Jack Koenig, Jonathan Bachrach, and Krste Asanović. This work is partially funded by AWS through the RISE Lab, by the Intel Science and Technology Center for Agile HW Design, and by ASPIRE Lab sponsors and affiliates Intel, Google, HPE, Huawei, NVIDIA, and SK hynix.

The Data Tinder Collects, Saves, and Uses

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/the_data_tinder.html

Under European law, service providers like Tinder are required to show users what information they have on them when requested. This author requested, and this is what she received:

Some 800 pages came back containing information such as my Facebook “likes,” my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened…the list goes on.

“I am horrified but absolutely not surprised by this amount of data,” said Olivier Keyes, a data scientist at the University of Washington. “Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!”

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn’t the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

“You are lured into giving away all this information,” says Luke Stark, a digital technology sociologist at Dartmouth University. “Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can’t feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality.”

Reading through the 1,700 Tinder messages I’ve sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year’s Day, and then ghosted 16 of them.

“What you are describing is called secondary implicit disclosed information,” explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. “Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers’ data is being traded and transacted for the purpose of advertising.”

Tinder’s privacy policy clearly states your data may be used to deliver “targeted advertising.”

It’s not Tinder. Surveillance is the business model of the Internet. Everyone does this.

The Weather Station and the eclipse

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/weather-station-eclipse/

As everyone knows, one of the problems with the weather is that it can be difficult to predict a long time in advance. In the UK we’ve had stormy conditions for weeks but, of course, now that I’ve finished my lightning detector, everything has calmed down. If you’re planning to make scientific measurements of a particular phenomenon, patience is often required.

Oracle Weather Station

Wake STEM ECH get ready to safely observe the eclipse

In the path of the eclipse

Fortunately, this wasn’t a problem for Mr Burgess and his students at Wake STEM Early College High School in Raleigh, North Carolina, USA. They knew exactly when the event they were interested in studying was going to occur: they were going to use their Raspberry Pi Oracle Weather Station to monitor the progress of the 2017 solar eclipse.

Wake STEM EC HS on Twitter

Through the @Celestron telescope #Eclipse2017 @WCPSS via @stemburgess

Measuring the temperature drop

The Raspberry Pi Oracle Weather Stations are always active and recording data, so all the students needed to do was check that everything was connected and working. That left them free to enjoy the eclipse, and take some amazing pictures like the one above.

You can see from the data how the changes in temperature lag behind the solar events – this makes sense, as it takes a while for the air to cool down. When the sun starts to return, the temperature rise continues on its pre-eclipse trajectory.

Oracle Weather Station

Weather station data 21st Aug: the yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Reading Mr Burgess’ description, I’m feeling rather jealous. Being in the path of the Eclipse sounds amazing: “In North Carolina we experienced 93% coverage, so a lot of sunlight was still shining, but the landscape took on an eerie look. And there was a cool wind like you’d experience at dusk, not at 2:30 pm on a hot summer day. I was amazed at the significant drop in temperature that occurred in a small time frame.”

Temperature drop during Eclipse Oracle Weather Station.

Close up of data showing temperature drop as recorded by the Raspberry Pi Oracle Weather Station. The yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

 Weather Station in the classroom

“I’ve been preparing for the solar eclipse for almost two years, with the weather station arriving early last school year. I did not think about temperature data until I read about citizen scientists on a NASA website,” explains Mr Burgess, who is now in his second year of working with the Raspberry Pi Oracle Weather Station. Around 120 ninth-grade students (ages 14-15) have been involved with the project so far. “I’ve found that students who don’t have a strong interest in meteorology find it interesting to look at real data and figure out trends.”

Wake STEM EC Raspberry Pi Oracle Weather Station installation

As many schools have discovered, Mr Burgess found that the biggest challenge with the Weather Station project “was finding a suitable place to install the weather station in a place that could get power and Ethernet”. To help with this problem, we’ve recently added two new guides to help with installing the wind sensors outside and using WiFi to connect the kit to the Internet.

Raspberry Pi Oracle Weather Station

If you want to keep up to date with all the latest Raspberry Pi Oracle Weather Station activities undertaken by our network of schools around the world, make sure you regularly check our weather station forum. Meanwhile, everyone at Wake STEM ECH is already starting to plan for their next eclipse on Monday, April 8, 2024. I wonder if they’d like some help with their Weather Station?

The post The Weather Station and the eclipse appeared first on Raspberry Pi.

[$] The supposed decline of copyleft

Post Syndicated from jake original https://lwn.net/Articles/731722/rss

At DebConf17, John Sullivan, the executive director of the FSF, gave a talk on the supposed decline of the use of copyleft licenses in free-software projects. In his presentation, Sullivan questioned the notion that permissive licenses, like the BSD or MIT licenses, are gaining ground at the expense of the traditionally dominant copyleft licenses from the FSF. While there does seem to be a rise in the use of permissive licenses in general, there are several possible explanations for the phenomenon.

Thomas and Ed become a RealLifeDoodle on the ISS

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/astro-pi-reallifedoodle/

Thanks to the very talented sooperdavid, creator of some of the wonderful animations known as RealLifeDoodles, Thomas Pesquet and Astro Pi Ed have been turned into one of the cutest videos on the internet.

Watch the space pi GIF by sooperdave on Gfycat.

And RealLifeDoodles aaaaare?

Thanks to the power of viral video, many will be aware of the ongoing Real Life Doodle phenomenon. Wait, you’re not aware?

Oh. Well, let me explain it to you.

Taking often comical video clips, those with a know-how and skill level that outweighs my own in spades add faces and emotions to inanimate objects, creating what the social media world refers to as a Real Life Doodle. From disappointed exercise balls to cannibalistic piles of leaves, these video clips are both cute and sometimes, though thankfully not always, a little heartbreaking.

Watch the letmegofree GIF by sooperdave on Gfycat.

Our own RealLifeDoodle

A few months back, when Programme Manager Dave Honess, better known to many as SpaceDave, sent me these Astro Pi videos to upload to YouTube, a small plan hatched in my brain. For in the midst of the video, and pointed out to me by SpaceDave – “I kind of love the way he just lets the unit drop out of shot” – was the most adorable sight as poor Ed drifted off into the great unknown of the ISS. Finding that I have this odd ability to consider many inanimate objects as ‘cute’, I wanted to see whether we could turn poor Ed into a RealLifeDoodle.

Heading to the Reddit RealLifeDoodle subreddit, I sent moderator sooperdavid a private message, asking if he’d be so kind as to bring our beloved Ed to life.

Yesterday, our dream came true!

Astro Pi

Unless you’re new to the world of the Raspberry Pi blog (in which case, welcome!), you’ll probably know about the Astro Pi Challenge. But for those who are unaware, let me break it down for you.

Raspberry Pi RealLifeDoodle

In 2015, two weeks before British ESA Astronaut Tim Peake journeyed to the International Space Station, two Raspberry Pis were sent up to await his arrival. Clad in 6063-grade aluminium flight cases and fitted with their own Sense HATs and camera modules, the Astro Pis Ed and Izzy were ready to receive the winning codes from school children in the UK. The following year, with the Astro Pis this time maintained by French ESA Astronaut Thomas Pesquet, children from every ESA member country got involved to send even more code to the ISS.

Get involved

Will there be another Astro Pi Challenge? Well, I just asked SpaceDave and he didn’t say no! So why not get yourself into training now and try out some of our space-themed free resources, including our 3D-print your own Astro Pi case tutorial? You can also follow the adventures of Ed and Izzy in our brilliant Story of Astro Pi cartoons.

Raspberry Pi RealLifeDoodle

And if you’re quick, there’s still time to take part in tomorrow’s Moonhack! Check out their website for more information and help the team at Code Club Australia beat their own world record!

The post Thomas and Ed become a RealLifeDoodle on the ISS appeared first on Raspberry Pi.

Nazis, are bad

Post Syndicated from Eevee original https://eev.ee/blog/2017/08/13/nazis-are-bad/

Anonymous asks:

Could you talk about something related to the management/moderation and growth of online communities? IOW your thoughts on online community management, if any.

I think you’ve tweeted about this stuff in the past so I suspect you have thoughts on this, but if not, again, feel free to just blog about … anything 🙂

Oh, I think I have some stuff to say about community management, in light of recent events. None of it is anything that hasn’t already been said elsewhere, but I have to get this out.

Hopefully the content warning is implicit in the title.


I am frustrated.

I’ve gone on before about a particularly bothersome phenomenon that hurts a lot of small online communities: often, people are willing to tolerate the misery of others in a community, but then get up in arms when someone pushes back. Someone makes a lot of off-hand, off-color comments about women? Uses a lot of dog-whistle terms? Eh, they’re not bothering anyone, or at least not bothering me. Someone else gets tired of it and tells them to knock it off? Whoa there! Now we have the appearance of conflict, which is unacceptable, and people will turn on the person who’s pissed off — even though they’ve been at the butt end of an invisible conflict for who knows how long. The appearance of peace is paramount, even if it means a large chunk of the population is quietly miserable.

Okay, so now, imagine that on a vastly larger scale, and also those annoying people who know how to skirt the rules are Nazis.


The label “Nazi” gets thrown around a lot lately, probably far too easily. But when I see a group of people doing the Hitler salute, waving large Nazi flags, wearing Nazi armbands styled after the SS, well… if the shoe fits, right? I suppose they might have flown across the country to join a torch-bearing mob ironically, but if so, the joke is going way over my head. (Was the murder ironic, too?) Maybe they’re not Nazis in the sense that the original party doesn’t exist any more, but for ease of writing, let’s refer to “someone who espouses Nazi ideology and deliberately bears a number of Nazi symbols” as, well, “a Nazi”.

This isn’t a new thing, either; I’ve stumbled upon any number of Twitter accounts that are decorated in Nazi regalia. I suppose the trouble arises when perfectly innocent members of the alt-right get unfairly labelled as Nazis.

But hang on; this march was called “Unite the Right” and was intended to bring together various far right sub-groups. So what does their choice of aesthetic say about those sub-groups? I haven’t heard, say, alt-right coiner Richard Spencer denounce the use of Nazi symbology — extra notable since he was fucking there and apparently didn’t care to discourage it.


And so begins the rule-skirting. “Nazi” is definitely overused, but even using it to describe white supremacists who make not-so-subtle nods to Hitler is likely to earn you some sarcastic derailment. A Nazi? Oh, so is everyone you don’t like and who wants to establish a white ethno state a Nazi?

Calling someone a Nazi — or even a white supremacist — is an attack, you see. Merely expressing the desire that people of color not exist is perfectly peaceful, but identifying the sentiment for what it is causes visible discord, which is unacceptable.

These clowns even know this sort of thing and strategize around it. Or, try, at least. Maybe it wasn’t that successful this weekend — though flicking through Charlottesville headlines now, they seem to be relatively tame in how they refer to the ralliers.

I’m reminded of a group of furries — the alt-furries — who have been espousing white supremacy and wearing red armbands with a white circle containing a black… pawprint. Ah, yes, that’s completely different.


So, what to do about this?

“Ignore them” is a popular option, often espoused to bullied children by parents who have never been bullied, shortly before they resume complaining about passive-aggressive office politics. The trouble with ignoring them is that, just like in smaller communities, they have a tendency to fester. They take over large chunks of influential Internet surface area like 4chan and Reddit; they help get an inept buffoon elected; and then they start to have torch-bearing rallies and run people over with cars.

4chan illustrates a kind of corollary here. Anyone who’s steeped in Internet Culture™ is surely familiar with 4chan; I was never a regular visitor, but it had enough influence that I was still aware of it and some of its culture. It was always thick with irony, which grew into a sort of ironic detachment — perhaps one of the major sources of the recurring online trope that having feelings is bad — which proceeded into ironic racism.

And now the ironic racism is indistinguishable from actual racism, as tends to be the case. Do they “actually” “mean it”, or are they just trying to get a rise out of people? What the hell is unironic racism if not trying to get a rise out of people? What difference is there to onlookers, especially as they move to become increasingly involved with politics?

“It’s just a joke” and “it was just a thoughtless comment” are exceptionally common defenses made by people desperate to preserve the illusion of harmony, but the strain of overt white supremacy currently running rampant through the US was built on those excuses.


The other favored option is to debate them, to defeat their ideas with better ideas.

Well, hang on. What are their ideas, again? I hear they were chanting stuff like “go back to Africa” and “fuck you, faggots”. Given that this was an overtly political rally (and again, the Nazi fucking regalia), I don’t think it’s a far cry to describe their ideas as “let’s get rid of black people and queer folks”.

This is an underlying proposition: that white supremacy is inherently violent. After all, if the alt-right seized total political power, what would they do with it? If I asked the same question of Democrats or Republicans, I’d imagine answers like “universal health care” or “screw over poor people”. But people whose primary goal is to have a country full of only white folks? What are they going to do, politely ask everyone else to leave? They’re invoking the memory of people who committed genocide and also tried to take over the fucking world. They are outright saying, these are the people we look up to, this is who we think had a great idea.

How, precisely, does one defeat these ideas with rational debate?

Because the underlying core philosophy beneath all this is: “it would be good for me if everything were about me”. And that’s true! (Well, it probably wouldn’t work out how they imagine in practice, but it’s true enough.) Consider that slavery is probably fantastic if you’re the one with the slaves; the issue is that it’s reprehensible, not that the very notion contains some kind of 101-level logical fallacy. That’s probably why we had a fucking war over it instead of hashing it out over brunch.

…except we did hash it out over brunch once, and the result was that slavery was still allowed but slaves only counted as 60% of a person for the sake of counting how much political power states got. So that’s how rational debate worked out. I’m sure the slaves were thrilled with that progress.


That really only leaves pushing back, which raises the question of how to push back.

And, I don’t know. Pushing back is much harder in spaces you don’t control, spaces you’re already struggling to justify your own presence in. For most people, that’s most spaces. It’s made all the harder by that tendency to preserve illusory peace; even the tamest request that someone knock off some odious behavior can be met by pushback, even by third parties.

At the same time, I’m aware that white supremacists prey on disillusioned young white dudes who feel like they don’t fit in, who were promised the world and inherited kind of a mess. Does criticism drive them further away? The alt-right also opposes “political correctness”, i.e. “not being a fucking asshole”.

God knows we all suck at this kind of behavior correction, even within our own in-groups. Fandoms have become almost ridiculously vicious as platforms like Twitter and Tumblr amplify individual anger to deafening levels. It probably doesn’t help that we’re all just exhausted, that every new fuck-up feels like it bears the same weight as the last hundred combined.

This is the part where I admit I don’t know anything about people and don’t have any easy answers. Surprise!


The other alternative is, well, punching Nazis.

That meme kind of haunts me. It raises really fucking complicated questions about when violence is acceptable, in a culture that’s completely incapable of answering them.

America’s relationship to violence is so bizarre and two-faced as to be almost incomprehensible. We worship it. We have the biggest military in the world by an almost comical margin. It’s fairly mainstream to own deadly weapons for the express stated purpose of armed revolution against the government, should that become necessary, where “necessary” is left ominously undefined. Our movies are about explosions and beating up bad guys; our video games are about explosions and shooting bad guys. We fantasize about solving foreign policy problems by nuking someone — hell, our talking heads are currently in polite discussion about whether we should nuke North Korea and annihilate up to twenty-five million people, as punishment for daring to have the bomb that only we’re allowed to have.

But… violence is bad.

That’s about as far as the other side of the coin gets. It’s bad. We condemn it in the strongest possible terms. Also, guess who we bombed today?

I observe that the one time Nazis were a serious threat, America was happy to let them try to take over the world until their allies finally showed up on our back porch.

Maybe I don’t understand what “violence” means. In a quest to find out why people are talking about “leftist violence” lately, I found a National Review article from May that twice suggests blocking traffic is a form of violence. Anarchists have smashed some windows and set a couple fires at protests this year — and, hey, please knock that crap off? — which is called violence against, I guess, Starbucks. Black Lives Matter could be throwing a birthday party and Twitter would still be abuzz with people calling them thugs.

Meanwhile, there’s a trend of murderers with increasingly overt links to the alt-right, and everyone is still handling them with kid gloves. First it was murders by people repeating their talking points; now it’s the culmination of a torches-and-pitchforks mob. (Ah, sorry, not pitchforks; assault rifles.) And we still get this incredibly bizarre both-sides-ism, a White House that refers to the people who didn’t murder anyone as “just as violent if not more so”.


Should you punch Nazis? I don’t know. All I know is that I’m extremely dissatisfied with discourse that’s extremely alarmed by hypothetical punches — far more mundane than what you’d see after a sporting event — but treats a push for ethnic cleansing as a mere difference of opinion.

The equivalent to a punch in an online space is probably banning, which is almost laughable in comparison. It doesn’t cause physical harm, but it is a use of concrete force. Doesn’t pose quite the same moral quandary, though.

Somewhere in the middle is the currently popular pastime of doxxing (doxxxxxxing) people spotted at the rally in an attempt to get them fired or whatever. Frankly, that skeeves me out, though apparently not enough that I’m directly chastising anyone for it.


We aren’t really equipped, as a society, to deal with memetic threats. We aren’t even equipped to determine what they are. We had a fucking world war over this, and now people are outright saying “hey I’m like those people we went and killed a lot in that world war” and we give them interviews and compliment their fashion sense.

A looming question is always, what if they then do it to you? What if people try to get you fired, to punch you for your beliefs?

I think about that a lot, and then I remember that it’s perfectly legal to fire someone for being gay in half the country. (Courts are currently wrangling whether Title VII forbids this, but with the current administration, I’m not optimistic.) I know people who’ve been fired for coming out as trans. I doubt I’d have to look very far to find someone who’s been punched for either reason.

And these aren’t even beliefs; they’re just properties of a person. You can stop being a white supremacist, one of those people yelling “fuck you, faggots”.

So I have to recuse myself from this asinine question, because I can’t fairly judge the risk of retaliation when it already happens to people I care about.

Meanwhile, if a white supremacist does get punched, I absolutely still want my tax dollars to pay for their universal healthcare.


The same wrinkle comes up with free speech, which is paramount.

The ACLU reminds us that the First Amendment “protects vile, hateful, and ignorant speech”. I think they’ve forgotten that that’s a side effect, not the goal. No one sat down and suggested that protecting vile speech was some kind of noble cause, yet that’s how we seem to be treating it.

The point was to avoid a situation where the government is arbitrarily deciding what qualifies as vile, hateful, and ignorant, and was using that power to eliminate ideas distasteful to politicians. You know, like, hypothetically, if they interrogated and jailed a bunch of people for supporting the wrong economic system. Or convicted someone under the Espionage Act for opposing the draft. (Hey, that’s where the “shouting fire in a crowded theater” line comes from.)

But these are ideas that are already in the government. Bannon, a man who was chair of a news organization he himself called “the platform for the alt-right”, has the President’s ear! How much more mainstream can you get?

So again I’m having a little trouble balancing “we need to defend the free speech of white supremacists or risk losing it for everyone” against “we fairly recently were ferreting out communists and the lingering public perception is that communists are scary, not that the government is”.


This isn’t to say that freedom of speech is bad, only that the way we talk about it has become fanatical to the point of absurdity. We love it so much that we turn around and try to apply it to corporations, to platforms, to communities, to interpersonal relationships.

Look at 4chan. It’s completely public and anonymous; you only get banned for putting the functioning of the site itself in jeopardy. Nothing is stopping a larger group of people from joining its politics board and tilting sentiment the other way — except that the current population is so odious that no one wants to be around them. Everyone else has evaporated away, as tends to happen.

Free speech is great for a government, to prevent quashing politics that threaten the status quo (except it’s a joke and they’ll do it anyway). People can’t very readily just bail when the government doesn’t like them, anyway. It’s also nice to keep in mind to some degree for ubiquitous platforms. But the smaller you go, the easier it is for people to evaporate away, and the faster pure free speech will turn the place to crap. You’ll be left only with people who care about nothing.


At the very least, it seems clear that the goal of white supremacists is some form of destabilization, of disruption to the fabric of a community for purely selfish purposes. And those are the kinds of people you want to get rid of as quickly as possible.

Usually this is hard, because they act just nicely enough to create some plausible deniability. But damn, if someone is outright telling you they love Hitler, maybe skip the principled hand-wringing and eject them.

Taking the first step on the journey

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/taking-first-step-journey/

This column is from The MagPi issue 58. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

About five years ago was the first time I unboxed a Raspberry Pi. I hooked it up to our living room television and made space on the TV stand for an old USB keyboard and mouse. Watching the $35 computer boot up for the first time impressed me, and I had a feeling it was a big deal, but I’ll admit that I had no idea how much of a phenomenon Raspberry Pi would become. I had no idea how large the community would grow. I had no idea how much my life would be changed from that moment on. And it all started with a simple first step: booting it up.

Matt Richardson on Twitter

Finally a few minutes to experiment with @Raspberry_Pi! So far, I’m rather impressed!

The key to the success of Raspberry Pi as a computer – and, in turn, a community and a charitable foundation – is that there’s a low barrier to the first step you take with it. The low price is a big reason for that. Whether or not to try Raspberry Pi is not a difficult decision. Since it’s so affordable, you can just give it a go, and see how you get along.

The pressure is off

Linus Torvalds, the creator of the Linux operating system kernel, talked about this in a BBC News interview in 2012. He explained that a lot of people might take the first step with Raspberry Pi, but not everyone will carry on with it. But getting more people to take that first step of turning it on means there are more people who potentially will be impacted by the technology. Torvalds said:

I find things like Raspberry Pi to be an important thing: trying to make it possible for a wider group of people to tinker with computers. And making the computers cheap enough that you really can not only afford the hardware at a big scale, but perhaps more important, also afford failure.

In other words, if things don’t work out with you and your Raspberry Pi, it’s not a big deal, since it’s such an affordable computer.

In this together

Of course, we hope that more and more people who boot up a Raspberry Pi for the first time will decide to continue experimenting, creating, and learning with it. Thanks to improvements to the hardware, the Raspbian operating system, and free software packages, it’s constantly becoming easier to do many amazing things with this little computer. And our continually growing community means you’re not alone on this journey. These improvements and growth over the past few years hopefully encourage more people who boot up Raspberry Pis to keep exploring.

The first step

However, the important thing is that people are given the opportunity to take that first step, especially young people. Young learners are at a critical age, and something like the Raspberry Pi can have an enormously positive impact on the rest of their lives. It’s a major reason why our free resources are aimed at young learners. It’s also why we train educators all over the world for free. And encouraging youngsters to take their first step with Raspberry Pi could not only make a positive difference in their lives, but also in society at large.

With the affordable computational power, excellent software, supportive community, and free resources, you’re given everything you need to make a big impact in the world when you boot up a Raspberry Pi for the first time. That moment could be step one of ten, or one of ten thousand, but it’s up to you to take that first step.

Now you!

Learning and making things with the Pi is incredibly easy, and we’ve created numerous resources and tutorials to help you along. First of all, check out our hardware guide to make sure you’re all set up. Next, you can try out Scratch and Python, our favourite programming languages. Feeling creative? Learn to code music with Sonic Pi, or make visual art with Processing. Ready to control the real world with your Pi? Create a reaction game, or an LED adornment for your clothing. Maybe you’d like to do some science with the help of our Sense HAT, or become a film maker with our camera?

You can do all this with the Raspberry Pi, and so much more. The possibilities are as limitless as your imagination. So where do you want to start?

The post Taking the first step on the journey appeared first on Raspberry Pi.

Book Review: Twitter and Tear Gas, by Zeynep Tufekci

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/07/book_review_twi.html

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as “slacktivism” or “clicktivism,” the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at the University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet — and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book’s central topic.

“The Power and Fragility of Networked Protest” is the book’s subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it’s a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his “I Have a Dream” speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That’s the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous — to sometimes disastrous results. Says Tufekci: “The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next.” That makes them less able to respond to government counters, change their tactics — a phenomenon Tufekci calls “tactical freeze” — make movement-wide decisions, and survive over the long haul.

Tufekci isn’t arguing that modern protests are necessarily less effective, but that they’re different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a “signal” that corresponds to one of several underlying “capacities.” There’s narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There’s disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there’s electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various “affordances” of modern Internet technologies, particularly social media, the same signal — a protest of a given size — reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to the 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world’s attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it’s not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East — particularly Egypt and her native Turkey — Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and, despite being published by the intimidating Yale University Press, her book is more mass-market than academic. What rigor there is is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or — in some ways worse — leave it to unfettered for-profit corporations. When you’re reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You’ll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: “Technology is neither good nor bad; nor is it neutral.” This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick — dissidents, hackers, criminals, marginalized groups — are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It’s still an open question who will gain the upper hand in the long term, but Tufekci’s book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on Amazon.com.

Analyze OpenFDA Data in R with Amazon S3 and Amazon Athena

Post Syndicated from Ryan Hood original https://aws.amazon.com/blogs/big-data/analyze-openfda-data-in-r-with-amazon-s3-and-amazon-athena/

One of the great benefits of Amazon S3 is the ability to host, share, or consume public data sets. This provides transparency into data to which an external data scientist or developer might not normally have access. By exposing the data to the public, you can glean many insights that would have been difficult with a data silo.

The openFDA project creates easy access to the high value, high priority, and public access data of the Food and Drug Administration (FDA). The data has been formatted and documented in consumer-friendly standards. Critical data related to drugs, devices, and food has been harmonized and can easily be called by application developers and researchers via API calls. OpenFDA has published two whitepapers that drill into the technical underpinnings of the API infrastructure as well as how to properly analyze the data in R. In addition, FDA makes openFDA data available on S3 in raw format.

In this post, I show how to use S3, Amazon EMR, and Amazon Athena to analyze the drug adverse events dataset. A drug adverse event is an undesirable experience associated with the use of a drug, including serious drug side effects, product use errors, product quality problems, and therapeutic failures.

Data considerations

Keep in mind that this data does have limitations. In the United States, these adverse events are submitted to the FDA voluntarily by consumers, so there may not be reports for all events that occurred. There is no certainty that the reported event was actually due to the product. The FDA does not require that a causal relationship between a product and event be proven, and reports do not always contain the detail necessary to evaluate an event. Because of this, there is no way to identify the true number of events. The important takeaway from all this is that the information contained in this data has not been verified to produce cause-and-effect relationships. Despite this disclaimer, many interesting insights and value can be derived from the data to accelerate drug safety research.

Data analysis using SQL

For application developers who want to perform targeted searching and lookups, the API endpoints provided by the openFDA project are “ready to go” for software integration using a standard API powered by Elasticsearch, NodeJS, and Docker. However, for data analysis purposes, it is often easier to work with the data using SQL and statistical packages that expect a SQL table structure. For large-scale analysis, APIs often have query limits, such as 5000 records per query. This can cause extra work for data scientists who want to analyze the full dataset instead of small subsets of data.

To address the concern of requiring all the data in a single dataset, the openFDA project released the full 100 GB of harmonized data files that back the openFDA project onto S3. Athena is an interactive query service that makes it easy to analyze data in S3 using standard SQL. It’s a quick and easy way to answer your questions about adverse events and aspirin that does not require you to spin up databases or servers.

While you could point tools directly at the openFDA S3 files, you will get greatly improved performance and easier use of the data by following some of the preparation steps later in this post.

Architecture

This post explains how to use the following architecture to take the raw data provided by openFDA, leverage several AWS services, and derive meaning from the underlying data.

Steps:

  1. Load the openFDA /drug/event dataset into Spark and convert it to gzip to allow for streaming.
  2. Transform the data in Spark and save the results as a Parquet file in S3.
  3. Query the S3 Parquet file with Athena.
  4. Perform visualization and analysis of the data in R and Python on Amazon EC2.

Optimizing public data sets: A primer on data preparation

Those who want to jump right into preparing the files for Athena may want to skip ahead to the next section.

Transforming, or pre-processing, files is a common task for using many public data sets. Before you jump into the specific steps for transforming the openFDA data files into a format optimized for Athena, I thought it would be worthwhile to provide a quick exploration on the problem.

Making a dataset in S3 efficiently accessible with minimal transformation for the end user has two key elements:

  1. Partitioning the data into objects that contain a complete part of the data (such as data created within a specific month).
  2. Using file formats that make it easy for applications to locate subsets of data (for example, gzip, Parquet, ORC, etc.).

With these two key elements in mind, you can now apply transformations to the openFDA adverse event data to prepare it for Athena. You might find the data techniques employed in this post to be applicable to many of the questions you might want to ask of the public data sets stored in Amazon S3.
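As a small illustration of the first element, the following Python sketch uses pyarrow to write a toy table partitioned by a hypothetical receipt_year column; queries that touch only a few years can then skip the other objects entirely:

import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical toy table; in practice this would be the openFDA event records.
table = pa.table({
    "receipt_year": ["2014", "2015", "2015"],
    "safetyreportid": ["A1", "B2", "C3"],
    "reactionmeddrapt": ["NAUSEA", "GASTROINTESTINAL HAEMORRHAGE", "HEADACHE"],
})

# Writes output/partitioned/receipt_year=2014/..., receipt_year=2015/..., and so on,
# which lets query engines prune partitions a query does not need.
pq.write_to_dataset(table, root_path="output/partitioned", partition_cols=["receipt_year"])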

Before you get started, I encourage those who are interested in doing deeper healthcare analysis on AWS to make sure that you first read the AWS HIPAA Compliance whitepaper. This covers the information necessary for processing and storing protected health information (PHI).

Also, the adverse event analysis shown for aspirin is strictly for demonstration purposes and should not be used for any real decision or taken as anything other than a demonstration of AWS capabilities. However, there have been robust case studies published that have explored a causal relationship between aspirin and adverse reactions using OpenFDA data. If you are seeking research on aspirin or its risks, visit organizations such as the Centers for Disease Control and Prevention (CDC) or the Institute of Medicine (IOM).

Preparing data for Athena

For this walkthrough, you will start with the FDA adverse events dataset, which is stored as JSON files within zip archives on S3. You then convert it to Parquet for analysis. Why do you need to convert it? The original data download is stored in objects that are partitioned by quarter.

Here is a small sample of what you find in the adverse events (/drug/event) section of the openFDA website.

If you were looking for events that happened in a specific quarter, this is not a bad solution. For most other scenarios, such as looking across the full history of aspirin events, it requires you to access a lot of data that you won’t need. The zip file format is not ideal for using data in place because zip readers must have random access to the file, which means the data can’t be streamed. Additionally, the zip files contain large JSON objects.

To read the data in these JSON files, a streaming JSON decoder must be used or a computer with a significant amount of RAM must decode the JSON. Opening up these files for public consumption is a great start. However, you still need to prepare the data with a few lines of Spark code so that the JSON can be streamed.
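For context, a rough Python sketch of record-by-record streaming with the third-party ijson package might look like the following (the file name is a placeholder); the Spark approach below spares you from writing this kind of code yourself:

import ijson

# Stream the "results" array of one extracted openFDA JSON file without
# loading the whole document into memory. Requires the ijson package.
with open("drug-event-0001-of-0004.json", "rb") as f:   # placeholder file name
    for i, event in enumerate(ijson.items(f, "results.item")):
        # Each "event" is a single adverse event report as a Python dict.
        if i < 3:
            print(event.get("safetyreportid"))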

Step 1:  Convert the file types

Using Apache Spark on EMR, you can extract all of the zip files and pull out the events from the JSON files. To do this, use the Scala code below to decompress the zip files and create text files. In addition, compress the JSON files with gzip to improve Spark's performance and reduce your overall storage footprint. The Scala code can be run in either the Spark Shell or in an Apache Zeppelin notebook on your EMR cluster.

If you are unfamiliar with either Apache Zeppelin or the Spark Shell, the following posts serve as great references:

 

import scala.io.Source
import java.util.zip.ZipInputStream
import org.apache.spark.input.PortableDataStream
import org.apache.hadoop.io.compress.GzipCodec

// Input Directory
val inputFile = "s3://download.open.fda.gov/drug/event/2015q4/*.json.zip";

// Output Directory
val outputDir = "s3://{YOUR OUTPUT BUCKET HERE}/output/2015q4/";

// Extract zip files from the input directory
val zipFiles = sc.binaryFiles(inputFile);

// Process each zip file to extract the JSON as a text file and save it
// in the output directory
zipFiles.flatMap((file: (String, PortableDataStream)) => {
    val zipStream = new ZipInputStream(file._2.open)
    val entry = zipStream.getNextEntry
    val iter = Source.fromInputStream(zipStream).getLines
    iter
}).map(_.replaceAll("\\s+", "")).saveAsTextFile(outputDir, classOf[GzipCodec])

Step 2:  Transform JSON into Parquet

With just a few more lines of Scala code, you can use Spark’s abstractions to convert the JSON into a Spark DataFrame and then export the data back to S3 in Parquet format.

Spark's JSON reader expects JSON Lines format by default, which is why the code below loads each file with wholeTextFiles so that it is parsed correctly into a DataFrame.

// Output Parquet directory
val outputDir = "s3://{YOUR OUTPUT BUCKET NAME}/output/drugevents"
// Input json file
val inputJson = "s3://{YOUR OUTPUT BUCKET NAME}/output/2015q4/*"
// Load dataframe from json file multiline 
val df = spark.read.json(sc.wholeTextFiles(inputJson).values)
// Extract results from dataframe
val results = df.select("results")
// Save it to Parquet
results.write.parquet(outputDir)

Step 3:  Create an Athena table

With the data cleanly prepared and stored in S3 using the Parquet format, you can now place an Athena table on top of it to get a better understanding of the underlying data.

Because the openFDA data structure incorporates several layers of nesting, it can be a complex process to manually derive the underlying schema in a Hive-compatible format. To shorten this process, you can load the top row of the DataFrame from the previous step into a Hive table within Zeppelin and then extract the "create table" statement from Spark SQL.

results.createOrReplaceTempView("data")

val top1 = spark.sql("select * from data tablesample(1 rows)")

top1.write.format("parquet").mode("overwrite").saveAsTable("drugevents")

val show_cmd = spark.sql("show create table drugevents").show(1, false)

This returns a "create table" statement that you can almost paste directly into the Athena console. Make some small modifications (adding the word "external" and replacing "using" with "stored as"), and then execute the code in the Athena query editor. The table is created.

For the openFDA data, the DDL returns all string fields, as the date format used in your dataset does not conform to the yyyy-mm-dd hh:mm:ss[.f...] format required by Hive. For your analysis, the string format works appropriately, but it would be possible to extend this code to use a Presto function to convert the strings into time stamps.

CREATE EXTERNAL TABLE  drugevents (
   companynumb  string, 
   safetyreportid  string, 
   safetyreportversion  string, 
   receiptdate  string, 
   patientagegroup  string, 
   patientdeathdate  string, 
   patientsex  string, 
   patientweight  string, 
   serious  string, 
   seriousnesscongenitalanomali  string, 
   seriousnessdeath  string, 
   seriousnessdisabling  string, 
   seriousnesshospitalization  string, 
   seriousnesslifethreatening  string, 
   seriousnessother  string, 
   actiondrug  string, 
   activesubstancename  string, 
   drugadditional  string, 
   drugadministrationroute  string, 
   drugcharacterization  string, 
   drugindication  string, 
   drugauthorizationnumb  string, 
   medicinalproduct  string, 
   drugdosageform  string, 
   drugdosagetext  string, 
   reactionoutcome  string, 
   reactionmeddrapt  string, 
   reactionmeddraversionpt  string)
STORED AS parquet
LOCATION
  's3://{YOUR TARGET BUCKET}/output/drugevents'

With the Athena table in place, you can start to explore the data by running ad hoc queries within Athena or doing more advanced statistical analysis in R.
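If you prefer to script those ad hoc queries, a minimal boto3 sketch is shown below; the 'fda' database name, the S3 output location, and the '%Y%m%d' format passed to Presto's date_parse are assumptions that you would adjust for your own setup:

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Example ad hoc query; date_parse converts the string receiptdate into a timestamp.
query = """
SELECT date_parse(receiptdate, '%Y%m%d') AS receipt_ts, serious
FROM fda.drugevents
LIMIT 10
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "fda"},
    ResultConfiguration={"OutputLocation": "s3://{YOUR TARGET BUCKET}/athena-results/"},
)

# Poll until the query reaches a terminal state.
query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query finished with state:", state)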

Using SQL and R to analyze adverse events

Using the openFDA data with Athena makes it very easy to translate your questions into SQL code and perform quick analysis on the data. After you have prepared the data for Athena, you can begin to explore the relationship between aspirin and adverse drug events, as an example. One of the most common metrics to measure adverse drug events is the Proportional Reporting Ratio (PRR). It is defined as:

PRR = (m/n) / ((M-m)/(N-n))

where:
m = number of reports with the drug and the event
n = number of reports with the drug
M = number of reports with the event in the database
N = number of reports in the database
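To make the formula concrete, here is a small Python sketch that computes the PRR from the four counts; the numbers are made up purely for illustration:

def proportional_reporting_ratio(m, n, M, N):
    """PRR = (m/n) / ((M-m)/(N-n)), using the counts defined above."""
    return (m / n) / ((M - m) / (N - n))

# Made-up example counts, purely for illustration:
m = 120      # reports with the drug and the event
n = 4000     # reports with the drug
M = 3000     # reports with the event in the database
N = 500000   # reports in the database
print(round(proportional_reporting_ratio(m, n, M, N), 2))   # about 5.17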

Gastrointestinal haemorrhage has the highest PRR of any reaction to aspirin when viewed in aggregate. One question you may want to ask is how the PRR has trended on a yearly basis for gastrointestinal haemorrhage since 2005.

Using the following query in Athena, you can see the PRR trend of “GASTROINTESTINAL HAEMORRHAGE” reactions with “ASPIRIN” since 2005:

with drug_and_event as 
(select rpad(receiptdate, 4, 'NA') as receipt_year
    , reactionmeddrapt
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_drug_and_event 
from fda.drugevents
where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and medicinalproduct = 'ASPIRIN'
     and reactionmeddrapt= 'GASTROINTESTINAL HAEMORRHAGE'
group by reactionmeddrapt, rpad(receiptdate, 4, 'NA') 
), reports_with_drug as 
(
select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_drug 
 from fda.drugevents 
 where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and medicinalproduct = 'ASPIRIN'
group by rpad(receiptdate, 4, 'NA') 
), reports_with_event as 
(
   select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as reports_with_event 
   from fda.drugevents
   where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
     and reactionmeddrapt= 'GASTROINTESTINAL HAEMORRHAGE'
   group by rpad(receiptdate, 4, 'NA')
), total_reports as 
(
   select rpad(receiptdate, 4, 'NA') as receipt_year
    , count(distinct (concat(safetyreportid,receiptdate,reactionmeddrapt))) as total_reports 
   from fda.drugevents
   where rpad(receiptdate,4,'NA') 
     between '2005' and '2015' 
   group by rpad(receiptdate, 4, 'NA')
)
select  drug_and_event.receipt_year, 
(1.0 * drug_and_event.reports_with_drug_and_event/reports_with_drug.reports_with_drug)/ (1.0 * (reports_with_event.reports_with_event- drug_and_event.reports_with_drug_and_event)/(total_reports.total_reports-reports_with_drug.reports_with_drug)) as prr
, drug_and_event.reports_with_drug_and_event
, reports_with_drug.reports_with_drug
, reports_with_event.reports_with_event
, total_reports.total_reports
from drug_and_event
    inner join reports_with_drug on  drug_and_event.receipt_year = reports_with_drug.receipt_year   
    inner join reports_with_event on  drug_and_event.receipt_year = reports_with_event.receipt_year
    inner join total_reports on  drug_and_event.receipt_year = total_reports.receipt_year
order by  drug_and_event.receipt_year


One nice feature of Athena is that you can quickly connect to it via R or any other tool that can use a JDBC driver to visualize the data and understand it more clearly.

With this quick R script that can be run in R Studio either locally or on an EC2 instance, you can create a visualization of the PRR and Reporting Odds Ratio (RoR) for “GASTROINTESTINAL HAEMORRHAGE” reactions from “ASPIRIN” since 2005 to better understand these trends.

# Connect to Athena (assumes the RJDBC driver object 'drv' was created earlier in the script)
conn <- dbConnect(drv, '<Your JDBC URL>', s3_staging_dir="<Your S3 Location>",
                  user=Sys.getenv("USER_NAME"), password=Sys.getenv("USER_PASSWORD"))

# Declare Adverse Event
adverseEvent <- "'GASTROINTESTINAL HAEMORRHAGE'"

# Build SQL Blocks
sqlFirst <- "SELECT rpad(receiptdate, 4, 'NA') as receipt_year, count(DISTINCT safetyreportid) as event_count FROM fda.drugevents WHERE rpad(receiptdate,4,'NA') between '2005' and '2015'"
sqlEnd <- "GROUP BY rpad(receiptdate, 4, 'NA') ORDER BY receipt_year"

# Extract Aspirin with adverse event counts
sql <- paste(sqlFirst,"AND medicinalproduct ='ASPIRIN' AND reactionmeddrapt=",adverseEvent, sqlEnd,sep=" ")
aspirinAdverseCount = dbGetQuery(conn,sql)

# Extract Aspirin counts
sql <- paste(sqlFirst,"AND medicinalproduct ='ASPIRIN'", sqlEnd,sep=" ")
aspirinCount = dbGetQuery(conn,sql)

# Extract adverse event counts
sql <- paste(sqlFirst,"AND reactionmeddrapt=",adverseEvent, sqlEnd,sep=" ")
adverseCount = dbGetQuery(conn,sql)

# All Drug Adverse event Counts
sql <- paste(sqlFirst, sqlEnd,sep=" ")
allDrugCount = dbGetQuery(conn,sql)

# Select correct rows
selAll =  allDrugCount$receipt_year == aspirinAdverseCount$receipt_year
selAspirin = aspirinCount$receipt_year == aspirinAdverseCount$receipt_year
selAdverse = adverseCount$receipt_year == aspirinAdverseCount$receipt_year

# Calculate Numbers
m <- c(aspirinAdverseCount$event_count)
n <- c(aspirinCount[selAspirin,2])
M <- c(adverseCount[selAdverse,2])
N <- c(allDrugCount[selAll,2])

# Calculate proportional reporting ratio
PRR = (m/n)/((M-m)/(N-n))

# Calculate reporting Odds Ratio
d = n-m
D = N-M
ROR = (m/d)/(M/D)

# Plot the PRR and ROR
g_range <- range(0, PRR, ROR)
g_range[2] <- g_range[2] + 3
yearLen <- length(aspirinAdverseCount$receipt_year)
ax <- aspirinAdverseCount$receipt_year   # axis labels: the years in the result set
plot(PRR, type="o", col="blue", ylim=g_range, axes=FALSE, ann=FALSE)
axis(1, 1:yearLen, lab=ax)
axis(2, las=1, at=1*0:g_range[2])
box()
lines(ROR, type="o", pch=22, lty=2, col="red")

As you can see, the PRR and RoR have both remained fairly steady over this time range. With the R Script above, all you need to do is change the adverseEvent variable from GASTROINTESTINAL HAEMORRHAGE to another type of reaction to analyze and compare those trends.

Summary

In this walkthrough:

  • You used a Scala script on EMR to convert the openFDA zip files to gzip.
  • You then transformed the JSON blobs into flattened Parquet files using Spark on EMR.
  • You created an Athena DDL so that you could query these Parquet files residing in S3.
  • Finally, you pointed the R package at the Athena table to analyze the data without pulling it into a database or creating your own servers.

If you have questions or suggestions, please comment below.


Next Steps

Take your skills to the next level. Learn how to optimize Amazon S3 for an architecture commonly used to enable genomic data analysis. Also, be sure to read more about running R on Amazon Athena.


About the Authors

Ryan Hood is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys watching the Cubs win the World Series and attempting to Sous-vide anything he can find in his refrigerator.

Vikram Anand is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys playing soccer and watching the NFL & European Soccer leagues.

Dave Rocamora is a Solutions Architect at Amazon Web Services on the Open Data team. Dave is based in Seattle and when he is not opening data, he enjoys biking and drinking coffee outside.

Burner laptops for DEF CON

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/burner-laptops-for-def-con.html

Hacker summer camp (Defcon, Blackhat, BSidesLV) is upon us, so I thought I’d write up some quick notes about bringing a “burner” laptop. A Chromebook is your best choice in terms of security, but I need Windows/Linux tools, so I got a Windows laptop.

I chose the Asus e200ha for $199 from Amazon with free (and fast) shipping. There are similar notebooks with roughly the same hardware and price from other manufacturers (HP, Dell, etc.), so I’m not sure how this compares against those other ones. However, it fits my needs as a “burner” laptop, namely:

  • cheap
  • lasts 10 hours easily on battery
  • weighs 2.2 pounds (1 kilogram)
  • 11.6 inch and thin

Some other specs are:

  • 4 gigs of RAM
  • 32 gigs of eMMC flash memory
  • quad core 1.44 GHz Intel Atom CPU
  • Windows 10
  • free Microsoft Office 365 for one year
  • good, large keyboard
  • good, large touchpad
  • USB 3.0
  • microSD
  • WiFi ac
  • no fans, completely silent

There are compromises, of course.

  • The Atom CPU is slow, though it’s only noticeable when churning through heavy webpages. Adblocking addons or Brave are a necessity. Most things are usably fast, such as using Microsoft Word.
  • Crappy sound and video, though VLC does a fine job playing movies with headphones on the airplane. Using in bright sunlight will be difficult.
  • micro-HDMI output; keep in mind that if you intend to give presentations from it, you’ll need an HDMI adapter
  • It has limited storage: 32 GB in theory, about half of that usable.
  • It uses a special compressed Windows 10 install that you can’t actually upgrade without a completely new install. It doesn’t have the latest Windows 10 Creators update. I lost a gig thinking I could compress system files.

Copying files across the 802.11ac WiFi to the disk was quite fast, several hundred megabits per second. The eMMC isn’t as fast as an SSD, but it’s a lot faster than typical SD card speeds.

The first thing I did once I got the notebook was to install the free VeraCrypt full disk encryption. The CPU has AES acceleration, so it’s fast. There is a problem with the keyboard driver during boot that makes it really hard to enter long passwords — you have to carefully type one key at a time to prevent extra keystrokes from being entered.

You can’t really install Linux on this computer, but you can use virtual machines. I installed VirtualBox and downloaded the Kali VM. I had some problems attaching USB devices to the VM. First of all, VirtualBox requires a separate downloaded extension to get USB working. Second, it conflicts with USBpcap that I installed for Wireshark.

It comes with one year of free Office 365. Obviously, Microsoft is hoping to hook the user into a longer term commitment, but in practice next year at this time I’d get another burner $200 laptop rather than spend $99 on extending the Office 365 license.

Let’s talk about the CPU. It’s Intel’s “Atom” processor, not their mainstream (Core i3 etc.) processor. Even though it has roughly the same GHz as the processor in an 11-inch MacBook Air and twice the cores, it’s noticeably and painfully slower. This is especially noticeable on ad-heavy web pages, while other things seem to work just fine. It has hardware acceleration for most video formats, though I had trouble getting Netflix to work.

The tradeoff for a slow CPU is phenomenal battery life. It seems to last forever on battery. It’s really pretty cool.

Conclusion

A Chromebook is likely more secure, but for my needs, this $200 laptop is perfect.

Building High-Throughput Genomics Batch Workflows on AWS: Workflow Layer (Part 4 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the fourth in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackled the batch layer and built a scalable, elastic, and easily maintainable batch engine using AWS Batch. This solution took care of dynamically scaling your compute resources in response to the number of runnable jobs in your job queue, as well as managing job placement.

In Part 4, you build out the workflow layer of your solution using AWS Step Functions and AWS Lambda. You then run an end-to-end genomic analysis, specifically an exome secondary analysis, many times at a cost of less than $1 per exome.

Step Functions makes it easy to coordinate the components of your applications using visual workflows. Building applications from individual components that each perform a single function lets you scale and change your workflow quickly. You can use the graphical console to arrange and visualize the components of your application as a series of steps, which simplifies building and running multi-step applications. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

An added benefit of using Step Functions to define your workflows is that the state machines you create are immutable. While you can delete a state machine, you cannot alter it after it is created. For regulated workloads where auditing is important, you can be assured that state machines you used in production cannot be altered.

In this blog post, you will create a Lambda state machine to orchestrate your batch workflow. For more information on how to create a basic state machine, please see this Step Functions tutorial.

All code related to this blog series can be found in the associated GitHub repository here.

Build a state machine building block

To skip the following steps, we have provided an AWS CloudFormation template that can deploy your Step Functions state machine. You can use this in combination with the setup you did in part 3 to quickly set up the environment in which to run your analysis.

The state machine is composed of smaller state machines that submit a job to AWS Batch, and then poll and check its execution.

The steps in this building block state machine are as follows:

  1. A job is submitted.
    Each analytical module/job has its own Lambda function for submission and calls the batchSubmitJob Lambda function that you built in the previous blog post. You will build these specialized Lambda functions in the following section.
  2. The state machine queries the AWS Batch API for the job status.
    This is also a Lambda function.
  3. The job status is checked to see if the job has completed.
    If the job status equals SUCCEEDED, proceed to log the final job status. If the job status equals FAILED, end the execution of the state machine. In all other cases, wait 30 seconds and go back to Step 2.

Here is the JSON representing this state machine.

{
  "Comment": "A simple example that submits a Job to AWS Batch",
  "StartAt": "SubmitJob",
  "States": {
    "SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>::function:batchSubmitJob",
      "Next": "GetJobStatus"
    },
    "GetJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "Next": "CheckJobStatus",
      "InputPath": "$",
      "ResultPath": "$.status"
    },
    "CheckJobStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.status",
          "StringEquals": "FAILED",
          "End": true
        },
        {
          "Variable": "$.status",
          "StringEquals": "SUCCEEDED",
          "Next": "GetFinalJobStatus"
        }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "GetJobStatus"
    },
    "GetFinalJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "End": true
    }
  }
}

Building the Lambda functions for the state machine

You need two basic Lambda functions for this state machine. The first one submits a job to AWS Batch and the second checks the status of the AWS Batch job that was submitted.

In AWS Step Functions, you specify an input as JSON that is read into your state machine. Each state receives the aggregate of the steps immediately preceding it, and you can specify which components a state passes on to its children. Because you are using Lambda functions to execute tasks, one of the easiest routes to take is to modify the input JSON, represented as a Python dictionary, within the Lambda function and return the entire dictionary back for the next state to consume.
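A stripped-down sketch of that pass-through pattern (the field names here are placeholders, not part of the workflow) looks like this:

# Minimal illustration of the pass-through pattern used by the task Lambdas:
# read what you need from the incoming state, add your own keys, and return
# the whole dictionary so that the next state sees the accumulated context.
def lambda_handler(event, context):
    sample_id = event["sampleId"]                                            # value set by an earlier state
    event["exampleOutputPath"] = "s3://{YOUR BUCKET}/results/" + sample_id   # placeholder key
    return event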

Building the batchSubmitIsaacJob Lambda function

For Step 1 above, you need a Lambda function for each of the steps in your analysis workflow. As you created a generic Lambda function in the previous post to submit a batch job (batchSubmitJob), you can use that function as the basis for the specialized functions you’ll include in this state machine. Here is such a Lambda function for the Isaac aligner.

from __future__ import print_function

import boto3
import json
import traceback

lambda_client = boto3.client('lambda')



def lambda_handler(event, context):
    try:
        # Generate the output BAM path
        bam_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'bam/'])

        depends_on = event['dependsOn'] if 'dependsOn' in event else []

        # Generate run command
        command = [
            '--bam_s3_folder_path', bam_s3_path,
            '--fastq1_s3_path', event['fastq1S3Path'],
            '--fastq2_s3_path', event['fastq2S3Path'],
            '--reference_s3_path', event['isaac']['referenceS3Path'],
            '--working_dir', event['workingDir']
        ]

        if 'cmdArgs' in event['isaac']:
            command.extend(['--cmd_args', event['isaac']['cmdArgs']])
        if 'memory' in event['isaac']:
            command.extend(['--memory', event['isaac']['memory']])

        # Submit Payload
        response = lambda_client.invoke(
            FunctionName='batchSubmitJob',
            InvocationType='RequestResponse',
            LogType='Tail',
            Payload=json.dumps(dict(
                dependsOn=depends_on,
                containerOverrides={
                    'command': command,
                },
                jobDefinition=event['isaac']['jobDefinition'],
                jobName='-'.join(['isaac', event['sampleId']]),
                jobQueue=event['isaac']['jobQueue']
            )))

        response_payload = response['Payload'].read()

        # Update event
        event['bamS3Path'] = bam_s3_path
        event['jobId'] = json.loads(response_payload)['jobId']
        
        return event
    except Exception as e:
        traceback.print_exc()
        raise e

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitIsaacJob and paste in the above code. Use the LambdaBatchExecutionRole that you created in the previous post. For more information, see Step 2.1: Create a Hello World Lambda Function.

This Lambda function reads in the inputs passed to the state machine it is part of, formats the data for the batchSubmitJob Lambda function, invokes that Lambda function, and then modifies the event dictionary to pass onto the subsequent states. You can repeat these for each of the other tools, which can be found in the tools//lambda/lambda_function.py script in the GitHub repo.

Building the batchGetJobStatus Lambda function

For Step 2 above, the process queries the AWS Batch DescribeJobs API action with jobId to identify the state that the job is in. You can put this into a Lambda function to integrate it with Step Functions.

In the Lambda console, create a new Python 2.7 function with the LambdaBatchExecutionRole IAM role. Name your function batchGetJobStatus and paste in the following code. This is similar to the batch-get-job-python27 Lambda blueprint.

from __future__ import print_function

import boto3
import json

print('Loading function')

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get jobId from the event
    job_id = event['jobId']

    try:
        response = batch_client.describe_jobs(
            jobs=[job_id]
        )
        job_status = response['jobs'][0]['status']
        return job_status
    except Exception as e:
        print(e)
        message = 'Error getting Batch Job status'
        print(message)
        raise Exception(message)

Structuring state machine input

You have structured the state machine input so that general file references are included at the top-level of the JSON object, and any job-specific items are contained within a nested JSON object. At a high level, this is what the input structure looks like:

{
        "general_field_1": "value1",
        "general_field_2": "value2",
        "general_field_3": "value3",
        "job1": {},
        "job2": {},
        "job3": {}
}

Building the full state machine

By chaining these state machine components together, you can quickly build flexible workflows that can process genomes in multiple ways. The development of the larger state machine that defines the entire workflow uses four of the above building blocks. You use the Lambda functions that you built in the previous section. Rename each building block submission to match the tool name.

We have provided a CloudFormation template to deploy your state machine and the associated IAM roles. In the CloudFormation console, select Create Stack, choose your template (deploy_state_machine.yaml), and enter in the ARNs for the Lambda functions you created.

Continue through the rest of the steps and deploy your stack. Be sure to check the box next to "I acknowledge that AWS CloudFormation might create IAM resources."

Once the CloudFormation stack is finished deploying, you should see the following image of your state machine.

In short, you first submit a job for Isaac, which is the aligner you are using for the analysis. Next, you use a Parallel state to split your output from "GetFinalIsaacJobStatus" and send it to both your variant calling step, Strelka, and your QC step, Samtools Stats. These then run in parallel, and you annotate the results from your Strelka step with snpEff.

Putting it all together

Now that you have built all of the components for a genomics secondary analysis workflow, test the entire process.

We have provided sequences from an Illumina sequencer that cover a region of the genome known as the exome. Most of the positions in the genome that we have currently associated with disease or human traits reside in this region, which is 1–2% of the entire genome. The workflow that you have built works for both analyzing an exome, as well as an entire genome.

Additionally, we have provided prebuilt reference genomes for Isaac, located at:

s3://aws-batch-genomics-resources/reference/

If you are interested, we have provided a script that sets up all of that data. To execute that script, run the following command on a large EC2 instance:

make reference REGISTRY=<your-ecr-registry>

Indexing and preparing this reference takes many hours on a large-memory EC2 instance. Be careful about the costs involved and note that the data is available through the prebuilt reference genomes.

Starting the execution

In a previous section, you established the structure of the JSON that is fed into your state machine. For convenience, we have provided a prepopulated input JSON for the state machine. You can also find this in the GitHub repo under workflow/test.input.json:

{
  "fastq1S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
  "fastq2S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
  "referenceS3Path": "s3://aws-batch-genomics-resources/reference/hg38.fa",
  "resultsS3Path": "s3://<bucket>/genomic-workflow/results",
  "sampleId": "NA12878_states_1",
  "workingDir": "/scratch",
  "isaac": {
    "jobDefinition": "isaac-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "referenceS3Path": "s3://aws-batch-genomics-resources/reference/isaac/"
  },
  "samtoolsStats": {
    "jobDefinition": "samtools_stats-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv"
  },
  "strelka": {
    "jobDefinition": "strelka-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "cmdArgs": " --exome "
  },
  "snpEff": {
    "jobDefinition": "snpeff-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv",
    "cmdArgs": " -t hg38 "
  }
}

You are now at the stage to run your full genomic analysis. Copy the above to a new text file, change paths and ARNs to the ones that you created previously, and save your JSON input as input.states.json.

In the CLI, execute the following command. You need the ARN of the state machine that you created in the previous post:

aws stepfunctions start-execution --state-machine-arn <your-state-machine-arn> --input file://input.states.json
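If you would rather start the execution from Python, a roughly equivalent boto3 sketch, with a placeholder state machine ARN, is:

import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# Read the same input document you prepared above.
with open("input.states.json") as f:
    state_input = f.read()

response = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:<account-id>:stateMachine:<your-state-machine>",  # placeholder
    input=state_input,
)
print("Execution ARN:", response["executionArn"])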

Your analysis has now started. By using Spot Instances with AWS Batch, you can quickly scale out your workflows while concurrently optimizing for cost. While this is not guaranteed, most executions of the workflows presented here should cost under $1 for a full analysis.

Monitoring the execution

The output from the above CLI command gives you the ARN that describes the specific execution. Copy that and navigate to the Step Functions console. Select the state machine that you created previously and paste the ARN into the search bar.

The screen shows information about your specific execution. On the left, you see where your execution currently is in the workflow.

In the following screenshot, you can see that your workflow has successfully completed the alignment job and moved onto the subsequent steps, which are variant calling and generating quality information about your sample.

You can also navigate to the AWS Batch console and see the progress of all of your jobs reflected there as well.
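You can also check on the execution from a script. A minimal boto3 sketch that polls the execution status, using a placeholder execution ARN, might look like this:

import time
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")
execution_arn = "arn:aws:states:us-east-1:<account-id>:execution:<state-machine>:<execution-id>"  # placeholder

# Poll until the execution leaves the RUNNING state.
while True:
    status = sfn.describe_execution(executionArn=execution_arn)["status"]
    if status != "RUNNING":
        break
    time.sleep(30)

print("Execution finished with status:", status)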

Finally, after your workflow has completed successfully, check out the S3 path to which you wrote all of your files. If you run an aws s3 ls --recursive command on the S3 results path, specified in the input to your state machine execution, you should see something similar to the following:

2017-05-02 13:46:32 6475144340 genomic-workflow/results/NA12878_run1/bam/sorted.bam
2017-05-02 13:46:34    7552576 genomic-workflow/results/NA12878_run1/bam/sorted.bam.bai
2017-05-02 13:46:32         45 genomic-workflow/results/NA12878_run1/bam/sorted.bam.md5
2017-05-02 13:53:20      68769 genomic-workflow/results/NA12878_run1/stats/bam_stats.dat
2017-05-02 14:05:12        100 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.tsv
2017-05-02 14:05:12        359 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.xml
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz.tbi
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz.tbi
2017-05-02 14:05:12   30783484 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz
2017-05-02 14:05:12    1566596 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz.tbi

Modifications to the workflow

You have now built and run your genomics workflow. While diving deep into modifications to this architecture are beyond the scope of these posts, we wanted to leave you with several suggestions of how you might modify this workflow to satisfy additional business requirements.

  • Job tracking with Amazon DynamoDB
    In many cases, such as if you are offering Genomics-as-a-Service, you might want to track the state of your jobs with DynamoDB to get fine-grained records of how your jobs are running. This way, you can easily identify the cost of individual jobs and workflows that you run.
  • Resuming from failure
    Both AWS Batch and Step Functions natively support job retries and can cover many of the standard cases where a job might be interrupted. There may be cases, however, where your workflow might fail in a way that is unpredictable. In this case, you can use custom error handling with AWS Step Functions to build out a workflow that is even more resilient. Also, you can build in fail states into your state machine to fail at any point, such as if a batch job fails after a certain number of retries.
  • Invoking Step Functions from Amazon API Gateway
    You can use API Gateway to build an API that acts as a "front door" to Step Functions. You can create a POST method that contains the input JSON to feed into the state machine you built. For more information, see the Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway blog post.

Conclusion

While the approach we have demonstrated in this series has been focused on genomics, it is important to note that this can be generalized to nearly any high-throughput batch workload. We hope that you have found the information useful and that it can serve as a jump-start to building your own batch workloads on AWS with native AWS services.

For more information about how AWS can enable your genomics workloads, be sure to check out the AWS Genomics page.

Other posts in this four-part series:

Please leave any questions and comments below.

Building High-Throughput Genomic Batch Workflows on AWS: Batch Layer (Part 3 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomic-batch-workflows-on-aws-batch-layer-part-3-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the third in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackle the batch layer and build a scalable, elastic, and easily maintainable batch engine using AWS Batch.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs that you submit. With AWS Batch, you do not need to install and manage your own batch computing software or server clusters, which allows you to focus on analyzing results, such as those of your genomic analysis.

Integrating applications into AWS Batch

If you are new to AWS Batch, we recommend reading Setting Up AWS Batch to ensure that you have the proper permissions and AWS environment.

After you have a working environment, you define several types of resources:

  • IAM roles that provide service permissions
  • A compute environment that launches and terminates compute resources for jobs
  • A custom Amazon Machine Image (AMI)
  • A job queue to submit the units of work and to schedule the appropriate resources within the compute environment to execute those jobs
  • Job definitions that define how to execute an application

After the resources are created, you’ll test the environment and create an AWS Lambda function to send generic jobs to the queue.

This genomics workflow covers the basic steps. For more information, see Getting Started with AWS Batch.

Creating the necessary IAM roles

AWS Batch simplifies batch processing by managing a number of underlying AWS services so that you can focus on your applications. As a result, you create IAM roles that give the service permissions to act on your behalf. In this section, deploy the AWS CloudFormation template included in the GitHub repository and extract the ARNs for later use.

To deploy the stack, go to the top level in the repo with the following command:

aws cloudformation create-stack --template-body file://batch/setup/iam.template.yaml --stack-name iam --capabilities CAPABILITY_NAMED_IAM

You can capture the output from this stack in the Outputs tab in the CloudFormation console:

Creating the compute environment

In AWS Batch, you will set up a managed compute environment. Managed compute environments automatically launch and terminate compute resources on your behalf based on the aggregate resources needed by your jobs, such as vCPU and memory, and simple boundaries that you define.

When defining your compute environment, specify the following:

  • Desired instance types in your environment
  • Min and max vCPUs in the environment
  • The Amazon Machine Image (AMI) to use
  • Percentage value for bids on the Spot Market
  • VPC subnets that can be used

AWS Batch then provisions an elastic and heterogeneous pool of Amazon EC2 instances based on the aggregate resource requirements of jobs sitting in the RUNNABLE state. If a mix of CPU and memory-intensive jobs are ready to run, AWS Batch provisions the appropriate ratio and size of CPU and memory-optimized instances within your environment. For this post, you will use the simplest configuration, in which instance types are set to "optimal" allowing AWS Batch to choose from the latest C, M, and R EC2 instance families.

While you could create this compute environment in the console, we provide the following CLI commands. Replace the subnet IDs and key name with your own private subnets and key, and the image-id with the image you will build in the next section.

ACCOUNTID=<your account id>
SERVICEROLE=<from output in CloudFormation template>
IAMFLEETROLE=<from output in CloudFormation template>
JOBROLEARN=<from output in CloudFormation template>
SUBNETS=<comma delimited list of subnets>
SECGROUPS=<your security groups>
SPOTPER=50 # percentage of on demand
IMAGEID=<ami-id corresponding to the one you created>
INSTANCEROLE=<from output in CloudFormation template>
REGISTRY=${ACCOUNTID}.dkr.ecr.us-east-1.amazonaws.com
KEYNAME=<your key name>
MAXCPU=1024 # max vCPUs in compute environment
ENV=myenv

# Creates the compute environment
aws batch create-compute-environment --compute-environment-name genomicsEnv-$ENV --type MANAGED --state ENABLED --service-role ${SERVICEROLE} --compute-resources type=SPOT,minvCpus=0,maxvCpus=$MAXCPU,desiredvCpus=0,instanceTypes=optimal,imageId=$IMAGEID,subnets=$SUBNETS,securityGroupIds=$SECGROUPS,ec2KeyPair=$KEYNAME,instanceRole=$INSTANCEROLE,bidPercentage=$SPOTPER,spotIamFleetRole=$IAMFLEETROLE

Creating the custom AMI for AWS Batch

While you can use default Amazon ECS-optimized AMIs with AWS Batch, you can also provide your own image in managed compute environments. We will use this feature to provision additional scratch EBS storage on each of the instances that AWS Batch launches and also to encrypt both the Docker and scratch EBS volumes.

AWS Batch has the same requirements for your AMI as Amazon ECS. To build the custom image, modify the default Amazon ECS-Optimized Amazon Linux AMI in the following ways:

  • Attach a 1 TB scratch volume to /dev/sdb
  • Encrypt the Docker and new scratch volumes
  • Mount the scratch volume to /docker_scratch by modifying /etc/fstab

The first two tasks can be addressed when you create the custom AMI in the console. Spin up a small t2.micro instance, and proceed through the standard EC2 instance launch.

After your instance has launched, record the IP address and then SSH into the instance. Copy and paste the following code:

sudo yum -y update
sudo parted /dev/xvdb mklabel gpt
sudo parted /dev/xvdb mkpart primary 0% 100%
sudo mkfs -t ext4 /dev/xvdb1
sudo mkdir /docker_scratch
sudo echo -e '/dev/xvdb1\t/docker_scratch\text4\tdefaults\t0\t0' | sudo tee -a /etc/fstab
sudo mount -a

This auto-mounts your scratch volume to /docker_scratch, which is your scratch directory for batch processing. Next, create your new AMI and record the image ID.

Creating the job queues

AWS Batch job queues are used to coordinate the submission of batch jobs. Your jobs are submitted to job queues, which can be mapped to one or more compute environments. Job queues have priority relative to each other. You can also specify the order in which they consume resources from your compute environments.

In this solution, use two job queues. The first is for high priority jobs, such as alignment or variant calling. Set this with a high priority (1000) and map back to the previously created compute environment. Next, set a second job queue for low priority jobs, such as quality statistics generation. To create these compute environments, enter the following CLI commands:

aws batch create-job-queue --job-queue-name highPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1000 --state ENABLED
aws batch create-job-queue --job-queue-name lowPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1 --state ENABLED

Creating the job definitions

To run the Isaac aligner container image locally, supply the Amazon S3 locations for the FASTQ input sequences, the reference genome to align to, and the output BAM file. For more information, see tools/isaac/README.md.

The Docker container itself also requires some information on a suitable mountable volume so that it can read and write temporary files without running out of space.

Note: In the following example, the FASTQ files as well as the reference files to run are in a publicly available bucket.

FASTQ1=s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz
FASTQ2=s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz
REF=s3://aws-batch-genomics-resources/reference/isaac/
BAM=s3://mybucket/genomic-workflow/test_results/bam/

mkdir ~/scratch

docker run --rm -ti -v ${HOME}/scratch:/scratch $REPO_URI --bam_s3_folder_path $BAM \
--fastq1_s3_path $FASTQ1 \
--fastq2_s3_path $FASTQ2 \
--reference_s3_path $REF \
--working_dir /scratch 

Containers running locally can typically use whatever CPU and memory headroom the host machine has available. In AWS Batch, the CPU and memory requirements are hard limits and are allocated to the container image at runtime.

Isaac is a fairly resource-intensive algorithm, as it creates an uncompressed index of the reference genome in memory to match the query DNA sequences. The large memory space is shared across multiple CPU threads, and Isaac can scale almost linearly with the number of CPU threads given to it as a parameter.

To fit these characteristics, choose an optimal instance size to maximize the number of CPU threads based on a given large memory footprint, and deploy a Docker container that uses all of the instance resources. In this case, we chose a host instance with 80+ GB of memory and 32+ vCPUs. The following code is example JSON that you can pass to the AWS CLI to create a job definition for Isaac.

aws batch register-job-definition --job-definition-name isaac-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/isaac",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":80000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

You can copy and paste the following code for the other three job definitions:

aws batch register-job-definition --job-definition-name strelka-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/strelka",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":32000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name snpeff-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/snpeff",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name samtoolsStats-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/samtools_stats",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

The value for "image" comes from the previous post on creating a Docker image and publishing to ECR. The value for jobRoleArn you can find from the output of the CloudFormation template that you deployed earlier. In addition to providing the number of CPU cores and memory required by Isaac, you also give it a storage volume for scratch and staging. The volume comes from the previously defined custom AMI.

Testing the environment

After you have created the Isaac job definition, you can submit the job using the AWS Batch submitJob API action. While the base mappings for Docker run are taken care of in the job definition that you just built, the specific job parameters should be specified in the container overrides section of the API call. Here’s what this would look like in the CLI, using the same parameters as in the bash commands shown earlier:

aws batch submit-job --job-name testisaac --job-queue highPriority-${ENV} --job-definition isaac-${ENV}:1 --container-overrides '{
"command": [
			"--bam_s3_folder_path", "s3://mybucket/genomic-workflow/test_batch/bam/",
            "--fastq1_s3_path", "s3://aws-batch-genomics-resources/fastq/ SRR1919605_1.fastq.gz",
            "--fastq2_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
            "--reference_s3_path", "s3://aws-batch-genomics-resources/reference/isaac/",
            "--working_dir", "/scratch",
            "--cmd_args", " --exome "]
}'

When you execute a submitJob call, jobId is returned. You can then track the progress of your job using the describeJobs API action:

aws batch describe-jobs --jobs <jobId returned from submitJob>
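The same check can be scripted. Here is a short boto3 sketch, with a placeholder job ID, that polls describe_jobs until the job reaches a terminal state:

import time
import boto3

batch = boto3.client("batch", region_name="us-east-1")
job_id = "<jobId returned from submitJob>"   # placeholder

# AWS Batch jobs end in either SUCCEEDED or FAILED.
while True:
    job = batch.describe_jobs(jobs=[job_id])["jobs"][0]
    if job["status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(30)

print("Job", job_id, "finished with status:", job["status"])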

You can also track the progress of all of your jobs in the AWS Batch console dashboard.

To see exactly where a RUNNING job is at, use the link in the AWS Batch console to direct you to the appropriate location in CloudWatch logs.

Completing the batch environment setup

To finish, create a Lambda function to submit a generic AWS Batch job.

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitJob. Copy and paste the following code. This is similar to the batch-submit-job-python27 Lambda blueprint. Use the LambdaBatchExecutionRole that you created earlier. For more information about creating functions, see Step 2.1: Create a Hello World Lambda Function.

from __future__ import print_function

import json
import boto3

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get parameters for the SubmitJob call
    # http://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html
    job_name = event['jobName']
    job_queue = event['jobQueue']
    job_definition = event['jobDefinition']
    
    # containerOverrides, dependsOn, and parameters are optional
    container_overrides = event['containerOverrides'] if event.get('containerOverrides') else {}
    parameters = event['parameters'] if event.get('parameters') else {}
    depends_on = event['dependsOn'] if event.get('dependsOn') else []
    
    try:
        response = batch_client.submit_job(
            dependsOn=depends_on,
            containerOverrides=container_overrides,
            jobDefinition=job_definition,
            jobName=job_name,
            jobQueue=job_queue,
            parameters=parameters
        )
        
        # Log response from AWS Batch
        print("Response: " + json.dumps(response, indent=2))
        
        # Return the jobId
        event['jobId'] = response['jobId']
        return event
    
    except Exception as e:
        print(e)
        message = 'Error submitting Batch Job'
        print(message)
        raise Exception(message)
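To sanity-check the function, you could invoke it with a test payload along these lines; the queue, job definition, and command values below are placeholders that should match the resources you created earlier:

import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Placeholder test payload; adjust the queue, definition, and command for your environment.
test_event = {
    "jobName": "test-isaac",
    "jobQueue": "highPriority-myenv",
    "jobDefinition": "isaac-myenv:1",
    "containerOverrides": {
        "command": ["--working_dir", "/scratch"]
    },
}

response = lambda_client.invoke(
    FunctionName="batchSubmitJob",
    InvocationType="RequestResponse",
    Payload=json.dumps(test_event),
)
print(json.loads(response["Payload"].read()))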

Conclusion

In part 3 of this series, you successfully set up your data processing, or batch, environment in AWS Batch. We also provided a Python script in the corresponding GitHub repo that takes care of all of the above CLI arguments for you, as well as building out the job definitions for all of the jobs in the workflow: Isaac, Strelka, SAMtools, and snpEff. You can check the script’s README for additional documentation.

In Part 4, you’ll cover the workflow layer using AWS Step Functions and AWS Lambda.

Please leave any questions and comments below.

AWS Enables Consortium Science to Accelerate Discovery

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-enables-consortium-science-to-accelerate-discovery/

My colleague Mia Champion is a scientist (check out her publications), an AWS Certified Solutions Architect, and an AWS Certified Developer. The time that she spent doing research on large datasets gave her an appreciation for the value of cloud computing in the bioinformatics space, which she summarizes and explains in the guest post below!

Jeff;


Technological advances in scientific research continue to enable the collection of exponentially growing datasets that are also increasing in the complexity of their content. The global pace of innovation is now also fueled by the recent cloud-computing revolution, which provides researchers with a seemingly boundless, scalable, and agile infrastructure. Now, researchers can remove the hindrances of having to own and maintain their own sequencers, microscopes, compute clusters, and more. Using the cloud, scientists can easily store, manage, process, and share datasets for millions of patient samples with gigabytes or more of data for each individual. As American physicist John Bardeen once said: “Science is a collaborative effort. The combined results of several people working together is much more effective than could be that of an individual scientist working alone”.

Prioritizing Reproducible Innovation, Democratization, and Data Protection
Today, we have many individual researchers and organizations leveraging secure cloud enabled data sharing on an unprecedented scale and producing innovative, customized analytical solutions using the AWS cloud.  But, can secure data sharing and analytics be done on such a collaborative scale as to revolutionize the way science is done across a domain of interest or even across discipline/s of science? Can building a cloud-enabled consortium of resources remove the analytical variability that leads to diminished reproducibility, which has long plagued the interpretability and impact of research discoveries? The answers to these questions are ‘yes’ and initiatives such as the Neuro Cloud Consortium, The Global Alliance for Genomics and Health (GA4GH), and The Sage Bionetworks Synapse platform, which powers many research consortiums including the DREAM challenges, are starting to put into practice model cloud-initiatives that will not only provide impactful discoveries in the areas of neuroscience, infectious disease, and cancer, but are also revolutionizing the way in which scientific research is done.

Bringing Crowd Developed Models, Algorithms, and Functions to the Data
Collaborative projects have traditionally allowed investigators to download datasets such as those used for comparative sequence analysis or for training a deep learning algorithm on medical imaging data. Investigators were then able to develop and execute their analysis using institutional clusters, local workstations, or even laptops:

This method of collaboration is problematic for many reasons. The first concern is data security, since dataset download essentially permits “chain-data-sharing” with any number of recipients. Second, analytics done using compute environments that are not templated at some level introduces the risk of variable analytics that itself is not reproducible by a different investigator, or even the same investigator using a different compute environment. Third, the required data dump, processing, and then re-upload or distribution to the collaborative group is highly inefficient and dependent upon each individual’s networking and compute capabilities. Overall, traditional methods of scientific collaboration have introduced methods in which security is compromised and time to discovery is hampered.

Using the AWS cloud, collaborative researchers can share datasets easily and securely by taking advantage of Identity and Access Management (IAM) policy restrictions for user bucket access as well as S3 bucket policies or Access Control Lists (ACLs). To streamline analysis and ensure data security, many researchers are eliminating the necessity to download datasets entirely by leveraging resources that facilitate moving the analytics to the data source and/or taking advantage of remote API requests to access a shared database or data lake. One way our customers are accomplishing this is to leverage container based Docker technology to provide collaborators with a way to submit algorithms or models for execution on the system hosting the shared datasets:

Docker container images have all of the application’s dependencies bundled together, and therefore provide a high degree of versatility and portability, which is a significant advantage over using other executable-based approaches. In the case of collaborative machine learning projects, each docker container will contain applications, language runtime, packages and libraries, as well as any of the more popular deep learning frameworks commonly used by researchers including: MXNet, Caffe, TensorFlow, and Theano.

A common feature of these frameworks is the ability to leverage a host machine’s Graphics Processing Units (GPUs) to significantly accelerate the matrix and vector operations involved in machine learning computations. Researchers with these objectives can use EC2’s P2 instance types to power the execution of submitted machine learning models. In addition, GPUs can be mounted directly into containers using the NVIDIA Docker tool and appear at the system level as additional devices. By leveraging Amazon EC2 Container Service (ECS) and the EC2 Container Registry (ECR), collaborators can execute analytical solutions submitted to the project repository by their colleagues in a reproducible fashion and continue to build on their existing environment. Researchers can also architect a continuous deployment pipeline to run their Docker-enabled workflows.
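As a rough illustration of that pattern (the cluster, repository, image, and resource values below are hypothetical, and GPU access is assumed to be provided by NVIDIA Docker tooling already configured on the cluster’s P2 container instances), a submitted container image could be registered and launched as an ECS task with boto3:

import boto3

ecs = boto3.client("ecs")

# Hypothetical ECR image containing a collaborator's submitted model and framework.
IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/consortium/mxnet-model:latest"

# Describe how the submitted analysis should run.
task_def = ecs.register_task_definition(
    family="submitted-ml-model",
    containerDefinitions=[{
        "name": "model-training",
        "image": IMAGE,
        "cpu": 4096,       # CPU units; 1024 units = 1 vCPU
        "memory": 55000,   # MiB reserved for the container
        "command": ["python", "train.py", "--data", "/mnt/shared"],
        "essential": True,
    }],
)

# Launch one copy of the task on the consortium's GPU-backed cluster.
ecs.run_task(
    cluster="consortium-p2-cluster",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    count=1,
)

Because the task definition, image, and cluster are all versioned, shared resources, every collaborator runs the same code against the same data in the same environment, which is what makes the results reproducible.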

In conclusion, emerging cloud-enabled consortium initiatives serve as models for the broader research community, showing how cloud-enabled community science can expedite discoveries in Precision Medicine while providing a platform in which data security and reproducibility of discovery are inherent to project execution.

Mia D. Champion, Ph.D.

 

EC2 F1 Instances with FPGAs – Now Generally Available

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-f1-instances-with-fpgas-now-generally-available/

We launched the Developer Preview of the FPGA-equipped F1 instances at AWS re:Invent. The response to the announcement was quick and overwhelming! We received over 2000 requests for entry, and were able to provide over 200 developers with access to the Hardware Development Kit (HDK) and the actual F1 instances.

In the post that I wrote for re:Invent, I told you that:

This highly parallelized model is ideal for building custom accelerators to process compute-intensive problems. Properly programmed, an FPGA has the potential to provide a 30x speedup to many types of genomics, seismic analysis, financial risk analysis, big data search, and encryption algorithms and applications.

During the preview, partners and developers have been working on all sorts of exciting tools, services, and applications. I’ll tell you more about them in just a moment.

Now Generally Available
Today we are making the F1 instances generally available in the US East (Northern Virginia) Region, with plans to bring them to other regions before too long.

We continued to add features and functions during the preview, while also making the development tools more efficient and easier to use. Here’s a summary:

Developer Community – We launched the AWS FPGA Development Forum to provide a place for FPGA developers to hang out and to communicate with us and with each other.

HDK and SDK – We published the EC2 FPGA Hardware Development Kit (HDK) and Software Development Kit (SDK) to GitHub, and made many improvements in response to feedback that we received during the preview.

The improvements include support for VHDL (in addition to Verilog), an improved virtual lab environment (Virtual JTAG, Virtual LED, and Virtual DipSwitch), AWS libraries for FPGA management and the FPGA runtime, and support for OpenCL including the AWS OpenCL runtime library.

FPGA Developer AMI – This Marketplace AMI contains a full set of FPGA development tools including an RTL compiler and simulator, along with Xilinx SDAccel for OpenCL development, all tuned for use on C4, M4, and R4 instances.
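On the API side, Amazon FPGA Images (AFIs) registered in your account can also be listed through the EC2 API. Here is a minimal, hedged sketch using boto3 (the output depends entirely on your account and region):

import boto3

# F1 instances are currently available in US East (N. Virginia).
ec2 = boto3.client("ec2", region_name="us-east-1")

# List FPGA images owned by this account as well as public images from Amazon.
response = ec2.describe_fpga_images(Owners=["self", "amazon"])

for afi in response["FpgaImages"]:
    print(afi["FpgaImageId"], afi.get("Name", ""), afi["State"]["Code"])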

FPGAs At Work
Here’s a sampling of the impressive work that our partners have been doing with F1 instances:

Edico Genome is deploying their DRAGEN Bio-IT Platform on F1 instances, with the expectation that it will provide whole-genome sequencing that runs in real time.

Ryft offers the Ryft Cloud, an accelerator for data analytics and machine learning that extends Elastic Stack. It sources data from Amazon Kinesis, Amazon Simple Storage Service (S3), Amazon Elastic Block Store (EBS), and local instance storage and uses massive bitwise parallelism to drive performance. The product supports high-level JDBC, ODBC, and REST interfaces along with low-level C, C++, Java, and Python APIs (see the Ryft API page for more information).

Reconfigure.io launched a cloud-based service that allows you to program FPGAs using the Go programming language. You can build, test, and deploy your code from within their cloud-based environment while taking advantage of concurrency-oriented language features such as goroutines (lightweight threads), channels, and selects.

NGCodec ported their RealityCodec video encoder to the F1 and used it to produce broadcast-quality video at 80 frames per second. Their solution can encode up to 32 independent video streams on a single F1 instance (read their new post, You Deserve Better than Grainy Giraffes, to learn more).

FPGAs In School & Research
Research groups and graduate classes at top-tier universities contacted us via AWS Educate and were eager to gain access to F1 instances.

UCLA‘s CS133 class (Parallel and Distributed Computing) is setting up an F1-based FPGA lab that will be operational within 3 or 4 weeks. According to UCLA Chancellor’s Professor Jason Cong, they are expanding multiple research projects to cover F1 including FPGA performance debugging, machine learning acceleration, Spark to FPGA compilation, and systolic array compilation.

Last month we announced that we are collaborating with the National Science Foundation (NSF) to foster innovation in big data research (read AWS Collaborates With the National Science Foundation to Foster Innovation to learn more and to find out how to apply for a grant).

FPGAs in the AWS Marketplace
As I shared in my original post, we have built a complete beginning-to-end solution that lets developers build FPGA-powered applications and services and list them in the AWS Marketplace. I can’t wait to see what kinds of cool things show up there!

Jeff;

A History of Removable Computer Storage

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/history-removable-computer-storage/

A History of Removable Storage

Almost from the start we’ve had a problem with computers: They create and consume more data than we can economically store. Hundreds of companies have been created around the need for more computer storage. These days if we need space we can turn to cloud services like our own B2 Cloud Storage, but it hasn’t always been that way. The history of removable computer storage is like the history of hard drives: A fascinating look into the ever-evolving technology of data storage.

The Birth of Removable Storage

Punch Cards

punch card

Before electronic computers existed, there were electrical, mechanical computing devices. Herman Hollerith, a U.S. census worker interested in simplifying the laborious process of tabulating census data, made a device that read information from rectangular cards with holes punched in particular locations to indicate information like marital status and age.

Hollerith’s cards long outlasted him and his machine. With the advent of electronic computers in the 1950s, punch cards became the de facto method of data input. The conventions introduced with punch cards, such as an 80 column width, affected everything from the way we’d make computer monitors to the format of text files for decades.

Open-Reel Tapes and Magnetic Cartridges

IBM 100 tape drive

Magnetic tape drives were standard issue for the mainframes and minicomputers used by businesses and other organizations from the advent of the computer industry in the 1950s up until the 1980s.

Tape drives started out on 10 1/2-inch reels. A thin metal strip recorded data magnetically. Watch any television program of the era and any scene with a computer will show you a device like this. The nine-track tapes developed by IBM for its computers could store up to 175 MB per tape. At the time, that was a tremendous amount of data, suitable for archiving days’ or weeks’ worth of work. These days, 175 MB might be enough to store a few dozen photos from your smartphone. Times have changed!

Eventually the big reel to reel systems would be replaced with much more portable, easier-to-use, and higher density magnetic tape cartridges. Mag tapes for data backup found their way into PCs in the 80s and 90s, though they, too, would be replaced by other removable media systems like CD-R burners.

Linear Tape-Open (LTO) made its debut in the late 1990s. These digital tape cartridges could store 100 GB each, making them ideal for backing up servers and archiving big projects. Since then, capacity has improved to 6.0 TB per tape. There’s still demand for LTO data archival systems today, but tape drives are nearing the end of their usefulness as better cloud options take over the backup and archival markets. Our own B2 Cloud Storage is rapidly making LTO a thing of the past.

Burning LTO

Winchester Drives

IBM 3340 Winchester drive

Spinning hard disk drives started out as huge refrigerator-sized boxes attached to mainframe computers. As more businesses found uses for computers, the need for storage increased, but allowable floor space did not. IBM’s solution to this problem came in the early 1970s: the IBM 3340, popularly known as a Winchester.

The 3340 sported removable data modules that contained hard drive platters which could store up to 70 MB. Instead of having to buy a whole new cabinet, companies leasing equipment from IBM could buy additional data modules to increase their storage capabilities.

From the start, the 3340 was a smashing success (okay, maybe smashing isn’t the best adjective to use when describing a hard drive, but you get the point). You could find these and their descendants connected to mainframes and minicomputers in corporate data centers throughout the 1970s and into the 1980s.

The Birth of the PC Brings New Storage Solutions

Cassette Recorder

TRS-80 w cassette drive

The 1970s saw another massive evolution of computers with the introduction of first generation personal computers. The first PCs lacked any built-in permanent storage. Hard disk drives were still very expensive. Even floppy disk drives were rare at the time. When you turned the computer off, you’d lose your data, unless you had something to store it on.

The solution that the first PC makers came up with was to use a cassette recorder. Cassette tapes had exploded in the consumer electronics market as a convenient and inexpensive way for people to record and listen to music and to make voice dictations. At a time when long-distance phone calls were an expensive luxury, it was the original FaceTime for some of us, too: I remember, as a preschooler, recording and playing cassettes to stay in touch with my grandparents on the other side of the country.

So using a cassette recorder to store computer data made sense. The devices were already commonplace and relatively inexpensive. Type in a save command, and the computer played tones through a cable connected to the cassette recorder, using different tones to encode binary 0s and 1s. Type in a load command, and you could play back the tape to read the program into memory. It was very slow, but it was better than nothing.

Floppy Disk

Commodore 1541

The 1970s saw the rise of the floppy disk, the portable storage format that ultimately reigned supreme for decades. The earliest models of floppy disks were eight inches in diameter and could hold about 80 KB. Eight-inch drives were more common in corporate computing, but when floppies came to personal computers, the smaller 5 1/4-inch design caught on like wildfire.

Floppy disks became commonplace alongside the Apples and Commodores of the day. You could squeeze about 120 KB onto one of those puppies. Doesn’t sound like a lot, but it was plenty of space for Apple DOS and Lode Runner.

Apple popularized the 3 1/2-inch size when it introduced the Macintosh in 1984. By the late 1980s the smaller floppy disk size – which would ultimately store 1.44 MB per disk – was the dominant removable storage medium of the day. And so it would remain for decades.

The Bernoulli Box

Bernoulli Box

In the early 1980s, a new product called the Bernoulli Box offered the convenience of removable cartridges, like Winchester drives, but in a much smaller, more portable format. The Bernoulli Box was an important removable storage device for businesses that had transitioned from expensive mainframes and minicomputers to desktops.

Bernoulli cartridges worked on the same principle as floppies but were larger and in a much more shielded enclosure. The cartridges sported larger capacities than floppy disks, too. You could store 10 MB or 20 MB instead of the 1.44 MB limit on a floppy disk. Capacities would increase over time to 230 MB. Bernoulli Boxes and the cartridges were expensive, which kept them in the realm of business storage. Iomega, the Bernoulli Box’s creator, turned its attention to an enormously popular removable storage system you’ll read about later: the Zip drive.

SyQuest Disks

SyQuest drive

In the 1990s another removable storage device made its mark on the computer industry. SyQuest developed a removable storage system that used 44 MB (and later 88 MB) hard disk platters. SyQuest drives were mainstays of the creative digital markets – I saw them almost anywhere I could find a Mac doing graphic design, desktop publishing, music, or video work.

SyQuest would be a footnote by the late 90s as Zip disks, recordable CDs and other storage media overtook them. Speaking of Zip disks…

The Click of Death

Zip Drive

The 1990s were a transitional period for personal computing (well, when isn’t it, really?). Information density was increasing rapidly. We were still years away from USB thumb drives and ubiquitous high-speed Wi-Fi, so “sneakernet” – physically transporting information from one computer to another – was still the preferred way to get big projects back and forth. Floppy disks were too small, hard disks weren’t portable, and rewritable CDs were expensive.

Iomega came along with the Zip Drive, a removable storage system that used disks shaped like heavier-duty floppies, each capable of storing up to 100 MB. A high-density floppy could store about 1.4 MB, so a Zip disk offered roughly 70 times as much portable storage. Zip disks quickly became popular, but Iomega eventually redesigned the drives to lower manufacturing costs. The redesign came with a price: the drives failed more frequently and could damage the disk in the process.

The phenomenon became known as the Click of Death: The sound the actuator (the part with the read/write head) would make as it reset after hitting a damaged sector on the disk. Iomega would eventually settle a class-action lawsuit over the issue, but consumers were already moving away from the format.

Iomega developed a successor to the Zip drive: The Jaz drive. When it first came out, it could store 1 GB on a removable cartridge. Inside the cartridge was a spinning hard disk mechanism; it wasn’t unlike the SyQuest drives that had been popular a few years before, but in a smaller size you could easily fit into a jacket pocket. Unfortunately, the Jaz drive developed reliability problems of its own – disks would get jammed in the drives, drives overheated, and some had vibration problems.

Recordable CDs and DVDs

Apple SuperDrive

As a storage medium, Compact Discs had been around since the 1980s, mainly popular as a music listening format. CD burners existed almost from the beginning, but they were ridiculously huge and expensive: the size of a washing machine, and tens of thousands of dollars. By the late 1990s the technology had improved, prices had come down, and recordable CD burners – CD-Rs – became commonplace.

With our ever-increasing need for more storage, we moved on to DVD-R and DVD-RW systems within a few years, upping the total you could store per disc to 4.7 GB (eventually up to about 8.5 GB per disc once dual-layer media and burners were introduced).

Blu-Ray Disc offers even greater storage capacity and is popular for its use in the home entertainment market, so some PCs have added recordable Blu-Ray drives. Blu-ray sports capacities from 25 to 128 GB per disc depending on format. Increasingly, even optical drives have become optional accessories as we’ve slimmed down our laptop computers to improve portability.

Magneto-Optical

Magneto-optical disk

Another optical format, Magneto-Optical (MO), was used on some computer systems in the 80s and 90s. It would also find its way into consumer products. The cartridges could store 650 MB. Initial systems were only able to write once to a disc, but later ones were rewriteable.

NeXT, the other computer maker founded by Steve Jobs besides Apple, made the earliest desktop system to feature an MO drive as standard equipment. Magneto-optical drives were available in 5 1/4-inch and 3 1/2-inch physical sizes, with capacities up to 9 GB per disc. The most popular consumer incarnation of magneto-optical is Sony’s MiniDisc.

Removeable Storage Moves Beyond Computers

SD Cards

SD Cards

The most recent removable media format to see widespread adoption on personal computers is the Secure Digital (SD) card. SD cards have become the industry standard for many smartphones, still cameras, and video cameras. They can also serve up data securely, thanks to password protection, the smartSD protocol, and the Near Field Communication (NFC) support available in some variations.

With no moving parts and non-volatile flash memory inside, SD cards are reliable, quiet and relatively fast methods of transporting and archiving data. What’s more, they come in different physical sizes to suit different device applications – everything from postage stamp-sized cards found in digital cameras to fingernail-sized micro cards found in phones.

Even compared to 5 1/4-inch media like Blu-ray Discs, SD card capacities are remarkable. 128 GB and 256 GB cards are commonplace now. What’s more, the SDXC spec maxes out at 2 TB and allows transfer speeds fast enough to support 8K video, so there’s headroom for both capacity and performance.

The More Things Change

As computer hardware continues to improve and as we continue to demand higher performance and greater portability and convenience, portable media will change. But as we’ve found ourselves with ubiquitous, high-speed Internet connectivity, the very need for removable local storage has diminished. Now instead of archiving data on an external cartridge, disc or card, we can just upload it to the cloud and access it anywhere.

That doesn’t obviate the need for a good backup strategy, of course. It’s vital to keep your important files safe with a local archive or backup. For that, removable media like SD cards and rewritable DVDs and even external hard drives can continue to fill an important role. Remember to store your info offsite too, preferably with a continuous, secure and reliable backup method like Backblaze Cloud Backup: Unlimited, unthrottled and easy to use.

The post A History of Removable Computer Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Friday Squid Blogging: Squid Can Edit Their Own RNA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/friday_squid_bl_573.html

This is just plain weird:

Rosenthal, a neurobiologist at the Marine Biological Laboratory, was a grad student studying a specific protein in squid when he got an inkling that some cephalopods might be different. Every time he analyzed that protein’s RNA sequence, it came out slightly different. He realized the RNA was occasionally substituting I’s for A’s, and wondered if squid might apply RNA editing to other proteins. Rosenthal, a grad student at the time, joined Tel Aviv University bioinformaticists Noa Liscovitch-Brauer and Eli Eisenberg to find out.

In results published today, they report that the family of intelligent mollusks that includes squid, octopuses, and cuttlefish features thousands of RNA editing sites in its genes. Where the genetic material of humans, insects, and other multi-celled organisms reads like a book, the squid genome reads more like a Mad Lib.

So why do these creatures engage in RNA editing when most others largely abandoned it? The answer seems to lie in some crazy double-stranded cloverleaves that form alongside editing sites in the RNA. That information is like a tag for RNA editing. When the scientists studied octopuses, squid, and cuttlefish, they found that these species had retained those vast swaths of genetic information at the expense of making the small changes that facilitate evolution. “Editing is important enough that they’re forgoing standard evolution,” Rosenthal says.

He hypothesizes that the development of a complex brain was worth that price. The researchers found many of the edited proteins in brain tissue, creating the elaborate dendrites and axons of the neurons and tuning the shape of the electrical signals that neurons pass. Perhaps RNA editing, adopted as a means of creating a more sophisticated brain, allowed these species to use tools, camouflage themselves, and communicate.

Yet more evidence that these bizarre creatures are actually aliens.

Three more articles. Academic paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.