Metal Spheres Swarm Together to Create Freeform Modular Robots

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/automaton/robotics/robotics-hardware/freebots-spheres-swarm-robots

Swarms of modular, self-reconfigurable robots have a lot going for them, at least in theory—they’re resilient and easy to scale, since big robots can be made on demand from lots of little robots. One of the trickiest bits about modular robots is figuring out a simple and reliable way of getting them to connect to each other, without having to rely on some kind of dedicated connectivity system.

This week at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), a research team at the Chinese University of Hong Kong, Shenzhen, led by Tin Lun Lam, is presenting a new kind of modular robot that solves this problem by putting little robotic vehicles inside iron spheres that can stick together wherever you need them to.

New – Application Load Balancer Support for End-to-End HTTP/2 and gRPC

Post Syndicated from Danilo Poccia original https://aws.amazon.com/blogs/aws/new-application-load-balancer-support-for-end-to-end-http-2-and-grpc/

Thanks to its efficiency and support for numerous programming languages, gRPC is a popular choice for microservice integrations and client-server communications. gRPC is a high-performance remote procedure call (RPC) framework that uses HTTP/2 for transport and Protocol Buffers to describe the interface. To make it easier to use gRPC with your applications, Application Load Balancer (ALB) […]

Are Electronic Media Any Good at Getting Out the Vote?

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/at-work/education/are-electronic-media-any-good-at-getting-out-the-vote

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

For some years and still today, there’s been a quiet but profound schism among political strategists. There are those who favor modern methods and modern media—mass mailings, robocalling, television advertising, and, increasingly, social-media advertising. On the other hand are those, including my guest today, who not only still see a value in traditional person-to-person messaging, but see it as, frequently, the better bang for the campaign buck.

Just last week [this was recorded Oct 5, 2020—Ed.] the attorney general of Michigan—a state that has been a battleground not just for electoral delegates but for this methodological dispute—announced that two political operatives were charged with felonies in connection with robocalls that made a number of false claims about the risks of voting by mail, in an apparent attempt to discourage residents of Detroit from voting by mail. And last week as well, the Biden campaign announced a complete turnaround on the question of door-to-door canvassing, perhaps the gold standard of person-to-person political campaigning. Are they perhaps afraid of Democratic standard-bearers making the same mistake twice?

In the endless post-mortem of the 2016 Presidential election, an article in Politico argued that the Clinton campaign was too data-driven and model-driven, and refused local requests, especially in Michigan, for boots-on-the-ground support. It quoted a longtime political hand in Michigan as describing quote “months of failed attempts to get attention to the collapse she was watching unfold in slow-motion among women and African-American millennials.”

I confess I saw something of that phenomenon on a recent Saturday. I’m living in Pittsburgh these days, and in the morning, I worked a Pennsylvania-based phone bank for my preferred political party. One of my first calls was to someone in the Philadelphia area, who told me he had already made his absentee ballot request and asked, while he had me on the phone, when his ballot would come. “There used to be someone around here I forget what you call her but someone I could ask stuff of.” That was strike one.

In another call, to a man in the Erie area, the conversation turned to yard signs. He said he would like to put one out but he had no idea where to get it. Strike two. In the late afternoon, two of us went to a neighborhood near us to put out door-hangers, and if we saw someone face-to-face we would ask if they wanted a yard sign. One fellow said he would. “We were supposed to get one,” he told us. When he saw we had a stack of them in our car, he sheepishly added, “We were supposed to get two in fact, one for a friend.” That was my third indication in one day that there was a lack of political party involvement at the very local level—in three different parts of what could well be the most critical swing state of the 2020 Presidential election.

When I strung these three moments together over a beer, my partner immediately thought of a book she owned, Get Out the Vote, now in its fourth edition. Its authors, Donald Green and Alan Gerber, argue that political consultants and campaign managers have underappreciated boots-on-the-ground canvassing in person and on the phone, in favor of less personal, more easily scaled methods—radio and TV advertising, robocalling, mass mailings, and the like.

Of particular interest, they base their case on real data from experimental research. The first edition of their book described a few dozen such experiments; their new edition, they say, summarizes hundreds.

One of those authors is Donald Green, a political scientist at Columbia University focusing on such issues as voting behavior and partisanship, and most importantly, methodologies for studying politics and elections. His teaching career started at Yale University, where he directed its Institution for Social and Policy Studies. He joins us via Skype.

Steven Cherry Don, welcome to the podcast.

Donald Green Thank you very much for having me.

Steven Cherry Modern campaigns can employ an army of advisers, consultants, direct mail specialists, phone bank vendors, and on and on. You say that much of the advice candidates get from these professionals comes from war stories and not evidence. Robocalls seem to be one example of that. The study of a 2006 Texas primary found that 65,000 calls for one candidate increased his vote share by about two votes.

Donald Green Yes, the robocalls have an almost perfect record of never working in randomized trials. These are trials in which we randomly assigned some voters to get a robocall and others not and allow the campaign to give it its best shot with the best possible robocall. And then at the end of the election, we look at voter turnout records to see who voted. And in that particular case, the results were rather dismal. But not just in that case. I think that there have been more than 10 such large-scale experiments, and it’s hard to think of an instance in which they’ve performed well.

Steven Cherry The two robocallers in Michigan allegedly made 12,000 calls into Detroit, which is majority black—85,000 calls in total there and to similar areas in other cities. According to a report in the Associated Press, the calls falsely claimed that voting by mail would result in personal information going into databases that would be used by police to resolve old warrants, credit card companies to collect debts, and federal officials to track mandatory vaccines. It quoted the calls as saying, “Don’t be finessed into giving your private information to The Man. Beware of vote-by-mail.” You’ve studied plenty of affirmative campaigns, that is, attempts to increase voter participation. Do you have any thoughts about this negative robocalling?

Donald Green Well, that certainly seems like a clear case of attempted voter suppression—to try to scare people away from voting. I don’t think I’ve ever seen anything like this. I haven’t heard the call. I’d be curious to know something about the voiceover that was used. But let’s suppose that it seemed credible. You know, the question is whether people take it seriously enough or whether they questioned the content, maybe talking to others in ways that undercut its effectiveness. But if robocalls seldom work, it’s probably because people just don’t notice them. Not sure whether this one would potentially work because it would get somebody to notice at any rate. We don’t know how effective it would be. I suspect not terribly effective, but probably effective enough to be concerning.

Steven Cherry Yeah, it was noticed enough that complaints about it filtered up to the state attorney general, but that doesn’t give us any quantitative data.

For decades, campaigns have spent a lot of their money on television advertising. And it can influence strategy. To take just one example, there’s a debate among Democrats about whether their candidate should invest in Texas because there are so many big media markets. It’s a very expensive state to contest. What does the experimental data tell us about television?

Donald Green Experiments on television are relatively rare. The one that I’m most familiar with is one that I actually helped conduct with my three coauthors back when we were studying the Texans for Rick Perry campaign in 2006. We randomly assigned 18 of the 20 media markets in Texas to receive varying amounts of TV advertising, rolled out at various times. And we conducted daily tracking polls to see the extent to which public opinion moved as ads rolled out in various media markets. And what we found was there was some effect of Rick Perry’s advertising campaign, but it subsided very quickly. Only a few days passed before it was essentially gone without a trace, which means that one can burn quite a lot of money for a relatively evanescent effect in terms of the campaign. I really don’t think that there’s much evidence that the very, very large amounts of money that are spent on television in the context of a presidential campaign have any lasting effect. And so it’s really an open question as to whether, say, the $300 million that the Clinton campaign spent in 2016 would have been better spent, or at least as well spent, on the ground.

Steven Cherry In contrast to war stories, you and your colleagues conduct true randomized experiments. Maybe you could say a little bit more about how hard that is to do in the middle of an election.

Donald Green Yes, it’s a juggling act for sure. The idea is, if we wanted to study, for example, the effects of direct mail on voter turnout, one would randomly assign large lists of registered voters, some to get the mail, some to be left alone. And then we’d use the fact that voting is a public record in the United States—and a few other countries as well—to gauge voter turnout after the election is over. This is often unsatisfactory for campaigns. They want to know the answer ahead of time. But we know of no good way of answering the question before people actually cast their ballots. And so this is something that’s been done in increasing numbers since 1998. And now hundreds of those trials have been done on everything ranging from radio, robocalls, TV, direct mail, phone calls, social media, etc., etc.

Steven Cherry One thing you would expect campaign professionals to have data on is cost-effectiveness, but apparently they don’t. But you do. You’ve found, for example, that you can generate the same 200 votes with a quarter of a million robocalls, 38,000 mailers, or 2,500 door-to-door conversations.

Donald Green Yes, we try to not only gauge the effects of the intervention through randomized trials but also try to figure out what that amounts to in terms of dollars per vote. And these kinds of calculations are always going to be context-dependent because some campaigns are able to rely on inexpensive people power, to inspire volunteers in vast numbers. And so in some sense, the costs that we estimate could be greatly overstated for the kinds of boots-on-the-ground canvassing that are typical of presidential elections in battleground states. Nevertheless, I think that it is interesting to note that even with relatively cautious calculations, to the effect that people are getting $16 an hour for canvassing, canvassing still acquits itself rather well in terms of its comparisons to other campaign tactics.

Steven Cherry Now that’s just for turnout, not votes for one candidate instead of another; a nonpartisan good-government group might be interested in turnout for its own sake, but a campaign wants a higher turnout of its own voters. How does it make that leap?

Donald Green Well, typically what they do is rely on voter files—and augmented voter files, which is, say, voter files that had other information about people appended to them—in order to make an educated guess about which people on the voter file are likely to be supportive of their own campaign. So Biden supporters have been micro-targeted and so have Trump supporters and so on and so forth, based on their history of donating to campaigns or signing petitions or showing up in party primaries. And that makes the job of the campaign much easier because instead of trying to persuade people or win them over from the other side, they’re trying to bring a bigger army to the battlefield by building up enthusiasm and mobilizing their own core supporters. So the ideal for that kind of campaign is a person who is very strongly aligned with the candidate that is sponsoring the campaign but has a low propensity to vote. And so that kind of person is really perfect for a mobilization campaign.

Steven Cherry So that could also be done demographically. I mean, there are zip codes in Detroit that are 80 percent black.

Donald Green Yes, there are lots of ways of doing this based on aggregates. Now, you often don’t have to rely on aggregates, because you typically have information about each person. But if you were to do it, say, precinct by precinct, you could use the percentage of African-American residents as a proxy for the left, or demographics associated with Trump voting as proxies for the right. So it’s possible to do it, but it’s probably not state of the art.

Steven Cherry You mentioned door-to-door canvassing; it increases turnout but—perhaps counterintuitively—apparently it doesn’t matter much whether it’s a close contest or a likely blowout, and it doesn’t seem to matter what the canvasser’s message is.

Donald Green This is one of the most interesting things, actually about studying canvassing and other kinds of tactics experimentally. It appears that some of the most important communication at the door is nonverbal. You know, you show up at my door, and I wonder what you’re up to—are you trying to sell me something, trying to, you know, make your way in here? I figure, oh, actually you’re just having a pleasant conversation. You’re a person like me. You’re taking your time out to encourage me to vote. Well, that sounds okay. And I think that that message is probably the thing that sticks with people, perhaps more than the details of what you’re trying to say to me about the campaign or the particularities about why I should vote—should I vote because it’s my civic duty or should I vote because I need to stand up in solidarity with my community? Those kinds of nuances don’t seem to matter as much as we might suppose.

Steven Cherry So it seems reminiscent of what the sociologists would call a Hawthorne effect.

Donald Green Some of it is reminiscent of the Hawthorne effect. The Hawthorne effect is basically that we increase our productivity when we’re being watched. And so there’s some sense in which being monitored, being encouraged by another person, makes us feel as though we’ve got to give a bit more effort. So there’s a bit of that. But I think partly what’s going on is that voting is a social activity. And just as you’re more likely to go to a party if you were invited by a person as opposed to by e-mail, so too you’re more likely to show up to vote if somebody makes an authentic, heartfelt appeal to you and encourages you to vote in person, or through something that’s very much like in person: some gathering or some friend-to-friend communication, as opposed to something impersonal, like getting a postcard.

Steven Cherry So without looking into the details of the Biden campaign flip-flop on door-to-door canvassing, your hunch would be that they’re making the right move?

Donald Green Yes, I think so. I mean, putting aside the other kinds of normative concerns about whether people are at risk if they get up and go out to canvass or they’re putting others at risk … In terms of the raw politics of winning votes, it’s a good idea, in part because in 2018 they were able to field an enormous army of very committed activists in many of the closely contested congressional elections, with apparently very good results. And the tactic itself is so well tested that if they can do it with appropriate PPE and precautions, they could be quite effective.

Steven Cherry In your research you found, by contrast, that door-hangers and yard signs—the way I spent that Saturday afternoon I described—have little or maybe even no utility.

Donald Green Well, yard signs might have some utility to candidates, especially down-ballot candidates who are trying to increase their vote share. It doesn’t seem to have much of an effect on voter turnout. Maybe that’s because the election is already in full swing and everybody knows that there’s an election coming up—the yard sign isn’t going to convey any new information. But I do think the door hangers have some residual effect. They’re probably about as effective as a leaflet or a mailer, which is not very effective, but maybe a smidge better than zero.

Steven Cherry You’re more positive on phone banks, albeit with some qualifiers.

Donald Green Yes, I think that phone banking, especially authentic volunteer-staffed phone banking, can be rather effective. You know, I think that if you have an unhurried conversation with someone who is basically like-minded (they’re presumably targeted because they’re someone who shares more or less your political outlook), and you bring them around to explain to them why it’s an important and historic election, giving them any guidance you can about when and how to vote, you can have an effect. It’s not an enormous effect. It’s something on the order of, say, three percentage points, or about one additional vote for every 30 calls you complete. But it’s a substantial effect.

And if you are able to extract a commitment to vote from that person and you were to be so bold as to call them back on the day before the election to make sure that they’re making good on their pledge, then you can have an even bigger effect, in fact, a very large effect. So I do think it can be effective. I also think that perfunctory, hurried calls by telemarketing operations are rather ineffective for a number of reasons, but especially the lack of authenticity.

Steven Cherry Let’s turn to social media, particularly Facebook. You described one rather pointless Facebook campaign that ended up costing $474 per vote. But your book also describes a very successful experiment in friend-to-friend communication on Facebook.

Donald Green That’s right. We have a number of randomized trials suggesting that encouragements to vote via Facebook ads or other kinds of mass-produced Facebook media seem to be relatively limited in their effects. Perhaps the biggest, most intensive Facebook advertising campaign was a set of banner ads that ran all day long—I think it was the 2010 election—and had precisely no effect, even though it was tested among 61 million people.

More effective on Facebook were ads that showed you whether your Facebook friends had claimed to vote. Now, that didn’t produce a huge harvest of votes, but it increased turnout by about a third of a percentage point. So better than nothing. The big effects you see on Facebook and elsewhere are where people are, in a personalized way, announcing the importance of the upcoming election and urging their Facebook friends—their own social networks—to vote.

And that seems to be rather effective and indeed is part of a larger literature that’s now coming to light, suggesting that even text messaging, though not a particularly personal form of communication, is quite effective when friends are texting other friends about the importance of registering and voting. Surprisingly effective, and that, I think, opens up the door to a wide array of different theories about what can be done to increase voter turnout. It seems as though friend-to-friend communication or neighbor-to-neighbor communication or communication among people who are coworkers or co-congregants … that could be the key to raising turnout—not by just one or two percentage points, but more like eight to 10.

Steven Cherry On this continuum of personal versus impersonal, Facebook groups—which are a new phenomenon—seem to lie somewhere in between. Some people are calling them “toxic echo chambers,” but they would seem to be a potential godsend for political engagement.

Donald Green I would think so, as long as the communication within the groups is authentic. If it’s automated, then probably not so much. But to the extent that the people in these groups have gotten to know each other, or knew each other before they came into the group, then I think communication among them could be quite compelling.

Steven Cherry Yes. Although, of course, that person that you think you’re getting to know might be some employee in St. Petersburg, Russia, of the Internet Research Agency. Snapchat has been getting some attention these days in terms of political advertising. They’ve tried to be more transparent than Facebook, and they do some fact-checking on political advertising. Could it be a better platform for political ads or engagement?

Donald Green I realize I just don’t know very much about the nuances of what they’re doing. I’m not sure that I have enough information to say.

Steven Cherry Getting back to more analog activities, your book discusses events like rallies and processions, but I didn’t see anything about smaller coffee-klatch-style events where, say, you invite all your neighbors and friends to hear a local candidate speak. That would seem to combine the effectiveness of door-to-door canvassing with the Facebook friend-to-friend campaign. But maybe it’s hard to study experimentally.

Donald Green That’s right. I would be very, very optimistic about the effects of those kinds of small gatherings. And it’s not that we are skeptical about their effects. It’s just, as you say, difficult to orchestrate a lot of experiments where people are basically opening their homes to friends. We would need to rope in more volunteers to bring in their friends experimentally.

Steven Cherry The business model for some campaign professionals is to get paid relative to the amount of money that gets spent. Does that disincentivize the kind of person-to-person campaigning you generally favor?

Donald Green Yes, I would say that one of the biggest limiting factors on person-to-person campaigning is that it’s very difficult for campaign consultants to make serious money off of it. And that goes double for the kind of serious money that is poured into campaigns in the final weeks. Huge amounts of money tend to be donated within the last three weeks of an election. And by that point, it’s very difficult to build the infrastructure necessary for large-scale canvassing or really any kind of retail-type politics. For that reason, the last-minute money tends to be dumped into digital ads and in television advertising—and in lots and lots of robocalls.

Steven Cherry Don, as we record this, we’re less than a week after the first 2020 presidential debate, and other events in the political news have maybe superseded the debate already. But I’m wondering if you have any thoughts about it in terms of getting out the vote. Many people, I have to say, myself included, found the debate disappointing. Do you think it’s possible for a debate to depress voter participation?

Donald Green I think it’s possible. I think it’s rather unlikely. To the extent that political science researchers have argued that negative campaigning depresses turnout, it tends to depress turnout among independent voters, not so much among committed partisans, who watched the debate and realize more than ever that their opponent is aligned with the forces of evil. For independent voters, they might say, “a plague on both your houses, I’m not going to participate.” But I think that this particular election is one that is so intrinsically interesting that the usual way that independents feel about partisan competition probably doesn’t apply here.

Steven Cherry On a lighter note, an upcoming podcast episode for me will be about video game culture. And it’ll be with a professor of communications who writes her own video games for her classes. Your hobby turns out to be designing board games. Are they oriented toward political science? Is there any overlap of these passions?

Donald Green You know, it’s strange that they really don’t overlap at all. My interest in board games goes back to when I was a child. I’ve always been passionate about abstract board games like chess or Go. And it was an accident that I started to design them myself. I did it, actually, when my now fully grown children were kids and we were playing with construction toys. And I began to see possibilities for games in those construction toys. And one thing led to another. And they were actually deployed to the world and marketed. And now I think they’re kind of going the way of the dinosaur. But there are still a few dinosaurs like me who enjoy playing on an actual physical board.

Steven Cherry My girlfriend and I still play Rack-O. So maybe this is not a completely lost cause.

Well Don, I think in the US, everyone’s thoughts will never be far from the election until the counting stops. Opinions and loyalties differ. But the one thing I think we can all agree on is that participation is essential for the health of the body politic. On behalf of all voters, let me thank you for all that your book has done toward that end, and for myself and my listeners, thank you for joining me today.

Donald Green I very much appreciate it. Thanks.

Steven Cherry We’ve been speaking with Donald Green, a political scientist and co-author of Get Out the Vote, which takes a data-driven look at maximizing efforts to get out the vote.

This interview was recorded October 5th, 2020. Our thanks to Mike at Gotham Podcast Studio for audio engineering. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Programmable Filament Gives Even Simple 3D Printers Multi-Material Capabilities

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/tech-talk/consumer-electronics/gadgets/programmable-filament-gives-even-simple-3d-printers-multimaterial-capabilities

On the additive manufacturing spectrum, the majority of 3D printers are relatively simple, providing hobbyists with a way of conjuring arbitrary 3D objects out of long spools of polymer filament. If you want to make objects out of more than just that kind of filament, things start to get much more complicated, because you need a way of combining multiple different materials onto the print bed. There are a bunch of ways of doing this, but none of them is cheap, so most people without access to a corporate or research budget are stuck 3D printing with one kind of filament at a time.

At the ACM UIST Conference last week, researchers presented a paper that offers a way of giving even the simplest 3D printer the ability to print in as many materials as you need (or have the patience for) through a sort of printception—by first printing a filament out of different materials and then using that filament to print the multi-material object that you want.

Squabbling Over the Waters of the River Nile

Post Syndicated from Vaclav Smil original https://spectrum.ieee.org/green-tech/conservation/squabbling-over-the-waters-of-the-river-nile

The completion of the Grand Ethiopian Renaissance Dam on the Blue Nile has made Egypt fear for its very existence. To understand why requires the appreciation of basic water-flow and water-use numbers in the region.

The Blue Nile flows from Ethiopia’s Lake Tana, carrying 48.3 cubic kilometers of water a year. In Khartoum, Sudan, it merges with the White Nile, which adds 26 km³ a year. The Atbara adds another 11.1 km³. The rivers coming out of Ethiopia (the Blue Nile and the Atbara) together provide about 70 percent of the Nile’s flow into Egypt.

The Anglo-Egyptian treaty of 1929 secured for Egypt the rights to 48 km³ of water; the 1959 treaty update raised the amount to 55.5 km³, with Sudan getting 18.5 km³. After accounting for the intervening water losses in the annually flooded Sudd swamps of South Sudan, this allocation left all the other states along the Nile tributaries with no claims to the water at all.

Egypt still upholds this allocation, but in 2009 Ethiopia began a de facto dismantling of the arrangement with the completion of a dam on the Tekezé River, a tributary to the Atbara. At 188 meters high, it is the tallest African arch dam (shaped to resist water pressure), although it has an installed hydropower capacity of just 300 megawatts and a relatively small reservoir, holding 9 km³. The next Ethiopian action, the Tana Beles hydro project (460 MW), began to generate electricity in 2010 and has no storage. Instead, it gets its water straight from Lake Tana and discharges it into the Beles River, a tributary of the Blue Nile. By themselves, these two projects would cause little worry to Egypt, were its dependence on the Nile’s water not becoming precarious.

In 1959 Egypt’s population was about 26 million; by 2020 it had nearly quadrupled to just over 100 million, and it is now increasing by a little under 2 million a year. This growth has reduced the country’s per capita annual supply of fresh water to only 550 cubic meters, less than half the U.S. rate. Should the population reach its projected size of 160 million in 2050, this rate might fall below 400 cubic meters.
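
Those per capita figures follow directly from the 55.5-km³ allocation (a back-of-the-envelope check, assuming the Nile allocation dominates Egypt’s renewable freshwater supply):

    \[
    \frac{55.5 \times 10^{9}\ \mathrm{m^3}}{100 \times 10^{6}\ \text{people}} \approx 555\ \mathrm{m^3}\ \text{per person},
    \qquad
    \frac{55.5 \times 10^{9}\ \mathrm{m^3}}{160 \times 10^{6}\ \text{people}} \approx 347\ \mathrm{m^3}\ \text{per person}.
    \]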

The challenge is greatly increased by the new Renaissance dam on the Blue Nile, near Ethiopia’s border with Sudan. The dam, completed in June 2020, has an installed hydropower capacity of 6.45 gigawatts and a reservoir designed to hold 74 km³. The rainy season of 2020 has already put in 5 km³ of water.

Filling the rest of the reservoir in five years would cut the annual flow out of Ethiopia by 30 percent and thus the flow into Egypt by just over 20 percent (that is, 30 percent of 70 percent). This would deprive Egypt of one-fifth of its water, and even after the reservoir has been filled, retention of flows during dry years would continue to limit the downstream supply.

What Egypt sees as a mortal challenge, Ethiopia considers to be its inalienable right: That country numbers 115 million people, growing by 2.6 million a year, and it has a per capita gross domestic product less than 20 percent of the Egyptian average. Should Ethiopia forever remain hopelessly impoverished to support a better-off country?

Partial solutions are possible, but none is easy or easily affordable. Egypt’s own Aswan High Dam (2.1 GW), completed in 1970, impounds 132 km³ but, situated in one of the world’s hottest regions, it loses annually up to 15 km³ to evaporation. Storing this water in a less extreme environment (the best location would be in South Sudan) would reduce the loss but deprive Egypt of 2.1 GW of installed capacity and of water control for its deltaic irrigation. Channeling the White Nile through South Sudan, around the Sudd swamps, would cut the region’s huge evaporation losses, but ever since gaining its independence in 2011, that nation has experienced endless civil war, tribal fighting, and chronic political instability.

Extracting and joining data from multiple data sources with Athena Federated Query

Post Syndicated from Saurabh Bhutyani original https://aws.amazon.com/blogs/big-data/extracting-and-joining-data-from-multiple-data-sources-with-athena-federated-query/

With modern-day architectures, it’s common to have data sitting in various data sources, and we need the right tools and technologies across those sources to create meaningful insights from the stored data. Amazon Athena is an interactive query service that makes it easy to analyze unstructured, semi-structured, and structured data stored in Amazon Simple Storage Service (Amazon S3) using standard SQL. With the federated query functionality in Athena, you can now run SQL queries across data stored in relational, non-relational, object, and custom data sources and store the results back in Amazon S3 for further analysis.

The goals for this series of posts are to discuss how we can configure different connectors to run federated queries with complex joins across different data sources, how to configure a user-defined function for redacting sensitive information when running Athena queries, and how we can use machine learning (ML) inference to detect anomalies in datasets, to help developers, big data architects, data engineers, and business analysts in their daily operational routines.

Athena Federated Query

Athena uses data source connectors that run on AWS Lambda to run federated queries. A data source connector is a piece of code that translates between your target data source and Athena. You can think of a connector as an extension of Athena’s query engine. Prebuilt Athena data source connectors exist for data sources like Amazon CloudWatch Logs, Amazon DynamoDB, Amazon DocumentDB, Amazon Elasticsearch Service (Amazon ES), Amazon ElastiCache for Redis, and JDBC-compliant relational data sources such as MySQL, PostgreSQL, and Amazon Redshift, under the Apache 2.0 license. You can also use the Athena Query Federation SDK to write custom connectors. After you deploy a data source connector, it is associated with a catalog name that you can specify in your SQL queries. You can combine SQL statements from multiple catalogs and span multiple data sources with a single query.

When a query is submitted against a data source, Athena invokes the corresponding connector to identify parts of the tables that need to be read, manages parallelism, and pushes down filter predicates. Based on the user submitting the query, connectors can provide or restrict access to specific data elements. Connectors use Apache Arrow as the format for returning data requested in a query, which enables connectors to be implemented in languages such as C, C++, Java, Python, and Rust. Because connectors run in Lambda, you can use them to access data from any data source, in the cloud or on premises, that is accessible from Lambda.
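
To make the catalog naming concrete, here is a hedged sketch of what a cross-catalog query string can look like once the connectors deployed later in this post (lambda:mysql and lambda:dynamo) are in place. The schema and table names are illustrative assumptions, not the workshop’s actual ones, and the query is held in a Kotlin raw string because a later sketch submits queries programmatically:

    // Hypothetical federated join across two catalogs; schema and table names are illustrative only.
    val federatedQuery = """
        SELECT s.s_name,
               ps.ps_partkey,
               ps.ps_availqty
        FROM "lambda:mysql".sales.supplier AS s        -- Aurora MySQL via the JDBC connector
        JOIN "lambda:dynamo".default.partsupp AS ps    -- DynamoDB via the DynamoDB connector
          ON s.s_suppkey = ps.ps_suppkey
        WHERE ps.ps_availqty < 100
    """.trimIndent()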

The first post of this series discusses how to configure Athena Federated Query connectors and use them to run federated queries for data residing in HBase on Amazon EMR, Amazon Aurora MySQL, DynamoDB, and ElastiCache for Redis databases.

Test data

To demonstrate Athena federation capabilities, we use the TPCH sample dataset. TPCH is a decision support benchmark and has broad industry-wide relevance. This benchmark illustrates decision support systems that examine large volumes of data, run queries with a high degree of complexity, and give answers to critical business questions. For our use case, imagine a hypothetical ecommerce company with the following architecture:

  • Lineitem processing records are stored in HBase on Amazon EMR to meet the requirement for a write-optimized data store with a high transaction rate and long-term durability
  • ElastiCache for Redis stores the Nations and ActiveOrders tables so that the processing engine can get fast access to them
  • An Aurora cluster with the MySQL engine is used for Orders, Customer, and Supplier account data such as email addresses and shipping addresses
  • DynamoDB hosts the Part and Partsupp data, because it offers high flexibility and high performance

The following diagram shows a schematic view of the TPCH tables and their associated data stores.

Building a test environment using AWS CloudFormation

Before following along with this post, you need to create the required AWS resources in your account. To do this, we have provided you with an AWS CloudFormation template to create a stack that contains the required resources: the sample TPCH database on Amazon Relational Database Service (Amazon RDS), HBase on Amazon EMR, Amazon ElastiCache for Redis, and DynamoDB.

The template also creates the AWS Glue database and tables, S3 bucket, Amazon S3 VPC endpoint, AWS Glue VPC endpoint, Athena named queries, AWS Cloud9 IDE, an Amazon SageMaker notebook instance, and other AWS Identity and Access Management (IAM) resources that we use to implement the federated query, user-defined functions (UDFs), and ML inference functions.

This template is designed only to show how you can use Athena Federated Query, UDFs, and ML inference. This setup isn’t intended for production use without modification. Additionally, the template is created for use in the us-east-1 Region, and doesn’t work in other Regions.

Before launching the stack, you must have the following prerequisites:

  • An AWS account that provides access to AWS services
  • An IAM user with an access key and secret key to configure the AWS Command Line Interface (AWS CLI), and permissions to create an IAM role, IAM policies, and stacks in AWS CloudFormation

To create your resources, complete the following steps:

  1. Choose Launch Stack.
  2. Select I acknowledge that this template may create IAM resources.

This template creates resources that incur costs while they remain in use. Follow the cleanup steps at the end of this post to delete and clean up the resources to avoid any unnecessary charges.

  3. When the CloudFormation template is complete, record the outputs listed on the Outputs tab on the AWS CloudFormation console.

The CloudFormation stack takes approximately 20–30 minutes to complete. Check the AWS CloudFormation console and wait for the status CREATE_COMPLETE.

When stack creation is complete, your AWS account has all the required resources to implement this solution.

  4. On the Outputs tab of the Athena-Federation-Workshop stack, capture the following:
    1. S3Bucket
    2. Subnets
    3. WorkshopSecurityGroup
    4. EMRSecurityGroup
    5. HbaseConnectionString
    6. RDSConnectionString

You need all this information when setting up connectors.

  5. When the stacks are complete, check the status of the Amazon EMR steps on the Amazon EMR console.

It can take up to 15 minutes for this step to complete.

Deploying connectors and connecting to data sources

Preparing to create federated queries is a two-part process: deploying a Lambda function data source connector, and connecting the Lambda function to a data source. In the first part, you give the Lambda function a name that you can later choose on the Athena console. In the second part, you give the connector a name that you can reference in your SQL queries.

We want to query different data sources, so in the following sections we set up Lambda connectors for HBase on Amazon EMR, Aurora MySQL, DynamoDB, and Redis before we start creating complex joins across data sources using Athena federated queries. The following diagram shows the architecture of our environment.

Installing the Athena JDBC connector for Aurora MySQL

The Athena JDBC connector supports the following databases:

  • MySQL
  • PostgreSQL
  • Amazon Redshift

To install the Athena JDBC connector for Aurora MySQL, complete the following steps:

  1. In your AWS account, search for serverless application repository.
  2. Choose Available applications.
  3. Make sure that Show apps that create custom IAM roles or resource policies is selected.
  4. Search for athena federation.
  5. Locate and choose AthenaJdbcConnector.
  6. Provide the following values:
    1. Application name – Leave it as the default name, AthenaJdbcConnector.
    2. SecretNamePrefix – Enter AthenaJdbcFederation.
    3. SpillBucket – Enter the S3Bucket value from the AWS CloudFormation outputs.
    4. DefaultConnectionString – Enter the RDSConnectionString value from the AWS CloudFormation outputs.
    5. DisableSpillEncryption – Leave it as the default value false.
    6. LambdaFunctionName – Enter mysql.
    7. LambdaMemory – Leave it as the default value 3008.
    8. LambdaTimeout – Leave it as the default value 900.
    9. SecurityGroupIds – Enter the WorkshopSecurityGroup value from the AWS CloudFormation outputs.
    10. SpillPrefix – Change the default value to athena-spill/jdbc.
    11. SubnetIds – Enter the Subnets value from the AWS CloudFormation outputs.
  7. Select I acknowledge that this app creates custom IAM roles.
  8. Choose Deploy.

This deploys the Athena JDBC connector for Aurora MySQL; you can refer to this Lambda function in your queries as lambda:mysql.

For more information about the Athena JDBC connector, see the GitHub repo.

Installing the Athena DynamoDB connector

To install the Athena DynamoDB connector, complete the following steps:

  1. In your AWS account, search for serverless application repository.
  2. Choose Available applications.
  3. Make sure that Show apps that create custom IAM roles or resource policies is selected.
  4. Search for athena federation.
  5. Locate and choose AthenaDynamoDBConnector.
  6. Provide the following values:
    1. Application name – Leave it as the default name, AthenaDynamoDBConnector.
    2. SpillBucket – Enter the S3Bucket value from the AWS CloudFormation outputs.
    3. AthenaCatalogName – Enter dynamo.
    4. DisableSpillEncryption – Leave it as the default value false.
    5. LambdaMemory – Leave it as the default value 3008.
    6. LambdaTimeout – Leave it as the default value 900.
    7. SpillPrefix – Enter athena-spill-dynamo.
  7. Select I acknowledge that this app creates custom IAM roles.
  8. Choose Deploy.

This deploys the Athena DynamoDB connector; you can refer to this Lambda function in your queries as lambda:dynamo.

For more information about the Athena DynamoDB connector, see the GitHub repo.

Installing the Athena HBase connector

To install the Athena HBase connector, complete the following steps:

  1. In your AWS account, search for serverless application repository.
  2. Choose Available applications.
  3. Make sure that Show apps that create custom IAM roles or resource policies is selected.
  4. Search for athena federation.
  5. Locate and choose AthenaHBaseConnector.
  6. Provide the following values:
    1. Application name – Leave it as the default name, AthenaHBaseConnector.
    2. SecretNamePrefix – Enter hbase-*.
    3. SpillBucket – Enter the S3Bucket value from the AWS CloudFormation outputs.
    4. AthenaCatalogName – Enter hbase.
    5. DisableSpillEncryption – Leave it as the default value false.
    6. DefaultConnectionString – Enter the HbaseConnectionString value from the AWS CloudFormation outputs.
    7. LambdaMemory – Leave it as the default value of 3008.
    8. LambdaTimeout – Leave it as the default value of 900.
    9. SecurityGroupIds – Enter the EMRSecurityGroup value from the AWS CloudFormation outputs.
    10. SpillPrefix – Enter athena-spill-hbase.
    11. SubnetIds – Enter the Subnets value from the AWS CloudFormation outputs.
  7. Select I acknowledge that this app creates custom IAM roles.
  8. Choose Deploy.

This deploys the Athena HBase connector; you can refer to this Lambda function in your queries as lambda:hbase.

For more information about the Athena HBase connector, see the GitHub repo.

Installing the Athena Redis connector

To install the Athena Redis connector, complete the following steps:

  1. In your AWS account, search for serverless application repository.
  2. Choose Available applications.
  3. Make sure that Show apps that create custom IAM roles or resource policies is selected.
  4. Search for athena federation.
  5. Locate and choose AthenaRedisConnector.
  6. Provide the following values:
    1. Application name – Leave it as the default name, AthenaRedisConnector.
    2. SecretNameOrPrefix – Enter redis-*.
    3. SpillBucket – Enter the S3Bucket value from the AWS CloudFormation outputs.
    4. AthenaCatalogName – Enter redis.
    5. DisableSpillEncryption – Leave it as the default value false.
    6. LambdaMemory – Leave it as the default value 3008.
    7. LambdaTimeout – Leave it as the default value 900.
    8. SecurityGroupIds – Enter the EMRSecurityGroup value from the AWS CloudFormation outputs.
    9. SpillPrefix – Enter athena-spill-redis.
    10. SubnetIds – Enter the Subnets value from the AWS CloudFormation outputs.
  7. Select I acknowledge that this app creates custom IAM roles.
  8. Choose Deploy.

This deploys the Athena Redis connector; you can refer to this Lambda function in your queries as lambda:redis.

For more information about the Athena Redis connector, see the GitHub repo.

Redis database and tables with the AWS Glue Data Catalog

Because Redis doesn’t have a schema of its own, the Redis connector can’t infer the columns or data type from Redis. The Redis connector needs an AWS Glue database and tables to be set up so it can associate the data to the schema. The CloudFormation template creates the necessary Redis database and tables in the Data Catalog. You can confirm this on the AWS Glue console.

Running federated queries

Now that the connectors are deployed, we can run Athena queries that use those connectors.

  1. On the Athena console, choose Get Started.
  2. Make sure you’re in the workgroup AmazonAthenaPreviewFunctionality. If not, choose Workgroups, select AmazonAthenaPreviewFunctionality, and choose Switch Workgroup.

On the Saved Queries tab, you can see a list of pre-populated queries to test.

The Sources saved query tests your Athena connector functionality for each data source, and you can make sure that you can extract data from each data source before running more complex queries involving different data sources.

  3. Highlight the first query up to the semicolon and choose Run query.

After successfully testing connections to each data source, you can proceed with running more complex queries, such as:

  • FetchActiveOrderInfo
  • ProfitBySupplierNationByYr
  • OrdersRevenueDateAndShipPrio
  • ShippedLineitemsPricingReport
  • SuppliersWhoKeptOrdersWaiting

If you see an error on the HBase query like the following, try rerunning the query; that should resolve the issue.

GENERIC_USER_ERROR: Encountered an exception[java.lang.RuntimeException] from your LambdaFunction[hbase] executed in context[retrieving meta-data] with message[org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0]

As an example of the advanced queries, the SuppliersWhoKeptOrdersWaiting query identifies suppliers whose product was part of a multi-supplier order (with a current status of F) and who didn’t ship the required parts on time. This query uses multiple data sources: Aurora MySQL and HBase on Amazon EMR. As shown in the following screenshot, the query extracts data from the supplier table on Aurora MySQL, the lineitem table on HBase, and the orders table on Aurora MySQL. The results are returned in 7.13 seconds.
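
If you prefer to run such a federated query programmatically rather than from the console, a minimal sketch along the following lines should work using the AWS SDK for Java v2 from Kotlin. The output location is a placeholder, and the query string is a simplified stand-in with assumed schema and table names, not the text of the actual saved query:

    import software.amazon.awssdk.regions.Region
    import software.amazon.awssdk.services.athena.AthenaClient
    import software.amazon.awssdk.services.athena.model.ResultConfiguration
    import software.amazon.awssdk.services.athena.model.StartQueryExecutionRequest

    fun main() {
        // The workshop runs in us-east-1 and uses the AmazonAthenaPreviewFunctionality workgroup.
        val athena = AthenaClient.builder().region(Region.US_EAST_1).build()

        // Simplified cross-source join: suppliers from Aurora MySQL, line items from HBase on Amazon EMR.
        val sql = """
            SELECT s.s_name, COUNT(*) AS late_lineitems
            FROM "lambda:mysql".sales.supplier AS s
            JOIN "lambda:hbase".default.lineitem AS l
              ON s.s_suppkey = l.l_suppkey
            WHERE l.l_receiptdate > l.l_commitdate
            GROUP BY s.s_name
            ORDER BY late_lineitems DESC
        """.trimIndent()

        val request = StartQueryExecutionRequest.builder()
            .queryString(sql)
            .workGroup("AmazonAthenaPreviewFunctionality")
            .resultConfiguration(
                ResultConfiguration.builder()
                    .outputLocation("s3://<your-athena-results-bucket>/federated/") // placeholder bucket
                    .build()
            )
            .build()

        val queryExecutionId = athena.startQueryExecution(request).queryExecutionId()
        println("Started query execution: $queryExecutionId")
    }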

Cleaning up

To clean up the resources created as part of our CloudFormation template, complete the following steps:

  1. On the Amazon S3 console, empty the bucket athena-federation-workshop-<account-id>.
  2. If you’re using the AWS CLI, delete the objects in the athena-federation-workshop-<account-id> bucket with the following code (make sure you’re running this command on the correct bucket):
    aws s3 rm s3://athena-federation-workshop-<account-id> --recursive

  3. On the AWS CloudFormation console, delete all the connectors so they’re no longer attached to the elastic network interface (ENI) of the VPC. Alternatively, go to each connector and deselect the VPC so it’s no longer attached to the VPC created by AWS CloudFormation.
  4. On the Amazon SageMaker console, delete any endpoints you created as part of the ML inference.
  5. On the Athena console, delete the AmazonAthenaPreviewFunctionality workgroup.
  6. On the AWS CloudFormation console or the AWS CLI, delete the stack Athena-Federation-Workshop.

Summary

In this post, we demonstrated the functionality of Athena federated queries by creating multiple different connectors and running federated queries against multiple data sources. In the next post, we show you how you can use the Athena Federation SDK to deploy your UDF and invoke it to redact sensitive information in your Athena queries.


About the Authors

Saurabh Bhutyani is a Senior Big Data Specialist Solutions Architect at Amazon Web Services. He is an early adopter of open-source big data technologies. At AWS, he works with customers to provide architectural guidance for running analytics solutions on Amazon EMR, Amazon Athena, AWS Glue, and AWS Lake Formation.

Amir Basirat is a Big Data Specialist Solutions Architect at Amazon Web Services, focused on Amazon EMR, Amazon Athena, AWS Glue, and AWS Lake Formation, where he helps customers craft distributed analytics applications on the AWS platform. Prior to his AWS Cloud journey, he worked as a big data specialist for different technology companies. He also has a PhD in computer science, where his research primarily focused on large-scale distributed computing and neural networks.

[$] Relief for insomniac tracepoints

Post Syndicated from corbet original https://lwn.net/Articles/835426/rss

The kernel’s tracing infrastructure is designed to be fast and to interfere as little as possible with the normal operation of the system. One consequence of this requirement is that the code that runs when a tracepoint is hit cannot sleep; otherwise execution of the tracepoint could add an arbitrary delay to the execution of the real work the kernel should be doing. There are times, though, that the ability to sleep within a tracepoint would be handy, delays notwithstanding. The sleepable tracepoints patch set from Michael Jeanson sets the stage to make it possible for (some) tracepoint handlers to take a nap while performing their tasks — but stops short of completing the job for now.

AWS extends its MTCS Level 3 certification scope to cover United States Regions

Post Syndicated from Clara Lim original https://aws.amazon.com/blogs/security/aws-extends-its-mtcs-level-3-certification-scope-to-cover-united-states-regions/

We’re excited to announce the completion of the Multi-Tier Cloud Security (MTCS) Level 3 triennial certification in September 2020. The scope was expanded to cover the United States Amazon Web Services (AWS) Regions, excluding AWS GovCloud (US) Regions, in addition to Singapore and Seoul. AWS has held the MTCS Level 3 certification in Singapore since 2014, when it became the first cloud service provider (CSP) to attain it, and the services in scope have increased to 130—an approximately 27% increase since the last recertification audit in September 2019, and three times the number of services in scope since the last triennial audit in 2017. This provides customers with more services to choose from in these Regions.

MTCS was the world’s first cloud security standard to specify a management system for cloud security that covers multiple tiers, and it can be applied by CSPs to meet differing cloud user needs for data sensitivity and business criticality. Certified CSPs are able to better specify the levels of security that they can offer to their users, through a combination of third-party certification and a self-disclosure requirement that covers service-oriented information normally captured in service level agreements. The different levels of security help local businesses pick the right CSP, and use of MTCS is mandated by the Singapore government as a requirement for public sector agencies and regulated organizations.

MTCS has three levels of security, Level 1 being the base and Level 3 the most stringent:

  • Level 1 was designed for non–business critical data and systems with basic security controls, to counter certain risks and threats targeting low-impact information systems (for example, a website that hosts public information).
  • Level 2 addresses the needs of organizations that run their business-critical data and systems in public or third-party cloud systems (for example, confidential business data and email).
  • Level 3 was designed for regulated organizations with specific and more stringent security requirements. Industry-specific regulations can be applied in addition to the baseline controls, in order to supplement and address security risks and threats in high-impact information systems (for example, highly confidential business data, financial records, and medical records).

Benefits of MTCS certification

Singapore customers in regulated industries with the strictest security requirements can securely host applications and systems containing highly sensitive information, ranging from confidential business data to financial and medical records, under Level 3 compliance.

Financial Services Industry (FSI) customers in Korea are able to accelerate cloud adoption without the need to validate 109 out of 141 controls as required in the relevant regulations (the Financial Security Institute’s Guideline on Use of Cloud Computing Services in the Financial Industry, and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)).

With increasing cloud adoption across different industries, MTCS certification has the potential to provide assurance to customers globally now that the scope is extended beyond Singapore and Korea to the United States AWS Regions. This extension also provides an alternative for Singapore government agencies to leverage the AWS services that haven’t yet launched locally, and provides resiliency and recovery use cases as well.

You can now download the latest MTCS certificates and the MTCS Self-Disclosure Form in AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Clara Lim

Clara is the Audit Program Manager for the Asia Pacific Region, leading multiple security certification programs. Clara is passionate about leveraging her decade-long experience to deliver compliance programs that provide assurance and build trust with customers.

Netflix Android and iOS Studio Apps — now powered by Kotlin Multiplatform

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/netflix-android-and-ios-studio-apps-kotlin-multiplatform-d6d4d8d25d23

By David Henry & Mel Yahya

Over the last few years Netflix has been developing a mobile app called Prodicle to innovate in the physical production of TV shows and movies. The world of physical production is fast-paced, and needs vary significantly by country and region, and even from one production to the next. The nature of the work means we’re developing write-heavy software, in a distributed environment, on devices where less than ⅓ of our users have very reliable connectivity whilst on set, and with a limited margin for error. For these reasons, as a small engineering team, we’ve found that optimizing for reliability and speed of product delivery is required for us to serve our evolving customers’ needs successfully.

The high likelihood of unreliable network connectivity led us to lean into mobile solutions for robust client side persistence and offline support. The need for fast product delivery led us to experiment with a multiplatform architecture. Now we’re taking this one step further by using Kotlin Multiplatform to write platform agnostic business logic once in Kotlin and compiling to a Kotlin library for Android and a native Universal Framework for iOS via Kotlin/Native.

Kotlin Multiplatform

Kotlin Multiplatform allows you to use a single codebase for the business logic of iOS and Android apps. You only need to write platform-specific code where it’s necessary, for example, to implement a native UI or when working with platform-specific APIs.
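
A minimal sketch of how that split typically looks with Kotlin Multiplatform’s expect/actual mechanism (the function and file layout here are a generic example, not Netflix’s code):

    // src/commonMain/kotlin/Platform.kt -- shared, platform-agnostic declaration
    expect fun platformName(): String

    fun greeting(): String = "Running shared Kotlin code on ${platformName()}"

    // src/androidMain/kotlin/Platform.kt -- Android-specific implementation
    actual fun platformName(): String =
        "Android ${android.os.Build.VERSION.RELEASE}"

    // src/iosMain/kotlin/Platform.kt -- iOS implementation via Kotlin/Native interop
    import platform.UIKit.UIDevice

    actual fun platformName(): String =
        "${UIDevice.currentDevice.systemName} ${UIDevice.currentDevice.systemVersion}"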

Kotlin Multiplatform approaches cross-platform mobile development differently from some well known technologies in the space. Where other technologies abstract away or completely replace platform specific app development, Kotlin Multiplatform is complementary to existing platform specific technologies and is geared towards replacing platform agnostic business logic. It’s a new tool in the toolbox as opposed to replacing the toolbox.

This approach works well for us for several reasons:

  1. Our Android and iOS studio apps have a shared architecture with similar or in some cases identical business logic written on both platforms.
  2. Almost 50% of the production code in our Android and iOS apps is decoupled from the underlying platform.
  3. Our appetite for exploring the latest technologies offered by respective platforms (Android Jetpack Compose, Swift UI, etc) isn’t hampered in any way.

So, what are we doing with it?

Experience Management

As noted earlier, our user needs vary significantly from one production to the next. This translates to a large number of app configurations to toggle feature availability and optimize the in-app experience for each production. Decoupling the code that manages these configurations from the apps themselves helps to reduce complexity as the apps grow. Our first exploration with code sharing involves the implementation of a mobile SDK for our internal experience management tool, Hendrix.

At its core, Hendrix is a simple interpreted language that expresses how configuration values should be computed. These expressions are evaluated in the current app session context, and can access data such as A/B test assignments, locality, device attributes, etc. For our use-case, we’re configuring the availability of production, version, and region specific app feature sets.
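The post doesn’t show Hendrix’s actual syntax, but the general shape of on-device evaluation looks something like the following Kotlin sketch, where a rule is computed against the current session context. All names here are hypothetical and not part of the real SDK:

```kotlin
// Hypothetical sketch of on-device rule evaluation; not the real Hendrix API.
data class SessionContext(
    val abTestAssignments: Map<String, String>,
    val locale: String,
    val deviceModel: String
)

// A configuration rule computes a value from the session context.
fun interface ConfigRule<T> {
    fun evaluate(context: SessionContext): T
}

// Example: enable a feature only for a given A/B test cell and locale.
val offlineSyncEnabled = ConfigRule { ctx ->
    ctx.abTestAssignments["offline_sync_test"] == "cell_2" && ctx.locale.startsWith("en")
}

fun main() {
    val context = SessionContext(
        abTestAssignments = mapOf("offline_sync_test" to "cell_2"),
        locale = "en-GB",
        deviceModel = "Pixel 4"
    )
    println("offlineSyncEnabled = ${offlineSyncEnabled.evaluate(context)}")
}
```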

Poor network connectivity coupled with frequently changing configuration values in response to user activity means that on-device rule evaluation is preferable to server-side evaluation.

This led us to build a lightweight Hendrix mobile SDK — a great candidate for Kotlin Multiplatform as it requires significant business logic and is entirely platform agnostic.

Implementation

For brevity, we’ll skip over the Hendrix specific details and touch on some of the differences involved in using Kotlin Multiplatform in place of Kotlin/Swift.

Build

For Android, it’s business as usual. The Hendrix Multiplatform SDK is imported via gradle as an Android library project dependency in the same fashion as any other dependency. On the iOS side, the native binary is included in the Xcode project as a universal framework.
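For reference, a minimal build.gradle.kts for a library module like this might declare both targets roughly as follows. This is a sketch assuming the Kotlin Multiplatform and Android library Gradle plugins; the framework name is illustrative and additional Android configuration is omitted:

```kotlin
// build.gradle.kts — minimal sketch of a multiplatform library module.
plugins {
    kotlin("multiplatform")
    id("com.android.library")
}

kotlin {
    android()   // consumed by the Android app as a normal library dependency

    ios {       // Kotlin/Native targets, packaged as a framework for Xcode
        binaries.framework {
            baseName = "HendrixSDK"   // illustrative name
        }
    }

    sourceSets {
        val commonMain by getting {
            dependencies {
                // shared dependencies (e.g. Ktor, SQLDelight) go here
            }
        }
    }
}
```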

Developer ergonomics

Kotlin Multiplatform source code can be edited, recompiled, and debugged with breakpoints in both Android Studio and Xcode (including lldb support). Android Studio works out of the box; Xcode support is achieved via Touchlab’s xcode-kotlin plugin.

Debugging Kotlin source code from Xcode.

Networking

Hendrix interprets rule set(s) — remotely configurable files that get downloaded to the device. We’re using Ktor’s Multiplatform HttpClient to embed our networking code within the SDK.
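In common code, that networking layer can be a thin wrapper around Ktor’s client. The sketch below assumes Ktor 1.x and uses hypothetical names rather than the SDK’s real interface:

```kotlin
import io.ktor.client.HttpClient
import io.ktor.client.request.get

// Thin multiplatform wrapper for downloading remotely configured rule sets.
// A platform-specific Ktor engine (e.g. OkHttp on Android, the iOS engine on
// Kotlin/Native) is added as a per-source-set gradle dependency.
class RuleSetFetcher(private val client: HttpClient = HttpClient()) {

    // Downloads a rule set document as raw text; parsing is left to the caller.
    suspend fun fetch(url: String): String = client.get(url)
}
```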

Disk cache

Of course, network connectivity may not always be available, so downloaded rule sets need to be cached to disk. For this, we’re using SQLDelight along with its Android and native database drivers for multiplatform persistence.
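The usual multiplatform pattern here is an expect/actual driver factory, with each platform supplying its own SQLDelight driver. In the sketch below, HendrixDb stands in for the generated database class and is a hypothetical name:

```kotlin
// commonMain
import com.squareup.sqldelight.db.SqlDriver

expect class DatabaseDriverFactory {
    fun createDriver(): SqlDriver
}

// androidMain
import android.content.Context
import com.squareup.sqldelight.android.AndroidSqliteDriver

actual class DatabaseDriverFactory(private val context: Context) {
    actual fun createDriver(): SqlDriver =
        AndroidSqliteDriver(HendrixDb.Schema, context, "hendrix.db")
}

// iosMain
import com.squareup.sqldelight.drivers.native.NativeSqliteDriver

actual class DatabaseDriverFactory {
    actual fun createDriver(): SqlDriver =
        NativeSqliteDriver(HendrixDb.Schema, "hendrix.db")
}
```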

Final thoughts

We’ve followed the evolution of Kotlin Multiplatform keenly over the last few years and believe that the technology has reached an inflection point. The tooling and build system integrations for Xcode have improved significantly such that the complexities involved in integration and maintenance are outweighed by the benefit of not having to write and maintain multiple platform specific implementations.

Opportunities for additional code sharing between our Android and iOS studio apps are plentiful. Potential future applications of the technology become even more interesting when we consider that JavaScript transpilation is also possible.

We’re excited by the possibility of evolving our studio mobile apps into thin UI layers with shared business logic and will continue to share our learnings with you on that journey.


Netflix Android and iOS Studio Apps — now powered by Kotlin Multiplatform was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Getting started with DevOps automation

Post Syndicated from Jared Murrell original https://github.blog/2020-10-29-getting-started-with-devops-automation/

This is the second post in our series on DevOps fundamentals. For a guide to what DevOps is and answers to common DevOps myths, check out part one.

What role does automation play in DevOps?

First things first—automation is one of the key principles for accelerating with DevOps. As noted in my last blog post, it enables consistency, reliability, and efficiency within the organization, making it easier for teams to discover and troubleshoot problems. 

However, as we’ve worked with organizations, we’ve found not everyone knows where to get started, or which processes can and should be automated. In this post, we’ll discuss a few best practices and insights to get teams moving in the right direction.

A few helpful guidelines

The path to DevOps automation is continually evolving. Before we dive into best practices, there are a few common guidelines to keep in mind as you’re deciding what and how you automate. 

  • Choose open standards. Your contributors and team may change, but that doesn’t mean your tooling has to. By maintaining tooling that follows common, open standards, you can simplify onboarding and save time on specialized training. Community-driven standards for packaging, runtime, configuration, and even networking and storage—like those found in Kubernetes—also become even more important as DevOps and deployments move toward the cloud.
  • Use dynamic variables. Prioritizing reusable code will reduce the amount of rework and duplication you have, both now and in the future. Whether in scripts or specialized tools, securely using externally-defined variables is an easy way to apply your automation to different environments without needing to change the code itself (there’s a small sketch of this after the list).
  • Use flexible tooling you can take with you. It’s not always possible to find a tool that fits every situation, but using a DevOps tool that allows you to change technologies also helps reduce rework when companies change direction. By choosing a solution with a wide ecosystem of partner integrations that works with any cloud, you’ll be able to  define your unique set of best practices and reach your goals—without being restricted by your toolchain.
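As a deliberately tiny illustration of the dynamic-variables guideline above, the same script can be pointed at different environments purely through externally supplied values. The variable names below are hypothetical:

```kotlin
// Minimal sketch: the same automation code runs against any environment
// because the environment-specific values are injected from outside.
fun main() {
    // Hypothetical variable names; supplied by the CI system or a secret store.
    val deployTarget = System.getenv("DEPLOY_TARGET") ?: "staging"
    val apiToken = System.getenv("DEPLOY_API_TOKEN")
        ?: error("DEPLOY_API_TOKEN must be provided by the pipeline, never hard-coded")

    println("Deploying to $deployTarget with a token of length ${apiToken.length}")
}
```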

DevOps automation best practices

Now that our guidelines are in place, we can evaluate which sets of processes we need to automate. We’ve broken some best practices for DevOps automation into four categories to help you get started. 

1. Continuous integration, continuous delivery, and continuous deployment

We often think of the term “DevOps” as being synonymous with “CI/CD”. At GitHub we recognize that DevOps includes so much more, from enabling contributors to build and run code (or deploy configurations) to improving developer productivity. In turn, this shortens the time it takes to build and deliver applications, helping teams add value and learn faster. While CI/CD and DevOps aren’t precisely the same, CI/CD is still a core component of DevOps automation.

  • Continuous integration (CI) is a process that implements testing on every change, enabling users to see if their changes break anything in the environment. 
  • Continuous delivery (CD) is the practice of building software in a way that allows you to deploy any successful release candidate to production at any time.
  • Continuous deployment (CD) takes continuous delivery a step further. With continuous deployment, every successful change is automatically deployed to production. Since some industries and technologies can’t immediately release new changes to customers (think hardware and manufacturing), adopting continuous deployment depends on your organization and product.

Together, continuous integration and continuous delivery (commonly referred to as CI/CD) create a collaborative process for people to work on projects through shared ownership. At the same time, teams can maintain quality control through automation and bring new features to users with continuous deployment. 

2. Change management

Change management is often a critical part of business processes. Like the automation guidelines, there are some common principles and tooling that development and operations teams can use to create consistency.  

  • Version control: The practice of using version control has a long history rooted in helping people revert changes and learn from past decisions. From RCS to SVN, CVS to Perforce, ClearCase to Git, version control is a staple for enabling teams to collaborate by providing a common workflow and code base for individuals to work with. 
  • Change control: Along with maintaining your code’s version history, having a system in place to coordinate and facilitate changes helps to maintain product direction, reduces the probability of harmful changes to your code, and encourages a collaborative process.
  • Configuration management: Configuration management makes it easier for everyone to manage complex deployments through templates and manage changes at scale with proper controls and approvals.

3. ‘X’ as code

By now, you also may have heard of “infrastructure as code,” “configuration as code,” “policy as code,” or some of the other “as code” models. These models provide a declarative framework for managing different aspects of your operating environments through high-level abstractions. Stated another way, you provide variables to a tool and the output is always the same, allowing you to recreate your resources reliably. DevOps implements the “as code” principle with several goals in mind, including an auditable change trail for compliance, a collaborative change process via version control, a consistent, testable, and reliable way of deploying resources, and a lower learning curve for new team members. 

  • Infrastructure as code (IaC) provides a declarative model for creating immutable infrastructure using the same versioning and workflow that developers use for source code. As changes are introduced to your infrastructure requirements, new infrastructure is defined, tested, and deployed with new configurations through automated declarative pipelines.
  • Platform as code (PaC) provides a declarative model for services similar to how infrastructure as code provides a framework for recreating the same infrastructure—allowing you to rapidly deploy services to existing infrastructure with high-level abstractions.
  • Configuration as code (CaC) brings the next level of declarative pipelining by defining the configuration of your applications as versioned resources.
  • Policy as code brings versioning and the DevOps workflow to security and policy management. 

4. Continuous monitoring

Operational insights are an invaluable component of any production environment. In order to understand the behaviors of your software in production, you need to have information about how it operates. Continuous monitoring—the processes and technology that monitor performance and stability of applications and infrastructure throughout the software lifecycle—provides operations teams with data to help troubleshoot, and development teams the information needed to debug and patch. This also leads into an important aspect of security, where DevSecOps takes on these principles with a security focus. Choosing the right monitoring tools can be the difference between a slight service interruption and a major outage. When it comes to gaining operational insights, there are some important considerations: 

  • Logging gives you a continuous stream of data about your business’ critical components. Application logs, infrastructure logs, and audit logs all provide important data that helps teams learn and improve products.
  • Monitoring provides a level of intelligence and interpretation to the raw data provided in logs and metrics. With advanced tooling, monitoring can provide teams with correlated insights beyond what the raw data provides.
  • Alerting provides proactive notifications to respective teams to help them stay ahead of major issues. When effectively implemented, these alerts not only let you know when something has gone wrong, but can also provide teams with critical debugging information to help solve the problem quickly.
  • Tracing takes logging a step further, providing a deeper level of application performance and behavioral insights that can greatly impact the stability and scalability of applications in production environments.

Putting DevOps automation into action

At this point, we’ve talked a lot about automation in the DevOps space, so is DevOps all about automation? Put simply, no. Automation is an important means of accomplishing this work efficiently between teams. Whether you’re new to DevOps or migrating from another set of automation solutions, testing new tooling with a small project or process is a great place to start. It will lay the foundation for scaling and standardizing automation across your entire organization, including how to measure effectiveness and progression toward your goals. 

Regardless of which toolset you choose to automate your DevOps workflow, evaluating your teams’ current workflows and the information you need to do your work will help guide you to your tool and platform selection, and set the stage for success. Here are a few more resources to help you along the way:

Want to see what DevOps automation looks like in practice? See how engineers at Wiley build faster and more securely with GitHub Actions.

You’ve Cat to Be Kitten Me…

Post Syndicated from Yev original https://www.backblaze.com/blog/youve-cat-to-be-kitten-me/

Catblaze. It started as an April Fools’ joke four years ago, but it stuck around as part of our website ever since. A few intrepid website perusers even found their way to the page and signed up for our backup service there. To be clear: There’s no actual difference between the two products except the landing page. If you bought Backblaze on Catblaze, it’s Backblaze. You received the same great service as everyone else, just with a nice cat-themed wrapper. Got it? Great!

It’s been a while since we’ve done anything with Catblaze though, and so I got to thinking… If the page is still functional, how can we make use of it again? Well, why not redirect some traffic there and see how it affects conversions?! A lot of people love cats, maybe that love could be translated to loving backing up, too?

So, that’s exactly what we did! A few weeks ago, for one day, we diverted some traffic from backblaze.com/cloud-backup.html to backblaze.com/catblaze.html to see how they performed against each other. Did anyone even notice? And if they did, did they sign up anyway? Read on to find out! The results may shock you! And other clickbait hyperbole!

Why are we doing this? Well, along with everyone else who has had to shift to remote office-ing during the pandemic, we’ve been working hard to maintain high spirits and morale here at Backblaze. While we made a lot of changes to help our team be as productive as possible while working remotely, we thought, why not get a little silly, engage in a little charitable fundraising, and also buoy the spirits of our community at large: You!

With a lot of people spending more time at home, animal adoption in urban areas increasing, and “Tiger King” being so popular on Netflix, I spent some time chatting with a friend of mine who works for the Humane Society of the United States, and asked if there were any shelters that were looking for aid. He told me that the Peninsula Humane Society—the same branch that the models for the original “Catblaze Cats” came from—could use some donations. So, as part of this experiment, we’ll be contributing to them in honor of the kittens that helped make this experiment possible!

It also happens to be National Cat Day today, so what better way to celebrate?

And Now, on to the Results!

Wow, who would have known that diverting 50% of our hard-earned traffic to an April Fools’ landing page was an interesting idea? The results may or may not surprise you, but here’s the bottom line: Sending traffic to catblaze.com resulted in a decrease in trial conversions (folks coming to our site and creating a trial account) by 15%. Which, admittedly, is better than some of us had guessed!

Let’s dive into more of those numbers, shall we? (Assuming we’re comparing Catblaze to our regular Backblaze Computer Backup landing page.)

  • Days of experiment: One.
  • Traffic diverted: 50%.
  • Percent change in conversion rate from visit to trial: 15% reduction.
  • Percent change in conversion rate from visit to purchase (skip trial): 41% reduction.
    • 69.96% (A palindrome!) of people were less likely to purchase directly from Catblaze—that’s how many fewer folks went to the “buy” page next.
  • Percent change in bounce rate: 15% improvement.
    • Percent change in visits going to the home page from Catblaze: 118%.
  • Tweets asking us what is going on: Zero.
  • Support tickets asking us, “Why the cats?”: Zero.
  • Donation to the Peninsula Humane Society: $2,000.

Lessons Learned

While we probably shouldn’t update our onboarding messaging to include a picture of our Catblaze friends, it may be worth going a bit more kitten-friendly in future illustrations and designs for our website. The fact that there was a 15% improvement in bounce rate (and a 20% reduction in exit rate) meant that people were sticking around and looking at that awesome cat content, or they were very confused. The cat content was at best amusing and at worst confusing (which is usually not what you want your customers to be feeling), and you can see that was the case because the number of people going back to our homepage increased by 118%. So, while we kept people on our website, their confusion was visible in how they navigated it.

Perhaps the most entertaining thing is that no one asked about the Catblaze website. We received no Tweets or support tickets asking us why everything was cat-themed on our website. Based on our daily traffic, and the seemingly minor reasons that people write in with support tickets, I would have sworn up and down that I’d be on social media answering questions all day—though, if I responded to folks asking about it, that may have affected the experiment—so, it’s great that it went unnoticed.

Will we be doing this again? I doubt it. The Finance department is already sending me eye roll emojis, but it was definitely an interesting experiment and taught me one important lesson: While people definitely noticed the cats, they certainly didn’t seem to mind them.

The post You’ve Cat to Be Kitten Me… appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Making GitHub CI workflow 3x faster

Post Syndicated from Keerthana Kumar original https://github.blog/2020-10-29-making-github-ci-workflow-3x-faster/

Welcome to the first deep dive of the Building GitHub blog series, providing a look at how teams across the GitHub engineering organization identify and address opportunities to improve our internal development tooling and infrastructure.

At GitHub, we use the Four Key Metrics of high performing software development to help frame our engineering fundamentals effort. As we measured Lead Time for Changes—the time it takes for code to be successfully running in production—we identified that developers waited an average of 45 minutes for a successful run of our continuous integration suite to complete before merging any change. This 45-minute lead time was repeated once more before deploying a merge branch. In a perfect scenario, a developer waited almost two hours after checking in code before the change went live on GitHub.com. This 45-minute CI now takes only 15 minutes to run! Here is a deep dive on how we made GitHub’s CI workflow 3x faster.

Analyzing the problem

At this moment, the monumental Ruby monolith that powers millions of developers on GitHub.com has over 7,000 test suites and over 5,000 test files. Every commit to a pull request triggers 25 CI jobs and requires 15 of those CI jobs to complete before merging a pull request. This meant that a developer at GitHub spent approximately 45 minutes and 600 cores of computing resources for every commit. That’s a lot of developer-hours and machine-hours that could be spent creating value for our customers.

Analyzing the types of CI jobs, we identified four categories: unit testing, linting/performance, integration testing, builds/deployments. All jobs except two of the integration testing jobs took less than 13 minutes to run. The two integration testing jobs were the bottleneck in our Lead Time for Changes. As is true for most DevOps cycles, several test suites were also flaky. Although this blog post isn’t going to share how we solved for the flakiness of our tests, spoiler alert, a future post in this series will explain that process. Apart from being flaky, the two integration testing jobs increased developer friction and reduced productivity at GitHub.

Engineering decision

GitHub Enterprise Server, the on-premise offering of GitHub used by our enterprise customers, ships a new patch release every two weeks and a major release every quarter. The two long running test suites were added to the CI workflow to ensure a pull request did not break the GitHub experience for our Enterprise Server customers. It was also clear that these 45-minute test suites did not provide additional value by blocking GitHub.com deployments, which happen continuously throughout the day. Driven by customer obsession and developer satisfaction, we developed the deferred compliance tool.

Deferred compliance

The deferred compliance tool, integrated with our CI workflow system, aims to strike a critical balance between improving Lead Time for Changes in deploying GitHub.com and creating accountability for the quality of Enterprise Server. The long running CI jobs are no longer required to pass before a pull request is merged, but the deferred compliance tool monitors for any test failure.

If a CI job fails, a GitHub issue with a deferred compliance label is created, and the pull request author and the code segment’s code owners are tagged. A warning message is sent on Slack to the developer and a 72-hour timer is kicked off. The developer now has 72 hours to fix the build, push a change, or revert the pull request. A successful run of the CI job automatically closes the compliance issue and the 72-hour timer is turned off. If the CI job remains broken for more than 72 hours, all deployments to GitHub.com are halted, barring any exceptional situations, until the integration tests for Enterprise Server are fixed. This creates accountability and ownership for all our developers to build features that work flawlessly on GitHub.com and Enterprise Server. The 72-hour timer is customizable, but our analysis showed that with a global team of developers, 72 hours reduced the possibility that a change merged by a developer in San Francisco on a Friday afternoon would unintentionally block deployments for a developer in Sydney on Monday morning. Deferred compliance can be used for any long running CI run that does not need to block deployments while creating a call to action for CI run failures.
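GitHub hasn’t published the tool itself, but the policy described above can be boiled down to a small sketch. The types and names below are purely illustrative, not GitHub’s implementation:

```kotlin
import java.time.Duration
import java.time.Instant

// Purely illustrative model of the deferred-compliance policy described above.
data class ComplianceIssue(val ciJob: String, val openedAt: Instant)

val complianceWindow: Duration = Duration.ofHours(72)

// A failed long-running CI job opens a compliance issue instead of blocking the merge.
fun onLongRunningJobFailed(ciJob: String, now: Instant): ComplianceIssue =
    ComplianceIssue(ciJob, openedAt = now)

// Deployments are halted only once an issue has stayed open past the 72-hour window.
fun shouldHaltDeployments(openIssues: List<ComplianceIssue>, now: Instant): Boolean =
    openIssues.any { Duration.between(it.openedAt, now) > complianceWindow }

fun main() {
    val issue = onLongRunningJobFailed(
        "enterprise-integration-tests",
        Instant.parse("2020-10-23T17:00:00Z")
    )
    // 73 hours later: the window has elapsed, so deployments would be halted.
    println(shouldHaltDeployments(listOf(issue), Instant.parse("2020-10-26T18:00:00Z")))
}
```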

Key Takeaways

  • Internal engineering tooling is a powerful resource to support developers and at the same time provide guardrails for product consistency.
  • Focusing on a key metric allows us to identify bottlenecks and develop simple and creative solutions.
  • Comprehending historical context for past decisions and being customer obsessed provides us an opportunity to build a more thoughtful engineering design.

Overall, this project is a testament to the fact that a simple solution can significantly improve developer productivity, and that this can have long-term positive implications for an engineering organization. And of course, since numbers matter, we made our CI 3x faster.

Building GitHub: introduction

Post Syndicated from Kate Studwell original https://github.blog/2020-10-29-building-github-introduction/

Here at GitHub, we pride ourselves on providing a first-class developer experience to you, our customers. We’re developers, too, and we love that the features that we build for GitHub.com make your day easier — and make ours easier, too. We also know that the more we invest in the infrastructure and tooling that powers GitHub, the faster we can deliver those features, and we’ll have a more delightful experience to boot.

In addition to investing in the infrastructure, we also want to shine a light on all the hard work we do behind the scenes to make GitHub better, specifically focusing on our internal development tooling and infrastructure. And, today, we’re excited to introduce the Building GitHub blog series, providing deep-dives on how teams across the engineering organization have been banding together to identify and address opportunities that would provide us an even smoother internal development experience, up our technical excellence, and improve system reliability in the process. From running the latest and greatest Ruby version, to dramatically decreasing our application boot time, to smoother and more reliable progressive deploys, these efforts paid off greatly and decreased our cycle times.

To help frame our efforts for potential investments, we revisited the Four Key Metrics of high performing software delivery, found in research by our very own Dr. Nicole Forsgren and outlined by DevOps Research and Assessment. These include:

  • Deploy Frequency. How frequently is the team deploying?
  • Lead Time for Changes. How long does it take to get code successfully running in production?
  • Time to Restore Service. How long does it take to recover from an incident?
  • Change Fail Rate. What percentage of changes to production result in degraded service?

Ideally, any investment we make in our development tooling would move the needle in at least one of these areas. We’ve had teams across the organization join together to tackle these, sometimes diving into areas of our internal systems that they weren’t previously familiar with. This approach provides the opportunity to explore new solutions, and collaborate cross-team and cross-discipline. The excitement of engineers involved in each of these efforts is palpable — not only are we thrilled when we notice a dramatic shift in boot time or introduce new tooling that makes monitoring and debugging even easier, but teams enjoy working more closely with engineers in other parts of the org.

Continue reading along with us in the Building GitHub blog series, where we’ll share specific goals and lessons, the impact of our work, and how we did it. To continue this journey, we’ll start the series with a deep dive on faster CI and we hope to share more soon.

The 11 Greatest Vacuum Tubes You’ve Never Heard Of

Post Syndicated from Carter M. Armstrong original https://spectrum.ieee.org/tech-history/space-age/the-11-greatest-vacuum-tubes-youve-never-heard-of

In an age propped up by quintillions of solid-state devices, should you even care about vacuum tubes? You definitely should! For richness, drama, and sheer brilliance, few technological timelines can match the 116-year (and counting) history of the vacuum tube. To prove it, I’ve assembled a list of vacuum devices that over the past 60 or 70 years inarguably changed the world.

And just for good measure, you’ll also find here a few tubes that are too unique, cool, or weird to languish in obscurity.

Of course, anytime anyone offers up a list of anything—the comfiest trail-running shoes, the most authentic Italian restaurants in Cleveland, movies that are better than the book they’re based on—someone else is bound to weigh in and either object or amplify. So, to state the obvious: This is my list of vacuum tubes. But I’d love to read yours. Feel free to add it in the comments section at the end of this article.

My list isn’t meant to be comprehensive. Here you’ll find no gas-filled glassware like Nixie tubes or thyratrons, no “uber high” pulsed-power microwave devices, no cathode-ray display tubes. I intentionally left out well-known tubes, such as satellite traveling-wave tubes and microwave-oven magnetrons. And I’ve pretty much stuck with radio-frequency tubes, so I’m ignoring the vast panoply of audio-frequency tubes—with one notable exception.

But even within the parameters I’ve chosen, there are so many amazing devices that it was rather hard to pick just eleven of them. So here’s my take, in no particular order, on some tubes that made a difference.


Medical Magnetron

When it comes to efficiently generating coherent radio-frequency power in a compact package, you can’t beat the magnetron.

The magnetron first rose to glory in World War II, to power British radar. While the magnetron’s use in radar began to wane in the 1970s, the tube found new life in industrial, scientific, and medical applications, which continues today.

It is for this last use that the medical magnetron shines. In a linear accelerator, it creates a high-energy electron beam. When electrons in the beam are deflected by the nuclei in a target—consisting of a material having a high atomic number, such as tungsten—copious X-rays are produced, which can then be directed to kill cancer cells in tumors. The first clinical accelerator for radiotherapy was installed at London’s Hammersmith Hospital in 1952. A 2-megawatt magnetron powered the 3-meter-long accelerator.

High-power magnetrons continue to be developed to meet the demands of radiation oncology. The medical magnetron shown here, manufactured by e2v Technologies (now Teledyne e2v), generates a peak power of 2.6 MW, with an average power of 3 kilowatts and an efficiency of more than 50 percent. Just 37 centimeters long and weighing about 8 kilograms, it’s small and light enough to fit the rotating arm of a radiotherapy machine.


Gyrotron

Conceived in the 1960s in the Soviet Union, the gyrotron is a high-power vacuum device used primarily for heating plasmas in nuclear-fusion experiments, such as ITER, now under construction in southern France. These experimental reactors can require temperatures of up to 150 million °C.

So how does a megawatt-class gyrotron work? The name provides a clue: It uses beams of energetic electrons rotating or gyrating in a strong magnetic field inside a cavity. (We tube folks love our -trons and -trodes.) The interaction between the gyrating electrons and the cavity’s electromagnetic field generates high-frequency radio waves, which are directed into the plasma. The high-frequency waves accelerate the electrons within the plasma, heating the plasma in the process.

A tube that produces 1 MW of average power is not going to be small. Fusion gyrotrons typically stand around 2 to 2.5 meters tall and weigh around a metric ton, including a 6- or 7-tesla superconducting magnet.

In addition to heating fusion plasmas, gyrotrons are used in material processing and nuclear magnetic resonance spectroscopy. They have also been explored for nonlethal crowd control, in the U.S. military’s Active Denial System. This system projects a relatively wide millimeter-wave beam, perhaps a meter and a half in diameter. The beam is designed to heat the surface of a person’s skin, creating a burning sensation but without penetrating into or damaging the tissue below.


Mini Traveling-Wave Tube

As its name suggests, a traveling-wave tube (TWT) amplifies signals through the interaction between an electric field of a traveling, or propagating, electromagnetic wave in a circuit and a streaming electron beam. [For a more detailed description of how a TWT works, see “The Quest for the Ultimate Vacuum Tube,” IEEE Spectrum, December 2015.]

Most TWTs of the 20th century were designed for extremely high power gain, with amplification ratios of 100,000 or more. But you don’t always need that much gain. Enter the mini TWT, shown here in an example from L3Harris Electron Devices. With a gain of around 1,000 (or 30 decibels), a mini TWT is meant for applications where you need output power in the 40- to 200-watt range, and where small size and lower voltage are desirable. A 40-W mini TWT operating at 14 gigahertz, for example, fits in the palm of your hand and weighs less than half a kilogram.

As it turns out, military services have a great need for mini TWTs. Soon after their introduction in the 1980s, mini TWTs were adopted in electronic warfare systems on planes and ships for protection against radar-guided missiles. In the early 1990s, device designers began integrating mini TWTs with a compact high-voltage power supply to energize the device and a solid-state amplifier to drive it. The combination created what is known as a microwave power module, or MPM. Due to their small size, low weight, and high efficiency, MPM amplifiers found immediate use in radar and communications transmitters aboard military drones, such as the Predator and Global Hawk, as well as in electronic countermeasures.


Accelerator Klystron

The klystron helped usher in the era of big science in high-energy physics. Klystrons convert the kinetic energy of an electron beam into radio-frequency energy. The device has much greater output power than does a traveling-wave tube or a magnetron. The brothers Russell and Sigurd Varian invented the klystron in the 1930s and, with others, founded Varian Associates to market it. These days, Varian’s tube business lives on at Communications and Power Industries.

Inside a klystron, electrons emitted by a cathode accelerate toward an anode to form an electron beam. A magnetic field keeps the beam from expanding as it travels through an aperture in the anode to a beam collector. In between the anode and collector are hollow structures called cavity resonators. A high-frequency signal is applied to the resonator nearest the cathode, setting up an electromagnetic field inside the cavity. That field modulates the electron beam as it passes through the resonator, causing the speed of the electrons to vary and the electrons to bunch as they move toward the other cavity resonators downstream. Most of the electrons decelerate as they traverse the final resonator, which oscillates at high power. The result is an output signal that is much greater than the input signal.

In the 1960s, engineers developed a klystron to serve as the RF source for a new 3.2-kilometer linear particle accelerator being built at Stanford University. Operating at 2.856 gigahertz and using a 250-kilovolt electron beam, the SLAC klystron produced a peak power of 24 MW. More than 240 of them were needed to attain particle energies of up to 50 billion electron volts.

The SLAC klystrons paved the way for the widespread use of vacuum tubes as RF sources for advanced particle physics and X-ray light-source facilities. A 65-MW version of the SLAC klystron is still in production. Klystrons are also used for cargo screening, food sterilization, and radiation oncology.


Ring-Bar Traveling-Wave Tube

One Cold War tube that is still going strong is the huge ring-bar traveling-wave tube. This high-power tube stands over 3 meters from cathode to collector, making it the world’s largest TWT. There are 128 ring-bar TWTs providing the radio-frequency oomph for an exceedingly powerful phased-array radar at the Cavalier Air Force Station in North Dakota. Called the Perimeter Acquisition Radar Attack Characterization System (PARCS), this 440-megahertz radar looks for ballistic missiles launched toward North America. It also monitors space launches and orbiting objects as part of the Space Surveillance Network. Built by GE in 1972, PARCS tracks more than half of all Earth-orbiting objects, and it’s said to be able to identify a basketball-size object at a range of 2,000 miles (3,218 km).

An even higher-frequency version of the ring-bar tube is used in a phased-array radar on remote Shemya Island, about 1,900 km off the coast of Alaska. Known as Cobra Dane, the radar monitors non-U.S. ballistic missile launches. It also collects surveillance data on space launches and satellites in low Earth orbit.

The circuit used in this behemoth is known as a ring bar, which consists of circular rings connected by alternating strips, or bars, repeated along its length. This setup provides a higher field intensity across the tube’s electron beam than does a garden-variety TWT, in which the radio-frequency waves propagate along a helix-shaped wire. The ring-bar tube’s higher field intensity results in higher power gain and good efficiency. The tube shown here was developed by Raytheon in the early 1970s; it is now manufactured by L3Harris Electron Devices.


Ubitron

Fifteen years before the term “free-electron laser” was coined, there was a vacuum tube that worked on the same basic principle—the ubitron, which sort of stands for “undulating beam interaction.”

The 1957 invention of the ubitron came about by accident. Robert Phillips, an engineer at the General Electric Microwave Lab in Palo Alto, Calif., was trying to explain why one of the lab’s traveling-wave tubes oscillated and another didn’t. Comparing the two tubes, he noticed variations in their magnetic focusing, which caused the beam in one tube to wiggle. He figured that this undulation could result in a periodic interaction with an electromagnetic wave in a waveguide. That, in turn, could be useful for creating exceedingly high levels of peak radio-frequency power. Thus, the ubitron was born.

From 1957 to 1964, Phillips and colleagues built and tested a variety of ubitrons. The 1963 photo shown here is of GE colleague Charles Enderby holding a ubitron without its wiggler magnet. Operating at 70,000 volts, this tube produced a peak power of 150 kW at 54 GHz, a record power level that stood for well over a decade. But the U.S. Army, which funded the ubitron work, halted R&D in 1964 because there were no antennas or waveguides that could handle power levels that high.

Today’s free-electron lasers employ the same basic principle as the ubitron. In fact, in recognition of his pioneering work on the ubitron, Phillips received the Free-Electron Laser Prize in 1992. The FELs now installed in the large light and X-ray sources at particle accelerators produce powerful electromagnetic radiation, which is used to explore the dynamics of chemical bonds, to understand photosynthesis, to analyze how drugs bind with targets, and even to create warm, dense matter to study how gas planets form.


Carcinotron

The French tube called the carcinotron is another fascinating example born of the Cold War. Related to the magnetron, it was conceived by Bernard Epsztein in 1951 at Compagnie Générale de Télégraphie Sans Fil (CSF, now part of Thales).

Like the ubitron, the carcinotron grew out of an attempt to resolve an oscillation problem on a conventional tube. In this case, the source of the oscillation was traced to a radio-frequency circuit’s power flowing backward, in the opposite direction of the tube’s electron beam. Epsztein discovered that the oscillation frequency could be varied with voltage, which led to a patent for a voltage-tunable “backward wave” tube.

For about 20 years, electronic jammers in the United States and Europe employed carcinotrons as their source of RF power. The tube shown here was one of the first manufactured by CSF in 1952. It delivered 200 W of RF power in the S band, which extends from 2 to 4 GHz.

Considering the level of power they can handle, carcinotrons are fairly compact. Including its permanent focusing magnet, a 500-W model weighs just 8 kg and measures 24 by 17 by 15 cm, a shade smaller than a shoebox.

And the strange name? Philippe Thouvenin, a vacuum electronics scientist at Thales Electron Devices, told me that it comes from a Greek word, karkinos, which means crayfish. And crayfish, of course, swim backwards.


Dual-Mode Traveling-Wave Tube

The dual-mode TWT was an oddball microwave tube developed in the United States in the 1970s and ’80s for electronic countermeasures against radar. Capable of both low-power continuous-wave and high-power pulsed operation, this tube followed the old adage that two is better than one: It had two beams, two circuits, two electron guns, two focusing magnets, and two collectors, all enclosed in a single vacuum envelope.

The tube’s main selling point was that it broadened the uses of a given application—a countermeasure system, for example, could operate in both continuous-wave and pulsed-power modes but with a single transmitter and a simple antenna feed. A control grid in the electron gun in the shorter, pulsed-power section could quickly switch the tube from pulsed to continuous wave, or vice versa. Talk about packing a lot of capability into a small package. Of course, if the vacuum leaked, you’d lose both tube functions.

The tube shown here was developed by Raytheon’s Power Tube Division, which was acquired by Litton Electron Devices in 1993. Raytheon/Litton as well as Northrop Grumman manufactured the dual-mode TWT, but it was notoriously hard to produce in volume and was discontinued in the early 2000s.


Multi-Beam Klystron

Power, as many of us learned as youngsters, equals voltage times current. To get more power out of a vacuum tube, you can increase the voltage of the tube’s electron beam, but that calls for a bigger tube and a more complex power supply. Or you can raise the beam’s current, but that can be problematic too. For that, you need to ensure the device can support the higher current and that the required magnetic field can transport the electron beam safely through the tube’s circuit—that is, the part of the tube that interacts with the electron beam.

Adding to the challenge, a tube’s efficiency generally falls as the beam’s current rises because the bunching of the electrons required for power conversion suffers.

All these caveats apply if you’re talking about a conventional vacuum tube with a single electron beam and a single circuit. But what if you employ multiple beams, originating from multiple cathodes and traveling through a common circuit? Even if the individual beam currents are moderate, the total current will be high, while the device’s overall efficiency is unaffected.

Such a multiple-beam device was studied in the 1960s in the United States, the Soviet Union, and elsewhere. The U.S. work petered out, but activity in the USSR continued, leading to the successful deployment of the multi-beam klystron, or MBK. The Soviets fielded many of these tubes for radar and other uses.

A modern example of an MBK is shown above, produced in 2011 by the French firm Thomson Tubes Electroniques (now part of Thales). This MBK was developed for the German Electron Synchrotron facility (DESY). A later version is used at the European X-Ray Free Electron Laser facility. The tube has seven beams providing a total current of 137 amperes, with a peak power of 10 MW and average power of 150 kW; its efficiency is greater than 63 percent. By contrast, a single-beam klystron developed by Thomson provides 5 MW peak and 100 kW average power, with an efficiency of 40 percent. So, in terms of its amplification capability, one MBK is equivalent to two conventional klystrons.


Coaxitron

All the tubes I’ve described so far are what specialists call beam-wave devices (or stream-wave in the case of the magnetron). But before those devices came along, tubes had grids, which are transparent screenlike metal electrodes inserted between the tube’s cathode and anode to control or modulate the flow of electrons. Depending on how many grids the tube has, it is called a diode (no grids), a triode (one grid), a tetrode (two grids), and so on. Low-power tubes were referred to as “receiving tubes,” because they were typically used in radio receivers, or as switches. (Here I should note that what I’ve been referring to as a “tube” is known to the British as a “valve.”)

There were, of course, higher-power grid tubes. Transmitting tubes were used in—you guessed it—radio transmitters. Later on, high-power grid tubes found their way into a wide array of interesting industrial, scientific, and military applications.

Triodes and higher-order grid tubes all included a cathode, a current-control grid, and an anode or collector (or plate). Most of these tubes were cylindrical, with a central cathode, usually a filament, surrounded by electrodes.

The coaxitron, developed by RCA beginning in the 1960s, is a unique permutation of the cylindrical design. The electrons flow radially from the cylindrical coaxial cathode to the anode. But rather than having a single electron emitter, the coaxitron’s cathode is segmented along its circumference, with numerous heated filaments serving as the electron source. Each filament forms its own little beamlet of electrons. Because the beamlet flows radially to the anode, no magnetic field (or magnet) is required to confine the electrons. The coaxitron is thus very compact, considering its remarkable power level of around a megawatt.

A 1-MW, 425-MHz coaxitron weighed 130 pounds (59 kg) and stood 24 inches (61 cm) tall. While the gain was modest (10 to 15 dB), it was still a tour de force as a compact ultrahigh-frequency power booster. RCA envisioned the coaxitron as a source for driving RF accelerators, but it ultimately found a home in high-power UHF radar. Although coaxitrons were recently overtaken by solid-state devices, some are still in service in legacy radar systems.


Telefunken Audio Tube

An important conventional tube with grids resides at the opposite end of the power/frequency spectrum from megawatt beasts like the klystron and the gyrotron. Revered by audio engineers and recording artists, the Telefunken VF14M was employed as an amplifier in the legendary Neumann U47 and U48 microphones favored by Frank Sinatra and by the Beatles’ producer Sir George Martin. Fun fact: There’s a Neumann U47 microphone on display at the Abbey Road Studio in London. The “M” in the VF14M tube designation indicates it’s suitable for microphone use and was only awarded to tubes that passed screening at Neumann.

The VF14 is a pentode, meaning it has five electrodes, including three grids. When used in a microphone, however, it operates as a triode, with two of its grids strapped together and connected to the anode. This was done to exploit the supposedly superior sonic qualities of a triode. The VF14’s heater circuit, which warms the cathode so that it emits electrons, runs at 55 V. That voltage was chosen so that two tubes could be wired in series across a 110-V main to reduce power-supply costs, which was important in postwar Germany.

Nowadays, you can buy a solid-state replacement for the VF14M that even simulates the tube’s 55-V heater circuit. But can it replicate that warm, lovely tube sound? On that one, audio snobs will never agree.

This article appears in the November 2020 print issue as “The 9 Greatest Vacuum Tubes You’ve Never Heard Of.”

Understanding Causality Is the Next Challenge for Machine Learning

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/artificial-intelligence/machine-learning/understanding-causality-is-the-next-challenge-for-machine-learning

“Causality is very important for the next steps of progress of machine learning,” said Yoshua Bengio, a Turing Award-winning scientist known for his work in deep learning, in an interview with IEEE Spectrum in 2019. So far, deep learning has comprised learning from static datasets, which makes AI really good at tasks related to correlations and associations. However, neural nets do not interpret cause and effect, or why these associations and correlations exist. Nor are they particularly good at tasks that involve imagination, reasoning, and planning. This, in turn, limits AI systems’ ability to generalize their learning and transfer their skills to another, related environment.

The lack of generalization is a big problem, says Ossama Ahmed, a master’s student at ETH Zurich who has worked with Bengio’s team to develop a robotic benchmarking tool for causality and transfer learning. “Robots are [often] trained in simulation, and then when you try to deploy [them] in the real world…they usually fail to transfer their learned skills. One of the reasons is that the physical properties of the simulation are quite different from the real world,” says Ahmed. The group’s tool, called CausalWorld, demonstrates that with some of the methods currently available, the generalization capabilities of robots aren’t good enough—at least not to the extent that “we can deploy [them] safely in any arbitrary situation in the real world,” says Ahmed.

The paper on CausalWorld, available as a preprint, describes benchmarks in a simulated robotics manipulation environment using the open-source TriFinger robotics platform. The main purpose of CausalWorld is to accelerate research in causal structure and transfer learning using this simulated environment, where learned skills could potentially be transferred to the real world. Robotic agents can be given tasks that comprise pushing, stacking, placing, and so on, informed by how children have been observed to play with blocks and learn to build complex structures. There is a large set of parameters, such as weight, shape, and appearance of the blocks and the robot itself, on which the user can intervene at any point to evaluate the robot’s generalization capabilities.

In their study, the researchers gave the robots a number of tasks ranging from simple to extremely challenging, based on three different curricula. The first involved no environment changes; the second had changes to a single variable; and the third allowed full randomization of all variables in the environment. They observed that as the curricula got more complex, the agents showed less ability to transfer their skills to the new conditions.

“If we continue scaling up training and network architectures beyond the experiments we report, current methods could potentially solve more of the block stacking environments we propose with CausalWorld,” points out Frederik Träuble, one of the contributors to the study. Träuble adds that “What’s actually interesting is that we humans can generalize much, much quicker [and] we don’t need such a vast amount of experience… We can learn from the underlying shared rules of [certain] environments…[and] use this to generalize better to yet other environments that we haven’t seen.”

A standard neural network, on the other hand, would require insane amounts of experience with myriad environments in order to do the same. “Having a model architecture or method that can learn these underlying rules or causal mechanisms, and utilize them could [help] overcome these challenges,” Träuble says.

CausalWorld’s evaluation protocols, say Ahmed and Träuble, are more versatile than those in previous studies because of the possibility of “disentangling” generalization abilities. In other words, users are free to intervene on a large number of variables in the environment, and thus draw systemic conclusions about what the agent generalizes to—or doesn’t. The next challenge, they say, is to actually use the tools available in CausalWorld to build more generalizable systems.

Despite how dazzled we are by AI’s ability to perform certain tasks, Yoshua Bengio, in 2019, estimated that present-day deep learning is less intelligent than a two-year-old child. Though the ability of neural networks to parallel-process on a large scale has given us breakthroughs in computer vision, translation, and memory, research is now shifting to developing novel deep architectures and training frameworks for addressing tasks like reasoning, planning, capturing causality, and obtaining systematic generalization. “I believe it’s just the beginning of a different style of brain-inspired computation,” Bengio said, adding, “I think we have a lot of the tools to get started.”

Tracking Users on Waze

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/10/tracking-users-on-waze.html

A security researcher discovered a vulnerability in Waze that breaks the anonymity of users:

I found out that I can visit Waze from any web browser at waze.com/livemap so I decided to check how are those driver icons implemented. What I found is that I can ask Waze API for data on a location by sending my latitude and longitude coordinates. Except the essential traffic information, Waze also sends me coordinates of other drivers who are nearby. What caught my eyes was that identification numbers (ID) associated with the icons were not changing over time. I decided to track one driver and after some time she really appeared in a different place on the same road.

The vulnerability has been fixed. More interesting is that the researcher was able to de-anonymize some of the Waze users, proving yet again that anonymity is hard when we’re all so different.

YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/youtuber-jeff-geerling-reviews-raspberry-pi-compute-module-4/

We love seeing how quickly our community of makers responds when we drop a new product, and one of the fastest off the starting block when we released the new Raspberry Pi Compute Module 4 on Monday was YouTuber Jeff Geerling.

Jeff Geerling

We made him keep it a secret until launch day after we snuck one to him early so we could see what one of YouTube’s chief advocates for our Compute Module line thought of our newest baby.

So how does our newest board compare to its predecessor, Compute Module 3+? In Jeff’s first video (above) he reviews some of Compute Module 4’s new features, and he has gone into tons more detail in this blog post.

Jeff also took to a live stream for a Q&A (above), covering some of the most asked questions about Compute Module 4 and sharing some more features he missed in his initial review video.

His next video (above) is pretty cool. Jeff explains:

“Everyone knows you can overclock the Pi 4. But what happens when you overclock a Compute Module 4? The results surprised me!”

Jeff Geerling

And again, there’s tons more detail on temperature measurement, storage performance, and more on Jeff’s blog.

Top job, Jeff. We have our eyes on your channel for more videos on Compute Module 4, coming soon.

If you like what you see on his YouTube channel, you can also sponsor Jeff on GitHub, or support his work via Patreon.

The post YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4 appeared first on Raspberry Pi.
