
SoFi, the underwater robotic fish

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/robotic-fish/

With the Greenland shark finally caught on video for the very first time, scientists and engineers are discussing the limitations of current marine monitoring technology. One significant advance comes from the CSAIL team at Massachusetts Institute of Technology (MIT): SoFi, the robotic fish.

A Robotic Fish Swims in the Ocean

More info: http://bit.ly/SoFiRobot | Paper: http://robert.katzschmann.eu/wp-content/uploads/2018/03/katzschmann2018exploration.pdf

The untethered SoFi robot

Last week, the Computer Science and Artificial Intelligence Laboratory (CSAIL) team at MIT unveiled SoFi, “a soft robotic fish that can independently swim alongside real fish in the ocean.”


Directed by a Super Nintendo controller and acoustic signals, SoFi can dive untethered to a maximum of 18 feet for a total of 40 minutes. A Raspberry Pi receives input from the controller and amplifies the ultrasound signals for SoFi via a HiFiBerry. The controller, Raspberry Pi, and HiFiBerry are sealed within a waterproof, cast-moulded silicone membrane filled with non-conductive mineral oil, allowing for underwater equalisation.
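
SoFi’s own control code isn’t reproduced here, but the input half of the path described above is easy to picture: read the gamepad, turn presses into high-level commands, and hand those to the acoustic layer. Below is a hypothetical Python sketch using the standard Linux joystick interface; the device path, button numbers, and command names are invented for illustration, and the ultrasound-encoding step is only indicated by a comment.

```python
# Hypothetical sketch: read SNES-style gamepad input on a Raspberry Pi and map
# it to high-level commands that an acoustic-modem layer could then encode.
# Device path, button numbers, and command names are illustrative only.
import struct

JS_EVENT_BUTTON = 0x01            # button pressed/released
JS_EVENT_AXIS = 0x02              # stick or d-pad moved
JS_EVENT_FMT = "IhBB"             # struct js_event: time, value, type, number
JS_EVENT_SIZE = struct.calcsize(JS_EVENT_FMT)

BUTTON_COMMANDS = {0: "DIVE", 1: "SURFACE", 2: "CAMERA_TRIGGER"}  # made up

def read_commands(device="/dev/input/js0"):
    """Yield (command, value) tuples decoded from raw joystick events."""
    with open(device, "rb") as js:
        while True:
            event = js.read(JS_EVENT_SIZE)
            if len(event) < JS_EVENT_SIZE:
                break
            _time, value, ev_type, number = struct.unpack(JS_EVENT_FMT, event)
            if ev_type & JS_EVENT_BUTTON and value == 1:
                command = BUTTON_COMMANDS.get(number)
                if command:
                    yield command, value
            elif ev_type & JS_EVENT_AXIS:
                yield f"AXIS_{number}", value   # e.g. steering, pitch, depth

if __name__ == "__main__":
    for command, value in read_commands():
        # In the real system, this is where the command would be encoded as an
        # ultrasound signal and played out through the HiFiBerry DAC.
        print(command, value)
```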


The ultrasound signals, received by a modem within SoFi’s head, control everything from direction, tail oscillation, pitch, and depth to the onboard camera.

As explained on MIT’s news blog, “to make the robot swim, the motor pumps water into two balloon-like chambers in the fish’s tail that operate like a set of pistons in an engine. As one chamber expands, it bends and flexes to one side; when the actuators push water to the other channel, that one bends and flexes in the other direction.”


Ocean exploration

While we’ve seen many autonomous underwater vehicles (AUVs) using onboard Raspberry Pis, SoFi’s ability to roam untethered with a wireless waterproof controller is an exciting achievement.

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time. We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.” – CSAIL PhD candidate Robert Katzschmann

As the MIT news post notes, SoFi’s simple, lightweight setup of a single camera, a motor, and a smartphone lithium polymer battery sets it apart from existing bulky AUVs that require large motors or support from boats.

For more in-depth information on SoFi and the onboard tech that controls it, find the CSAIL team’s paper here.

The post SoFi, the underwater robotic fish appeared first on Raspberry Pi.

Raspberry Pi 3 Model B+ on sale now at $35

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/raspberry-pi-3-model-bplus-sale-now-35/

Here’s a long post. We think you’ll find it interesting. If you don’t have time to read it all, we recommend you watch this video, which will fill you in with everything you need, and then head straight to the product page to fill yer boots. (We recommend the video anyway, even if you do have time for a long read. ‘Cos it’s fab.)

A BRAND-NEW PI FOR π DAY


If you’ve been a Raspberry Pi watcher for a while now, you’ll have a bit of a feel for how we update our products. Just over two years ago, we released Raspberry Pi 3 Model B. This was our first 64-bit product, and our first product to feature integrated wireless connectivity. Since then, we’ve sold over nine million Raspberry Pi 3 units (we’ve sold 19 million Raspberry Pis in total), which have been put to work in schools, homes, offices and factories all over the globe.

Those Raspberry Pi watchers will know that we have a history of releasing improved versions of our products a couple of years into their lives. The first example was Raspberry Pi 1 Model B+, which added two additional USB ports, introduced our current form factor, and rolled up a variety of other feedback from the community. Raspberry Pi 2 didn’t get this treatment, of course, as it was superseded after only one year; but it feels like it’s high time that Raspberry Pi 3 received the “plus” treatment.

So, without further ado, Raspberry Pi 3 Model B+ is now on sale for $35 (the same price as the existing Raspberry Pi 3 Model B), featuring:

  • A 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  • Dual-band 802.11ac wireless LAN and Bluetooth 4.2
  • Faster Ethernet (Gigabit Ethernet over USB 2.0)
  • Power-over-Ethernet support (with separate PoE HAT)
  • Improved PXE network and USB mass-storage booting
  • Improved thermal management

Alongside a 200MHz increase in peak CPU clock frequency, we have roughly three times the wired and wireless network throughput, and the ability to sustain high performance for much longer periods.

Behold the shiny

Raspberry Pi 3B+ is available to buy today from our network of Approved Resellers.

New features, new chips

Roger Thornton did the design work on this revision of the Raspberry Pi. Here, he and I have a chat about what’s new.

Introducing the Raspberry Pi 3 Model B+


The new product is built around BCM2837B0, an updated version of the 64-bit Broadcom application processor used in Raspberry Pi 3B, which incorporates power integrity optimisations, and a heat spreader (that’s the shiny metal bit you can see in the photos). Together these allow us to reach higher clock frequencies (or to run at lower voltages to reduce power consumption), and to more accurately monitor and control the temperature of the chip.

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band, as demonstrated by these iperf results from LibreELEC developer Milhouse.

                             Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                     35.7                  35.6
Raspberry Pi 3B+ (2.4GHz)           46.7                  46.3
Raspberry Pi 3B+ (5GHz)            102                   102

The wireless circuitry is encapsulated under a metal shield, rather fetchingly embossed with our logo. This has allowed us to certify the entire board as a radio module under FCC rules, which in turn will significantly reduce the cost of conformance testing Raspberry Pi-based products.

We’ll be teaching metalwork next.

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B. Again, here are some typical iperf results.

                             Tx bandwidth (Mb/s)   Rx bandwidth (Mb/s)
Raspberry Pi 3B                     94.1                  95.5
Raspberry Pi 3B+                   315                   315
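
The throughput figures above were gathered with iperf (the exact invocation Milhouse used isn’t given here). If you’d like to collect comparable numbers on your own network, the sketch below drives iperf3 from Python and pulls the summary throughput out of its JSON report; it assumes iperf3 is installed on the Pi and that an iperf3 server (started with iperf3 -s) is running on another machine, whose address below is a placeholder.

```python
# Sketch: measure TCP throughput against an iperf3 server and report Mb/s.
# Assumes the iperf3 binary is installed and a server is reachable on the LAN.
import json
import subprocess

def measure_throughput(server, seconds=10):
    """Return (tx_mbps, rx_mbps) measured against an iperf3 server."""
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    tx = report["end"]["sum_sent"]["bits_per_second"] / 1e6
    rx = report["end"]["sum_received"]["bits_per_second"] / 1e6
    return tx, rx

if __name__ == "__main__":
    tx_mbps, rx_mbps = measure_throughput("192.168.1.10")  # placeholder server IP
    print(f"Tx: {tx_mbps:.1f} Mb/s, Rx: {rx_mbps:.1f} Mb/s")
```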

We use a magjack that supports Power over Ethernet (PoE), and bring the relevant signals to a new 4-pin header. We will shortly launch a PoE HAT which can generate the 5V necessary to power the Raspberry Pi from the 48V PoE supply.

There… are… four… pins!

Coming soon to a Raspberry Pi 3B+ near you

Raspberry Pi 3B was our first product to support PXE Ethernet boot. Testing it in the wild shook out a number of compatibility issues with particular switches and traffic environments. Gordon has rolled up fixes for all known issues into the BCM2837B0 boot ROM, and PXE boot is now enabled by default.

Clocking, voltages and thermals

The improved power integrity of the BCM2837B0 package, and the improved regulation accuracy of our new MaxLinear MxL7704 power management IC, have allowed us to tune our clocking and voltage rules for both better peak performance and longer-duration sustained performance.

Below 70°C, we use the improvements to increase the core frequency to 1.4GHz. Above 70°C, we drop to 1.2GHz, and use the improvements to decrease the core voltage, increasing the period of time before we reach our 80°C thermal throttle; the reduction in power consumption is such that many use cases will never reach the throttle. Like a modern smartphone, we treat the thermal mass of the device as a resource, to be spent carefully with the goal of optimising user experience.

This graph, courtesy of Gareth Halfacree, demonstrates that Raspberry Pi 3B+ runs faster and at a lower temperature for the duration of an eight‑minute quad‑core Sysbench CPU test.
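
If you’d like to watch this behaviour on your own board, the SoC temperature and the current ARM clock frequency are exposed through sysfs on Raspberry Pi OS. Here is a minimal monitoring sketch; the paths are the standard Linux ones, and you would run a load of your choice (sysbench, for example) in a second terminal while it logs.

```python
# Sketch: log SoC temperature and ARM clock frequency once per second.
# The sysfs paths below are the standard ones on Raspberry Pi OS / Linux.
import time

TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"                   # millidegrees C
FREQ_PATH = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"   # kHz

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    print("time_s\ttemp_C\tfreq_MHz")
    start = time.time()
    while True:                      # stop with Ctrl-C
        temp_c = read_int(TEMP_PATH) / 1000.0
        freq_mhz = read_int(FREQ_PATH) / 1000.0
        print(f"{time.time() - start:6.1f}\t{temp_c:5.1f}\t{freq_mhz:6.0f}")
        time.sleep(1)
```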

Note that Raspberry Pi 3B+ does consume substantially more power than its predecessor. We strongly encourage you to use a high-quality 2.5A power supply, such as the official Raspberry Pi Universal Power Supply.

FAQs

We’ll keep updating this list over the next couple of days, but here are a few to get you started.

Are you discontinuing earlier Raspberry Pi models?

No. We have a lot of industrial customers who will want to stick with the existing products for the time being. We’ll keep building these models for as long as there’s demand. Raspberry Pi 1B+, Raspberry Pi 2B, and Raspberry Pi 3B will continue to sell for $25, $35, and $35 respectively.

What about Model A+?

Raspberry Pi 1A+ continues to be the $20 entry-level “big” Raspberry Pi for the time being. We are considering the possibility of producing a Raspberry Pi 3A+ in due course.

What about the Compute Module?

CM1, CM3 and CM3L will continue to be available. We may offer versions of CM3 and CM3L with BCM2837B0 in due course, depending on customer demand.

Are you still using VideoCore?

Yes. VideoCore IV 3D is the only publicly-documented 3D graphics core for ARM‑based SoCs, and we want to make Raspberry Pi more open over time, not less.

Credits

A project like this requires a vast amount of focused work from a large team over an extended period. Particular credit is due to Roger Thornton, who designed the board and ran the exhaustive (and exhausting) RF compliance campaign, and to the team at the Sony UK Technology Centre in Pencoed, South Wales. A partial list of others who made major direct contributions to the BCM2837B0 chip program, CYW43455 integration, LAN7515 and MxL7704 developments, and Raspberry Pi 3B+ itself follows:

James Adams, David Armour, Jonathan Bell, Maria Blazquez, Jamie Brogan-Shaw, Mike Buffham, Rob Campling, Cindy Cao, Victor Carmon, KK Chan, Nick Chase, Nigel Cheetham, Scott Clark, Nigel Clift, Dominic Cobley, Peter Coyle, John Cronk, Di Dai, Kurt Dennis, David Doyle, Andrew Edwards, Phil Elwell, John Ferdinand, Doug Freegard, Ian Furlong, Shawn Guo, Philip Harrison, Jason Hicks, Stefan Ho, Andrew Hoare, Gordon Hollingworth, Tuomas Hollman, EikPei Hu, James Hughes, Andy Hulbert, Anand Jain, David John, Prasanna Kerekoppa, Shaik Labeeb, Trevor Latham, Steve Le, David Lee, David Lewsey, Sherman Li, Xizhe Li, Simon Long, Fu Luo Larson, Juan Martinez, Sandhya Menon, Ben Mercer, James Mills, Max Passell, Mark Perry, Eric Phiri, Ashwin Rao, Justin Rees, James Reilly, Matt Rowley, Akshaye Sama, Ian Saturley, Serge Schneider, Manuel Sedlmair, Shawn Shadburn, Veeresh Shivashimper, Graham Smith, Ben Stephens, Mike Stimson, Yuree Tchong, Stuart Thomson, John Wadsworth, Ian Watch, Sarah Williams, Jason Zhu.

If you’re not on this list and think you should be, please let me know, and accept my apologies.

The post Raspberry Pi 3 Model B+ on sale now at $35 appeared first on Raspberry Pi.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if it’s the ownership of a small open-source project or so. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
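
To make the “safe-by-default API” point concrete, here is a generic illustration (not code from any particular team): with parameterised queries, the easiest way to write the query is also the injection-proof way, so engineers get the security property without having to think about it.

```python
# Illustration of a safe-by-default API: parameterised SQL queries.
# Uses only the Python standard library; the table and data are toy examples.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query's structure,
# so every row comes back.
unsafe = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe by default: placeholders keep data separate from query structure, so the
# payload is treated as an odd-looking (and non-matching) user name.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print("string-built query returned:", unsafe)   # both rows leak
print("parameterised query returned:", safe)    # empty list
```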

Of course, not all problems can be addressed on framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.

Jumping Air Gaps

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/jumping_air_gap_2.html

Nice profile of Mordechai Guri, who researches a variety of clever ways to steal data over air-gapped computers.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

Here’s a page with all the research results.

BoingBoing post.

Progressing from tech to leadership

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/on-leadership.html

I’ve been a technical person all my life. I started doing vulnerability research in the late 1990s – and even today, when I’m not fiddling with CNC-machined robots or making furniture, I’m probably cobbling together a fuzzer or writing a book about browser protocols and APIs. In other words, I’m a geek at heart.

My career is a different story. Over the past two decades and change, I went from writing CGI scripts and setting up WAN routers for a chain of shopping malls, to doing pentests for institutional customers, to designing a series of network monitoring platforms and handling incident response for a big telco, to building and running the product security org for one of the largest companies in the world. It’s been an interesting ride – and now that I’m on the hook for the well-being of about 100 folks across more than a dozen subteams around the world, I’ve been thinking a bit about the lessons learned along the way.

Of course, I’m a bit hesitant to write such a post: sometimes, your efforts pan out not because of your approach, but despite it – and it’s possible to draw precisely the wrong conclusions from such anecdotes. Still, I’m very proud of the culture we’ve created and the caliber of folks working on our team. It happened through the work of quite a few talented tech leads and managers even before my time, but it did not happen by accident – so I figured that my observations may be useful for some, as long as they are taken with a grain of salt.

But first, let me start on a somewhat somber note: what nobody tells you is that one’s level on the leadership ladder tends to be inversely correlated with several measures of happiness. The reason is fairly simple: as you get more senior, a growing number of people will come to you expecting you to solve increasingly fuzzy and challenging problems – and you will no longer be patted on the back for doing so. This should not scare you away from such opportunities, but it definitely calls for a particular mindset: your motivation must come from within. Look beyond the fight-of-the-day; find satisfaction in seeing how far your teams have come over the years.

With that out of the way, here’s a collection of notes, loosely organized into three major themes.

The curse of a techie leader

Perhaps the most interesting observation I have is that for a person coming from a technical background, building a healthy team is first and foremost about the subtle art of letting go.

There is a natural urge to stay involved in any project you’ve started or helped improve; after all, it’s your baby: you’re familiar with all the nuts and bolts, and nobody else can do this job as well as you. But as your sphere of influence grows, this becomes a choke point: there are only so many things you could be doing at once. Just as importantly, the project-hoarding behavior robs more junior folks of the ability to take on new responsibilities and bring their own ideas to life. In other words, when done properly, delegation is not just about freeing up your plate; it’s also about empowerment and about signalling trust.

Of course, when you hand your project over to somebody else, the new owner will initially be slower and more clumsy than you; but if you pick the new leads wisely, give them the right tools and the right incentives, and don’t make them deathly afraid of messing up, they will soon excel at their new jobs – and be grateful for the opportunity.

A related affliction of many accomplished techies is the conviction that they know the answers to every question even tangentially related to their domain of expertise; that belief is coupled with a burning desire to have the last word in every debate. When practiced in moderation, this behavior is fine among peers – but for a leader, one of the most important skills to learn is knowing when to keep your mouth shut: people learn a lot better by experimenting and making small mistakes than by being schooled by their boss, and they often try to read into your passing remarks. Don’t run an authoritarian camp focused on total risk aversion or perfectly efficient resource management; just set reasonable boundaries and exit conditions for experiments so that they don’t spiral out of control – and be amazed by the results every now and then.

Death by planning

When nothing is on fire, it’s easy to get preoccupied with maintaining the status quo. If your current headcount or budget request lists all the same projects as last year’s, or if you ever find yourself ending an argument by deferring to a policy or a process document, it’s probably a sign that you’re getting complacent. In security, complacency usually ends in tears – and when it doesn’t, it leads to burnout or boredom.

In my experience, your goal should be to develop a cadre of managers or tech leads capable of coming up with clever ideas, prioritizing them among themselves, and seeing them to completion without your day-to-day involvement. In your spare time, make it your mission to challenge them to stay ahead of the curve. Ask your vendor security lead how they’d streamline their work if they had a 40% jump in the number of vendors but no extra headcount; ask your product security folks what’s the second line of defense or containment should your primary defenses fail. Help them get good ideas off the ground; set some mental success and failure criteria to be able to cut your losses if something does not pan out.

Of course, malfunctions happen even in the best-run teams; to spot trouble early on, instead of overzealous project tracking, I found it useful to encourage folks to run a data-driven org. I’d usually ask them to imagine that a brand new VP shows up in our office and, as his first order of business, asks “why do you have so many people here and how do I know they are doing the right things?”. Not everything in security can be quantified, but hard data can validate many of your assumptions – and will alert you to unseen issues early on.

When focusing on data, it’s important not to treat pie charts and spreadsheets as an art unto itself; if you run a security review process for your company, your CSAT scores are going to reach 100% if you just rubberstamp every launch request within ten minutes of receiving it. Make sure you’re asking the right questions; instead of “how satisfied are you with our process”, try “is your product better as a consequence of talking to us?”

Whenever things are not progressing as expected, it is a natural instinct to fall back to micromanagement, but it seldom truly cures the ill. It’s probable that your team disagrees with your vision or its feasibility – and that you’re either not listening to their feedback, or they don’t think you’d care. It’s good to assume that most of your employees are as smart or smarter than you; barking your orders at them more loudly or more frequently does not lead anyplace good. It’s good to listen to them and either present new facts or work with them on a plan you can all get behind.

In some circumstances, all that’s needed is honesty about the business trade-offs, so that your team feels like your “partner in crime”, not a victim of circumstance. For example, we’d tell our folks that by not falling behind on basic, unglamorous work, we earn the trust of our VPs and SVPs – and that this translates into the independence and the resources we need to pursue more ambitious ideas without being told what to do; it’s how we game the system, so to speak. Oh: leading by example is a pretty powerful tool at your disposal, too.

The human factor

I’ve come to appreciate that hiring decent folks who can get along with others is far more important than trying to recruit conference-circuit superstars. In fact, hiring superstars is a decidedly hit-and-miss affair: while certainly not a rule, there is a proportion of folks who put the maintenance of their celebrity status ahead of job responsibilities or the well-being of their peers.

For teams, one of the most powerful demotivators is a sense of unfairness and disempowerment. This is where tech-originating leaders can shine, because their teams usually feel that their bosses understand and can evaluate the merits of the work. But it also means you need to be decisive and actually solve problems for them, rather than just letting them vent. You will need to make unpopular decisions every now and then; in such cases, I think it’s important to move quickly, rather than prolonging the uncertainty – but it’s also important to sincerely listen to concerns, explain your reasoning, and be frank about the risks and trade-offs.

Whenever you see a clash of personalities on your team, you probably need to respond swiftly and decisively; being right should not justify being a bully. If you don’t react to repeated scuffles, your best people will probably start looking for other opportunities: it’s draining to put up with constant pie fights, no matter if the pies are thrown straight at you or if you just need to duck one every now and then.

More broadly, personality differences seem to be a much better predictor of conflict than any technical aspects underpinning a debate. As a boss, you need to identify such differences early on and come up with creative solutions. Sometimes, all you need is taking some badly-delivered but valid feedback and having a conversation with the other person, asking some questions that can help them reach the same conclusions without feeling that their worldview is under attack. Other times, the only path forward is making sure that some folks simply don’t run into each other for a while.

Finally, dealing with low performers is a notoriously hard but important part of the game. Especially within large companies, there is always the temptation to just let it slide: sideline a struggling person and wait for them to either get over their issues or leave. But this sends an awful message to the rest of the team; for better or worse, fairness is important to most. Simply firing the low performers is seldom the best solution, though; successful recovery cases are what sets great managers apart from the average ones.

Oh, one more thought: people in leadership roles have their allegiance divided between the company and the people who depend on them. The obligation to the company is more formal, but the impact you have on your team is longer-lasting and more intimate. When the obligations to the employer and to your team collide in some way, make sure you can make the right call; it might be one of the most consequential decisions you’ll ever make.

WhatsApp Vulnerability

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/whatsapp_vulner.html

A new vulnerability in WhatsApp has been discovered:

…the researchers unearthed far more significant gaps in WhatsApp’s security: They say that anyone who controls WhatsApp’s servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here’s the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it’s theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here’s the research paper.

Spiegelbilder Studio’s giant CRT video walls

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crt-video-walls/

After Matvey Fridman of Germany-based production company Spiegelbilder Studio got in touch to share their latest build with us, we invited him to write a guest blog post about the CRT video walls they created for the band STRANDKØNZERT.

STRANDKØNZERT – TAGTRAUMER – OFFICIAL VIDEO

GERMAN DJENT RAP / EST. 2017. COMPLETE DIY-PROJECT.

CRT video wall

About a year ago, we had the idea of building a huge video wall out of old TVs to use in a music video. It took some time, but half a year later we found ourselves in a studio actually building this thing using 30 connected computers, 24 of which were Raspberry Pis.


How we did it

After weeks and months of preproduction and testing, we set aside two consecutive days to build the wall, create the underlying IP network, run a few tests, and then film the artists’ performance in front of it. We actually had 32 Pis (a mixed bag of first, second, and third generation models) and even more TVs ready to go, since we didn’t know what the final build would actually look like. We ended up using 29 separate screens of various sizes hooked up to 24 separate Pis — the remaining five TVs got a daisy-chained video signal out of other monitors for a cool effect. Each Pi had to run a free piece of software called PiWall.


Since the TVs only had analogue video inputs, we had to get special composite breakout cables and then adapt the RCA connectors to either SCART, S-Video, or BNC.


As soon as we had all of that running, we connected every Pi to a 48-port network switch that we’d hooked up to a Windows PC acting as a DHCP server to automatically assign IP addresses and handle the multicast addressing. To make remote control of the Raspberry Pis easier, a separate master Linux PC and two MacBook laptops, each with SSH enabled and a Samba server running, joined the network as well.


The MacBook laptops were used to drop two files containing the settings onto each Pi. The .pitile file was unique to every Pi and contained its ID. The .piwall file was the same for all Pis: it contained the measurements and positions of every single screen, which the software needs in order to split up the video signal coming in over the network. After every Pi got the command to start the PiWall software, which specifies the UDP multicast address and settings to be used to receive the video stream, the master Linux PC was tasked with streaming the video file to these UDP addresses. Now every TV was showing its section of the video, and we could begin filming.
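
The .piwall geometry boils down to each screen’s position and size within the overall wall; from that, every tile can work out which rectangle of the incoming frame it should display. PiWall handles this itself from the config files, but here is a toy Python sketch of the underlying calculation, with invented wall and screen dimensions rather than the studio’s actual measurements.

```python
# Toy sketch of how a tiled video wall maps each screen onto a crop region of
# the source video. All dimensions are invented; units are arbitrary (e.g. cm).
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    x: float        # position of the screen's top-left corner within the wall
    y: float
    width: float
    height: float

WALL_W, WALL_H = 300.0, 200.0     # overall wall size
VIDEO_W, VIDEO_H = 1920, 1080     # resolution of the source video

tiles = [
    Tile("pi01", 0,  0,  75, 60),
    Tile("pi02", 75, 0,  75, 60),
    Tile("pi03", 0,  60, 75, 60),
]

def crop_for(tile):
    """Return (x, y, w, h) in video pixels that this tile should display."""
    px = round(tile.x / WALL_W * VIDEO_W)
    py = round(tile.y / WALL_H * VIDEO_H)
    pw = round(tile.width / WALL_W * VIDEO_W)
    ph = round(tile.height / WALL_H * VIDEO_H)
    return px, py, pw, ph

for tile in tiles:
    print(tile.name, crop_for(tile))
```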


The whole process and the contents of the files and commands are summarised in the infographic below. A lot of trial and error was involved in the making of this project, but it all worked out well in the end. We hope you enjoy the craft behind the music video even though the music is not for everybody 😉

PiWall infographic

You can follow Spiegelbilder Studio on Facebook, Twitter, and Instagram. And if you enjoyed the music video, be sure to follow STRANDKØNZERT too.

The post Spiegelbilder Studio’s giant CRT video walls appeared first on Raspberry Pi.

Detecting Drone Surveillance with Traffic Analysis

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/detecting_drone.html

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject — a window, say — someone might want to guard from potential surveillance. Then they remotely intercept a drone’s radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone’s encrypted video.

The details have to do with the way drone video is compressed:

The researchers’ technique takes advantage of an efficiency feature streaming video has used for years, known as “delta frames.” Instead of encoding video as a series of raw images, it’s compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who’s intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.
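
As a toy illustration of the principle (not the researchers’ actual method): flash a known on/off pattern at the subject, record how large each intercepted video frame is, and check whether the sizes correlate with the pattern. The simulation below uses purely synthetic numbers.

```python
# Toy simulation of the delta-frame side channel: encrypted traffic still leaks
# how much each frame changed, so a known on/off stimulus shows up as a strong
# correlation with observed frame sizes. All numbers here are synthetic.
import random

random.seed(1)

def simulate_frame_sizes(stimulus, quiet=2000, active=9000, noise=800):
    """Synthetic per-frame byte counts: larger when the stimulus is 'on'."""
    return [(active if on else quiet) + random.gauss(0, noise) for on in stimulus]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The "watermark": e.g. a light in the window flickering in a known pattern.
pattern = [i % 4 < 2 for i in range(200)]          # on, on, off, off, ...
observed = simulate_frame_sizes(pattern)

print(f"correlation with our pattern: {correlation(pattern, observed):.2f}")
# A value near 1 suggests the intercepted stream is showing our subject.
```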

Research paper and video.

Dark Caracal: Global Espionage Malware from Lebanon

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/dark_caracal_gl.html

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake versions of secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that’s been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There’s a lot in the full report. It’s worth reading.

Three news articles.

New Book Coming in September: "Click Here to Kill Everybody"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/01/new_book_coming.html

My next book is still on track for a September 2018 publication. Norton is still the publisher. The title is now Click Here to Kill Everybody: Peril and Promise on a Hyperconnected Planet, which I generally refer to as CH2KE.

The table of contents has changed since I last blogged about this, and it now looks like this:

  • Introduction: Everything is Becoming a Computer
  • Part 1: The Trends
    • 1. Computers are Still Hard to Secure
    • 2. Everyone Favors Insecurity
    • 3. Autonomy and Physical Agency Bring New Dangers
    • 4. Patching is Failing as a Security Paradigm
    • 5. Authentication and Identification are Getting Harder
    • 6. Risks are Becoming Catastrophic
  • Part 2: The Solutions
    • 7. What a Secure Internet+ Looks Like
    • 8. How We Can Secure the Internet+
    • 9. Government is Who Enables Security
    • 10. How Government Can Prioritize Defense Over Offense
    • 11. What’s Likely to Happen, and What We Can Do in Response
    • 12. Where Policy Can Go Wrong
    • 13. How to Engender Trust on the Internet+
  • Conclusion: Technology and Policy, Together

Two questions for everyone.

1. I’m not really happy with the subtitle. It needs to be descriptive, to counterbalance the admittedly clickbait title. It also needs to telegraph: “everyone needs to read this book.” I’m taking suggestions.

2. In the book I need a word for the Internet plus the things connected to it plus all the data and processing in the cloud. I’m using the word “Internet+,” and I’m not really happy with it. I don’t want to invent a new word, but I need to strongly signal that what’s coming is much more than just the Internet — and I can’t find any existing word. Again, I’m taking suggestions.

Hijacker – Reaver For Android Wifi Hacker App

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/01/hijacker-reaver-android-wifi-hacker-app/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


Hijacker is a native GUI which provides Reaver for Android along with Aircrack-ng, Airodump-ng, and MDK3, making it a powerful Wifi hacker app.

It offers a simple and easy UI to use these tools without typing commands in a console and copy & pasting MAC addresses.

Features of Hijacker Reaver For Android Wifi Hacker App
Information Gathering

  • View a list of access points and stations (clients) around you (even hidden ones)
  • View the activity of a specific network (by measuring beacons and data packets) and its clients
  • Statistics about access points and stations
  • See the manufacturer of a device (AP or station) from the OUI database
  • See the signal power of devices and filter the ones that are closer to you
  • Save captured packets in .cap file

Reaver for Android Wifi Cracker Attacks

  • Deauthenticate all the clients of a network (either targeting each one or without specific target)
  • Deauthenticate a specific client from the network it’s connected
  • MDK3 Beacon Flooding with custom options and SSID list
  • MDK3 Authentication DoS for a specific network or to every nearby AP
  • Capture a WPA handshake or gather IVs to crack a WEP network
  • Reaver WPS cracking (pixie-dust attack using NetHunter chroot and external adapter)

Other Wifi Hacker App Features

  • Leave the app running in the background, optionally with a notification
  • Copy commands or MAC addresses to clipboard
  • Includes the required tools, no need for manual installation
  • Includes the nexmon driver and management utility for BCM4339 devices
  • Set commands to enable and disable monitor mode automatically
  • Crack .cap files with a custom wordlist
  • Create custom actions and run them on an access point or a client easily
  • Sort and filter Access Points and Stations with many parameters
  • Export all gathered information to a file
  • Add a persistent alias to a device (by MAC) for easier identification

Requirements to Crack Wifi Password with Android

This application requires an ARM Android device with an internal wireless adapter that supports Monitor Mode.

Read the rest of Hijacker – Reaver For Android Wifi Hacker App now! Only available at Darknet.

Acoustical Attacks against Hard Drives

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/acoustical_atta.html

Interesting destructive attack: “Acoustic Denial of Service Attacks on HDDs”:

Abstract: Among storage components, hard disk drives (HDDs) have become the most commonly-used type of non-volatile storage due to their recent technological advances, including, enhanced energy efficacy and significantly-improved areal density. Such advances in HDDs have made them an inevitable part of numerous computing systems, including, personal computers, closed-circuit television (CCTV) systems, medical bedside monitors, and automated teller machines (ATMs). Despite the widespread use of HDDs and their critical role in real-world systems, there exist only a few research studies on the security of HDDs. In particular, prior research studies have discussed how HDDs can potentially leak critical private information through acoustic or electromagnetic emanations. Borrowing theoretical principles from acoustics and mechanics, we propose a novel denial-of-service (DoS) attack against HDDs that exploits a physical phenomenon, known as acoustic resonance. We perform a comprehensive examination of physical characteristics of several HDDs and create acoustic signals that cause significant vibrations in HDDs internal components. We demonstrate that such vibrations can negatively influence the performance of HDDs embedded in real-world systems. We show the feasibility of the proposed attack in two real-world case studies, namely, personal computers and CCTVs.
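
The core of the attack is playing a sustained tone at or near a drive’s resonant frequency. Purely as an illustrative sketch (the frequency below is a placeholder, not a real resonance value), here is how one might generate such a test tone with the Python standard library.

```python
# Sketch: write a sustained sine tone to a WAV file using only the stdlib.
# The frequency is a placeholder; real resonance frequencies are drive-specific.
import math
import struct
import wave

FREQ_HZ = 5000        # placeholder tone frequency
SECONDS = 10
RATE = 44100          # samples per second
AMPLITUDE = 0.8       # fraction of full scale

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)               # 16-bit samples
    wav.setframerate(RATE)
    frames = bytearray()
    for n in range(SECONDS * RATE):
        sample = AMPLITUDE * math.sin(2 * math.pi * FREQ_HZ * n / RATE)
        frames += struct.pack("<h", int(sample * 32767))
    wav.writeframes(bytes(frames))
```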

Man-in-the-Middle Attack against Electronic Car-Door Openers

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/man-in-the-midd_8.html

This is an interesting tactic, and there’s a video of it being used:

The theft took just one minute and the Mercedes car, stolen from the Elmdon area of Solihull on 24 September, has not been recovered.

In the footage, one of the men can be seen waving a box in front of the victim’s house.

The device receives a signal from the key inside and transmits it to the second box next to the car.

The car’s systems are then tricked into thinking the key is present and it unlocks, before the ignition can be started.

The Decision on Transparency

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/transparency-in-business/

Backblaze transparency

This post by Backblaze’s CEO and co-founder Gleb Budman is the seventh in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency


“Are you crazy?” “Why would you do that?!” “You shouldn’t share that!”

These are just a few of the common questions and comments we heard after posting some of the information we have shared over the years. So was it crazy? Misguided? Should you do it?

With that background I’d like to dig into the decision to become so transparent, from releasing stats on hard drive failures, to storage pod specs, to publishing our cloud storage costs, and open sourcing the Reed-Solomon code. What was the thought process behind becoming so transparent when most companies work so hard to hide their inner workings, especially information such as the Storage Pod specs that would normally be considered a proprietary advantage? Most importantly I’d like to explore the positives and negatives of being so transparent.

Sharing Intellectual Property

The first “transparency” that garnered a flurry of “why would you share that?!” came as a result of us deciding to open source our Storage Pod design: publishing the specs, parts, prices, and how to build it yourself. The Storage Pod was a key component of our infrastructure, gave us a cost (and thus competitive) advantage, took significant effort to develop, and had a fair bit of intellectual property: the “IP.”

The negatives of sharing this are obvious: it allows our competitors to use the design to reduce our cost advantage, and it gives away the IP, which could be patentable or have value as a trade secret.

The positives were certainly less obvious, and at the time we couldn’t have guessed how massive they would be.

We wrestled with the decision: prospective users and others online didn’t believe we could offer our service for such a low price, thinking that we would burn through some cash hoard and then go out of business. We wanted to reassure them, but how?

This is how our response evolved:

We’ve built a lower cost storage platform.
But why would anyone believe us?
Because, we’ve designed our own servers and they’re less expensive.
But why would anyone believe they were so low cost and efficient?
Because here’s how much they cost versus others.
But why would anyone believe they cost that little and still enabled us to efficiently store data?
Because here are all the components they’re made of, this is how to build them, and this is how they work.
Ok, you can’t argue with that.

Great — so that would reassure people. But should we do this? Is it worth it?

This was 2009, we were a tiny company of seven people working from our co-founder’s one-bedroom apartment. We decided that the risk of not having potential customers trust us was more impactful than the risk of our competitors possibly deciding to use our server architecture. The former might kill the company in short order; the latter might make it harder for us to compete in the future. Moreover, we figured that most competitors were established on their own platforms and were unlikely to switch to ours, even if it were better.

Takeaway: Build your brand today. There are no assurances you will make it to tomorrow if you can’t make people believe in you today.

A Sharing Success Story — The Backblaze Storage Pod

So with that, we decided to publish everything about the Storage Pod. As for deciding to actually open source it? That was a ‘thank you’ to the open source community upon whose shoulders we stood as we used software such as Linux, Tomcat, etc.

With eight years of hindsight, here’s what happened:

As best as I can tell, none of our direct competitors ever used our Storage Pod design, opting instead to continue paying more for commercial solutions.

  • Hundreds of press articles have been written about Backblaze as a direct result of sharing the Storage Pod design.
  • Millions of people have read press articles or our blog posts about the Storage Pods.
  • Backblaze was established as a storage tech thought leader, and a resource for those looking for information in the space.
  • Our blog became viewed as a resource, not a corporate mouthpiece.
  • Recruiting has been made easier through the awareness of Backblaze, the appreciation for us taking on challenging tech problems in interesting ways, and for our openness.
  • Sourcing for our Storage Pods has become easier because we can point potential vendors to our blog posts and say, “here’s what we need.”

And those are just the direct benefits for us. One of the things that warms my heart is that doing this has helped others:

  • Several companies have started selling servers based on our Storage Pod designs.
  • Netflix credits Backblaze with being the inspiration behind their CDN servers.
  • Many schools, labs, and others have shared that they’ve been able to do what they didn’t think was possible because using our Storage Pod designs provided lower-cost storage.
  • And I want to believe that in general we pushed forward the development of low-cost storage servers in the industry.

So overall, the decision on being transparent and sharing our Storage Pod designs was a clear win.

Takeaway: Never underestimate the value of goodwill. It can help build new markets that fuel your future growth and create new ecosystems.

Sharing An “Almost Acquisition”

Acquisition announcements are par for the course. No company, however, talks about the acquisition that fell through. If rumors appear in the press, the company’s response is always, “no comment.” But in 2010, when Backblaze was almost, but not, acquired, we wrote about it in detail. Crazy?

The negatives of sharing this are slightly less obvious, but the two issues most people worried about were: 1) the fact that the company could be acquired would spook customers, and 2) the fact that it wasn’t would signal to potential acquirers that something was wrong.

So, why share this at all? No one was asking “did you almost get acquired?”

First, we had established a culture of transparency and this was a significant event that occurred for us, thus we defaulted to assuming we would share. Second, we learned that acquisitions fall through all the time, not just during the early fishing stage, but even after term sheets are signed, diligence is done, and all the paperwork is complete. I felt we had learned some things about the process that would be valuable to others that were going through it.

As it turned out, we received emails from startup founders saying they saved the post for the future, and from lawyers, VCs, and advisors saying they shared it with their portfolio companies. Among the most touching emails I received was one from a founder who said that after an acquisition fell through she felt so alone that she became incredibly depressed, and that reading our post helped her see that this happens and that things could be OK after. Being transparent about almost getting acquired was worth it just to help that one founder.

And what about the concerns? As for spooking customers, maybe some were — but our sign-ups went up, not down, afterward. Any company can be acquired, and many of the world’s largest have been. That we were being both thoughtful about where to go with it, and open about it, I believe gave customers a sense that we would do the right thing if it happened. And as for signaling to potential acquirers? The ones I’ve spoken with all knew this happens regularly enough that it’s not a factor.

Takeaway: Being open and transparent is also a form of giving back to others.

Sharing Strategic Data

For years people have been desperate to know how reliable hard drives are. They could go to Amazon for individual reviews, but someone saying “this drive died for me” doesn’t provide statistical insight. Google published a study that showed annualized drive failure rates, but didn’t break down the results by manufacturer or model. Since Backblaze has deployed about 100,000 hard drives to store customer data, we have been able to collect a wealth of data on the reliability of the drives by make, model, and size. Was Backblaze the only one with this data? Of course not: Google, Amazon, Microsoft, and any other cloud-scale storage provider tracked it. Yet none would publish. Should Backblaze?

Again, starting with the main negatives: 1) sharing which drives we liked could increase demand for them, thus reducing availability or increasing prices, and 2) publishing the data might make the drive vendors unhappy with us, thereby making it difficult for us to buy drives.

But we felt that the largest drive purchasers (Amazon, Google, etc.) already had their own stats and would buy the drives they chose, and if individuals or smaller companies used our stats, they wouldn’t sufficiently move the overall market demand. Also, we hoped that the drive companies would see that we were being fair in our analysis and, if anything, would leverage our data to make drives even better.

Again, publishing the data resulted in tremendous value for Backblaze, with millions of people having read the analysis that we put out quarterly. Also, becoming known as the place to go for drive reliability information is a natural fit with being a backup and storage provider. In addition, in a twist from many people’s expectations, some of the drive companies actually started working more closely with us, seeing that we could be a good source of feedback for them. We’ve also seen many individuals and companies make more data-based decisions on which drives to buy, and researchers have used the data for a variety of analyses.

Backblaze blog analytics showing a spike in readership after a hard drive stats post

Takeaway: Being open and transparent is rarely as risky as it seems.

Sharing Revenue (And Other Metrics)

Journalists always want to publish company revenue and other metrics, and private companies always shy away from sharing. For a long time we did, too. Then, we opened up about that, as well.

The negatives of sharing these numbers are: 1) external parties may otherwise perceive you’re doing better than you are, 2) if you share numbers often, you may show that growth has slowed or worse, and 3) it gives your competitors info to compare their own business to.

We decided that, while some may have perceived we were bigger, our scale was plenty significant. Since we choose what we share and when, it’s up to us whether to disclose at any point. And if our competitors compare, what will they actually change that would affect us?

I did wait to share revenue until I felt I had the right person to write about it. At one point a journalist said she wouldn’t write about us unless I disclosed revenue. I suggested we had a lot to offer for the story, but didn’t want to share revenue yet. She refused to budge and I walked away from the article. Several years later, I reached out to a journalist who had covered Backblaze before and whom I felt understood our business, and offered to share revenue with him. He wrote a deep dive about the company, with revenue being one of the components of the story.

Sharing these metrics showed that we were at scale and running a real business, one with positive unit economics and margins, but not one where we were gouging customers.

Takeaway: Being open with the press about items typically not shared can be uncomfortable, but the press can amplify your story.

Should You Share?

For Backblaze, I believe the results of transparency have been staggering. However, it’s not for everyone. Apple has, clearly, been wildly successful taking secrecy to the extreme. In their case, early disclosure combined with the long cycle of hardware releases could significantly impact sales of current products.

“For Backblaze, I believe the results of transparency have been staggering.” — Gleb Budman

I will argue, however, that for most startups transparency wins. Most startups need to establish credibility and trust, build awareness and a fan base, show that they understand what their customers need and be useful to them, and show the soul and passion behind the company. Some startup companies try to buy these virtues with investor money, and sometimes amplifying your brand via paid marketing helps. But, authentic transparency can build awareness and trust not only less expensively, but more deeply than money can buy.

Backblaze was open from the beginning. With no outside investors, as founders we were able to express ourselves and make our decisions. And it’s easier to be a company that shares if you do it from the start, but for any company, here are a few suggestions:

  1. Ask about sharing: If something significant happens — good or bad — ask “should we share this?” If you made a tough decision, ask “should we share the thinking behind the decision and why it was tough?”
  2. Default to yes: It’s often scary to share, but look for the reasons to say ‘yes,’ not the reasons to say ‘no.’ That doesn’t mean you won’t sometimes decide not to, but make that the high bar.
  3. Minimize reviews: Press releases tend to be sanitized and boring because they’ve been endlessly wordsmithed by committee. Establish the few things you don’t want shared, but minimize the number of people that have to see anything else before it can go out. Teach, then trust.
  4. Engage: Sharing will result in comments on your blog, social, articles, etc. Reply to people’s questions and engage. It’ll make the readers more engaged and give you a better understanding of what they’re looking for.
  5. Accept mistakes: Things will become public that aren’t perfectly sanitized. Accept that and don’t punish people for oversharing.

Building a company culture that is open to sharing takes time, but continuous practice will build it, and over time the company will find its voice and approach to sharing.

The post The Decision on Transparency appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Automating Security Group Updates with AWS Lambda

Post Syndicated from Ian Scofield original https://aws.amazon.com/blogs/compute/automating-security-group-updates-with-aws-lambda/

Customers often use public endpoints to perform cross-region replication or other application layer communication to remote regions. A common problem, though, is how to protect these endpoints. It can be tempting to open up the security groups to the world because of the complexity of keeping security groups in sync across regions with a dynamically changing infrastructure.

Consider a situation where you are running large clusters of instances in different regions that all require internode connectivity. One approach would be to use a VPN tunnel between regions to provide a secure tunnel over which to send your traffic. A good example of this is the Transit VPC Solution, which is a published AWS solution to help customers quickly get up and running. However, this adds additional cost and complexity to your solution due to the newly required additional infrastructure.

Another approach, which I’ll explore in this post, is to restrict access to the nodes by whitelisting the public IP addresses of your hosts in the opposite region. Today, I’ll outline a solution that allows for cross-region security group updates, can handle remote region failures, and supports external actions such as manually terminating instances or adding instances to an existing Auto Scaling group.

Solution overview

The overview of this solution is diagrammed below. Although this post covers limiting access to your instances, you should still implement encryption to protect your data in transit.

If your entire infrastructure is running in a single region, you can reference a security group as the source, allowing your IP addresses to change without any updates required. However, if you’re going across the public internet between regions to carry application-level traffic or perform cross-region replication, this is no longer an option, because security groups are regional. When you go across regions, it can be tempting to drop security to enable this communication.

Although using an Elastic IP address can provide you with a static IP address that you can define as a source for your security groups, this may not always be feasible, especially when automatic scaling is desired.

In this example scenario, you have a distributed database that requires full internode communication for replication. If you place a cluster in us-east-1 and us-west-2, you must provide a secure method of communication between the two. Because the database uses cloud best practices, you can add or remove nodes as the load varies.

To start the process of updating your security groups, you must know when an instance has come online to trigger your workflow. Auto Scaling groups have the concept of lifecycle hooks that enable you to perform custom actions as the group launches or terminates instances.

When Auto Scaling begins to launch or terminate an instance, it puts the instance into a wait state (Pending:Wait or Terminating:Wait). The instance remains in this state while you perform your various actions, until you tell Auto Scaling to Continue or Abandon, or until the timeout period ends. A lifecycle hook can trigger a CloudWatch event, publish to an Amazon SNS topic, or send to an Amazon SQS queue. For this example, you use CloudWatch Events to trigger an AWS Lambda function that updates an Amazon DynamoDB table.

Component breakdown

Here’s a quick breakdown of the components involved in this solution:

• Lambda function
• CloudWatch event
• DynamoDB table

Lambda function

The Lambda function automatically updates your security groups, in the following way:

1. Determines whether the change was triggered by your Auto Scaling group lifecycle hook or whether the function was manually invoked for the “true up” functionality, which I discuss later in this post.
2. Describes the instances in the Auto Scaling group and obtains the public IP address of each instance.
3. Updates both the local and remote DynamoDB tables.
4. Compares the list of public IP addresses for both the local and remote clusters with what’s already in the local region security group, and updates that security group accordingly.
5. Does the same comparison and update for the remote region security group.
6. Signals CONTINUE back to the lifecycle hook (a minimal sketch of this call follows the list).
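
The post doesn’t show the call behind that last step, but with the AWS SDK for JavaScript it would look roughly like the sketch below; the wrapper function and its error handling are assumptions, while the field names come from the CloudWatch event payload shown in the next section.

// Minimal sketch (an assumption, not the published code) of step 6:
// completing the lifecycle action so Auto Scaling finishes the launch
// or termination instead of waiting for the timeout.
var AWS = require('aws-sdk');
var autoscaling = new AWS.AutoScaling();

function continueLifecycle(event, callback) {
    var detail = event.detail; // payload shown in the next section
    autoscaling.completeLifecycleAction({
        AutoScalingGroupName: detail.AutoScalingGroupName,
        LifecycleHookName: detail.LifecycleHookName,
        LifecycleActionToken: detail.LifecycleActionToken,
        LifecycleActionResult: 'CONTINUE' // or 'ABANDON' if the update failed
    }, function (err, data) {
        if (err) { return callback(err); }
        return callback(null, data);
    });
}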

CloudWatch event

The CloudWatch event triggers when an instance passes through either the launching or terminating states. When the Lambda function gets invoked, it receives an event that looks like the following:

{
	"account": "123456789012",
	"region": "us-east-1",
	"detail": {
		"LifecycleHookName": "hook-launching",
		"AutoScalingGroupName": "",
		"LifecycleActionToken": "33965228-086a-4aeb-8c26-f82ed3bef495",
		"LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",
		"EC2InstanceId": "i-017425ec54f22f994"
	},
	"detail-type": "EC2 Instance-launch Lifecycle Action",
	"source": "aws.autoscaling",
	"version": "0",
	"time": "2017-05-03T02:20:59Z",
	"id": "cb930cf8-ce8b-4b6c-8011-af17966eb7e2",
	"resources": [
		"arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:d3fe9d96-34d0-4c62-b9bb-293a41ba3765:autoScalingGroupName/"
	]
}
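
A handler can tell launches apart from terminations by checking the LifecycleTransition field in that payload. A small helper along these lines is enough (an illustration, not code from the repository):

// Returns true for launch events; terminate events carry
// "autoscaling:EC2_INSTANCE_TERMINATING" instead.
function isLaunchEvent(event) {
    return event.detail &&
           event.detail.LifecycleTransition === 'autoscaling:EC2_INSTANCE_LAUNCHING';
}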

DynamoDB table

You use DynamoDB to store lists of remote IP addresses in a local table that is updated by the opposite region as a failsafe source of truth. Although you can describe your Auto Scaling group for the local region, you must maintain a list of IP addresses for the remote region.

To minimize the number of describe calls and prevent an issue in the remote region from blocking your local scaling actions, we keep a list of the remote IP addresses in a local DynamoDB table. Each Lambda function in each region is responsible for updating the public IP addresses of its Auto Scaling group for both the local and remote tables.

As with all the infrastructure in this solution, there is a DynamoDB table in each region, and the two tables mirror each other. For example, the following screenshot shows a sample DynamoDB table. The Lambda function in us-east-1 would update the us-east-1 entry in the tables in both regions.

By updating a DynamoDB table in both regions, it allows the local region to gracefully handle issues with the remote region, which would otherwise prevent your ability to scale locally. If the remote region becomes inaccessible, you have a copy of the latest configuration from the table that you can use to continue to sync with your security groups. When the remote region comes back online, it pushes its updated public IP addresses to the DynamoDB table. The security group is updated to reflect the current status by the remote Lambda function.
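
As a rough illustration of that table update, the snippet below writes the local group’s public IP addresses into both tables with the AWS SDK for JavaScript. The table names and item schema are assumptions made for the example; the actual solution defines its own.

// Minimal sketch, assuming tables keyed on "region" with an "ipAddresses"
// attribute; the deployed solution's table names and schema may differ.
var AWS = require('aws-sdk');

function putRegionIps(tableName, tableRegion, sourceRegion, ipAddresses) {
    var docClient = new AWS.DynamoDB.DocumentClient({ region: tableRegion });
    return docClient.put({
        TableName: tableName,
        Item: { region: sourceRegion, ipAddresses: ipAddresses }
    }).promise();
}

// Called from the us-east-1 Lambda function: update the us-east-1 entry
// in the local table and in the remote (us-west-2) table.
function updateBothTables(ips) {
    return Promise.all([
        putRegionIps('sg-sync-us-east-1', 'us-east-1', 'us-east-1', ips),
        putRegionIps('sg-sync-us-west-2', 'us-west-2', 'us-east-1', ips)
    ]);
}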

 

Walkthrough

Note: All of the following steps are performed in both regions. The Launch Stack buttons will default to the us-east-1 region.

Here’s a quick overview of the steps involved in this process:

1. An instance is launched or terminated, which triggers an Auto Scaling group lifecycle hook, triggering the Lambda function via CloudWatch Events.
2. The Lambda function retrieves the list of public IP addresses for all instances in the local region Auto Scaling group.
3. The Lambda function updates the local and remote region DynamoDB tables with the public IP addresses just received for the local Auto Scaling group.
4. The Lambda function updates the local region security group with the public IP addresses, removing and adding rules so that the group mirrors what is present for the local and remote Auto Scaling groups.
5. The Lambda function does the same for the remote region security group (a minimal sketch of this security group update follows the list).
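
Adding a batch of addresses to a security group is a single EC2 API call; the sketch below shows that piece in isolation. The port number and security group ID are placeholders, and the calling code is assumed to have already computed which addresses to add or remove by diffing against describeSecurityGroups output.

// Minimal sketch, assuming TCP port 7000 for internode traffic; removing
// stale addresses uses revokeSecurityGroupIngress with the same shape.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

function addIpsToSecurityGroup(groupId, publicIps) {
    return ec2.authorizeSecurityGroupIngress({
        GroupId: groupId,
        IpPermissions: [{
            IpProtocol: 'tcp',
            FromPort: 7000,
            ToPort: 7000,
            IpRanges: publicIps.map(function (ip) { return { CidrIp: ip + '/32' }; })
        }]
    }).promise();
}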

Prerequisites

To deploy this solution, you need to have Auto Scaling groups, launch configurations, and a base security group in both regions. To expedite this process, this CloudFormation template can be launched in both regions.

Step 1: Launch the AWS SAM template in the first region

To make the deployment process easy, I’ve created an AWS Serverless Application Model (AWS SAM) template, which is a new specification that makes it easier to manage and deploy serverless applications on AWS. This template creates the following resources:

• A Lambda function, to perform the various security group actions
• A DynamoDB table, to track the state of the local and remote Auto Scaling groups
• Auto Scaling group lifecycle hooks for instance launching and terminating
• A CloudWatch event, to track the EC2 Instance-launch Lifecycle Action and EC2 Instance-terminate Lifecycle Action events
• A pointer from the CloudWatch event to the Lambda function, and the necessary permissions

Download the template from here or click to launch.

Upon launching the template, you’ll be presented with a list of parameters that includes the remote and local names of your Auto Scaling groups, AWS regions, security group IDs, and DynamoDB tables, as well as where the code for the Lambda function is located. Because this is the first region you’re launching the stack in, fill out all the parameters except for the RemoteTable parameter, as that table hasn’t been created yet (you fill this in later).

Step 2: Test the local region

After the stack has finished launching, you can test the local region. Open the EC2 console and find the Auto Scaling group that was created when launching the prerequisite stack. Change the desired number of instances from 0 to 1.

For both regions, check your security group to verify that the public IP address of the instance created is now in the security group.

Local region:

Remote region:

Now, change the desired number of instances for your group back to 0 and verify that the rules are properly removed.

Local region:

Remote region:

Step 3: Launch in the remote region

When you deploy a Lambda function using CloudFormation, the Lambda zip file needs to reside in the same region in which you are launching the template. Once you have chosen your remote region, create an Amazon S3 bucket there and upload the Lambda zip file. Next, go to the remote region and launch the same SAM template as before, making sure you update the CodeBucket and CodeKey parameters. Also, because this is the second launch, you now have all the values and can fill out all the parameters, specifically the RemoteTable value.

 

Step 4: Update the local region Lambda environment variable

When you originally launched the template in the local region, you didn’t have the name of the DynamoDB table for the remote region, because you hadn’t created it yet. Now that you have launched the remote template, you can perform a CloudFormation stack update on the initial SAM template. This populates the remote DynamoDB table name into the initial Lambda function’s environment variables.

In the CloudFormation console in the initial region, select the stack. Under Actions, choose Update Stack, and select the SAM template used for both regions. Under Parameters, populate the remote DynamoDB table name, as shown below. Choose Next and let the stack update complete. This updates your Lambda function and completes the setup process.

 

Step 5: Final testing

You now have everything fully configured and in place to trigger security group changes based on instances being added or removed to your Auto Scaling groups in both regions. Test this by changing the desired capacity of your group in both regions.

True up functionality

If an instance is manually added or removed from the Auto Scaling group, the lifecycle hooks don’t get triggered. To account for this, the Lambda function supports a “true up” functionality in which the function can be manually invoked. If you paste in the following JSON text for your test event, it kicks off the entire workflow. For added peace of mind, you can also have this function fire via a CloudWatch event with a CRON expression for nearly continuous checking.

{
	"detail": {
		"AutoScalingGroupName": "<your ASG name>"
	},
	"trueup":true
}
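
Pasting the test event in the console works, but the same payload can also be sent programmatically; the function name and Auto Scaling group name below are placeholders.

// Minimal sketch (hypothetical names): invoking the true-up path directly.
var AWS = require('aws-sdk');
var lambda = new AWS.Lambda({ region: 'us-east-1' });

lambda.invoke({
    FunctionName: 'SecurityGroupSyncFunction',       // placeholder function name
    Payload: JSON.stringify({
        detail: { AutoScalingGroupName: 'my-asg' },   // placeholder ASG name
        trueup: true
    })
}, function (err, data) {
    if (err) { console.log(err, err.stack); }
    else { console.log(data.Payload); }
});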

Extra credit

Now that all the resources are created in both regions, go back and break down the policy to incorporate resource-level permissions for specific security groups, Auto Scaling groups, and the DynamoDB tables.

Although this post is centered around using public IP addresses for your instances, you could instead use a VPN between regions. In this case, you would still be able to use this solution to scope down the security groups to the cluster instances. However, the code would need to be modified to support private IP addresses.

 

Conclusion

At this point, you now have a mechanism in place that captures when a new instance is added to or removed from your cluster and updates the security groups in both regions. This ensures that you are locking down your infrastructure securely by allowing access only to other cluster members.

Keep in mind that this architecture (lifecycle hooks, CloudWatch event, Lambda function, and DynamoDB table) requires the infrastructure to be deployed in both regions so that synchronization goes both ways.

Because this Lambda function is modifying security group rules, it’s important to have an audit log of what has been modified and who is modifying them. The out-of-the-box function provides logs in CloudWatch for what IP addresses are being added and removed for which ports. As these are all API calls being made, they are logged in CloudTrail and can be traced back to the IAM role that you created for your lifecycle hooks. This can provide historical data that can be used for troubleshooting or auditing purposes.

Security is paramount at AWS. We want to ensure that customers are protecting access to their resources. This solution helps you keep your security groups in both regions automatically in sync with your Auto Scaling group resources. Let us know if you have any questions or other solutions you’ve come up with!

Ben’s Raspberry Pi Twilight Zone pinball hack

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/twilight-zone-pinball-display/

When Ben North was faced with the dilemma of his nine-year-old son wanting him to watch his pinball games while Ben needed to be doing housework, he came up with a brilliant hack. Ben decided to investigate the inner workings of his twenty-year-old Twilight Zone pinball machine to convert its score display data into a video stream he could keep an eye on while working.

Ben North Raspberry Pi Twilight Zone Pinball

Ben ended up with this. Read on to find out how…

Dad? Dad! DAD!!

Kids love sharing their achievements. That’s a given. And so, after Ben introduced his son Zach to his beloved pinball machine, Zach wanted his dad to witness his progress. However, at some point Ben had to get back to the dull reality of adulting.

My son Zach, now 9, has been steadily getting better at [playing pinball], and is keen for me to watch his games. So he and I wanted a way for me to keep an eye on how his game is going, while I do other jobs elsewhere.

The two of them thought that, with the right tools and some fiddling, they could hijack the machine’s score information on its way to the dot matrix display and divert it to a computer. “One way to do this would be to set up a webcam,” Ben explains on his blog, “but where’s the fun in that?”

Twilight Zone pinball wizardry

After researching how the dot matrix receives and displays the score data, Ben and Zach figured out how to fetch its output using a 16-channel USB logic analyser. Then they dove into learning to convert the data the logic analyser outputs back into images.

Ben North Raspberry Pi Twilight Zone Pinball

“Exploring in more detail confirmed that the data looked reasonable. We could see well-distinguished frames and rows, and within each row, the pixel data had a mixture of high (lit pixel) and low (dark pixel).”

After Ben managed to convert the signals of one frame into a human-readable pixel image, it was time to think about the hardware that could do this conversion in real time. Though he and Zach were convinced they would have to build custom hardware to complete their project, they decided to first give the Raspberry Pi a go. And it turned out that the Pi was up to the challenge!

Ben North Raspberry Pi Twilight Zone Pinball - example output

“By an amazing coincidence, the [first] frame I decoded was one showing that I am the current Lost In The Zone champion.”

To decode the first frame, Ben had written a Python script. However, he wrote the program that produces the live score stream in C++, since that language is better suited to handling high-speed input and output. To make sure Zach would learn from the experience, Ben explained the how and why of the program to him.

I talked through with Zach what the program needed to do — detect clock edges, sample pixel data, collect rows, etc. — but then he left me to do ‘all the boring typing’.

Ben used various pieces of open-source software while working on this project, including the sigrok suite for signal analysis and the multimedia framework gstreamer for handling the live video stream to the Raspberry Pi.

Find more information about the Twilight Zone pinball build, including a lot of technical details and the code itself, on Ben’s blog.

Worthy self-promotion from Ben

“I also did an FPGA project to replicate some of the Colossus code-breaking machine used at Bletchley Park during World War II, with a Raspberry Pi as the host,” explained Ben in our recent emails.

Colossus computer Twilight Zone Pinball

The original Colossus, not Ben’s.
Image c/o Wikipedia

As a bit of a history nerd myself, I think this is beyond cool. And if, like me, you’d like to learn more, check out the link here.

The post Ben’s Raspberry Pi Twilight Zone pinball hack appeared first on Raspberry Pi.

Using AWS Step Functions State Machines to Handle Workflow-Driven AWS CodePipeline Actions

Post Syndicated from Marcilio Mendonca original https://aws.amazon.com/blogs/devops/using-aws-step-functions-state-machines-to-handle-workflow-driven-aws-codepipeline-actions/

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. It offers powerful integration with other AWS services, such as AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit, and AWS CloudFormation, and with third-party tools such as Jenkins and GitHub. These services make it possible for AWS customers to successfully automate various tasks, including infrastructure provisioning, blue/green deployments, serverless deployments, AMI baking, database provisioning, and release management.

Developers have been able to use CodePipeline to build sophisticated automation pipelines that often require a single CodePipeline action to perform multiple tasks, fork into different execution paths, and deal with asynchronous behavior. For example, to deploy a Lambda function, a CodePipeline action might first inspect the changes pushed to the code repository. If only the Lambda code has changed, the action can simply update the Lambda code package, create a new version, and point the Lambda alias to the new version. If the changes also affect infrastructure resources managed by AWS CloudFormation, the pipeline action might have to create a stack or update an existing one through the use of a change set. In addition, if an update is required, the pipeline action might enforce a safety policy on infrastructure resources that prevents their deletion or replacement. You can do this by creating a change set and having the pipeline action inspect its changes before updating the stack. Change sets that do not conform to the policy are deleted.

This use case is a good illustration of workflow-driven pipeline actions. These are actions that run multiple tasks, deal with async behavior and loops, need to maintain and propagate state, and fork into different execution paths. Implementing workflow-driven actions directly in CodePipeline can lead to complex pipelines that are hard for developers to understand and maintain. Ideally, a pipeline action should perform a single task and delegate the complexity of dealing with workflow-driven behavior associated with that task to a state machine engine. This would make it possible for developers to build simpler, more intuitive pipelines and allow them to use state machine execution logs to visualize and troubleshoot their pipeline actions.

In this blog post, we discuss how AWS Step Functions state machines can be used to handle workflow-driven actions. We show how a CodePipeline action can trigger a Step Functions state machine and how the pipeline and the state machine are kept decoupled through a Lambda function. The advantages of using state machines include:

  • Simplified logic (complex tasks are broken into multiple smaller tasks).
  • Ease of handling asynchronous behavior (through state machine wait states).
  • Built-in support for choices and processing different execution paths (through state machine choices).
  • Built-in visualization and logging of the state machine execution.

The source code for the sample pipeline, pipeline actions, and state machine used in this post is available at https://github.com/awslabs/aws-codepipeline-stepfunctions.

Overview

This figure shows the components in the CodePipeline-Step Functions integration that will be described in this post. The pipeline contains two stages: a Source stage represented by a CodeCommit Git repository and a Prod stage with a single Deploy action that represents the workflow-driven action.

This action invokes a Lambda function (1) called the State Machine Trigger Lambda, which, in turn, triggers a Step Functions state machine to process the request (2). The Lambda function sends a continuation token back to the pipeline (3) to continue its execution later and terminates. Seconds later, the pipeline invokes the Lambda function again (4), passing the continuation token received. The Lambda function checks the execution state of the state machine (5,6) and communicates the status to the pipeline. The process is repeated until the state machine execution is complete. Then the Lambda function notifies the pipeline that the corresponding pipeline action is complete (7). If the state machine has failed, the Lambda function will then fail the pipeline action and stop its execution (7). While running, the state machine triggers various Lambda functions to perform different tasks. The state machine and the pipeline are fully decoupled. Their interaction is handled by the Lambda function.

The Deploy State Machine

The sample state machine used in this post is a simplified version of the use case, with emphasis on infrastructure deployment. The state machine will follow distinct execution paths and thus have different outcomes, depending on:

  • The current state of the AWS CloudFormation stack.
  • The nature of the code changes made to the AWS CloudFormation template and pushed into the pipeline.

If the stack does not exist, it will be created. If the stack exists, a change set will be created and its resources inspected by the state machine. The inspection consists of parsing the change set results and detecting whether any resources will be deleted or replaced. If no resources are being deleted or replaced, the change set is allowed to be executed and the state machine completes successfully. Otherwise, the change set is deleted and the state machine completes execution with a failure as the terminal state.

Let’s dive into each of these execution paths.

Path 1: Create a Stack and Succeed Deployment

The Deploy state machine is shown here. It is triggered by the Lambda function using the following input parameters stored in an S3 bucket.

Create New Stack Execution Path

{
    "environmentName": "prod",
    "stackName": "sample-lambda-app",
    "templatePath": "infra/Lambda-template.yaml",
    "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
    "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ"
}

Note that some values used here are for the use case example only. Account-specific parameters like revisionS3Bucket and revisionS3Key will be different when you deploy this use case in your account.

These input parameters are used by various states in the state machine and passed to the corresponding Lambda functions to perform different tasks. For example, stackName is used to create a stack, check the status of stack creation, and create a change set. The environmentName represents the environment (for example, dev, test, prod) to which the code is being deployed. It is used to prefix the name of stacks and change sets.

With the exception of built-in states such as wait and choice, each state in the state machine invokes a specific Lambda function.  The results received from the Lambda invocations are appended to the state machine’s original input. When the state machine finishes its execution, several parameters will have been added to its original input.

The first stage in the state machine is “Check Stack Existence”. It checks whether a stack with the input name specified in the stackName input parameter already exists. The output of the state adds a Boolean value called doesStackExist to the original state machine input as follows:

{
  "doesStackExist": true,
  "environmentName": "prod",
  "stackName": "sample-lambda-app",
  "templatePath": "infra/lambda-template.yaml",
  "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
  "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ",
}

The following stage, “Does Stack Exist?”, is represented by the Step Functions built-in choice state. It checks the value of doesStackExist to determine whether a new stack needs to be created (doesStackExist=false) or a change set needs to be created and inspected (doesStackExist=true).

If the stack does not exist, the states illustrated in green in the preceding figure are executed. This execution path creates the stack, waits until the stack is created, checks the status of the stack’s creation, and marks the deployment successful after the stack has been created. Except for “Stack Created?” and “Wait Stack Creation,” each of these stages invokes a Lambda function. “Stack Created?” and “Wait Stack Creation” are implemented by using the built-in choice state (to decide which path to follow) and the wait state (to wait a few seconds before proceeding), respectively. Each stage adds the results of their Lambda function executions to the initial input of the state machine, allowing future stages to process them.

Path 2: Safely Update a Stack and Mark Deployment as Successful

Safely Update a Stack and Mark Deployment as Successful Execution Path

If the stack indicated by the stackName parameter already exists, a different path is executed. (See the green states in the figure.) This path will create a change set and use wait and choice states to wait until the change set is created. Afterwards, a stage in the execution path will inspect  the resources affected before the change set is executed.

The inspection procedure represented by the “Inspect Change Set Changes” stage consists of parsing the resources affected by the change set and checking whether any of the existing resources are being deleted or replaced. The following is an excerpt of the algorithm, where changeSetChanges.Changes is the object representing the change set changes:

...
var RESOURCES_BEING_DELETED_OR_REPLACED = "RESOURCES-BEING-DELETED-OR-REPLACED";
var CAN_SAFELY_UPDATE_EXISTING_STACK = "CAN-SAFELY-UPDATE-EXISTING-STACK";
for (var i = 0; i < changeSetChanges.Changes.length; i++) {
    var change = changeSetChanges.Changes[i];
    if (change.Type == "Resource") {
        if (change.ResourceChange.Action == "Delete") {
            return RESOURCES_BEING_DELETED_OR_REPLACED;
        }
        if (change.ResourceChange.Action == "Modify") {
            if (change.ResourceChange.Replacement == "True") {
                return RESOURCES_BEING_DELETED_OR_REPLACED;
            }
        }
    }
}
return CAN_SAFELY_UPDATE_EXISTING_STACK;

The algorithm returns different values to indicate whether the change set can be safely executed (CAN_SAFELY_UPDATE_EXISTING_STACK or RESOURCES_BEING_DELETED_OR_REPLACED). This value is used later by the state machine to decide whether to execute the change set and update the stack or interrupt the deployment.

The output of the “Inspect Change Set” stage is shown here.

{
  "environmentName": "prod",
  "stackName": "sample-lambda-app",
  "templatePath": "infra/lambda-template.yaml",
  "revisionS3Bucket": "codepipeline-us-east-1-418586629775",
  "revisionS3Key": "StepFunctionsDrivenD/CodeCommit/sjcmExZ",
  "doesStackExist": true,
  "changeSetName": "prod-sample-lambda-app-change-set-545",
  "changeSetCreationStatus": "complete",
  "changeSetAction": "CAN-SAFELY-UPDATE-EXISTING-STACK"
}

At this point, these parameters have been added to the state machine’s original input:

  • changeSetName, which is added by the “Create Change Set” state.
  • changeSetCreationStatus, which is added by the “Get Change Set Creation Status” state.
  • changeSetAction, which is added by the “Inspect Change Set Changes” state.

The “Safe to Update Infra?” step is a choice state (its JSON spec follows) that simply checks the value of the changeSetAction parameter. If the value is equal to “CAN-SAFELY-UPDATE-EXISTING-STACK“, meaning that no resources will be deleted or replaced, the step will execute the change set by proceeding to the “Execute Change Set” state. The deployment is successful (the state machine completes its execution successfully).

"Safe to Update Infra?": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.taskParams.changeSetAction",
          "StringEquals": "CAN-SAFELY-UPDATE-EXISTING-STACK",
          "Next": "Execute Change Set"
        }
      ],
      "Default": "Deployment Failed"
 }

Path 3: Reject Stack Update and Fail Deployment

Reject Stack Update and Fail Deployment Execution Path

If the changeSetAction parameter is different from “CAN-SAFELY-UPDATE-EXISTING-STACK”, the state machine will interrupt the deployment by deleting the change set and proceeding to the “Deployment Failed” step, which is a built-in Fail state. (Its JSON spec follows.) This state causes the state machine to stop in a failed state and serves to indicate to the Lambda function that the pipeline deployment should be failed as well.

 "Deployment Failed": {
      "Type": "Fail",
      "Cause": "Deployment Failed",
      "Error": "Deployment Failed"
    }

In all three scenarios, there’s a state machine’s visual representation available in the AWS Step Functions console that makes it very easy for developers to identify what tasks have been executed or why a deployment has failed. Developers can also inspect the inputs and outputs of each state and look at the state machine Lambda function’s logs for details. Meanwhile, the corresponding CodePipeline action remains very simple and intuitive for developers who only need to know whether the deployment was successful or failed.

The State Machine Trigger Lambda Function

The Trigger Lambda function is invoked directly by the Deploy action in CodePipeline. The CodePipeline action must pass a JSON structure to the trigger function through the UserParameters attribute, as follows:

{
  "s3Bucket": "codepipeline-StepFunctions-sample",
  "stateMachineFile": "state_machine_input.json"
}

The s3Bucket parameter specifies the S3 bucket location for the state machine input parameters file. The stateMachineFile parameter specifies the file holding the input parameters. By being able to specify different input parameters to the state machine, we make the Trigger Lambda function and the state machine reusable across environments. For example, the same state machine could be called from a test and prod pipeline action by specifying a different S3 bucket or state machine input file for each environment.
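
The UserParameters string arrives inside the CodePipeline job event that invokes the function. A helper like the Util.actionUserParameter used in the code below presumably extracts a named value along these lines; the body here is an assumption, while the event shape is standard CodePipeline behavior.

// Minimal sketch: CodePipeline passes UserParameters as a JSON string inside
// the job's action configuration, so extracting one value is a parse + lookup.
function actionUserParameter(event, paramName) {
    var userParameters = event['CodePipeline.job'].data
        .actionConfiguration.configuration.UserParameters;
    return JSON.parse(userParameters)[paramName];
}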

The Trigger Lambda function performs two main tasks: triggering the state machine and checking the execution state of the state machine. Its core logic is shown here:

exports.index = function (event, context, callback) {
    try {
        console.log("Event: " + JSON.stringify(event));
        console.log("Context: " + JSON.stringify(context));
        console.log("Environment Variables: " + JSON.stringify(process.env));
        if (Util.isContinuingPipelineTask(event)) {
            monitorStateMachineExecution(event, context, callback);
        }
        else {
            triggerStateMachine(event, context, callback);
        }
    }
    catch (err) {
        failure(Util.jobId(event), callback, context.invokeid, err.message);
    }
}

Util.isContinuingPipelineTask(event) is a utility function that checks if the Trigger Lambda function is being called for the first time (that is, no continuation token is passed by CodePipeline) or as a continuation of a previous call. In its first execution, the Lambda function will trigger the state machine and send a continuation token to CodePipeline that contains the state machine execution ARN. The state machine ARN is exposed to the Lambda function through a Lambda environment variable called stateMachineArn. Here is the code that triggers the state machine:

function triggerStateMachine(event, context, callback) {
    var stateMachineArn = process.env.stateMachineArn;
    var s3Bucket = Util.actionUserParameter(event, "s3Bucket");
    var stateMachineFile = Util.actionUserParameter(event, "stateMachineFile");
    getStateMachineInputData(s3Bucket, stateMachineFile)
        .then(function (data) {
            var initialParameters = data.Body.toString();
            var stateMachineInputJSON = createStateMachineInitialInput(initialParameters, event);
            console.log("State machine input JSON: " + JSON.stringify(stateMachineInputJSON));
            return stateMachineInputJSON;
        })
        .then(function (stateMachineInputJSON) {
            return triggerStateMachineExecution(stateMachineArn, stateMachineInputJSON);
        })
        .then(function (triggerStateMachineOutput) {
            var continuationToken = { "stateMachineExecutionArn": triggerStateMachineOutput.executionArn };
            var message = "State machine has been triggered: " + JSON.stringify(triggerStateMachineOutput) + ", continuationToken: " + JSON.stringify(continuationToken);
            return continueExecution(Util.jobId(event), continuationToken, callback, message);
        })
        .catch(function (err) {
            console.log("Error triggering state machine: " + stateMachineArn + ", Error: " + err.message);
            failure(Util.jobId(event), callback, context.invokeid, err.message);
        })
}

The Trigger Lambda function fetches the state machine input parameters from an S3 file, triggers the execution of the state machine using the input parameters and the stateMachineArn environment variable, and signals to CodePipeline that the execution should continue later by passing a continuation token that contains the state machine execution ARN. In case any of these operations fail and an exception is thrown, the Trigger Lambda function will fail the pipeline immediately by signaling a pipeline failure through the putJobFailureResult CodePipeline API.
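
The helpers referenced in that excerpt map onto a handful of SDK calls. The sketches below are assumptions about how they might be implemented, using only standard Step Functions and CodePipeline APIs; the repository’s versions may differ in detail.

// Minimal sketches of the helpers used above (names from the excerpt,
// bodies assumed).
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var stepfunctions = new AWS.StepFunctions();
var codepipeline = new AWS.CodePipeline();

// Read the state machine input parameters file from S3.
function getStateMachineInputData(s3Bucket, stateMachineFile) {
    return s3.getObject({ Bucket: s3Bucket, Key: stateMachineFile }).promise();
}

// Start the state machine execution with the assembled input.
function triggerStateMachineExecution(stateMachineArn, stateMachineInputJSON) {
    return stepfunctions.startExecution({
        stateMachineArn: stateMachineArn,
        input: JSON.stringify(stateMachineInputJSON)
    }).promise();
}

// Ask CodePipeline to invoke this Lambda function again later by returning
// a continuation token with the job success result.
function continueExecution(jobId, continuationToken, callback, message) {
    return codepipeline.putJobSuccessResult({
        jobId: jobId,
        continuationToken: JSON.stringify(continuationToken)
    }).promise().then(function () { callback(null, message); });
}

// Fail the pipeline action immediately.
function failure(jobId, callback, invokeId, message) {
    return codepipeline.putJobFailureResult({
        jobId: jobId,
        failureDetails: { type: 'JobFailed', message: message, externalExecutionId: invokeId }
    }).promise().then(function () { callback(message); });
}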

If the Lambda function is continuing a previous execution, it will extract the state machine execution ARN from the continuation token and check the status of the state machine, as shown here.

function monitorStateMachineExecution(event, context, callback) {
    var stateMachineArn = process.env.stateMachineArn;
    var continuationToken = JSON.parse(Util.continuationToken(event));
    var stateMachineExecutionArn = continuationToken.stateMachineExecutionArn;
    getStateMachineExecutionStatus(stateMachineExecutionArn)
        .then(function (response) {
            if (response.status === "RUNNING") {
                var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " is still " + response.status;
                return continueExecution(Util.jobId(event), continuationToken, callback, message);
            }
            if (response.status === "SUCCEEDED") {
                var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " has: " + response.status;
                return success(Util.jobId(event), callback, message);
            }
            // FAILED, TIMED_OUT, ABORTED
            var message = "Execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + " has: " + response.status;
            return failure(Util.jobId(event), callback, context.invokeid, message);
        })
        .catch(function (err) {
            var message = "Error monitoring execution: " + stateMachineExecutionArn + " of state machine: " + stateMachineArn + ", Error: " + err.message;
            failure(Util.jobId(event), callback, context.invokeid, message);
        });
}
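
The two remaining helpers follow the same pattern; again, the bodies below are assumptions rather than the repository’s code.

// Check the state machine execution status (the response's "status" field
// is RUNNING, SUCCEEDED, FAILED, TIMED_OUT, or ABORTED).
var AWS = require('aws-sdk');
var stepfunctions = new AWS.StepFunctions();
var codepipeline = new AWS.CodePipeline();

function getStateMachineExecutionStatus(stateMachineExecutionArn) {
    return stepfunctions.describeExecution({
        executionArn: stateMachineExecutionArn
    }).promise();
}

// Report success for the pipeline action (no continuation token this time).
function success(jobId, callback, message) {
    return codepipeline.putJobSuccessResult({ jobId: jobId }).promise()
        .then(function () { callback(null, message); });
}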

If the state machine is in the RUNNING state, the Lambda function will send the continuation token back to the CodePipeline action. This will cause CodePipeline to call the Lambda function again a few seconds later. If the state machine has SUCCEEDED, then the Lambda function will notify the CodePipeline action that the action has succeeded. In any other case (FAILED, TIMED_OUT, or ABORTED), the Lambda function will fail the pipeline action.

This behavior is especially useful for developers who are building and debugging a new state machine because a bug in the state machine can potentially leave the pipeline action hanging for long periods of time until it times out. The Trigger Lambda function prevents this.

Also, by having the Trigger Lambda function as a means to decouple the pipeline and state machine, we make the state machine more reusable. It can be triggered from anywhere, not just from a CodePipeline action.

The Pipeline in CodePipeline

Our sample pipeline contains two simple stages: the Source stage represented by a CodeCommit Git repository and the Prod stage, which contains the Deploy action that invokes the Trigger Lambda function. When the state machine decides that the change set created must be rejected (because it replaces or deletes some of the existing production resources), it fails the pipeline without performing any updates to the existing infrastructure. (See the failed Deploy action in red.) Otherwise, the pipeline action succeeds, indicating that the existing provisioned infrastructure was either created (first run) or updated without impacting any resources. (See the green Deploy stage in the pipeline on the left.)

The Pipeline in CodePipeline

The JSON spec for the pipeline’s Prod stage is shown here. We use the UserParameters attribute to pass the S3 bucket and state machine input file to the Lambda function. These parameters are action-specific, which means that we can reuse the state machine in another pipeline action.

{
  "name": "Prod",
  "actions": [
      {
          "inputArtifacts": [
              {
                  "name": "CodeCommitOutput"
              }
          ],
          "name": "Deploy",
          "actionTypeId": {
              "category": "Invoke",
              "owner": "AWS",
              "version": "1",
              "provider": "Lambda"
          },
          "outputArtifacts": [],
          "configuration": {
              "FunctionName": "StateMachineTriggerLambda",
              "UserParameters": "{\"s3Bucket\": \"codepipeline-StepFunctions-sample\", \"stateMachineFile\": \"state_machine_input.json\"}"
          },
          "runOrder": 1
      }
  ]
}

Conclusion

In this blog post, we discussed how state machines in AWS Step Functions can be used to handle workflow-driven actions. We showed how a Lambda function can be used to fully decouple the pipeline and the state machine and manage their interaction. The use of a state machine greatly simplified the associated CodePipeline action, allowing us to build a much simpler and cleaner pipeline while drilling down into the state machine’s execution for troubleshooting or debugging.

Here are two exercises you can complete by using the source code.

Exercise #1: Do not fail the state machine and pipeline action after inspecting a change set that deletes or replaces resources. Instead, create a stack with a different name (think of blue/green deployments). You can do this by creating a state machine transition between the “Safe to Update Infra?” and “Create Stack” stages and passing a new stack name as input to the “Create Stack” stage.

Exercise #2: Add wait logic to the state machine to wait until the change set completes its execution before allowing the state machine to proceed to the “Deployment Succeeded” stage. Use the stack creation case as an example. You’ll have to create a Lambda function (similar to the Lambda function that checks the creation status of a stack) to get the creation status of the change set.

Have fun and share your thoughts!

About the Author

Marcilio Mendonca is a Sr. Consultant in the Canadian Professional Services Team at Amazon Web Services. He has helped AWS customers design, build, and deploy best-in-class, cloud-native AWS applications using VMs, containers, and serverless architectures. Before he joined AWS, Marcilio was a Software Development Engineer at Amazon. Marcilio also holds a Ph.D. in Computer Science. In his spare time, he enjoys playing drums, riding his motorcycle in the Toronto GTA area, and spending quality time with his family.

Microcell through a mobile hotspot

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/microcell-through-mobile-hotspot.html

I accidentally acquired a tree farm 20 minutes outside of town. For utilities, it gets electricity and basic phone. It doesn’t get water, sewer, cable, or DSL (i.e. no Internet). Also, it doesn’t really get cell phone service. While you can get SMS messages up there, you usually can’t get a call connected, or hold a conversation if it does.

We have found a solution: an evil solution. We connect an AT&T “Microcell”, which provides home cell phone service through your Internet connection, to an AT&T Mobile Hotspot, which provides an Internet connection through your cell phone service.

Now, you may be laughing at this, because it’s a circular connection. It’s like trying to make a sailboat go by blowing on the sails, or lifting up a barrel to lighten the load in the boat.

But it actually works.

Since we get some, but not enough, cellular signal, we set up a mast 20 feet high with a directional antenna pointed at the cell tower 7.5 miles to the southwest, connected to a signal amplifier. It’s an imperfect solution, as we still get terrain distortions in the signal, but it provides a good enough signal-to-noise ratio to get a solid connection.

We then connect that directional antenna directly to a high-end Mobile Hotspot. This gives us a solid 2mbps connection with a latency under 30 milliseconds. This is far lower than the 50mbps you can get right next to a 4G/LTE tower, but it’s still pretty good for our purposes.

We then connect the AT&T Microcell to the Mobile Hotspot, via WiFi.

To avoid the circular connection, we lock the frequencies for the Mobile Hotspot to 4G/LTE, and to 3G for the Microcell. This prevents the Mobile Hotspot from locking onto the strong 3G signal from the Microcell. It also prevents the two from interfering with each other.

This works really great. We now get a strong cell signal on our phones even 400 feet from the house through some trees. We can be all over the property, out in the lake, down by the garden, and so on, and have our phones work as normal. It’s only AT&T, but that’s what the whole family uses.

You might be asking why we didn’t just use a normal signal amplifier, like they use on corporate campuses. It boosts all the analog frequencies, making any cell phone service work.

We’ve tried this, and it works a bit, allowing cell phones to work inside the house pretty well. But they don’t work outside the house, which is where we spend a lot of time. In addition, while our newer phones work, my sister’s iPhone 5 doesn’t. We have no idea what’s going on. Presumably, we could hire professional installers and stuff to get everything working, but nobody would quote us a price lower than $25,000 to even come look at the property.

Another possible solution is satellite Internet. There are two satellites in orbit that cover the United States with small “spot beams” delivering high-speed service (25mbps downloads). However, the latency is 500 milliseconds, which makes it impractical for low-latency applications like phone calls.

While I know a lot about the technology in theory, I find myself hopelessly clueless in practice. I’ve been playing with SDR (“software defined radio”) to try to figure out exactly where to locate and point the directional antenna, but I’m not sure I’ve come up with anything useful. In casual tests, it seems rotating the antenna from vertical to horizontal increases the signal-to-noise ratio a bit, which seems counterintuitive and should not happen. So I’m completely lost.

Anyway, I thought I’d write this up as a blogpost, in case anybody has better suggestions. Or, instead of signals, suggestions for getting wired connectivity. Properties a half mile away get DSL; I wish I knew who to talk to at the local phone company to pay them to extend Internet to our property.

Phone works in all this area now