SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions — all designed so people from different disciplines talk to each other.
I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.
Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.
I’m in danger of contradicting myself, after previously pointing out that x86 machine code is a high-level language, but this article claiming that C is not a low-level language is bunk. C certainly has some problems, but it’s still the closest language to assembly. This is obvious from the fact that it’s still the fastest compiled language. What we see here is a typical academic out of touch with the real world.
The author makes the (wrong) observation that we’ve been stuck emulating the PDP-11 for the past 40 years. C was written for the PDP-11, and since then CPUs have been designed to make C run faster. The author imagines a different world, one where CPU designers had instead targeted something like LISP or Erlang as their preferred language. This misunderstands the state of the market. CPUs do indeed support lots of different abstractions, and C has evolved to accommodate them.
The author criticizes things like “out-of-order” execution, which has led to the Spectre side-channel vulnerabilities. Out-of-order execution is necessary to make C run faster. The author claims those resources should instead be spent on having more, slower CPUs with more threads. This sacrifices single-threaded performance in exchange for many more threads executing in parallel. The author cites Sparc Tx CPUs as his ideal processor.
But here’s the thing, the Sparc Tx was a failure. To be fair, it’s mostly a failure because most of the time, people wanted to run old C code instead of new Erlang code. But it was still a failure at running Erlang.
Time after time, engineers keep finding that “out-of-order”, single-threaded performance is still the winner. A good example is ARM processors for both mobile phones and servers. All the theory points to in-order CPUs as being better, but all the products are out-of-order, because this theory is wrong. The custom ARM cores from Apple and Qualcomm used in most high-end phones are so deeply out-of-order they give Intel CPUs competition. The same is true on the server front with the latest Qualcomm Centriq and Cavium ThunderX2 processors, deeply out of order supporting more than 100 instructions in flight.
The Cavium case is especially telling. Its ThunderX CPU had 48 simple cores; it was replaced by the ThunderX2, which has 32 complex, deeply out-of-order cores. The performance increase was massive, even on multithread-friendly workloads. Every competitor to Intel’s dominance in the server space has learned the lesson of the Sparc Tx: many wimpy cores is a failure; you need fewer, beefier cores. They don’t need to be quite as beefy as Intel’s processors, but they need to be close.
Even Intel’s “Xeon Phi” custom chip learned this lesson. This is their GPU-like chip, running 60 cores with 512-bit wide “vector” (sic) instructions, designed for supercomputer applications. Its first version was purely in-order. Its current version is slightly out-of-order. It supports four threads and focuses on basic number crunching, so in-order cores would seem to be the right approach, but Intel found that even in this case out-of-order processing still provided a benefit. Practice is different from theory.
As an academic, the author of the above article focuses on abstractions. The criticism of C is that it has the wrong abstractions which are hard to optimize, and that if we instead expressed things in the right abstractions, it would be easier to optimize.
This is an intellectually compelling argument, but so far bunk.
The reason is that while the theoretical base language has issues, everyone programs using extensions to the language, like “intrinsics” (C ‘functions’ that map directly to assembly instructions). Programmers write libraries using these intrinsics, which the rest of us then use. In other words, even if your criticism is that C itself is not low-level enough, it still provides the best access to low-level capabilities.
Given that C can access new functionality in CPUs, CPU designers add new paradigms, from SIMD to transaction processing. In other words, while in the 1980s CPUs were designed to optimize C (stacks, scaled pointers), these days CPUs are designed to optimize tasks regardless of language.
The author of that article criticizes the memory/cache hierarchy, claiming it has problems. Yes, it has problems, but only compared to how well it normally works. The author praises the many simple cores/threads idea as hiding memory latency with little caching, but misses the point that caches also dramatically increase memory bandwidth. Intel processors are optimized to read a whopping 256 bits every clock cycle from L1 cache. Main memory bandwidth is orders of magnitude slower.
The author goes on to criticize cache coherency as a problem. C uses it, but other languages like Erlang don’t need it. But that’s largely due to the problems each language solves. Erlang solves the problem where a large number of threads work on largely independent tasks and need to send only small messages to each other. The problem C solves is when you need many threads working on a huge, common set of data.
For example, consider an “intrusion prevention system”. Any thread can process any incoming packet that corresponds to any region of memory. There’s no practical way of solving this problem without a huge coherent cache. It doesn’t matter which language or abstractions you use; it’s a fundamental constraint of the problem being solved. RDMA is an important concept that’s moved from supercomputer applications to the data center, such as with memcached. Again, we have the problem of huge quantities (terabytes’ worth) of data shared among threads rather than small quantities (kilobytes).
The fundamental issue the author of the paper is ignoring is decreasing marginal returns. Moore’s Law has gifted us more transistors than we can usefully use. We can’t apply all those additional transistors to just one thing, because the useful returns we get diminish.
For example, Intel CPUs have two hardware threads per core. That’s because there are good returns by adding a single additional thread. However, the usefulness of adding a third or fourth thread decreases. That’s why many CPUs have only two threads, or sometimes four threads, but no CPU has 16 threads per core.
You can apply the same discussion to any aspect of the CPU, from register count, to SIMD width, to cache size, to out-of-order depth, and so on. Rather than focusing on one of these things and increasing it to the extreme, CPU designers make each a bit larger every process tick that adds more transistors to the chip.
The same applies to cores. It’s why the “more, simpler cores” strategy fails: more cores have their own decreasing marginal returns. Instead of adding cores tied to limited memory bandwidth, it’s better to add more cache. But such cache already increases the size of the cores, so at some point it’s more effective to add a few out-of-order features to each core rather than more cores. And so on.
The question isn’t whether we can change this paradigm and radically redesign CPUs to match some academic’s view of the perfect abstraction. Instead, the goal is to find new uses for the additional transistors. For example, “message passing” is a useful abstraction in languages like Go and Erlang that’s often more useful than sharing memory. Today it’s implemented with shared memory and atomic instructions, but I can’t help thinking it could be done better with direct hardware support.
Of course, as soon as they do that, it’ll become an intrinsic in C, then added to languages like Go and Erlang.
Summary
Academics live in an ideal world of abstractions; the rest of us live in practical reality. The reality is that the vast majority of programmers work with the C family of languages (JavaScript, Go, etc.), whereas academics love the epiphanies they learned using other languages, especially functional languages. CPUs are only superficially designed to run C and maintain “PDP-11 compatibility”. Instead, they keep adding features to support other abstractions, abstractions that are then made available to C. They are driven by decreasing marginal returns — they would love to add new abstractions to the hardware because it’s a cheap way to make use of additional transistors. Academics are wrong to believe that the entire system needs to be redesigned from scratch. Instead, they just need to come up with new abstractions that CPU designers can add.
“Security is hard” is a tautology, especially in the fast-moving world of container orchestration. We have previously covered various aspects of Linux container security through, for example, the Clear Containers implementation or the broader question of Kubernetes and security, but those are mostly concerned with container isolation; they do not address the question of trusting a container’s contents. What is a container running? Who built it and when? Even assuming we have good programmers and solid isolation layers, propagating that good code around a Kubernetes cluster and making strong assertions on the integrity of that supply chain is far from trivial. The 2018 KubeCon + CloudNativeCon Europe event featured some projects that could eventually solve that problem.
Bad software is everywhere. One can even claim that all software is bad. Cool companies, tech giants, established companies, all produce bad software. And no, yours is not an exception.
Who’s to blame for bad software? It’s all complicated and many factors are intertwined – there are business requirements, there’s organizational context, there’s a lack of sufficiently skilled developers, there’s the inherent complexity of software development, there are leaky abstractions, reliance on 3rd party software, the consequences of wrong business and purchasing decisions, time limitations, flawed business analysis, etc. So yes, despite the catchy title, I’m aware it’s actually complicated.
But in every “it’s complicated” scenario, there are always one or two factors that are decisive. All of them contribute somehow, but the major drivers are usually a handful of things. And in the case of bad software, I think it’s the fault of technical people: developers, architects, ops.
We don’t seem to care about best practices. And I’ll do some nasty generalizations here, but bear with me. We can spend hours arguing about tabs vs spaces, curly bracket on new line, git merge vs rebase, which IDE is better, which framework is better and other largely irrelevant stuff. But we tend to ignore the important aspects that span beyond the code itself. The context in which the code lives, the non-functional requirements – robustness, security, resilience, etc.
We don’t seem to get security. Even trivial stuff such as user authentication is almost always implemented wrong. Just recently, Twitter and GitHub realized they had been logging plain-text passwords, for example, but that’s just the tip of the iceberg. Too often we ignore the security implications.
“But the business didn’t request the security features”, one may say. The business never requested 2-factor authentication, encryption at rest, PKI, secure (or any) audit trail, log masking, crypto shredding, etc., etc. Because the business doesn’t know these things – we do and we have to put them on the backlog and fight for them to be implemented. Each organization has its specifics and tech people can influence the backlog in different ways, but almost everywhere we can put things there and prioritize them.
The other aspect is testing. We should all be well aware by now that automated testing is mandatory. We have all the tools in the world for unit, functional, integration, performance and whatnot testing, and yet many software projects lack the necessary test coverage to be able to change stuff without accidentally breaking things. “But testing takes time, we don’t have it.” We are perfectly aware that testing saves time, as we’ve all had those “not again!” recurring bugs. And yet we think of all sorts of excuses – “let the QAs test it”, “we have to ship that now, we’ll test it later”, “this is too trivial to be tested”, etc.
And you may say it’s not our job. We don’t define what has to be done, we just do it. We don’t define the budget, the scope, the features. We just write whatever has been decided. And that’s plain wrong. It’s not our job to make money out of our code, and it’s not our job to define what customers need, but apart from that everything is our job. The way the software is structured, the security aspects and security features, the stability of the code base, the way the software behaves in different environments. The non-functional requirements are our job, and putting them on the backlog is our job.
You’ve probably heard that every software becomes “legacy” after 6 months. And that’s because of us, our sloppiness, our inability to mitigate external factors and constraints. Too often we create a mess through “just doing our job”.
And of course that’s a generalization. I happen to know a lot of great professionals who don’t make these mistakes, who strive for excellence and implement things the right way. But our industry as a whole doesn’t. Our industry as a whole produces bad software. And it’s our fault, as developers – as the only people who know why a certain piece of software is bad.
In a talk of his, Bob Martin warns us of the risks of our sloppiness. We have been building websites so far, but we are more and more building stuff that interacts with the real world, directly and indirectly. Ultimately, lives may depend on our software (like the recent unfortunate death caused by a self-driving car). And I’ll agree with Uncle Bob that it’s high time we self-regulate as an industry, before some technically incompetent politician decides to do that.
How, I don’t know. We’ll have to think more about it. But I’m pretty sure it’s our fault that software is bad, and no amount of blaming the management, the budget, the timing, the tools or the process can eliminate our responsibility.
Why do I insist on bashing my fellow software engineers? Because if we start looking at software development with more responsibility, accepting that if it fails, it’s our fault, then we’re more likely to get out of our current bug-ridden, security-flawed, fragile software hole and really become the experts of the future.
If you’re not already familiar with building visualizations for quick access to business insights using Amazon QuickSight, consider this your introduction. In this post, we’ll walk through some common scenarios with sample datasets to provide an overview of how you can connect your data, perform advanced analysis, and access the results from any web browser or mobile device.
The following visualizations are built from the public datasets available in the links below. Before we jump into that, let’s take a look at the supported data sources, file formats and a typical QuickSight workflow to build any visualization.
Which data sources does Amazon QuickSight support?
At the time of publication, you can use the following methods to get data into Amazon QuickSight:
Connect to AWS data sources, including:
Amazon RDS
Amazon Aurora
Amazon Redshift
Amazon Athena
Amazon S3
Upload Excel spreadsheets or flat files (CSV, TSV, CLF, and ELF)
Connect to on-premises databases like Teradata, SQL Server, MySQL, and PostgreSQL
Import data from SaaS applications like Salesforce and Snowflake
Use big data processing engines like Spark and Presto
SPICE is the Amazon QuickSight super-fast, parallel, in-memory calculation engine, designed specifically for ad hoc data visualization. SPICE stores your data in a system architected for high availability, where it is saved until you choose to delete it. Improve the performance of database datasets by importing the data into SPICE instead of using a direct database query. To calculate how much SPICE capacity your dataset needs, see Managing SPICE Capacity.
Typical Amazon QuickSight workflow
When you create an analysis, the typical workflow is as follows:
Connect to a data source, and then create a new dataset or choose an existing dataset.
(Optional) If you created a new dataset, prepare the data (for example, by changing field names or data types).
Create a new analysis.
Add a visual to the analysis by choosing the fields to visualize. Choose a specific visual type, or use AutoGraph and let Amazon QuickSight choose the most appropriate visual type, based on the number and data types of the fields that you select.
(Optional) Modify the visual to meet your requirements (for example, by adding a filter or changing the visual type).
(Optional) Add more visuals to the analysis.
(Optional) Add scenes to the default story to provide a narrative about some aspect of the analysis data.
(Optional) Publish the analysis as a dashboard to share insights with other users.
The following graphic illustrates a typical Amazon QuickSight workflow.
Visualizations created in Amazon QuickSight with sample datasets
Data catalog: The World Bank invests in multiple development projects at the national, regional, and global levels. It’s a great source of information for data analysts.
The following graph shows the percentage of the population that has access to electricity (rural and urban) during 2000 in Asia, Africa, the Middle East, and Latin America.
The following graph shows the share of healthcare costs that are paid out-of-pocket (private vs. public). You can also hover over the graph to get detailed statistics at a glance.
Data catalog: The DBG PDS project makes real-time data derived from Deutsche Börse’s trading market systems available to the public for free. This is the first time that such detailed financial market data has been shared freely and continually from the source provider.
The following graph shows the market trend of max trade volume for different EU banks. It builds on the data available on XETRA engines, which is made up of a variety of equities, funds, and derivative securities. This graph can be scrolled to visualize trade for a period of an hour or more.
The following graph shows the common stock beating the rest of the maximum trade volume over a period of time, grouped by security type.
Data catalog: Data derived from different sensor stations placed on the city bridges and surface streets are a core information source. The road weather information station has a temperature sensor that measures the temperature of the street surface. It also has a sensor that measures the ambient air temperature at the station each second.
The following graph shows the present max air temperature in Seattle from different RWI station sensors.
The following graph shows the minimum temperature of the road surface at different times, which helps predict road conditions at a particular time of the year.
Data catalog: Kaggle provides a platform where people can donate open datasets. Data engineers and other community members have open access to these datasets and can contribute to the open data movement. The platform hosts more than 350 datasets in total, with more than 200 featured datasets. It has a few interesting datasets that aren’t available elsewhere, and it’s a place to connect with other data enthusiasts.
The following graph shows the trending YouTube videos and presents the max likes for the top 20 channels. This is one of the most popular datasets for data engineers.
The following graph shows the YouTube daily statistics for the max views of video titles published during a specific time period.
Data catalog: NYC Open Data hosts some very popular open datasets for all New Yorkers. The platform allows you to get involved and dive deep into the datasets to pull out some useful visualizations. The 2016 Green Taxi Trip dataset includes trip records from all trips completed in green taxis in NYC in 2016. Records include fields capturing pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, rate types, payment types, and driver-reported passenger counts.
The following graph presents the maximum fare amount grouped by passenger count during a period of time during a day. This can be further expanded to follow through different days of the month based on the business need.
The following graph shows the New York taxi data from January 2016, showing the dip in the number of taxi rides on January 23, 2016 across all types of taxis.
A quick search for that date and location shows you the following news report:
Summary
Using Amazon QuickSight, you can see patterns across time-series data by building visualizations, performing ad hoc analysis, and quickly generating insights. We hope you’ll give it a try today!
Karthik Odapally is a Sr. Solutions Architect in AWS. His passion is to build cost-effective and highly scalable solutions on the cloud. In his spare time, he bakes cookies and cupcakes for family and friends here in the PNW. He loves vintage racing cars.
Pranabesh Mandal is a Solutions Architect in AWS. He has over a decade of IT experience. He is passionate about cloud technology and focuses on Analytics. In his spare time, he likes to hike and explore the beautiful nature and wildlife of the most divine national parks around the United States alongside his wife.
As part of my current project (secure audit trail) I decided to make a survey about the use of audit trail “in the wild”.
I haven’t written in detail about this project of mine (unlike with some other projects). Mostly because it’s commercial and I don’t want to use my blog as a direct promotion channel (though I am doing that at the moment, ironically). But the aim of this post is to shed some light on how audit trail is used.
The survey can be found here. The questions are basically: does your current project have audit trail functionality, and if yes, is it protected from tampering. If not – do you think you should have such functionality.
The results are interesting (although with only around 50 respondents).
So more than half of the systems (that respondents are working on) don’t have an audit trail. While audit trail is recommended by information security and related standards, it may not find a place in the “busy schedule” of a software project, even though it’s fairly easy to provide a trivial implementation (e.g. I’ve written how to quickly set one up with Hibernate and Spring).
A trivial implementation might do in many cases, but if the audit log is critical (e.g. access to sensitive data, performing financial operations, etc.), then relying on a trivial implementation might not be enough. In other words – if the sysadmin can access the database and delete or modify the audit trail, then it doesn’t serve much purpose. Hence the next question – how is the audit trail protected from tampering:
And apparently, of the less than 50% of projects with an audit trail, around 50% have no technical guarantee that the audit trail can’t be tampered with. My guess is that it’s actually more, because people have different understandings of what technical measures are sufficient. E.g. someone may think that digitally signing your log files (or log records) is sufficient, but in fact it isn’t, as whole files (or records) can be deleted (or fully replaced) without a way to detect that. Timestamping can help (and a good audit trail solution should have it), but it doesn’t guarantee the order of events or prevent a malicious actor from deleting events or inserting fake ones. And if timestamping is done at the log-file level, then any not-yet-timestamped log file is vulnerable to manipulation.
I’ve written about event logs before and their two flavours – event sourcing and audit trail. An event log can effectively be considered audit trail, but you’d need additional security to avoid the problems mentioned above.
So, let’s see what would various levels of security and usefulness of audit logs look like. There are many papers on the topic (e.g. this and this), and they often go into the intricate details of how logging should be implemented. I’ll try to give an overview of the approaches:
Regular logs – relying on regular INFO log statements in the production logs to look for hints of what has happened. This may be okay, but it makes it harder to look for evidence (as there is non-auditable data in those log files as well), and it’s not very secure – usually logs are collected (e.g. with Graylog), and whoever has access to the log collector’s database (or search engine, in the case of Graylog) can manipulate the data and not be caught.
Designated audit trail – whether it’s stored in the database or in log files. It has the proper business-event-level granularity, but again doesn’t prevent or detect tampering. For lower-risk systems that may be perfectly okay.
Timestamped logs – whether it’s log files or (harder to implement) database records. Timestamping is good, but if it’s not an external service, a malicious actor can get access to the local timestamping service and issue fake timestamps to re-timestamp tampered files. Even if the timestamping is not compromised, whole entries can be deleted. The fact that they are missing can sometimes be deduced from other factors (e.g. the hour of rotation), but regularly verifying that is extra effort and may not always be feasible.
Hash chaining – each entry (or sequence of log files) could be chained (just like blocks in a blockchain) – the next one containing the hash of the previous one (a small sketch follows this list). This is a good solution (whether it’s local, external or 3rd party), but it has the risk of someone modifying or deleting a record, taking your entire chain and re-hashing it. All the checks will pass, but the data will not be correct.
Hash chaining with anchoring – the head of the chain (the hash of the last entry/block) could be “anchored” to an external service that is outside the capabilities of a malicious actor. Ideally, a public blockchain, alternatively – paper, a public service (twitter), email, etc. That way a malicious actor can’t just rehash the whole chain, because any check against the external service would fail.
WORM storage (write once, read many) – you could send your audit logs almost directly to WORM storage, where it’s impossible to replace data. However, that is not ideal, as WORM storage can be slow and expensive. AWS Glacier, for example, has rather long retrieval times, which makes searching through recent data impractical (it’s actually cheaper than S3, and you can have expiration policies). And having to support your own WORM storage is expensive. It is a good idea to eventually send the logs to WORM storage, but “fresh” audit trail should probably not be “archived” in this way, so that it remains searchable and some actionable insight can be gained from it.
All-in-one – applying all of the above “just in case” may be unnecessary for every project out there, but that’s what I decided to do at LogSentinel. Business-event granularity with timestamping, hash chaining, anchoring, and eventually putting to WORM storage – I think that provides both security guarantees and flexibility.
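To make the hash-chaining idea above concrete, here is a minimal sketch (file names and the “genesis” seed are arbitrary): each stored hash covers the previous hash plus the current entry, so modifying or deleting any entry breaks every hash that follows it.

prev="genesis"
while IFS= read -r entry; do
    # chain: hash(previous hash + current entry)
    prev=$(printf '%s|%s' "$prev" "$entry" | sha256sum | awk '{print $1}')
    printf '%s %s\n' "$prev" "$entry" >> audit-chained.log
done < audit.log

# anchor the current head of the chain somewhere outside the attacker's control
tail -n 1 audit-chained.log | awk '{print $1}' > head-to-anchor.txt

Verification replays the same computation over the stored entries and compares the resulting head against the anchored value.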
I hope the overview is useful and the results from the survey shed some light on how this aspect of information security is underestimated.
This post courtesy of Paul Johnston, AWS Senior Developer Advocate – Serverless
Welcome to the first edition of the AWS Serverless ICYMI (In case you missed it) quarterly recap! Every quarter we’ll share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!
Alexa Random Restaurant – Python-based backend for an Alexa skill that returns an open restaurant in a specified city using the Yelp API. Published by: Harsha Warrdhan Sharma
Podless – A serverless application that downloads podcasts to an S3 bucket. Published by: Stilvoid
Crypto-monitor – Collect and store cryptocurrency prices and send yourself an alert if one changes significantly. Published by: Drew Dresser
DailyDoggo – Send a daily link to a random dog picture to a phone number, via AWS Lambda and SNS. Published by: Kevin McCandless
These runtimes give Lambda developers and development teams even greater options for coding serverless, on-demand, compute solutions.
The AWS SAM 1.4.0 release was one of its biggest. The release added features for configuring many aspects of Amazon API Gateway, including CORS support, regional endpoints, binary media types, and stage settings. It also included per function concurrency support, tags and TableName for SimpleTable, and many documentation updates. Check out the release notes for the full list!
AppSync came out of the whitelisted preview and added a whole bunch of new features:
Here are the three webinars we delivered in Q1. We hold several Serverless webinars throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page:
Keep an eye on AWS on Twitch for more serverless videos, and join us on the Twitch AWS page for information about upcoming broadcasts and recent live streams.
Case studies
We’ve published several new case studies this quarter to help you with understanding how other organizations are using serverless technologies:
If you haven’t read the AWS Well-Architected Framework Serverless Application Lens document, it’s worth taking the time to do so. The document covers common serverless application scenarios and identifies key elements to ensure that your workloads are architected according to best practices.
From now on, if you find issues with documentation we have open-sourced, you can tell us via a Pull Request rather than tweeting or emailing us. The current available serverless repositories are here:
We’re always looking to help people start learning how to build serverless applications. Our serverless web application workshops are online and you can do the hands-on labs yourself: Build a Serverless web application
Still looking for more?
The Serverless landing page has lots of information including a resources page containing case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials. Check it out!
Good article about how difficult it is to insure an organization against Internet attacks, and how expensive the insurance is.
Companies like retailers, banks, and healthcare providers began seeking out cyberinsurance in the early 2000s, when states first passed data breach notification laws. But even with 20 years’ worth of experience and claims data in cyberinsurance, underwriters still struggle with how to model and quantify a unique type of risk.
“Typically in insurance we use the past as prediction for the future, and in cyber that’s very difficult to do because no two incidents are alike,” said Lori Bailey, global head of cyberrisk for the Zurich Insurance Group. Twenty years ago, policies dealt primarily with data breaches and third-party liability coverage, like the costs associated with breach class-action lawsuits or settlements. But more recent policies tend to accommodate first-party liability coverage, including costs like online extortion payments, renting temporary facilities during an attack, and lost business due to systems failures, cloud or web hosting provider outages, or even IT configuration errors.
There are challenges to creating these new insurance products. There are two basic models for insurance. There’s the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on that rate. And there’s the flood model, where an infrequent large-scale event affects large numbers of people — but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.
User authentication is the functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.
Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have a vulnerable password reset functionality, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. The OWASP Top 10 is always something you should read, every year. But that might not be enough.
So, let’s start. I’ll try to be concise, but I’ll include as many of the related pitfalls as I can – e.g. what could go wrong with the user session after login:
Store passwords with bcrypt/scrypt/PBKDF2. No MD5 or SHA, as they are not good for storing passwords. A long salt (per user) is mandatory (the aforementioned algorithms have it built in). If you don’t do this and someone gets hold of your database, they’ll be able to extract the passwords of all your users. And then try those passwords on other websites.
Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks.) Force HTTPS if the user opens a plain-text version.
Mark cookies as secure. Makes cookie theft harder.
Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in.
Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
Logout – let your users log out by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy).
Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For ajax-heavy websites you can have regular ajax polling that keeps the session alive while the page stays open.
Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explains it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches: store a password-reset token in the user profile table and then send it as a parameter in the link, or don’t store anything in the database and instead send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp). That way you have expiring links (rather than one-time links), and the HMAC relies on a secret key, so the links can’t be spoofed (a rough sketch follows this list). It seems there’s no consensus, as the OWASP guide takes a slightly different approach.
One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can use that approach instead of (rather than in addition to) passwords.
Limit login attempts – brute-force through the web UI should not be possible; therefore you should block login attempts if there are too many of them. One approach is to block them based on IP, the other is to block them based on the account attempted (Spring example here). Which one is better – I don’t know. Both can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience.
Don’t leak information through error messages – you shouldn’t allow attackers to figure out if an email is registered or not. If an email is not found, upon login report just “Incorrect credentials”. On passwords reset, it may be something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people don’t often remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
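To illustrate the stateless reset link from the “forgotten password flow” item above, here is a rough sketch; the secret, the user id, and the one-hour expiry are arbitrary placeholders.

user_id="42"
expires=$(( $(date +%s) + 3600 ))    # link valid for one hour
mac=$(printf '%s:%s' "$user_id" "$expires" | openssl dgst -sha256 -hmac "$RESET_SECRET" | awk '{print $NF}')
echo "https://example.com/reset-password?token=$user_id:$expires:$mac"

# verification: recompute the HMAC over the first two fields with the same secret,
# compare it to the third field, and check that the expiry is still in the future

Because the expiry timestamp is covered by the HMAC, tampering with it invalidates the link.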
I’m sure I’m missing something. And you see it’s complicated. Sadly we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome, that you almost always get at least some of it wrong.
What happens when you combine the Internet of Things, Machine Learning, and Edge Computing? Before I tell you, let’s review each one and discuss what AWS has to offer.
Internet of Things (IoT) – Devices that connect the physical world and the digital one. The devices, often equipped with one or more types of sensors, can be found in factories, vehicles, mines, fields, homes, and so forth. Important AWS services include AWS IoT Core, AWS IoT Analytics, AWS IoT Device Management, and Amazon FreeRTOS, along with others that you can find on the AWS IoT page.
Machine Learning (ML) – Systems that can be trained using an at-scale dataset and statistical algorithms, and used to make inferences from fresh data. At Amazon we use machine learning to drive the recommendations that you see when you shop, to optimize the paths in our fulfillment centers, fly drones, and much more. We support leading open source machine learning frameworks such as TensorFlow and MXNet, and make ML accessible and easy to use through Amazon SageMaker. We also provide Amazon Rekognition for images and for video, Amazon Lex for chatbots, and a wide array of language services for text analysis, translation, speech recognition, and text to speech.
Edge Computing – The power to have compute resources and decision-making capabilities in disparate locations, often with intermittent or no connectivity to the cloud. AWS Greengrass builds on AWS IoT, giving you the ability to run Lambda functions and keep device state in sync even when not connected to the Internet.
ML Inference at the Edge
Today I would like to toss all three of these important new technologies into a blender! You can now perform Machine Learning inference at the edge using AWS Greengrass. This allows you to use the power of the AWS cloud (including fast, powerful instances equipped with GPUs) to build, train, and test your ML models before deploying them to small, low-powered, intermittently-connected IoT devices running in those factories, vehicles, mines, fields, and homes that I mentioned.
Here are a few of the many ways that you can put Greengrass ML Inference to use:
Precision Farming – With an ever-growing world population and unpredictable weather that can affect crop yields, the opportunity to use technology to increase yields is immense. Intelligent devices that are literally in the field can process images of soil, plants, pests, and crops, taking local corrective action and sending status reports to the cloud.
Physical Security – Smart devices (including the AWS DeepLens) can process images and scenes locally, looking for objects, watching for changes, and even detecting faces. When something of interest or concern arises, the device can pass the image or the video to the cloud and use Amazon Rekognition to take a closer look.
Industrial Maintenance – Smart, local monitoring can increase operational efficiency and reduce unplanned downtime. The monitors can run inference operations on power consumption, noise levels, and vibration to flag anomalies, predict failures, and detect faulty equipment.
Greengrass ML Inference Overview
There are several different aspects to this new AWS feature. Let’s take a look at each one:
Machine Learning Models – Precompiled TensorFlow and MXNet libraries, optimized for production use on the NVIDIA Jetson TX2 and Intel Atom devices, and development use on 32-bit Raspberry Pi devices. The optimized libraries can take advantage of GPU and FPGA hardware accelerators at the edge in order to provide fast, local inferences.
Model Building and Training – The ability to use Amazon SageMaker and other cloud-based ML tools to build, train, and test your models before deploying them to your IoT devices. To learn more about SageMaker, read Amazon SageMaker – Accelerated Machine Learning.
Model Deployment – SageMaker models can (if you give them the proper IAM permissions) be referenced directly from your Greengrass groups. You can also make use of models stored in S3 buckets. You can add a new machine learning resource to a group with a couple of clicks:
With the advent of AWS PrivateLink, you can provide services to AWS customers directly in their Virtual Private Networks by offering cross-account SaaS solutions on private IP addresses rather than over the Internet.
Traffic that flows to the services you provide does so over private AWS networking rather than over the Internet, offering security and performance enhancements, as well as convenience. PrivateLink can tie in with the AWS Marketplace, facilitating billing and providing a straightforward consumption model to your customers.
The use cases are myriad, but, for this blog post, we’ll demonstrate a fictional order-processing resource. The resource accepts JSON data over a RESTful API, simulating an interface. This could easily be an existing application being considered for a PrivateLink-based consumption model. Consumers of this resource send JSON payloads representing new orders and the system responds with order IDs corresponding to newly-created orders in the system. In a real-world scenario, additional APIs, such as authentication, might also represent critical aspects of the system. This example will not demonstrate these additional APIs because they could be consumed over PrivateLink in a similar fashion to the API constructed in the example.
I’ll demonstrate how to expose the resource on a private IP address in a customer’s VPC. I’ll also explain an architecture leveraging PrivateLink and provide detailed instructions for how to set up such a service. Finally, I’ll provide an example of how a customer might consume such a service. I’ll focus not only on how to architect the solution, but also the considerations that drive architectural choices.
Solution Overview
N.B.: Only two subnets and Availability Zones are shown per VPC for simplicity. Resources must cover all Availability Zones per Region, so that the application is available to all consumers in the region. The instructions in this post, which pertain to resources sitting in us-east-1, will detail the deployment of subnets in all six Availability Zones for this region.
This solution exposes an application’s HTTP-based API over PrivateLink in a provider’s AWS account. The application is a stateless web server running on Amazon Elastic Compute Cloud (EC2) instances. The provider places instances within a virtual private cloud (VPC) consisting of one private subnet per Availability Zone (AZ). Instances populate each subnet inside Auto Scaling Groups (ASGs), maintaining a desired count per subnet. There is one ASG per subnet to ensure that the service is available in each AZ. An internal Network Load Balancer (NLB) sits in front of the entire fleet of application instances, and an endpoint service is connected with the NLB.
In the customer’s AWS account, they create an endpoint that consumes the endpoint service from the provider’s account. The endpoint exposes an Elastic Network Interface (ENI) in each subnet the customer desires. Each ENI is assigned an IP address within the CIDR block associated with the subnet, for any number of subnets in any number of AZs within the region, for each customer.
PrivateLink facilitates cross-account access to services so the customer can use the provider’s service, feeding it data that exist within the customer’s account while using application logic and systems that run in the provider’s account. The routing between accounts is over private networking rather than over the Internet.
Though this example shows a simple, stateless service running on EC2 and sitting behind an NLB, many kinds of AWS services can be exposed through PrivateLink and can serve as pathways into a provider’s application, such as Amazon Kinesis Streams, Amazon EC2 Container Service, Amazon EC2 Systems Manager, and more.
Using PrivateLink to Establish a Service for Consumption
Building a service to be consumed through PrivateLink involves a few steps:
Build a VPC covering all AZs in region with private subnets
Create a NLB, listener, and target group for instances
Create a launch configuration and ASGs to manage the deployment of Amazon EC2 instances in each subnet
Launch an endpoint service and connect it with the NLB
Tie endpoint-request approval with billing systems or the AWS Marketplace
Provide the endpoint service in multiple regions
Step 1: Build a VPC and private subnets
Start by determining the network you will need to serve the application. Keep in mind that you will need to serve the application out of each AZ within any region you choose. Customers will expect to consume your service in multiple AZs because AWS recommends they architect their own applications to span across AZs for fault-tolerance purposes.
Additionally, anything less than full coverage across all AZs in a single region will not facilitate straightforward consumption of your service because AWS does not guarantee that a single AZ will carry the same name across accounts. In fact, AWS randomizes AZ names across accounts to ensure even distribution of independent workloads. Telling customers, for example, that you provide a service in us-east-1a may not give them sufficient information to connect with your service.
The solution is to serve your application in all AZs within a region because this guarantees that no matter what AZs a customer chooses for endpoint creation, that customer is guaranteed to find a running instance of your application with which to connect.
You can lay the foundations for doing this by creating a subnet in each AZ within the region of your choice. The subnets can be private because the service, exposed via PrivateLink, will not provide any publicly routable APIs.
This example uses the us-east-1 region. If you use a different region, the number of AZs may vary, which will change the number of subnets required, and thus the size of the IP address range for your VPC may require adjustments.
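A single CLI call creates such a VPC; this is a sketch, with a /25 block providing the 128 addresses described below:

aws ec2 create-vpc \
    --cidr-block "10.3.0.0/25"

The call returns a VPC ID; the remainder of this walkthrough uses the placeholder vpc-2cadab54 that also appears in the commands later in this post.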
The example above creates a VPC with 128 IP addresses starting at 10.3.0.0. Each subnet will contain 16 IP addresses, using a total of 96 addresses in the space. Allocating a sufficient block of addresses requires some planning (though you can make adjustments later if needed). I’d suggest an equally-sized address space in each subnet because the provided service should embody the same performance, availability, and functionality regardless of which AZ your customers choose. Each subnet will need a sufficient address space to accommodate the number of instances you run within it. Additionally, you will need enough space to allow for one IP address per subnet to assign to that subnet’s NLB node’s Elastic Network Interface (ENI).
In this simple example, 16 IP addresses per subnet are enough because we will configure ASGs to maintain two instances each and the NLB requires one ENI. Each subnet reserves five IP addresses for internal purposes, for a total of eight IP addresses needed in each subnet to support the service.
Next, create the private subnets for each Availability Zone. The following demonstrates the creation of the first subnet, which sits in the us-east-1a AZ:
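For example (the VPC ID is the placeholder used elsewhere in this post, and the /28 block provides the 16 addresses discussed above):

aws ec2 create-subnet \
    --vpc-id "vpc-2cadab54" \
    --availability-zone "us-east-1a" \
    --cidr-block "10.3.0.0/28"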
Repeat this step for each remaining AZ. If using the us-east-1 region, you will need to create private subnets in all AZs as follows:
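One way to script the remaining five subnets; a sketch, with the CIDR blocks simply continuing in /28 increments:

for pair in "us-east-1b 10.3.0.16/28" "us-east-1c 10.3.0.32/28" "us-east-1d 10.3.0.48/28" "us-east-1e 10.3.0.64/28" "us-east-1f 10.3.0.80/28"; do
    set -- $pair
    aws ec2 create-subnet \
        --vpc-id "vpc-2cadab54" \
        --availability-zone "$1" \
        --cidr-block "$2"
done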
For the purpose of this example, the subnets can leverage the default route table, as it contains a single rule for routing requests to private IP addresses within the VPC.
In a real-world case, additional routing may be required. For example, you may need additional routes to support VPC peering to access dependencies in other VPCs, connectivity to on-premises resources over DirectConnect or VPN, Internet-accessible dependencies via NAT, or other scenarios.
Security Group Creation
Instances will need to be placed in a security group that allows traffic from the NLB nodes that sit in each subnet.
All instances running the service should be in a security group accepting TCP traffic on the traffic port from any other IP address in the VPC. This will allow the NLB to forward traffic to those instances because the NLB nodes sit in the VPC and are assigned IP addresses in the subnets. In this example, the order processing server running on each instance exposes a service on port 3000, so the security group rule covers this port.
Create a security group for instances:
aws ec2 create-security-group \
--group-name "service-sg" \
--description "Security group for service instances" \
--vpc-id "vpc-2cadab54"
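With the group created, authorize inbound TCP on the service port from the VPC’s address range. This is a sketch; the sg- ID stands in for the GroupId returned by the previous command:

aws ec2 authorize-security-group-ingress \
    --group-id "sg-0123456789abcdef0" \
    --protocol tcp \
    --port 3000 \
    --cidr "10.3.0.0/25"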
Step 2: Create a Network Load Balancer, Listener, and Target Group
The service integrates with PrivateLink using an internal NLB which sits in front of instances that run the service.
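A condensed sketch of those three calls (names, subnet IDs, and ARNs are placeholders): create an internal NLB across the private subnets, a TCP target group on port 3000, and a listener that forwards port 80 to that group.

aws elbv2 create-load-balancer \
    --name "order-processor-nlb" \
    --type network \
    --scheme internal \
    --subnets "subnet-aaaa1111" "subnet-bbbb2222" "subnet-cccc3333" "subnet-dddd4444" "subnet-eeee5555" "subnet-ffff6666"

aws elbv2 create-target-group \
    --name "order-processor-targets" \
    --protocol TCP \
    --port 3000 \
    --target-type instance \
    --vpc-id "vpc-2cadab54"

aws elbv2 create-listener \
    --load-balancer-arn "<nlb-arn>" \
    --protocol TCP \
    --port 80 \
    --default-actions "Type=forward,TargetGroupArn=<target-group-arn>"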
Step 3: Create a Launch Configuration and Auto Scaling Groups
Each private subnet in the VPC will require its own ASG in order to ensure that there is always a minimum number of instances in each subnet.
A single ASG spanning all subnets will not guarantee that every subnet contains the appropriate number of instances. For example, while a single ASG could be configured to work across six subnets and maintain twelve instances, there is no guarantee that each of the six subnets will contain two instances. To guarantee the appropriate number of instances on a per-subnet basis, each subnet must be configured with its own ASG.
New instances should be automatically created within each ASG based on a single launch configuration. The launch configuration should be set up to use an existing Amazon Machine Image (AMI).
This post presupposes you have an AMI that can be used to create new instances that serve the application. There are only a few basic assumptions about how this image is configured:
1. The image contains a web server that serves traffic (in this case, on port 3000).
2. The image is configured to automatically launch the web server as a daemon when the instance starts.
Create a launch configuration for the ASGs, providing the AMI ID, the ID of the security group created in previous steps (above), a key for access, and an instance type:
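For example (the AMI ID, key pair, security group ID, names, and target group ARN are all placeholders); the second command creates the ASG for the first subnet and registers it with the NLB’s target group:

aws autoscaling create-launch-configuration \
    --launch-configuration-name "order-processor-lc" \
    --image-id "ami-0123456789abcdef0" \
    --instance-type "t2.micro" \
    --key-name "my-key-pair" \
    --security-groups "sg-0123456789abcdef0"

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name "order-processor-asg-us-east-1a" \
    --launch-configuration-name "order-processor-lc" \
    --min-size 2 \
    --max-size 2 \
    --desired-capacity 2 \
    --vpc-zone-identifier "subnet-aaaa1111" \
    --target-group-arns "<target-group-arn>"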
Repeat this process to create an ASG in each remaining subnet, using the same launch configuration and target group.
In this example, only two instances are created in each subnet. In a real-world scenario, additional instances would likely be recommended for both availability and scale. The ASGs use the provided launch configuration as a template for creating new instances.
When creating the ASGs, the ARN of the target group for the NLB is provided. This way, the ASGs automatically register newly-created instances with the target group so that the NLB can begin sending traffic to them.
Step 4: Launch an endpoint service and connect with NLB
Now, expose the service via PrivateLink with an endpoint service, providing the ARN of the NLB:
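For example (the NLB ARN is a placeholder):

aws ec2 create-vpc-endpoint-service-configuration \
    --network-load-balancer-arns "<nlb-arn>" \
    --acceptance-required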
This endpoint service is configured to require acceptance. This means that new consumers who attempt to add endpoints that consume it will have to wait for the provider to allow access. This provides an opportunity to control access and integrate with billing systems that monetize the provided service.
Step 5: Tie endpoint request approval with billing system or the AWS Marketplace
If you’re maintaining your service as a private service, any account that is intended to have access must be whitelisted before it can find the endpoint service and create an endpoint to consume it.
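Whitelisting is done against the endpoint service, for example (the service ID and account number are placeholders):

aws ec2 modify-vpc-endpoint-service-permissions \
    --service-id "vpce-svc-0123456789abcdef0" \
    --add-allowed-principals "arn:aws:iam::123456789012:root"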
For more information on listing a PrivateLink service in the AWS Marketplace, see How to List Your Product in AWS Marketplace (https://aws.amazon.com/blogs/apn/how-to-list-your-product-in-aws-marketplace/).
Most production-ready services offered through PrivateLink will require acceptance of Endpoint requests before customers can consume them. Typically, some level of automation around processing approvals is helpful. PrivateLink can publish on a Simple Notification Service (SNS) topic when customers request approval.
Setting this up requires two steps:
1. Create a new SNS topic.
2. Create an endpoint connection notification that publishes to the SNS topic.
Each is discussed below.
Create an SNS Topic
First, create a new SNS Topic that can receive messages relating to endpoint service access requests:
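A single CLI call creates the topic:

aws sns create-topic \
    --name "service-notification-topic"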
This creates a new topic with the name “service-notification-topic”. Endpoint request approval systems can subscribe to this Topic so that acceptance can be automated.
Next, create a new Endpoint Connection Notification, so that messages will be published to the topic when new Endpoints connect and need to have access requests approved:
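A sketch of that call (the service ID and SNS topic ARN are placeholders; the listed events cover the connection lifecycle):

aws ec2 create-vpc-endpoint-connection-notification \
    --service-id "vpce-svc-0123456789abcdef0" \
    --connection-notification-arn "arn:aws:sns:us-east-1:111122223333:service-notification-topic" \
    --connection-events Connect Accept Reject Delete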
A billing system may ultimately tie in with request approval. This can also be done manually, which may be less useful, but is illustrative. As an example, assume that a customer account has already requested an endpoint to consume the service. The customer can be accepted manually, as follows:
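For example (both IDs are placeholders; the endpoint ID comes from the customer’s request):

aws ec2 accept-vpc-endpoint-connections \
    --service-id "vpce-svc-0123456789abcdef0" \
    --vpc-endpoint-ids "vpce-0fedcba9876543210"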
At this point, the consumer can begin consuming the service.
Step 6: Take the Service Across Regions
In distributing SaaS via PrivateLink, providers may have to think about how to make their services available in different regions, because endpoint services are only available within the region where they are created. Customers who attempt to consume endpoint services will not be able to create endpoints across regions.
Rather than saddling consumers with the responsibility of making the jump across regions, we recommend that providers work to make services available where their customers consume them. Providers are in a better position to adapt their architectures to multiple regions than customers, who do not know the internals of how providers have designed their services.
There are several architectural options that can support multi-region adaptation. Selection among them will depend on a number of factors, including read-to-write ratio, latency requirements, budget, amenability to re-architecture, and preference for simplicity.
Generally, the challenge in providing multi-region SaaS is in instantiating stateful components in multiple regions because the data on which such components depend are hard to replicate, synchronize, and access with low latency over large geographical distances.
Of all stateful components, perhaps the most frequently encountered will be databases. Some solutions for overcoming this challenge with respect to databases are as follows:
1. Provide a master in a single region; provide read replicas in every region.
2. Provide a master in every region; assign each tenant to one master only.
3. Create a full multi-master architecture; replicate data efficiently.
4. Rely on a managed service for replicating data cross-regionally (e.g., DynamoDB Global Tables).
Stateless components can be provisioned in multiple regions more easily. In this example, you will have to re-create all of the VPC resources—including subnets, Routing Tables, Security Groups, and Endpoint Services—as well as all EC2 resources—including instances, NLBs, Listeners, Target Groups, ASGs, and Launch Configurations—in each additional region. Because of the complexity in doing so, in addition to the significant need to keep regional configurations in-sync, you may wish to explore an orchestration tool such as CloudFormation, rather than the command line.
Regardless of what orchestration tooling you choose, you will need to copy your AMI to each region in which you wish to deploy it. Once available, you can build out your service in that region much as you did in the first one.
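A sketch of the copy step, with a placeholder AMI ID:
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id "ami-0123456789abcdef0" \
    --region us-east-2 \
    --name "service-ami-copy"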
This copies the AMI used to provide the service in us-east-1 into us-east-2. Repeat the process for each region you wish to enable.
Consuming a Service via PrivateLink
To consume a service over PrivateLink, a customer must create a new Endpoint in their VPC within a Security Group that allows traffic on the traffic port.
Start by creating a Security Group to apply to a new Endpoint:
aws ec2 create-security-group \
    --group-name "consumer-sg" \
    --description "Security group for service consumers" \
    --vpc-id "vpc-2cadab54"
Next, create an endpoint in the VPC, placing it in the Security Group:
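A sketch of such a call; the service name, subnet ID, and security group ID below are placeholders:
aws ec2 create-vpc-endpoint \
    --vpc-id "vpc-2cadab54" \
    --vpc-endpoint-type Interface \
    --service-name "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0" \
    --subnet-ids "subnet-0a1b2c3d" \
    --security-group-ids "sg-0123456789abcdef0"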
The response will include an attribute called VpcEndpoint.DnsEntries. The service can be accessed at any of the DNS names listed under those entries. Before the consumer can access the endpoint service, the provider has to accept the Endpoint.
Access Endpoint Via Custom DNS Names
When creating a new Endpoint, the consumer will receive named endpoint addresses in each AZ where the Endpoint is created, plus a named endpoint that is AZ-agnostic. For example:
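(The values below are purely illustrative; the actual names are generated by AWS.)
AZ-agnostic: vpce-0123456789abcdef0-abcd1234.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com
Per AZ: vpce-0123456789abcdef0-abcd1234-us-east-1a.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com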
The consumer can use Route53 to provide a custom DNS name for the service. This not only allows for using cleaner service names, but also enables the consumer to leverage the traffic management features of Route53, such as fail-over routing.
First, the consumer must enable DNS Hostnames and DNS Support on the VPC within which the Endpoint was created. The consumer should start by enabling DNS Hostnames:
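A sketch of the two attribute changes, reusing the VPC ID from earlier (the attributes must be modified one at a time):
aws ec2 modify-vpc-attribute \
    --vpc-id "vpc-2cadab54" \
    --enable-dns-hostnames "{\"Value\":true}"
aws ec2 modify-vpc-attribute \
    --vpc-id "vpc-2cadab54" \
    --enable-dns-support "{\"Value\":true}"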
After the VPC is properly configured to work with Route53, the consumer should either select an existing hosted zone or create a new one. Assuming one has not already been created, the consumer should create one as follows:
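A sketch of creating the private hosted zone (the caller reference is an arbitrary unique string; the zone name matches the custom domain used below):
aws route53 create-hosted-zone \
    --name "endpoints.internal" \
    --vpc "VPCRegion=us-east-1,VPCId=vpc-2cadab54" \
    --caller-reference "order-processor-zone-001" \
    --hosted-zone-config "Comment=Private zone for PrivateLink endpoints,PrivateZone=true"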
In the request, the consumer specifies the DNS name, VPC ID, region, and flags the hosted zone as private. Additionally, the consumer must provide a “caller reference” which is a unique ID of the request that can be used to identify it in subsequent actions if the request fails.
Next, the consumer should create a JSON file corresponding to a batch of record change requests. In this file, the consumer can specify the name of the endpoint, as well as a CNAME pointing to the AZ-agnostic DNS name of the Endpoint:
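A sketch of the change batch and the call that applies it (the hosted zone ID and the endpoint DNS name are placeholders):
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "order-processor.endpoints.internal",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "vpce-0123456789abcdef0-abcd1234.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com" }
        ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
    --hosted-zone-id "Z1EXAMPLE12345" \
    --change-batch file://change-batch.json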
At this point, the Endpoint can be consumed at http://order-processor.endpoints.internal.
Conclusion
AWS PrivateLink is an exciting way to expose SaaS services to customers. This article demonstrated how to expose an existing application on EC2 via PrivateLink in a customer’s VPC, along with a recommended architecture. Finally, it walked through the steps that a customer would have to go through to consume the service.
One of the trickiest aspects to concurrency in the kernel is waiting for a specific event to take place. There is a wide variety of possible events, including a process exiting, the last reference to a data structure going away, a device completing an operation, or a timeout occurring. Waiting is surprisingly hard to get right — race conditions abound to trap the unwary — so the kernel has accumulated a large set of wait_event_*() macros to make the task easier. An attempt to add a new one, though, has led to the generalization of specific types of waits for 4.17.
With AWS Organizations, you can centrally manage policies across multiple AWS accounts without having to use custom scripts and manual processes. For example, you can apply service control policies (SCPs) across multiple AWS accounts that are members of an organization. SCPs allow you to define which AWS service APIs can and cannot be executed by AWS Identity and Access Management (IAM) entities (such as IAM users and roles) in your organization’s member AWS accounts. SCPs are created and applied from the master account, which is the AWS account that you used when you created your organization.
In a previous post, How to Use Service Control Policies in AWS Organizations to Enforce Healthcare Compliance in Your AWS Account, we reviewed how to create and manage SCPs and Organizational Units (OU) within an organization. In this post, I show how to use SCPs for access control in Organizations, with a specific focus on evaluating SCPs when an IAM entity calls an API in a member AWS account. I first cover some key Organizations concepts, and then I show how an SCP attached to an organization impacts which AWS service APIs are available to member accounts. Finally, I demonstrate these concepts with an example.
Organizational structure in Organizations
OUs give you a way to logically group and structure member AWS accounts in your organization. The screenshot shows the tree view of an example organizational structure in my organization with several OUs. I have selected OrgUnit01, so its contents appear in the main window. You can see that within the OrgUnit01 OU, I have nested two additional OUs (OrgUnit01ChildA and OrgUnit01ChildB), and OrgUnit01 also contains an AWS account named “Developer Sandbox Account”.
The parts of the example organizational structure in the screenshot are:
Tree view — The hierarchy of your organization’s root and any OUs you have created
Tree view toggle — Enable and disable tree view
Organizational Units — Any child OUs of the selected root or OU in tree view
Accounts — Any AWS accounts (members or master) in the current OU
In the next section, I explain why at least one SCP must be attached to your root and OUs and introduce SCP evaluation.
How Service Control Policy evaluation logic works
To allow an AWS service API at the member account level, you must allow the API at every level between the member account and the root of your organization. This means you must attach an SCP at every level between your organization’s root and the member account that allows the given AWS service API (such as ec2:RunInstances). For more information, see About Service Control Policies.
Let’s say you want to allow the ec2:RunInstances API in the Developer Sandbox Account in the example structure in the preceding screenshot. To allow this AWS service API, you must allow the API in at least one SCP attached at each of these levels:
The organization’s root
The OU named OrgUnit01
If you don’t allow the AWS service in an SCP attached at each of these two levels, neither IAM entities nor the root user in the Developer Sandbox Account will be able to call ec2:RunInstances, even if an administrator has given them permission to do so (for IAM entities). In terms of policy evaluation, SCPs follow exactly the same policy evaluation logic as IAM does: by default, all requests are denied, an explicit allow overrides this default, and an explicit deny overrides any explicit allows.
What does this look like in practice? In the next section, I share a practical example to demonstrate how this works in Organizations.
An example structure with nested OUs and SCPs
In the previous section, I introduced design aspects of AWS Organizations that help prevent administrators from breaking structures in their Organizations. But because AWS Organizations is flexible enough to address multiple use cases, administrators can make changes that have unintended consequences, such as breaking organizational structures when moving an AWS account from one OU to another. In this section, I show an example with broken OU and SCP structures and explain how you can fix them.
I’ll take a blacklisting approach. That is, I’ll use the FullAWSAccess SCP, which doesn’t filter out any AWS service APIs. Then, I will filter out specific APIs by blacklisting them in subsequent SCPs attached to OUs at various points in my organization’s structure. For further reading on blacklisting and whitelisting with AWS Organizations, review AWS Organizations Terminology and Concepts.
Let’s say I have developed the OU and SCP structure shown in the diagram below. Before taking a close look at that diagram, I’ll briefly outline the goals I’m trying to achieve. Broadly speaking, there’s a small subset of APIs that I want to filter out using SCPs. This means that IAM entities in some AWS accounts in my organization will not have access to particular AWS service APIs, such as those related to Amazon EC2, while other accounts will not have access to APIs associated with Amazon CloudWatch, Amazon S3, and so on. Apart from these special cases, I do want the accounts in my organization to have access to all other APIs. More specifically, my goals are as follows:
1. Any AWS accounts in the Root should not have any API filtered out.
2. Any AWS accounts in OU 001 should have APIs for CloudWatch filtered out, but all other APIs will be accessible.
3. Any AWS accounts in OU 002 should have APIs for both CloudWatch and EC2 filtered out, but all other APIs will be accessible.
4. Any AWS accounts in OU 003 should have APIs for S3 filtered out, but all other APIs will be accessible.
To that end, let’s now look at my initial SCP and OU configuration, shown in the image below. The arrow shows the direction of inheritance: OUs inherit SCPs from the root and from any OUs above them (their parents). This example structure contains the following SCPs (a sketch of creating and attaching one such SCP with the CLI follows the list):
FullAWSAccess — Allows all AWS service APIs
Deny_CW — Denies all CloudWatch APIs
Deny_EC2 — Denies all Amazon EC2 APIs
Deny_S3 — Denies all Amazon S3 APIs
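As a rough illustration (the policy content is a minimal sketch, and the policy and target IDs are placeholders), an SCP such as Deny_CW could be created and attached like this:
aws organizations create-policy \
    --name "Deny_CW" \
    --description "Denies all CloudWatch APIs" \
    --type SERVICE_CONTROL_POLICY \
    --content '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":"cloudwatch:*","Resource":"*"}]}'
aws organizations attach-policy \
    --policy-id "p-examplepolicyid" \
    --target-id "ou-exampleouid111"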
Now that I’ve outlined my intent, and shown you the OU / SCP structure that I’ve created to meet that set of goals, you can probably already see that the structure provided in the image above will not work correctly for my stated goals. In fact, AWS accounts in the Root container and OU 001 will have the intended access, as per my goals (1) and (2). I will not, however, meet my goals (3) and (4) with the above structure: entities in member accounts directly under OU 002 cannot perform any actions, even if they’re granted permissions by IAM access policies. This is because the FullAWSAccess SCP isn’t attached directly to this OU (it’s only inherited).
Why is this important? For an AWS service API to be available to IAM entities in a member account, the API must be specified in an SCP attached at every level all the way down the hierarchy to the relevant member account. Similarly, even though OU 003 does have the FullAWSAccess SCP attached directly to it, the fact that it’s not attached to the parent OU (OU 002) means that IAM entities in member accounts under OU 003 also aren’t able to access any service APIs. This doesn’t happen by default—I have deliberately taken this action to organize my structure in this way, to show both the flexibility and the kind of problems you can encounter when working with OUs and SCPs.
So I now need to fix the problems that I’ve inadvertently created. To start with, I’m going to make one change to OU 002 by attaching the FullAWSAccess SCP directly to that OU. After I do that, OU 002 has the attached and inherited policies that are shown in the following image.
With the FullAWSAccess policy attached to OU 002, member accounts in both OU 002 and OU 003 can access the other non-restricted AWS service APIs (keeping in mind that the FullAWSAccess policy was already applied to OU 003).
I have one final issue to address in this example: OU 003 has an SCP attached that blocks access to the Amazon S3 APIs. However, in this OU, the intent is to allow IAM entities in member accounts to access the EC2 APIs. EC2 API access is blocked because in the parent OU (OU 002), an SCP is attached that denies access to that API (the Deny_EC2 SCP), which means that any actions listed in Deny_EC2 have already been filtered out. An explicit deny always trumps an allow, so to meet goal (4) and have an OU in which EC2 APIs are allowed but access to CloudWatch APIs and S3 APIs is filtered out, I will move OU 003 up one level, placing it directly under OU 001. This change gives me a working OU and policy structure, as shown in the following image.
I recommend that at each level of your organization’s hierarchy, you directly apply the relevant SCPs. By doing this, you’re less likely to forget to apply an SCP to a particular OU, which can break your permission structure. By directly applying SCPs, you also make your policy structure easier to read.
If you have a group of accounts in your organization that are for testing purposes, I recommend that you experiment with OUs and SCPs. Applying SCPs to OUs and then moving an AWS account around within that structure can show you how SCPs affect IAM entities. For example, if you have an IAM user with the AdministratorAccess policy attached, you should see how SCPs can filter out certain AWS service APIs from specified member accounts.
Conclusion
I showed you how you can effectively apply SCPs to OUs in your organization and avoid some of the common issues that you might experience. I demonstrated an approach to designing a working organizational structure that I hope will help smooth the deployment of your organization and enable you to better centrally secure and manage your AWS accounts.
If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Organizations forum.
This post courtesy of Michael Edge, Sr. Cloud Architect – AWS Professional Services
Applications built using a microservices architecture typically result in a number of independent, loosely coupled microservices communicating with each other, synchronously via their APIs and asynchronously via events. These microservices are often owned by different product teams, and these teams may segregate their resources into different AWS accounts for reasons that include security, billing, and resource isolation. This can sometimes result in the following challenges:
Cross-account deployment: A single pipeline must deploy a microservice into multiple accounts; for example, a microservice must be deployed to environments such as DEV, QA, and PROD, all in separate accounts.
Cross-account lookup: During deployment, a resource deployed into one AWS account may need to refer to a resource deployed in another AWS account.
Cross-account communication: Microservices executing in one AWS account may need to communicate with microservices executing in another AWS account.
In this post, I look at ways to address these challenges using a sample application composed of a web application supported by two serverless microservices. The microservices are owned by different product teams and deployed into different accounts using AWS CodePipeline, AWS CloudFormation, and the Serverless Application Model (SAM). At runtime, the microservices communicate using an event-driven architecture that requires asynchronous, cross-account communication via an Amazon Simple Notification Service (Amazon SNS) topic.
Sample application
First, look at the sample application I use to demonstrate these concepts. The following overview diagram shows:
The entire application consists of three main services:
A Booking microservice, owned by the Booking account.
An Airmiles microservice, owned by the Airmiles account.
A web application that uses the services exposed by both microservices, owned by the Web Channel account.
The Booking microservice creates flight bookings and publishes booking events to an SNS topic.
The Airmiles microservice consumes booking events from the SNS topic and uses the booking event to calculate the airmiles associated with the flight booking. It also supports querying airmiles for a specific flight booking.
The web application allows an end user to make flight bookings, view flight bookings, and view the airmiles associated with a flight booking.
In the sample application, the Booking and Airmiles microservices are implemented using AWS Lambda. Together with Amazon API Gateway, Amazon DynamoDB, and SNS, the sample application is completely serverless.
Sample Serverless microservices application
The typical booking flow would be triggered by an end user making a flight booking using the web application, which invokes the Booking microservice via its REST API. The Booking microservice persists the flight booking and publishes the booking event to an SNS topic to enable sharing of the booking with other interested consumers. In this sample application, the Airmiles microservice subscribes to the SNS topic and consumes the booking event, using the booking information to calculate the airmiles. In line with microservices best practices, both the Booking and Airmiles microservices store their information in their own DynamoDB tables, and expose an API (via API Gateway) that is used by the web application.
Setup
Before you delve into the details of the sample application, get the source code and deploy it.
Cross-account deployment of Lambda functions using CodePipeline has been previously discussed by my colleague Anuj Sharma in his post, Building a Secure Cross-Account Continuous Delivery Pipeline. This sample application builds upon the solution proposed by Anuj, using some of the same scripts and a similar account structure. To make it feasible for you to deploy the sample application, I’ve reduced the number of accounts needed down to three accounts by consolidating some of the services. In the following diagram, you can see the services used by the sample application require three accounts:
Tools: A central location for the continuous delivery/deployment services such as CodePipeline and AWS CodeBuild. To reduce the number of accounts required by the sample application, also deploy the AWS CodeCommit repositories here, though typically they may belong in a separate Dev account.
Booking: Account for the Booking microservice.
Airmiles: Account for the Airmiles microservice.
Without consolidation, the sample application may require up to 10 accounts: one for Tools, and three accounts each for Booking, Airmiles and Web Application (to support the DEV, QA, and PROD environments).
Account structure for sample application
To follow the rest of this post, clone the repository in step 1 below. To deploy the application on AWS, follow steps 2 and 3:
Follow the instructions in the repository README to build the CodePipeline pipelines and deploy the microservices and web application.
Challenge 1: Cross-account deployment using CodePipeline
Though the Booking pipeline executes in the Tools account, it deploys the Booking Lambda functions into the Booking account.
In the sample application code repository, open the ToolsAcct/code-pipeline.yaml CloudFormation template.
Scroll down to the Pipeline resource and look for the DeployToTest pipeline stage (shown below). There are two AWS Identity and Access Management (IAM) service roles used in this stage that allow cross-account activity. Both of these roles exist in the Booking account:
Under Actions.RoleArn, find the service role assumed by CodePipeline to execute this pipeline stage in the Booking account. The role referred to by the parameter NonProdCodePipelineActionServiceRole allows access to the CodePipeline artifacts in the S3 bucket in the Tools account, and also access to the AWS KMS key needed to encrypt/decrypt the artifacts.
Under Actions.Configuration.RoleArn, find the service role assumed by CloudFormation when it carries out the CHANGE_SET_REPLACE action in the Booking account.
These roles are created in the CloudFormation template NonProdAccount/toolsacct-codepipeline-cloudformation-deployer.yaml.
Challenge 2: Cross-account stack lookup using custom resources
Asynchronous communication is a fairly common pattern in microservices architectures. A publishing microservice publishes an event that consumers may be interested in, without any concern for who those consumers may be.
In the case of the sample application, the publisher is the Booking microservice, publishing a flight booking event onto an SNS topic that exists in the Booking account. The consumer is the Airmiles microservice in the Airmiles account. To enable the two microservices to communicate, the Airmiles microservice must look up the ARN of the Booking SNS topic at deployment time in order to set up a subscription to it.
To keep the CloudFormation templates reusable, resource names are not hardcoded in them. Because CloudFormation generates the name of the Booking SNS topic, the Airmiles microservice CloudFormation template must look up the SNS topic name at stack creation time. Cross-stack references are not an option here, because they can’t be used across different accounts.
However, you can use a CloudFormation custom resource to achieve the same outcome. This is discussed in the next section. Using a custom resource to look up stack exports in another stack does not create a dependency between the two stacks, unlike cross-stack references. A stack dependency would prevent one stack being deleted if another stack depended upon it. For more information, see Fn::ImportValue.
Using a Lambda function as a custom resource to look up the booking SNS topic
The sample application uses a Lambda function as a custom resource. This approach is discussed in AWS Lambda-backed Custom Resources. Walk through the sample application and see how the custom resource is used to list the stack export variables in another account and return these values to the calling AWS CloudFormation stack. Examine each of the following aspects of the custom resource:
Deploying the custom resource.
Custom resource IAM role.
A calling CloudFormation stack uses the custom resource.
The custom resource assumes a role in another account.
The custom resource obtains the stack exports from all stacks in the other account.
The custom resource returns the stack exports to the calling CloudFormation stack.
The custom resource handles all the event types sent by the calling CloudFormation stack.
Step 1: Deploying the custom resource
The custom Lambda function is deployed using SAM, as are the Lambda functions for the Booking and Airmiles microservices. See the CustomLookupExports resource in Custom/custom-lookup-exports.yml.
Step 2: Custom resource IAM role
The CustomLookupExports resource in Custom/custom-lookup-exports.yml executes using the CustomLookupLambdaRole IAM role. This role allows the custom Lambda function to assume a cross account role that is created along with the other cross account roles in the NonProdAccount/toolsacct-codepipeline-cloudformation-deployer.yml. See the resource CustomCrossAccountServiceRole.
Step 3: The CloudFormation stack uses the custom resource
The Airmiles microservice is created by the Airmiles CloudFormation template Airmiles/sam-airmile.yml, a SAM template that uses the custom Lambda resource to look up the ARN of the Booking SNS topic. The custom resource is specified by the CUSTOMLOOKUP resource, and the PostAirmileFunction resource uses the custom resource to look up the ARN of the SNS topic and create a subscription to it.
Because the Airmiles Lambda function is going to subscribe to an SNS topic in another account, it must grant the SNS topic in the Booking account permissions to invoke the Airmiles Lambda function in the Airmiles account whenever a new event is published. Permissions are granted by the LambdaResourcePolicy resource.
Step 4: The custom resource assumes a role in another account
When the Lambda custom resource is invoked by the Airmiles CloudFormation template, it must assume a role in the Booking account (see Step 2) in order to query the stack exports in that account.
This can be seen in Custom/custom-lookup-exports.py, where the AWS Security Token Service (AWS STS) is used to obtain a temporary access key to allow access to resources in the account referred to by the environment variable ‘CUSTOM_CROSS_ACCOUNT_ROLE_ARN’. This environment variable is defined in the Custom/custom-lookup-exports.yml CloudFormation template, and refers to the role created in Step 2.
Step 5: The custom resource obtains the stack exports from all stacks in the other account
The function get_exports() in Custom/custom-lookup-exports.py uses the AWS SDK to list all the stack exports in the Booking account.
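Conceptually, the lookup boils down to the equivalent of the following two CLI calls (the role ARN and account ID are placeholders; the real logic lives in Custom/custom-lookup-exports.py):
# Assume the cross-account role created in Step 2
aws sts assume-role \
    --role-arn "arn:aws:iam::111122223333:role/CustomCrossAccountServiceRole" \
    --role-session-name "lookup-exports"
# Then, using the temporary credentials returned above, list every stack export in the Booking account
aws cloudformation list-exports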
Step 6: The custom resource returns the stack exports to the calling CloudFormation stack
The calling CloudFormation template, Airmiles/sam-airmile.yml, uses the custom resource to look up a stack export with the name of booking-lambda-BookingTopicArn. This is exported by the CloudFormation template Booking/sam-booking.yml.
Step 7: The custom resource handles all the event types sent by the calling CloudFormation stack
Custom resources used in a stack are called whenever the stack is created, updated, or deleted. They must therefore handle CREATE, UPDATE, and DELETE stack events. If your custom resource fails to send a SUCCESS or FAILED notification for each of these stack events, your stack may become stuck in a state such as CREATE_IN_PROGRESS or UPDATE_ROLLBACK_IN_PROGRESS. To handle this cleanly, use the crhelper custom resource helper, which strongly encourages you to handle the CREATE, UPDATE, and DELETE CloudFormation stack events.
Challenge 3: Cross-account SNS subscription
The SNS topic is specified as a resource in the Booking/sam-booking.yml CloudFormation template. To allow an event published to this topic to trigger the Airmiles Lambda function, permissions must be granted on both the Booking SNS topic and the Airmiles Lambda function. This is discussed in the next section.
Permissions on booking SNS topic
SNS topics support resource-based policies, which allow a policy to be attached directly to a resource specifying who can access the resource. This policy can specify which accounts can access a resource, and what actions those accounts are allowed to perform. This is the same approach as used by a small number of AWS services, such as Amazon S3, KMS, Amazon SQS, and Lambda. In Booking/sam-booking.yml, the SNS topic policy allows resources in the Airmiles account (referenced by the parameter NonProdAccount in the following snippet) to subscribe to the Booking SNS topic:
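The policy in the template is roughly equivalent to granting a permission like this with the CLI (the topic ARN and account ID are placeholders):
aws sns add-permission \
    --topic-arn "arn:aws:sns:us-east-1:111122223333:booking-topic" \
    --label "AllowAirmilesSubscribe" \
    --aws-account-id "444455556666" \
    --action-name Subscribe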
The Airmiles microservice is created by the Airmiles/sam-airmile.yml CloudFormation template. The template uses SAM to specify the Lambda functions together with their associated API Gateway configurations. SAM hides much of the complexity of deploying Lambda functions from you.
For instance, by adding an ‘Events’ resource to the PostAirmileFunction in the CloudFormation template, the Airmiles Lambda function is triggered by an event published to the Booking SNS topic. SAM creates the SNS subscription, as well as the permissions necessary for the SNS subscription to trigger the Lambda function.
However, the Lambda permissions generated automatically by SAM are not sufficient for cross-account SNS subscription, which means you must specify an additional Lambda permissions resource in the CloudFormation template. LambdaResourcePolicy, in the following snippet, specifies that the Booking SNS topic is allowed to invoke the Lambda function PostAirmileFunction in the Airmiles account.
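The effect of LambdaResourcePolicy is roughly equivalent to a permission like this (the topic ARN is a placeholder):
aws lambda add-permission \
    --function-name "PostAirmileFunction" \
    --statement-id "booking-sns-invoke" \
    --action "lambda:InvokeFunction" \
    --principal "sns.amazonaws.com" \
    --source-arn "arn:aws:sns:us-east-1:111122223333:booking-topic"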
In this post, I showed you how product teams can use CodePipeline to deploy microservices into different AWS accounts and different environments such as DEV, QA, and PROD. I also showed how, at runtime, microservices in different accounts can communicate securely with each other using an asynchronous, event-driven architecture. This allows product teams to maintain their own AWS accounts for billing and security purposes, while still supporting communication with microservices in other accounts owned by other product teams.
Acknowledgments
Thanks are due to the following people for their help in preparing this post:
Kevin Yung for the great web application used in the sample application
Jay McConnell for crhelper, the custom resource helper used with the CloudFormation custom resources
We love Mugsy, the Raspberry Pi coffee robot that has smashed its crowdfunding goal within days! Our latest YouTube video shows our catch-up with Mugsy and its creator Matthew Oswald at Maker Faire New York last year.
Labelled ‘the world’s first hackable, customisable, dead simple, robotic coffee maker’, Mugsy allows you to take control of every aspect of the coffee-making process: from grind size and water temperature, to brew and bloom time. Feeling lazy instead? Read in your beans’ barcode via an onboard scanner, and it will automatically use the best settings for your brew.
Looking to start your day with your favourite coffee straight out of bed? Send the robot a text, email, or tweet, and it will notify you when your coffee is ready!
Learning through product development
“Initially, I used [Mugsy] as a way to teach myself hardware design,” explained Matthew at his Editor’s Choice–winning Maker Faire stand. “I really wanted to hold something tangible in my hands. By using the Raspberry Pi and just being curious, anytime I wanted to use a new technology, I would try to pull back [and ask] ‘How can I integrate this into Mugsy?’”
By exploring his passions and using Mugsy as his guinea pig, Matthew created a project that not only solves a problem — how to make amazing coffee at home — but also brings him one step closer to ‘making things’ for a living. “I used to dream about this stuff when I was a kid, and I used to say ‘I’m never going to be able to do something like that,’” he admitted. But now, with open-source devices like the Raspberry Pi so readily available, he “can see the end of the road”: making his passion his livelihood.
Back Mugsy
With only a few days left on the Kickstarter campaign, Mugsy has reached its goal and then some. It’s available for backing from $150 if you provide your own Raspberry Pi 3, or from $175 with a Pi included — check it out today!
Energy-aware scheduling — running a system’s workload in a way that minimizes the amount of energy consumed — has been a topic of active discussion and development for some time; LWN first covered the issue at the beginning of 2012. Many approaches have been tried during the intervening years, but little in the way of generalized energy-aware scheduling work has made it into the mainline. Recently, a new patch set was posted by Dietmar Eggemann that only tries to address one aspect of the problem; perhaps the problem domain has now been simplified enough that this support can finally be merged.
GDPR is the new data protection regulation, as you probably already know. I’ve given detailed practical advice for what it means for developers (and product owners). However, there’s one thing missing there – cookies. The elephant in the room.
Previously I’ve stated that cookies are subject to another piece of legislation – the ePrivacy directive, which is getting updated and its new version will be in force a few years from now. And while that’s technically correct, cookies seem to be affected by GDPR as well. In a way I’ve underestimated that effect.
When you do a Google search on “GDPR cookies”, you’ll pretty quickly realize that a) there’s not too much information and b) there’s not much technical understanding of the issue.
What appears to be the consensus is that GDPR does change the way cookies are handled. More specifically – tracking cookies. Here’s recital 30:
(30) Natural persons may be associated with online identifiers provided by their devices, applications, tools and protocols, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags. This may leave traces which, in particular when combined with unique identifiers and other information received by the servers, may be used to create profiles of the natural persons and identify them.
How tracking cookies work – a 3rd party (usually an ad network) gives you a code snippet that you place on your website, for example to display ads. That code snippet, however, calls “home” (makes a request to the 3rd party domain). If the 3rd party has previously been used on your computer, it has created a cookie. In the example of Facebook, they have the cookie with your Facebook identifier because you’ve logged in to Facebook. So this cookie (with your identifier) is sent with the request. The request also contains all the details from the page. In effect, you are uniquely identified by an identifier (in the case of Facebook and Google – fully identified, rather than some random anonymous identifier as with other ad networks).
Your behaviour on the website is personal data. It gets associated with your identifier, which in turn is associated with your profile. And all of that is personal data. Who is responsible for collecting the website behaviour data, i.e. who is the “controller”? Is it Facebook (or any other 3rd party) that technically does the collection? No, it’s the website owner, as the behaviour data is obtained on their website, and they have put the tracking piece of code there. So they bear responsibility.
What’s the responsibility? So far it has boiled down to displaying the useless “we use cookies” warning that nobody cares about. And the current (old) ePrivacy directive and its interpretations say that this is enough – if the user’s actions unambiguously mean that they are fine with cookies – i.e. if they continue to use the website after seeing the warning – then you’re fine. This is no longer true from a GDPR perspective – you are collecting user data and you have to have a lawful ground for processing.
For the data collected by tracking cookies you have two options – “consent” and “legitimate interest”. Legitimate interest will be hard to prove – it is not something that a user reasonably expects, it is not necessary for you to provide the service. If your lawyers can get that option to fly, good for them, but I’m not convinced regulators will be happy with that.
The other option is “consent”. You have to ask your users explicitly – that means “with a checkbox” – to let you use tracking cookies. That has two serious implications – from technical and usability point of view.
The technical issue is that the data is sent via 3rd party code as soon as the page loads and before the user can give their consent. And that’s already a violation. You can, of course, have the 3rd party code be dynamically inserted only after the user gives consent, but that will require some fiddling with javascript and might not always work depending on the provider. And you’d have to support opt-out at any time (which would in turn disable the 3rd party snippet). It would require actual coding, rather than just copy-pasting a snippet.
The usability aspect is the bigger issue – while you could neatly tuck a cookie warning at the bottom, you’d now have to have a serious, “stop the world” popup that asks for consent if you want anyone to click it. You can, of course, just add a checkbox to the existing cookie warning, but don’t expect anyone to click it.
These aspects pose a significant question: is it worth it to have tracking cookies? Is developing new functionality worth it, is interrupting the user worth it, and is implementing new functionality just so that users never click a hidden checkbox worth it? Especially given that Firefox now blocks all tracking cookies and other browsers will possibly follow?
That by itself is an interesting topic – Firefox has basically implemented the most strict form of requirements of the upcoming ePrivacy directive update (that would turn it into an ePrivacy regulation). Other browsers will have to follow, even though Google may not be happy to block their own tracking cookies. I hope other browsers follow Firefox in tracking protection and the issue will be gone automatically.
To me it seems that it will increasingly not be worth having tracking cookies on your website. They add regulatory obligations for you and give you very little benefit (yes, you could track engagement from ads, but you can do that in other ways, arguably with less additional code than supporting the cookie consents). And yes, the cookie consent will be “outsourced” to browsers after the ePrivacy regulation is passed, but we can’t be sure at the moment whether there won’t be a technical whack-a-mole between browsers and advertisers and whether you wouldn’t still need additional effort to have dynamic consent for tracking cookies. (For example there are reported issues that Firefox used to make Facebook login fail if tracking protection is enabled. Which could be a simple bug, or could become a strategy by big vendors in the future to force browsers into less strict tracking protection.)
Okay, we’ve decided it’s not worth it managing tracking cookies. But do you have a choice as a website owner? Can you stop your ad network from using them? (Remember – you are liable if users’ data is collected by visiting your website). And currently the answer is no – you can’t disable that. You can’t have “just the ads”. This is part of the “deal” – you get money for the ads you place, but you participate in a big “surveillance” network. Users have a way to opt out (e.g. Google AdWords gives them that option). You, as a website owner, don’t.
Facebook has a recommendations page that says “you take care of getting the consent”. But for example the “like button” plugin doesn’t have an option to not send any data to Facebook.
And sometimes you don’t want to serve ads, just track user behaviour and measure conversion. But even if you ask for consent for that and conditionally insert the plugin/snippet, do you actually know what data it sends? And what it’s used for? Because you have to know in order to inform your users. “Do you agree to use tracking cookies that Facebook has inserted in order to collect data about your behaviour on our website” doesn’t sound compelling.
So, what to do? The easiest thing is just not to use any 3rd party ad-related plugins. But that’s obviously not an option, as ad revenue is important, especially in the publishing industry. I don’t have a good answer, apart from “Regulators should pressure ad networks to provide opt-outs and clearly document their data usage”. They have to do that under GDPR, and while website owners are responsible for their users’ data, the ad networks that are in the role of processors in this case (as you delegate the data collection for your visitors to them) also have obligation to assist you in fulfilling your obligations. So ask Facebook – what should I do with your tracking cookies? And when the regulator comes after a privacy-aware customer files a complaint, you could prove that you’ve tried.
The ethical debate over whether it’s wrong to collect data about people’s behaviour without their informed consent is an easy one. And that’s why I don’t put blame on the regulators – they are putting the ethical consensus in law. It gets more complicated if not allowing tracking means some internet services are no longer profitable and therefore can’t exist. Can we have the cake and eat it too?
The AWS Community Heroes program helps shine a spotlight on some of the innovative work being done by rockstar AWS developers around the globe. Marrying cloud expertise with a passion for community building and education, these Heroes share their time and knowledge across social media and in-person events. Heroes also actively help drive content at Meetups, workshops, and conferences.
This March, we have five Heroes that we’re happy to welcome to our network of cloud innovators:
Peter Sbarski is VP of Engineering at A Cloud Guru and the organizer of Serverlessconf, the world’s first conference dedicated entirely to serverless architectures and technologies. His work at A Cloud Guru allows him to work with, talk and write about serverless architectures, cloud computing, and AWS. He has written a book called Serverless Architectures on AWS and is currently collaborating on another book called Serverless Design Patterns with Tim Wagner and Yochay Kiriaty.
Peter is always happy to talk about cloud computing and AWS, and can be found at conferences and meetups throughout the year. He helps to organize Serverless Meetups in Melbourne and Sydney in Australia, and is always keen to share his experience working on interesting and innovative cloud projects.
Peter’s passions include serverless technologies, event-driven programming, back end architecture, microservices, and orchestration of systems. Peter holds a PhD in Computer Science from Monash University, Australia and can be followed on Twitter, LinkedIn, Medium, and GitHub.
Together with his brother Andreas Wittig, Michael Wittig is actively creating AWS-related content. Their book Amazon Web Services in Action (Manning) introduces AWS with a strong focus on automation. Andreas and Michael run the blog cloudonaut.io where they share their knowledge about AWS with the community. The Wittig brothers have also published a number of video courses with O’Reilly, Manning, Pluralsight, and A Cloud Guru. You can also find them speaking at conferences and user groups in Europe. Both brothers co-organize the AWS user group in Stuttgart.
Fernando is an experienced Infrastructure Solutions Leader, holding 5 AWS Certifications, with extensive IT Architecture and Management experience in a variety of market sectors. Working as a Cloud Architect Consultant in the United Kingdom since 2014, Fernando built an online community for Hispanic speakers worldwide.
Fernando founded a LinkedIn Group, a Slack Community and a YouTube channel all of them named “AWS en Español”, and started to run a monthly webinar via YouTube streaming where different leaders discuss aspects and challenges around AWS Cloud.
During the last 18 months he’s been helping to run and coach AWS User Group leaders across LATAM and Spain, and 10 new User Groups were founded during this time.
Feel free to follow Fernando on Twitter, connect with him on LinkedIn, or join the ever-growing Hispanic Community via Slack, LinkedIn or YouTube.
Anders is a consultant and cloud evangelist at Webstep AS in Norway. He finished his degree in Computer Science at the Norwegian Institute of Technology at about the same time the Internet emerged as a public service. Since then he has been an IT consultant and a passionate advocate of knowledge-sharing.
He architected and implemented his first customer solution on AWS back in 2010, and has been essential in building Webstep’s core cloud team. Anders applies his broad expert knowledge across all layers of the organizational stack. He engages with developers on technology and architectures and with top management where he advises about cloud strategies and new business models.
Anders enjoys helping people increase their understanding of AWS and cloud in general, and holds several AWS certifications. He co-founded and co-organizes the AWS User Groups in the largest cities in Norway (Oslo, Bergen, Trondheim and Stavanger), and also uses any opportunity to engage in events related to AWS and cloud wherever he is.
You can follow him on Twitter or connect with him on LinkedIn.
To learn more about the AWS Community Heroes Program and how to get involved with your local AWS community, click here.
Germany-based Andreas Rottach’s multi-purpose LED table is an impressive build within a gorgeous-looking body. Play games, view (heavily pixelated) images, and become hypnotised by flashy lights, once you’ve built your own using his newly released tutorial.
This is a short presentation of my LED-Matrix Table. The table is controlled by a Raspberry Pi computer that executes a control engine, written in C++. It supports input from keyboards or custom-made game controllers. A full list of all features as well as the source code is available on GitHub (https://github.com/rottaca/LEDTableEngine).
Much excitement
Andreas uploaded a video of his LED Matrix Table to YouTube back in February, with the promise of publishing a complete write-up within the coming weeks. And so the members of Pi Towers sat, eagerly waiting and watching. Now the write-up has arrived, to our cheers of acclaim for this beautiful, shiny, flashy, LED-based wonderment.
Andreas created the table’s impressive light matrix using a strip of 300 LEDs, chained together and connected to the Raspberry Pi via an LED controller.
The LEDs are set out in zigzags
For the code, he used several open-source tools, such as SDL for image and audio support, and CMake for building the project software.
Anyone planning to recreate Andreas’ table can compile its engine by downloading the project repository from GitHub. Again, find full instructions for this on his GitHub.
Features
The table boasts multiple cool features, including games and visualisation tools. Using the controllers, you can play simplified versions of Flappy Bird and Minesweeper, or go on a nostalgia trip with Tetris, Pong, and Snake.
There’s also a version of Conway’s Game of Life. Andreas explains: “The lifespan of each cell is color-coded. If the game field gets static, the animation is automatically reset to a new random cell population.”
The table can also display downsampled bitmap images, or show clear static images such as a chess board, atop which you can place physical game pieces.
Find all the 3D-printable aspects of the LED table on Thingiverse here and here, and the full GitHub tutorial and repository here. If you build your own, or have already dabbled in LED tables and displays, be sure to share your project with us, either in the comments below or via our social media accounts. What other functions would you integrate into this awesome build?
This post was written in partnership with Intuit to share learnings, best practices, and recommendations for running an Apache Kafka cluster on AWS. Thanks to Vaishak Suresh and his colleagues at Intuit for their contribution and support.
Intuit, in their own words: Intuit, a leading enterprise customer for AWS, is a creator of business and financial management solutions. For more information on how Intuit partners with AWS, see our previous blog post, Real-time Stream Processing Using Apache Spark Streaming and Apache Kafka on AWS. Apache Kafka is an open-source, distributed streaming platform that enables you to build real-time streaming applications.
The best practices described in this post are based on our experience in running and operating large-scale Kafka clusters on AWS for more than two years. Our intent for this post is to help AWS customers who are currently running Kafka on AWS, and also customers who are considering migrating on-premises Kafka deployments to AWS.
Running your Kafka deployment on Amazon EC2 provides a high performance, scalable solution for ingesting streaming data. AWS offers many different instance types and storage option combinations for Kafka deployments. However, given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.
In this blog post, we cover the following aspects of running Kafka clusters on AWS:
Deployment considerations and patterns
Storage options
Instance types
Networking
Upgrades
Performance tuning
Monitoring
Security
Backup and restore
Note: While implementing Kafka clusters in a production environment, make sure also to consider factors like your number of messages, message size, monitoring, failure handling, and any operational issues.
Deployment considerations and patterns
In this section, we discuss various deployment options available for Kafka on AWS, along with pros and cons of each option. A successful deployment starts with thoughtful consideration of these options. Considering availability, consistency, and operational overhead of the deployment helps when choosing the right option.
Single AWS Region, Three Availability Zones, All Active
One typical deployment pattern (all active) is in a single AWS Region with three Availability Zones (AZs). One Kafka cluster is deployed in each AZ along with Apache ZooKeeper and Kafka producer and consumer instances as shown in the illustration following.
In this pattern, this is the Kafka cluster deployment:
Kafka producers and Kafka cluster are deployed on each AZ.
Data is distributed evenly across three Kafka clusters by using Elastic Load Balancer.
Kafka consumers aggregate data from all three Kafka clusters.
Kafka cluster failover occurs this way:
Mark down all Kafka producers
Stop consumers
Debug and restack Kafka
Restart consumers
Restart Kafka producers
Following are the pros and cons of this pattern.
Pros:
Highly available
Can sustain the failure of two AZs
No message loss during failover
Simple deployment

Cons:
Very high operational overhead:
All changes need to be deployed three times, one for each Kafka cluster
Maintaining and monitoring three Kafka clusters
Maintaining and monitoring three consumer clusters
A restart is required for patching and upgrading brokers in a Kafka cluster. In this approach, a rolling upgrade is done separately for each cluster.
Single Region, Three Availability Zones, Active-Standby
Another typical deployment pattern (active-standby) is in a single AWS Region with a single Kafka cluster and Kafka brokers and Zookeepers distributed across three AZs. Another similar Kafka cluster acts as a standby as shown in the illustration following. You can use Kafka mirroring with MirrorMaker to replicate messages between any two clusters.
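As a rough sketch (the properties file names are placeholders), a MirrorMaker instance replicating the active cluster into the standby could be started like this:
# The consumer config points at the active cluster, the producer config at the standby cluster
bin/kafka-mirror-maker.sh \
    --consumer.config active-cluster-consumer.properties \
    --producer.config standby-cluster-producer.properties \
    --whitelist ".*"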
In this pattern, this is the Kafka cluster deployment:
Kafka producers are deployed on all three AZs.
Only one Kafka cluster is deployed across three AZs (active).
ZooKeeper instances are deployed on each AZ.
Brokers are spread evenly across all three AZs.
Kafka consumers can be deployed across all three AZs.
Standby Kafka producers and a Multi-AZ Kafka cluster are part of the deployment.
Kafka cluster failover occurs this way:
Switch traffic to standby Kafka producers cluster and Kafka cluster.
Restart consumers to consume from standby Kafka cluster.
Following are the pros and cons of this pattern.
Pros:
Less operational overhead when compared to the first option
Only one Kafka cluster to manage and consume data from
Can handle single AZ failures without activating a standby Kafka cluster

Cons:
Added latency due to cross-AZ data transfer among Kafka brokers
For Kafka versions before 0.10, replicas for topic partitions have to be assigned so they’re distributed to the brokers on different AZs (rack-awareness)
The cluster can become unavailable in case of a network glitch, where ZooKeeper does not see Kafka brokers
Possibility of in-transit message loss during failover
Intuit recommends using a single Kafka cluster in one AWS Region, with brokers distributed across three AZs (single region, three AZs). This approach offers strong fault tolerance, because a single failed AZ won’t cause Kafka downtime.
Storage options
There are two options for file storage in Amazon EC2: ephemeral storage (instance store) and Amazon Elastic Block Store (Amazon EBS) volumes.
Ephemeral storage is local to the Amazon EC2 instance. It can provide high IOPS based on the instance type. On the other hand, Amazon EBS volumes offer higher resiliency and you can configure IOPS based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. Your choice of storage is closely related to the type of workload supported by your Kafka cluster.
Kafka provides built-in fault tolerance by replicating data partitions across a configurable number of instances. If a broker fails, you can recover it by fetching all the data from other brokers in the cluster that host the other replicas. Depending on the size of the data transfer, recovery can take time and generate significant network traffic, which in turn affects the cluster’s performance.
The following sections contrast the benefits of using an instance store versus using EBS for storage.
Instance store
Instance storage is recommended for large- and medium-sized Kafka clusters. For a large cluster, read/write traffic is distributed across a high number of brokers, so the loss of a broker has less of an impact. For smaller clusters, however, quick recovery of a failed node matters more, and a failed broker takes longer to recover and generates more network traffic.
Storage-optimized instances like h1, i3, and d2 are an ideal choice for distributed applications like Kafka.
EBS
The primary advantage of using EBS in a Kafka deployment is that it significantly reduces data-transfer traffic when a broker fails or must be replaced. The replacement broker joins the cluster much faster.
Data stored on EBS is persisted in case of an instance failure or termination. The broker’s data stored on an EBS volume remains intact, and you can mount the EBS volume to a new EC2 instance. Most of the replicated data for the replacement broker is already available in the EBS volume and need not be copied over the network from another broker. Only the changes made after the original broker failure need to be transferred across the network. That makes this process much faster.
Intuit chose EBS because of their frequent instance restacking requirements and also other benefits provided by EBS.
Generally, Kafka deployments use a replication factor of three. EBS offers replication within its own service, so Intuit chose a replication factor of two instead of three.
Instance types
The choice of instance types is generally driven by the type of storage required for your streaming applications on a Kafka cluster. If your application requires ephemeral storage, h1, i3, and d2 instances are your best option.
Intuit used r3.xlarge instances for their brokers and r3.large for ZooKeeper, with ST1 (throughput optimized HDD) EBS for their Kafka cluster.
Here are sample benchmark numbers from Intuit tests.
Configuration: r3.xlarge, ST1 EBS, 12 brokers, 12 partitions
Broker bytes (MB/s): aggregate 346.9
If you need EBS storage, then AWS has a newer-generation r4 instance. The r4 instance is superior to R3 in many ways:
It has a faster processor (Broadwell).
EBS is optimized by default.
It features networking based on Elastic Network Adapter (ENA), with up to 10 Gbps on smaller sizes.
Networking
The network plays a very important role in a distributed system like Kafka. A fast and reliable network ensures that nodes can communicate with each other easily. The available network throughput controls the maximum amount of traffic that Kafka can handle. Network throughput, combined with disk storage, is often the governing factor for cluster sizing.
If you expect your cluster to receive high read/write traffic, select an instance type that offers 10-Gb/s performance.
In addition, choose an option that keeps interbroker network traffic on the private subnet while still allowing clients to connect to the brokers. Communication between brokers and clients uses the same network interface and port. For more details, see the documentation about IP addressing for EC2 instances.
If you are deploying in more than one AWS Region, you can connect the two VPCs in the two AWS Regions using cross-region VPC peering. However, be aware of the networking costs associated with cross-AZ deployments.
Upgrades
Kafka has a history of not being backward compatible, but its support of backward compatibility is getting better. During a Kafka upgrade, you should keep your producer and consumer clients on a version equal to or lower than the version you are upgrading from. After the upgrade is finished, you can start using a new protocol version and any new features it supports. There are three upgrade approaches available, discussed following.
Rolling or in-place upgrade
In a rolling or in-place upgrade scenario, upgrade one Kafka broker at a time. Take into consideration the recommendations for doing rolling restarts to avoid downtime for end users.
Downtime upgrade
If you can afford the downtime, you can take your entire cluster down, upgrade each Kafka broker, and then restart the cluster.
Blue/green upgrade
Intuit followed the blue/green deployment model for their workloads, as described following.
If you can afford to create a separate Kafka cluster and upgrade it, we highly recommend the blue/green upgrade scenario. In this scenario, we recommend that you keep your clusters up to date with the latest Kafka version. For additional details on Kafka version upgrades, see the Kafka upgrade documentation.
The following illustration shows a blue/green upgrade.
In this scenario, the upgrade plan works like this:
Create a new Kafka cluster on AWS.
Create a new Kafka producers stack to point to the new Kafka cluster.
Create topics on the new Kafka cluster (see the sketch after these steps).
Test the green deployment end to end (sanity check).
Using Amazon Route 53, point the new Kafka producers stack on AWS to the new green Kafka environment that you have created.
The roll-back plan works like this:
Using Amazon Route 53, point the Kafka producers stack on AWS back to the old (blue) Kafka environment.
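One way to implement step 3, recreating the existing topics on the green cluster, is with Kafka's Java AdminClient. The following is only a sketch: the bootstrap addresses are hypothetical, and it copies only partition counts and replication factors, not topic-level configuration overrides or ACLs.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.admin.TopicDescription;

public class CopyTopicsToGreen {
    public static void main(String[] args) throws Exception {
        Properties blue = new Properties();
        blue.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "blue-broker1:9092");
        Properties green = new Properties();
        green.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "green-broker1:9092");

        try (AdminClient blueAdmin = AdminClient.create(blue);
             AdminClient greenAdmin = AdminClient.create(green)) {
            // Describe every (non-internal) topic on the existing blue cluster
            Set<String> names = blueAdmin.listTopics().names().get();
            Map<String, TopicDescription> descriptions =
                blueAdmin.describeTopics(names).all().get();

            // Recreate each topic on the green cluster with the same
            // partition count and replication factor
            List<NewTopic> newTopics = new ArrayList<>();
            for (TopicDescription d : descriptions.values()) {
                int partitions = d.partitions().size();
                short rf = (short) d.partitions().get(0).replicas().size();
                newTopics.add(new NewTopic(d.name(), partitions, rf));
            }
            greenAdmin.createTopics(newTopics).all().get();
        }
    }
}

Depending on your retention requirements, you might also mirror existing data into the green cluster (for example, with MirrorMaker, discussed in the backup section) before switching producers over.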
You can tune Kafka performance in multiple dimensions. Following are some best practices for performance tuning.
These are some general performance tuning techniques:
If throughput is less than network capacity, try the following (a producer-configuration sketch follows this section):
Add more threads
Increase batch size
Add more producer instances
Add more partitions
To improve latency when acks = -1, increase your num.replica.fetchers value.
For cross-AZ data transfer, tune your buffer settings for sockets and for OS TCP.
Make sure that num.io.threads is greater than the number of disks dedicated to Kafka.
Adjust num.network.threads based on the number of producers plus the number of consumers plus the replication factor.
Your message size affects your network bandwidth. To get higher performance from a Kafka cluster, select an instance type that offers 10 Gb/s performance.
For Java and JVM tuning, try the following:
Minimize GC pauses by using the Oracle JDK, which uses the new G1 garbage-first collector.
Try to keep the Kafka heap size below 4 GB.
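To illustrate the producer-side settings above, here is a minimal sketch of a tuned Java producer; the broker addresses and the specific values are hypothetical starting points rather than recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TunedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Larger batches and a short linger improve throughput at a small latency cost
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 131072);      // 128 KB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // reduces bytes per message on the network
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 67108864); // 64 MB send buffer

        // acks=-1 (all) waits for all in-sync replicas; pair this with the
        // num.replica.fetchers broker setting mentioned above to keep latency down
        props.put(ProducerConfig.ACKS_CONFIG, "-1");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records here
        }
    }
}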
Monitoring
Knowing whether a Kafka cluster is working correctly in a production environment is critical. Sometimes, just knowing that the cluster is up is enough, but Kafka applications have many moving parts to monitor. In fact, it can easily become confusing to understand what’s important to watch and what you can set aside. Items to monitor range from simple metrics about the overall rate of traffic, to producers, consumers, brokers, controller, ZooKeeper, topics, partitions, messages, and so on.
For monitoring, Intuit used several tools, including New Relic, Wavefront, Amazon CloudWatch, and AWS CloudTrail. Our recommended monitoring approach follows.
For system metrics, we recommend that you monitor:
CPU load
Network metrics
File handle usage
Disk space
Disk I/O performance
Garbage collection
ZooKeeper
For producers, we recommend that you monitor:
Batch-size-avg
Compression-rate-avg
Waiting-threads
Buffer-available-bytes
Record-queue-time-max
Record-send-rate
Records-per-request-avg
For consumers, we recommend that you monitor the following (a sketch for reading these client metrics programmatically appears after this list):
Records-lag-max
Bytes-consumed-rate
Records-consumed-rate
Fetch-rate
Fetch-latency-avg
Fetch-latency-max
Records-per-request-avg
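The producer and consumer metrics above are exposed over JMX and are also available programmatically through the Java clients' metrics() method. The following is a minimal sketch (the broker address and the metrics selected are illustrative); the same pattern works with KafkaConsumer.metrics() for the consumer-side metrics:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerMetricsProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Print a few of the recommended producer metrics; in production you would
            // ship these values to your monitoring system instead of printing them
            producer.metrics().forEach((metricName, metric) -> {
                String name = metricName.name();
                if (name.equals("batch-size-avg")
                        || name.equals("record-send-rate")
                        || name.equals("buffer-available-bytes")) {
                    System.out.println(name + " = " + metric.metricValue());
                }
            });
        }
    }
}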
Security
Like most distributed systems, Kafka provides the mechanisms to transfer data with relatively high security across the components involved. Depending on your setup, security might involve different services such as encryption, Kerberos, Transport Layer Security (TLS) certificates, and advanced access control list (ACL) setup in brokers and ZooKeeper. The following tells you more about the Intuit approach. For details on Kafka security not covered in this section, see the Kafka documentation.
Encryption at rest
For EBS-backed EC2 instances, you can enable encryption at rest by using Amazon EBS volumes with encryption enabled. Amazon EBS uses AWS Key Management Service (AWS KMS) for encryption. For more details, see Amazon EBS Encryption in the EBS documentation. For instance store–backed EC2 instances, you can enable encryption at rest by using Amazon EC2 instance store encryption.
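As a rough sketch of provisioning such an encrypted volume with the AWS SDK for Java (the Availability Zone, size, and KMS key alias are placeholders):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateVolumeRequest;
import com.amazonaws.services.ec2.model.CreateVolumeResult;

public class CreateEncryptedBrokerVolume {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        CreateVolumeRequest request = new CreateVolumeRequest()
            .withAvailabilityZone("us-east-1a")          // placeholder AZ
            .withVolumeType("st1")                        // throughput-optimized HDD, as used above
            .withSize(500)                                // size in GiB, placeholder
            .withEncrypted(true)                          // encrypt at rest with AWS KMS
            .withKmsKeyId("alias/kafka-broker-volumes");  // placeholder KMS key alias
        CreateVolumeResult result = ec2.createVolume(request);
        System.out.println("Created encrypted volume: " + result.getVolume().getVolumeId());
    }
}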
Encryption in transit
Kafka uses TLS for client and internode communications.
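On the client side, enabling TLS comes down to a few configuration properties. The following is a minimal sketch; the TLS listener port and truststore path are hypothetical, and it assumes the brokers already expose a TLS listener whose certificates are signed by the CA in the truststore:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class TlsClientConfig {
    public static Properties tlsProperties() {
        Properties props = new Properties();
        // Hypothetical TLS listener port on the brokers
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Truststore holding the CA that signed the broker certificates
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/ssl/kafka.client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "changeit");
        return props;
    }
}

These properties are simply merged into the normal producer or consumer configuration.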
Authentication
Connections to brokers, whether from clients (producers and consumers), from other brokers, or from tools, can be authenticated using either Secure Sockets Layer (SSL) or Simple Authentication and Security Layer (SASL).
Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration.
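If the brokers are configured for Kerberos (GSSAPI over SASL), the client side again comes down to a few properties. This sketch assumes an existing Kerberos principal and keytab; the port, keytab path, and principal are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class KerberosClientConfig {
    public static Properties saslProperties() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "broker1:9094"); // hypothetical SASL_SSL port
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "GSSAPI");
        props.put(SaslConfigs.SASL_KERBEROS_SERVICE_NAME, "kafka");
        // JAAS login using a keytab; the principal and keytab path are placeholders
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "com.sun.security.auth.module.Krb5LoginModule required "
            + "useKeyTab=true storeKey=true "
            + "keyTab=\"/etc/security/keytabs/kafka-client.keytab\" "
            + "principal=\"kafka-client@EXAMPLE.COM\";");
        return props;
    }
}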
Authorization
In Kafka, authorization is pluggable and integration with external authorization services is supported.
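If you use Kafka's built-in ACL authorizer, ACLs can be managed with the kafka-acls command-line tool or programmatically through the Java AdminClient (the API shown here assumes a Kafka 2.0 or later client). The following sketch grants read access on one topic to a hypothetical principal; the broker address, topic, and principal are placeholders:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantTopicRead {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");

        try (AdminClient admin = AdminClient.create(props)) {
            // Allow a hypothetical analytics service to read one topic from any host
            ResourcePattern topic =
                new ResourcePattern(ResourceType.TOPIC, "clickstream-events", PatternType.LITERAL);
            AccessControlEntry allowRead =
                new AccessControlEntry("User:analytics", "*", AclOperation.READ, AclPermissionType.ALLOW);
            admin.createAcls(Collections.singleton(new AclBinding(topic, allowRead))).all().get();
        }
    }
}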
Backup and restore
The type of storage used in your deployment dictates your backup and restore strategy.
The best way to back up a Kafka cluster based on instance storage is to set up a second cluster and replicate messages using MirrorMaker. Kafka’s mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. Depending on your setup and requirements, your backup cluster might be in the same AWS Region as your main cluster or in a different one.
For EBS-based deployments, you can enable automatic snapshots of EBS volumes to back up volumes. To restore, you can easily create new EBS volumes from these snapshots. We recommend storing backup files in Amazon S3.
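As a sketch of the snapshot step with the AWS SDK for Java (the volume ID is a placeholder; in practice you would typically schedule snapshots rather than run them by hand):

import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateSnapshotRequest;
import com.amazonaws.services.ec2.model.CreateSnapshotResult;

public class SnapshotBrokerVolume {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        // vol-0123456789abcdef0 is a placeholder for the broker's EBS data volume ID
        CreateSnapshotRequest request = new CreateSnapshotRequest()
            .withVolumeId("vol-0123456789abcdef0")
            .withDescription("Nightly backup of Kafka broker 1 data volume");
        CreateSnapshotResult result = ec2.createSnapshot(request);
        System.out.println("Started snapshot: " + result.getSnapshot().getSnapshotId());
    }
}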
For more information on how to back up in Kafka, see the Kafka documentation.
Conclusion
In this post, we discussed several patterns for running Kafka in the AWS Cloud. AWS also provides an alternative managed solution, Amazon Kinesis Data Streams: there are no servers to manage or scaling cliffs to worry about, you can scale the size of your streaming pipeline in seconds without downtime, data replication across Availability Zones is automatic, and you benefit from security out of the box. Kinesis Data Streams is tightly integrated with a wide variety of AWS services, such as Lambda, Amazon Redshift, and Elasticsearch, and it supports open-source frameworks like Storm, Spark, and Flink. To move data between Kafka and Kinesis Data Streams, you can refer to the Kafka-Kinesis Connector.
If you have questions or suggestions, please comment below.
Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable big data, machine learning, artificial intelligence, and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies, such as advanced edge computing and machine learning at the edge. In his spare time, he enjoys spending time with his family.