
Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/cloud-storage-pricing-comparison/


So why do many vendors make it so hard to get information about how much you’re storing and how much you’re being charged?

Cloud storage is fast becoming the central repository for mission-critical information, irreplaceable memories, and in some cases entire corporate and personal histories. Given this responsibility, we believe cloud storage vendors have an obligation to be as transparent as possible in how they interact with their customers.

In that light we decided to challenge four cloud storage vendors and ask two simple questions:

  1. Can a customer understand how much data is stored?
  2. Can a customer understand the bill?

The detailed results are below, but if you wish to skip the details and the screen captures (TL;DR), we’ve summarized the results in the table below.

Summary of Cloud Storage Pricing Test

Our challenge was to upload 1 terabyte of data, store it for one month, and then download it.

Backblaze B2
  Visibility to Data Stored: Accurate, intuitive display of storage information.
  Easy to Understand Bill: Available on demand, and the site clearly defines what has and will be charged for.
  Cost: $25

Microsoft Azure
  Visibility to Data Stored: Storage is measured in KiB but billed by the GB. Even with a calculator, it is unclear how much storage we are using.
  Easy to Understand Bill: Available, but difficult to find. The nearly 30-day lag in billing creates business and accounting challenges.
  Cost: $72

Amazon S3
  Visibility to Data Stored: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.
  Easy to Understand Bill: Available on demand. While there are some line items that seem unnecessary for our test, the bill is generally straightforward to understand.
  Cost: $71

Google Cloud Storage
  Visibility to Data Stored: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.
  Easy to Understand Bill: Available, but provides descriptions in units that are not on the pricing table nor commonly used.
  Cost: $100

Cloud Storage Test Details

For our tests, we chose Backblaze B2, Microsoft’s Azure, Amazon’s S3, and Google Cloud Storage. Our idea was simple: upload 1 TB of data to the comparable service for each vendor, store it for 1 month, download that 1 TB, then document and share the results.

Let’s start with the most obvious observation, the cost charged by each vendor for the test:

Cost
Backblaze B2: $25
Microsoft Azure: $72
Amazon S3: $71
Google Cloud Storage: $100

Later in this post, we’ll see if we can determine the different cost components (storage, downloading, transactions, etc.) for each vendor, but our first step is to see if we can determine how much data we stored. In some cases, the answer is not as obvious as it would seem.

Test 1: Can a Customer Understand How Much Data Is Stored?

At the core, a provider of a service ought to be able to tell a customer how much of the service he or she is using. In this case, one might assume that providers of Cloud Storage would be able to tell customers how much data is being stored at any given moment. It turns out, it’s not that simple.

Backblaze B2
Logging into a Backblaze B2 account, one is presented with a summary screen that displays all “buckets.” Each bucket displays key summary information, including data currently stored.

B2 Cloud Storage Buckets screenshot

Clicking into a given bucket, one can browse individual files. Each file displays its size, and multiple files can be selected to create a size summary.

B2 file tree screenshot

Summary: Accurate, intuitive display of storage information.

Microsoft Azure

Moving on to Microsoft’s Azure, things get a little more “exciting.” There was no area that we could find where one can determine the total amount of data, in GB, stored with Azure.

There’s an area entitled “usage,” but that wasn’t helpful.

Microsoft Azure cloud storage screenshot

We then moved on to “Overview,” but had a couple of challenges. The first issue was that we were presented with KiB (kibibyte) as a unit of measure. One GB (the unit of measure used in Azure’s pricing table) equates to roughly 976,563 KiB. It struck us as odd that things would be summarized by a unit of measure different from the billing unit of measure.

Microsoft Azure usage dashboard screenshot

Summary: Storage is being measured in KiB, but is billed by the GB. Even with a calculator, it is unclear how much storage we are using.

Amazon S3

Next we checked on the data we were storing in S3. We again ran into problems.

In the bucket overview, we were able to identify our buckets. However, we could not tell how much data was being stored.

Amazon S3 cloud storage buckets screenshot

Drilling into a bucket, the detail view does tell us file size. However, there was no method for summarizing the data stored within that bucket or for multiple files.

Amazon S3 cloud storage buckets usage screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Google Cloud Storage (“GCS”)

GCS proved to have its own quirks, as well.

One can easily find the “bucket” summary; however, it does not provide information on data stored.

Google Cloud Storage Bucket screenshot

Clicking into the bucket, one can see files and the size of an individual file. However, no way to see the total data stored is provided.

Google Cloud Storage bucket files screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Test 1 Conclusions

We knew how much storage we were uploading and, in many cases, the user will have some sense of the amount of data they are uploading. However, it strikes us as odd that many vendors won’t tell you how much data you have stored. Even stranger are the vendors that provide reporting in a unit of measure that is different from the units in their pricing table.

Test 2: Can a Customer Understand The Bill?

The cloud storage industry has done itself no favors with its tiered pricing that requires a calculator to figure out what’s going on. Setting that aside for a moment, one would presume that bills would be created in clear, auditable ways.

Backblaze

Inside of the Backblaze user interface, one finds a navigation link entitled “Billing.” Clicking on that, the user is presented with line items for previous bills, payments, and an estimate for the upcoming charges.

Backblaze B2 billing screenshot

One can expand any given row to see the line item transactions composing each bill.

Backblaze B2 billing details screenshot

Summary: Available on demand, and the site clearly defines what has and will be charged for.

Azure

Trying to understand the Azure billing proved to be a bit tricky.

On August 6th, we logged into the billing console and were presented with this screen.

Microsoft Azure billing screenshot

As you can see, on Aug 6th, billing for the period of May-June was not available for download. For the period ending June 26th, we were charged nearly a month later, on July 24th. Clicking into that row item does display line item information.

Microsoft Azure cloud storage billing details screenshot

Summary: Available, but difficult to find. The nearly 30 day lag in billing creates business and accounting challenges.

Amazon S3

Amazon presents a clean billing summary and enables users to “drill down” into line items.

Going to the billing area of AWS, one can survey various monthly bills and is presented with a clean summary of billing charges.

AWS billing screenshot

Expanding into the billing detail, Amazon articulates each line item charge. Within each line item, charges are broken out into sub-line items for the different tiers of pricing.

AWS billing details screenshot

Summary: Available on demand. While there are some line items that seem unnecessary for our test, the bill is generally straightforward to understand.

Google Cloud Storage (“GCS”)

This was an area where the GCS User Interface, which was otherwise relatively intuitive, became confusing.

Going to the Billing Overview page did not offer much in the way of an overview on charges.

Google Cloud Storage billing screenshot

Moving down to the “Transactions” section, however, did provide line item detail on all the charges incurred. But similar to Azure introducing the concept of KiB, Google introduces the concept of the equally confusing gibibyte (GiB). While all of Google’s pricing tables are listed in terms of GB, the line items reference GiB. 1 GiB is roughly 1.07374 GB.

Google Cloud Storage billing details screenshot
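Translating such line items back into the billed unit requires conversions like the following minimal Java sketch (the sample figures are illustrative, chosen to roughly match a 1 TB test):

import static java.lang.System.out;

public class StorageUnits {
    // Binary units, as displayed by Azure (KiB) and Google (GiB)
    static final double BYTES_PER_KIB = 1024d;
    static final double BYTES_PER_GIB = 1024d * 1024d * 1024d;
    // Decimal unit, as used in the vendors' pricing tables
    static final double BYTES_PER_GB = 1_000_000_000d;

    public static void main(String[] args) {
        double kibShown = 976_563_000d; // a KiB figure like Azure's overview shows
        double gibShown = 1000d;        // a GiB figure like a GCS line item shows
        out.printf("%.2f GB%n", kibShown * BYTES_PER_KIB / BYTES_PER_GB); // ~1000.00 GB
        out.printf("%.2f GB%n", gibShown * BYTES_PER_GIB / BYTES_PER_GB); // ~1073.74 GB
    }
}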

Summary: Available, but provides descriptions in units that are not on the pricing table nor commonly used.

Test 2 Conclusions

Clearly, some vendors do a better job than others in making their pricing available and understandable. From a transparency standpoint, it’s difficult to justify why a vendor would have their pricing table in units of X, but then put units of Y in the user interface.

Transparency: The Backblaze Way

Transparency isn’t easy. At Backblaze, we believe in investing time and energy into presenting the most intuitive user interfaces that we can create. We take pride in our heritage in the consumer backup space — servicing consumers has taught us how to make things understandable and usable. We do our best to apply those lessons to everything we do.

This philosophy reflects our desire to make our products usable, but it’s also part of a larger ethos of being transparent with our customers. We are being trusted with precious data. We want to repay that trust with, among other things, transparency.

It’s that spirit that was behind the decision to publish our hard drive performance stats, to open source the infrastructure that is behind us having the lowest cost of storage in the industry, and also to open source our erasure coding (the math that drives a significant portion of our redundancy for your data).

Why? We believe it’s not just about good user interface, it’s about the relationship we want to build with our customers.

The post Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Five Must-Watch Software Engineering Talks

Post Syndicated from Bozho original https://techblog.bozho.net/five-must-watch-software-engineering-talks/

We’ve all watched dozens of talks online. And we probably don’t remember many of them. But some do stick in our heads and we eventually watch them again (and again) because we know they are good and we want to remember the things that were said there. So I decided to compile a small list of talks that I find very insightful, useful and that have, in a way, shaped my software engineering practice or expanded my understanding of the software world.

1. How To Design A Good API and Why it Matters by Joshua Bloch — this is a must-watch (well, obviously all are). And don’t skip it because “you are not writing APIs” — everyone is writing APIs. Maybe not used by hundreds of other developers, but used by at least several, and that’s a good enough reason. Having watched this talk I ended up buying and reading one of the few software books that I have actually read end-to-end — “Effective Java” (the talk uses Java as an example, but the principles aren’t limited to Java).

2. How to write clean, testable code by Miško Hevery. Maybe there are tons of talks about testing code, and maybe Uncle Bob has a more popular one, but I found this one particularly practical and to the point — writing testable code is a skill, and testable code is good code. (By the way, the speaker went on to write AngularJS.)

3. Back to basics: the mess we’ve made of our fundamental data types by Jon Skeet. The title says it all, and it’s nice to be reminded of how fragile even the basics of programming languages are.

4. The Danger of Software Patents by Richard Stallman. That goes a little bit away from writing software, but puts software in a legal context — how legislation loopholes affect code reuse and the business practices related to it. It’s a bit long, but I think worth it.

5. Does my ESB look big in this? by Martin Fowler and Jim Webber. It’s about bloated enterprise architecture and how to actually do enterprise architecture without complex and expensive middleware. (Unfortunately it’s not on YouTube, so no embedding).

Although this is not a “ranking”, I’d like to add a few honourable mentions: the famous “WAT” lightning talk, showing some quirks of Ruby and JavaScript; “The future of programming” by Bret Victor; and “You suck at Excel” by Joel Spolsky, which isn’t really about creating software, but it’s cool. And a tiny shameless plug with my “Common sense driven development” talk.

I hope the compilation is useful and enlightening. Enjoy.

The post Five Must-Watch Software Engineering Talks appeared first on Bozho's tech blog.

Amazon QuickSight Now Supports Search, Filter Groups, and Amazon S3 Analytics Connector

Post Syndicated from Luis Wang original https://aws.amazon.com/blogs/big-data/amazon-quicksight-now-supports-search-filter-groups-and-amazon-s3-analytics-connector/

Today, I’m excited to share information about some new features in Amazon QuickSight. First, you can now search for datasets, analyses, and dashboards in Amazon QuickSight using the unified search box, making it faster and easier to find and access your data. Next, you can now create filter groups with multiple filter conditions that are evaluated together using the OR operation. Finally, you can now use the built-in Amazon S3 analytics connector to visualize your S3 storage access patterns across multiple S3 buckets and configurations within a single Amazon QuickSight dashboard to optimize for cost.

Search

You can now easily and quickly find and access your datasets, analyses, and dashboards using the unified search box in Amazon QuickSight. Type in what you’re looking for and you get a list of all matches in a unified view. From there, you can take actions such as creating an analysis from a dataset, modifying a dataset, or accessing an analysis or dashboard.

Filter groups

Filters are one of the most important features in Amazon QuickSight. Before this release, you could create multiple filters that were evaluated using the AND operation. With this release, you can now create multiple filters that are evaluated using the OR operation. This provides you with the flexibility to apply more complex filters to your data and visualizations. For example, you can create a chart that shows customers who have spent less than $100 OR made three or more purchases.

Amazon S3 analytics connector

In July, AWS introduced the ability to analyze and visualize your Amazon S3 storage access patterns using Amazon QuickSight in one click from the S3 console. Today, AWS released a dedicated S3 analytics connector in Amazon QuickSight. This connector allows you to import S3 analytics data for different buckets and configurations into a single Amazon QuickSight dataset. With this dataset, you can then create analyses and dashboards that track all of your S3 usage patterns in a single view.

Learn more

To learn more about these capabilities and start using them in your dashboards, see the Amazon QuickSight User Guide.

Stay engaged

If you have questions or suggestions, you can post them on the Amazon QuickSight discussion forum.

Not an Amazon QuickSight user?

To get started for FREE, see quicksight.aws.

 

Stubbing Key-Value Stores

Post Syndicated from Bozho original https://techblog.bozho.net/stubbing-key-value-stores/

Every project that has a database faces a dilemma: how to test database-dependent code. There are several options (not mutually exclusive):

  • Use mocks – use only unit tests and mock the data-access layer, assuming the DAO-to-database communication works
  • Use an embedded database that each test starts and shuts down. This can also be viewed as unit-testing
  • Use a real database deployed somewhere (either locally or on a test environment). The hard part is making sure it’s always in a clean state.
  • Use end-to-end/functional/BDD/UI tests after deploying the application on a test server (which has a proper database).

None of the above is without problems. Unit tests with mocked DAOs can’t really test more complex interactions that rely on a database state. Embedded databases are not always available (e.g. if you are using a non-relational database, or if you rely on RDBMS-specific functionality, HSQLDB won’t do), or they can be slow to start, so your tests may take too long to run. A real database installation complicates setup, and keeping it clean is not always easy. The coverage of end-to-end tests can’t be easily measured, and they don’t necessarily cover all the edge cases, as they are harder to maintain than unit and integration tests.

I’ve recently tried a strange approach that is working pretty well so far – stubbing the database. It is applicable more to key-value stores and less to relational databases.

In my case, even though there is an embedded Cassandra, it was slow to start, wasn’t easy to set up, and had subtle issues. That’s why I replaced the whole thing with an in-memory ConcurrentHashMap.

Since I’m using spring-data-cassandra, I just extended the CassandraTemplate class, implemented all the methods in the new StubCassandraTemplate, and used it instead of the regular one in the test spring context. The stub can support all the key/value operations pretty easily, and you can have somewhat more complicated integration tests (it’s not a good idea to have very complicated tests, of course, but unit tests can either be too simple or too reliant on a lot of mocks). Here’s an excerpt from the code:

import java.lang.reflect.Field;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.commons.lang3.reflect.FieldUtils;
import org.springframework.data.cassandra.core.CassandraTemplate;
import org.springframework.data.cassandra.mapping.PrimaryKey;
import org.springframework.stereotype.Component;

@Component("cassandraTemplate")
public class StubCassandraTemplate extends CassandraTemplate {

    // In-memory "tables": entity class -> (primary key value -> entity)
    private final Map<Class<?>, Map<Object, Object>> data = new ConcurrentHashMap<>();

    @Override
    public void afterPropertiesSet() {
        // no validation (and no real cluster to connect to)
    }

    @Override
    public <T> T insert(T entity) {
        // Use the field annotated with @PrimaryKey as the map key
        List<Field> pk = FieldUtils.getFieldsListWithAnnotation(entity.getClass(), PrimaryKey.class);
        initializeClass(entity.getClass());
        try {
            pk.get(0).setAccessible(true);
            data.get(entity.getClass()).put(pk.get(0).get(entity), entity);
            return entity; // mirror CassandraTemplate's contract of returning the inserted entity
        } catch (IllegalAccessException e) {
            throw new IllegalArgumentException(e);
        }
    }

    private void initializeClass(Class<?> clazz) {
        // Lazily create the per-entity "table"
        data.computeIfAbsent(clazz, c -> new ConcurrentHashMap<>());
    }
....
}

Cassandra supports some advanced features like CQL (query language), which isn’t as easy to stub as key-value operations like get and put, but in fact it is not that hard. Especially if you do not rely on complicated where clauses (and this is a bad practice in Cassandra anyway), it’s easy to parse the query with regex and find the appropriate entries in the ConcurrentHashMap.
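As an illustration, here is a hypothetical addition to the stub above that handles only queries of the form SELECT * FROM <table> WHERE <column> = '<value>' (plus the obvious extra imports from java.util and java.util.regex):

private static final Pattern SIMPLE_SELECT =
        Pattern.compile("SELECT \\* FROM (\\w+) WHERE (\\w+) = '([^']*)'", Pattern.CASE_INSENSITIVE);

// Hypothetical helper: evaluate a simple CQL SELECT against the in-memory map
public <T> List<T> selectSimple(String cql, Class<T> entityClass) {
    Matcher m = SIMPLE_SELECT.matcher(cql.trim());
    if (!m.matches()) {
        throw new UnsupportedOperationException("Stub cannot parse: " + cql);
    }
    String column = m.group(2);
    String value = m.group(3);
    List<T> result = new ArrayList<>();
    // Scan the stubbed "table" and keep entities whose column matches the value
    for (Object entity : data.getOrDefault(entityClass, Collections.emptyMap()).values()) {
        try {
            Field field = FieldUtils.getField(entity.getClass(), column, true);
            if (value.equals(String.valueOf(field.get(entity)))) {
                result.add(entityClass.cast(entity));
            }
        } catch (IllegalAccessException e) {
            throw new IllegalArgumentException(e);
        }
    }
    return result;
}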

Key-value stores are a good candidate for this approach, as their main advantage – being easy to scale horizontally – is not needed in an integration test scenario. You simply need to verify that your code correctly handles interactions with the database in terms of what it puts there and what it gets back. The exact implementation of that interaction – whether it’s in-memory or using a binary protocol, may be viewed as out of scope.
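For example, an integration test over the stub might look like this sketch (the Visit entity, VisitService and TestConfig are hypothetical, the test context is assumed to wire in StubCassandraTemplate, and the stub is assumed to also override selectOneById):

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = TestConfig.class) // hypothetical config that registers the stub
public class VisitServiceTest {

    @Autowired
    private VisitService visitService; // hypothetical code under test

    @Autowired
    private CassandraTemplate cassandraTemplate; // actually the StubCassandraTemplate

    @Test
    public void storesVisitWithNormalizedUrl() {
        visitService.recordVisit("HTTP://EXAMPLE.COM/page");
        // Verify what the code under test actually put into the "database"
        Visit stored = cassandraTemplate.selectOneById(Visit.class, "example.com/page");
        assertNotNull(stored);
    }
}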

Note that these tests do not guarantee that the application will work with a real database. They only guarantee that it will behave properly if the database behaves the same way as an in-memory key-value data structure. Which is normally the assumption, but isn’t always true – e.g. the database can impose additional constraints that your stub implementation doesn’t have. Cassandra, for example, doesn’t allow WHERE queries for non-indexed columns. If you don’t take that into account, obviously, your test will pass, but your application will break.

That’s why you’d still need end-to-end tests and possibly some real integration tests, but you can cover most of the code with a simple in-memory stub and only do some “sanity” full integration tests.

This doesn’t mean you should always stub your database, but it’s a good option in your testing toolbox to consider.

The post Stubbing Key-Value Stores appeared first on Bozho's tech blog.

We Are Not Having a Productive Debate About Women in Tech

Post Syndicated from Bozho original https://techblog.bozho.net/not-productive-debate-women-tech/

Yes, it’s about the “anti-diversity memo”. But I won’t go into the particular details of the memo, the firing, who’s right and wrong, who’s liberal and who’s conservative. Actually, I don’t need to repeat this post, which states almost exactly what I think about the particular issue. Just in case, and before someone decides to label me as a “sexist white male” who knows nothing, I should clearly state that I acknowledge that biases against women are real, that I strongly support equal opportunity, and that I think there must be more women in technology. I also have to state that I think the author of “the memo” was well-meaning, had some well-argued, research-backed points, and should not be ostracized.

But I want to “rant” about the quality of the debate. On one side we have conservatives who are throwing themselves in defense of the fired googler, insisting that liberals are banning conservative points of view, that it is normal to have so few women in tech and that everything is actually okay, or even that women are inferior. On the other side we have triggered liberals who are ready to shout “discrimination” and “harassment” at anything that resembles an attempt to claim anything other than total and absolute equality, in many cases using a classical “strawman” argument (e.g. “he’s saying women should not work in tech, he’s obviously wrong”).

Everyone seems too eager to take sides and issue a verdict on who’s right and who’s wrong, to blame the other side for all related and unrelated woes, and, while doing that, to exhibit a huge amount of bias. If the debate is about that, we’d better shut it down as soon as possible, as it’s not going to lead anywhere, no matter how much conservatives want “a debate” and no matter how much liberals want to advance equality. Oh, and by the way – this “conservatives” vs “liberals” framing is a false dichotomy. Most people hold a somewhat sensible stance in between. But let’s get to the actual issue:

Women are underrepresented in STEM (Science, technology, engineering, mathematics). That is a fact everyone agrees on and is blatantly obvious when you walk in any software company office.

Why is that the case? The whole debate revolved around biological and social differences, some of which are probably even true – that women value job flexibility more than being promoted or getting a higher salary, that they are more neurotic (on average), that they are less confident, that they are more empathic, and so on. These differences have been studied and documented, and as much as I have my reservations about psychology studies (so much so that even meta-analyses are shown by meta-meta-analyses to be flawed) and social science in general, there seems to be a consensus there (by the way, it’s a shame that Gizmodo removed all the scientific references when they first published “the memo”). But that is not the issue. As it has been pointed out, there’s equal applicability of male and female “inherent” traits when working with technology.

Why are we talking about “technology”, and why not “mining and construction”, as many will point out? Let’s cut that argument once and for all – mining and construction are blue-collar jobs that have a high chance of being automated in the near future and are in decline. The problem we’re trying to solve is: how to make the dominant profession of the future – information technology – one of equal opportunity. Yes, it’s a bold claim, but software is going to be everywhere and the industry will grow. This is why it’s so important to discuss it, not just because we are developers and are somewhat affected by it.

So, there has been extensive research on the matter, and the reasons are – surprise – complex and intertwined, and there is no single issue that, once resolved, will unlock the path of women to tech jobs.

What would diversity give us and why should we care? Let’s assume for a moment we don’t care about equal opportunity and we are right-leaning, conservative people. Well, imagine you have a growing business and you need to hire developers. What would you prefer – having fewer or more people of whom to choose from? Having fewer or more diverse skills (technical and social) on the job market? The answer is obvious. The more people, regardless of their gender, race, whatever, are on the job market, the better for businesses.

So I guess we’ve agreed on the two points so far – that women are underrepresented, and that it’s better for everyone if there are more people with technical skills on the job market, which includes more women.

The “final” question is – how?

And this question seems to be nowhere in the discussion. Instead, we are going in circles with irrelevant arguments, trying to show either that we’ve read more scientific papers than others, that we are more liberal than others, or that we are more pro free speech.

Back to “how” – in Bulgaria we have a social meme: “I don’t know what the right way is, but the way you are doing it is NOT the right way”. And much of the underlying sentiment of “the memo” is similar – that Google should stop doing some of the things it is doing about diversity, or do them differently (but it doesn’t tell us how exactly). Hiring biases, internal programs, whatever, seem to bother the author. But this is just the surface of the problem. These programs are correcting something that remains hidden in “the memo”.

Google, on their diversity page, say that 20% of their tech employees are women. At the same time, in another diversity section, they claim “18% of CS graduates are women”. So, I guess, job done – they’ve reached the maximum possible diversity. They’ve hired as many women in tech as there are CS graduates. Anything more than that, even if it doesn’t mean they’ll hire worse developers, will leave the rest of the industry with fewer women. So, sure, 50/50 in Google would sound cool, but the industry average would still be bad.

And that’s the actual, underlying reason that we should have already arrived at, from which we should’ve started discussing the “how”: girls do not see STEM as a thing for them. Our biases are projected onto young girls and culminate in a “this is not for girls” mantra. No matter how diverse our hiring policies are, if we don’t address the issue at a much earlier stage, we aren’t getting anywhere.

In schools and even kindergartens we need an inclusive environment where “this is not for girls” is frowned upon. We should not discourage girls from liking math, or make math sound uncool and “hard for girls” (in my biased world I actually know more women mathematicians than men). This comic seems to be on a different topic (gender-specific toys), but it’s actually not about toys – it’s about what is considered (stereo)typical for a girl to do. And most of these biases are unconscious, coming from all around us (school, TV, outdoor ads, people on the street, relatives, etc.), and it takes effort to confront them.

To do that, we need policy decisions. We need to lobby education departments and ministries to encourage girls more in the STEM direction (and don’t worry, they’ll be good at it). By the way, guess what – Google’s diversity program is not just about hiring more women; it actually includes education policies with stuff like “influencing perception about computer science”, “getting more girls to code” and scholarships.

Let’s discuss the education policies, the path to getting 40-50% of CS graduates to be female, and before that – more girls in schools with a technical focus, and ultimately – how to get society to stop perceiving technology and science as “not for girls”. Let each girl decide on her own. All the other debates are short-sighted and beside the point. Will biological differences matter then? They probably will – but not enough to justify a high gender imbalance.

I am no expert in education policies and I don’t know what will work and what won’t. There is research on the matter that we should look at, and maybe argue about it. Everything else is wasted keystrokes.

The post We Are Not Having a Productive Debate About Women in Tech appeared first on Bozho's tech blog.

Ethereum, Proof-of-Stake… and the consequences

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2070

For those who aren’t cryptocurrency-savvy: Ethereum is a cryptocurrency project, based around the coin Ether. It has the support of many big banks, big hedge funds and some states (Russia, China, etc.). Among the cryptocurrencies, it is second only to Bitcoin – and might even overtake it in time. (Especially if Bitcoin doesn’t finally move and fix some of its problems.)

Ethereum offers some abilities that few other cryptocurrencies do. The most important one is the support for “smart contracts” – a kind of electronic contract that can easily be executed and enforced with little to no human participation. This post however is dedicated to another of its traits – Proof of Stake.

To work and exist, every cryptocurrency depends on some proof. Most of them use a Proof-of-Work scheme. In it, one has to put some work – e.g. calculating checksums – behind one’s participation in the network and its decisions, and receives newly generated coins for it. This however results in a huge amount of work done only to prove that, well, you can do it and deserve to be in and receive some of the newly squeezed juice.

As of August 2017, Ethereum uses this scheme too. However, they plan to switch to a Proof-of-Stake algorithm named Casper. In it, you prove yourself not by doing work, but by proving to own Ether. As this requires practically no work, it is much more technically effective than the Proof-of-Work schemes.

Technically, Casper is an amazing design. I congratulate the Ethereum team for it. However, economically its usage appears to have an important weakness. It is described below.

—-

A polarized system

With Casper, the Ether generated by the Ethereum network and the decision power in it are distributed to those who already own Ether. As a consequence, most of both go to those who own the most Ether. (There might be attempts to limit that, but these are easily defeatable. For example, limiting the amount distributed to an address can be circumvented by a Sybil attack – splitting one large holding across many addresses.)

Such a distribution will, over time, create a financial ecosystem where most of the money and votes are held by a small minority of the participants. The big majority will have little to none of either – in sum, it will hold less money and fewer votes than the minority of “haves”. Given the speed with which cryptocurrency systems evolve, it is realistic to expect this development within ten, maybe even five or fewer years after introducing Casper.

The “middle class”

Economists love to repeat how important it is to have a strong middle class. Why, and how does that translate to the situation in a cryptocurrency-based financial system?

In systemic terms, the “middle class” of a financial system is the set of entities that each control a noticeable but not very big amount of resources.

Game theory shows that in a financial system, entities with different clout usually have different interests. These interests usually reflect the amount of resources they control. Entities with little to no resources tend to have interests opposed to those of the entities with the biggest resources – especially in systems where the total amount of resources changes slowly and the economics is close to a zero-sum game (for example, most cryptocurrency systems). The interests of the “middle class” entities are in most aspects in the middle.

For an economy to work, there must be a balance of interests that creates an incentive for all of its members to participate. In financial systems, where the “haves’” interests are mostly opposed to the “have-nots’” interests, creating such a balance depends on the presence and influence of a “middle class”. Its interests are usually the closest to a compromise that satisfies all, and its influence is the key to achieving that compromise within the system.

If the system state is not acceptable for all entities, these who do not accept it eventually leave. (Usually their participation is required for the system survival, so this brings the system down.) If these entities cannot leave the system, they ultimately reject its rules and try to change it by force. If that is impossible too, they usually resort to denying the system what makes them useful for it, thus decreasing its competitiveness to other systems.

The most reliable way to have acceptable compromise enforced in a system is to have in it a “middle class” that summarily controls more resources than any other segment of entities, preferably at least 51% of the system resources. (This assumes that the “middle class” is able and willing to protect their interests. If some of these entities are controlled into defending someone else’s interests – eg. botnets in computer networks, manipulated voters during elections, etc – these numbers apply to the non-controlled among them.)

A system that doesn’t have a non-controlled “middle class” that controls a decisive amount of resources, usually does not have an influential set of interests that are an acceptable compromise between the interests poles. For this reason, it can be called a polarized system.

The limitation on development

In a polarized system, the incentive for development is minimized. (Development is potentially disruptive, and those who hold the majority of the financial ability and decision power there have only to lose from a disruption. When factoring in the expected profits from development, the situation always becomes a zero-sum game.) The system becomes static (thus cementing the zero-sum game situation in it) and is under threat of being overtaken by a competing financial system. When that happens, it is usually destroyed together with all stakes in it.

Also, almost any initiative in such a financial system is bound to turn into a cartel, oligopoly or monopoly, due to the small number of participants with resources to start and support an initiative. That effectively destroys its markets, contributing to the weakness of the system and limiting further its ability to develop.

Another problem that stems from this is that the incentive during an interaction to violate the rules and push the counterparty into a loss is greater than the incentive to compete by giving a better offer. This in turn removes the incentive to increase productivity, which is a key driver of development.

Yet another problem of the concentration of most resources into few entities is the increased gain from attacking one of them and appropriating their resources, and thus the incentive to do it. Since good defensive capabilities are usually an excellent offense base, this pulls the “haves” into an “arms race”, redirecting more and more of their resources into defense. This also leaves the development outside the arms race increasingly resource-strapped. (The “arms race” itself generates development, but the race situation prevents that into trickling into “non-military” applications.)

These are only a part of the constraints on development in a polarized system. Listing all of them will make a long read.

Trickle-up and trickle-down

In theory, every economic system involves two processes: trickle-down and trickle-up. So, any concentration of resources at the top should be decreased by an automatically increased trickle-down. However, a better understanding of how these processes work shows that this logic is faulty.

Any financial exchange in a system consists of two parts. One of them covers the actual production cost of whatever resource is being exchanged against the finances. The other part is the profit of the entity that obtains the finances. From the viewpoint of that entity, the first part vs. the resource given is zero-sum – its incentive to participate in this exchange is the second part, the profit. That second part is effectively the trickle in the system, as it is the only resource really gained.

The direction and the size of the trickle ultimately depends on the balance of many factors, some of them random, others constant. On the long run, it is the constant factors that determine the size and the direction of the trickle sum.

The most important constant factor is the benefit of scale (BOS). It dictates that bigger entities are able to pull the balance to their side more strongly than smaller ones. Some miss that chance, but others use it. It makes the trickle-up stronger than the trickle-down. In a system where the transaction outcome is close to a zero-sum game, this concentrates all resources at the top with a speed depending on the volume of financial interactions per unit of time.

(Actually the formula is a bit more complex. All dynamic entities – eg. living organisms, active companies etc – have an “existence maintenance” expense, which they cannot avoid. However, the amount of resources in a system above the summary existence maintenance follows the simple rule above. And these are the only resources that are available for investing in anything, eg. development.)

In real-life systems the BOS power is limited. There are many different random factors that compete with and influence one another, some of them outweighing BOS. Also, in every moment some factors lose importance and/or cease to exist, while others appear and/or gain importance. The complexity of this system makes any attempt by an entity or pool of entities to take control over it hard and slow. This gives the other entities time and ways to react and try to block the takeover attempt. Also, real-life systems have many built-in constraints against scale-based takeovers – anti-trust laws, separation of government powers, enforced financial trickle-down through taxes on the rich and benefits for the poor, etc. All these together manage to prevent most takeover attempts, or to limit them to only a segment of the system.

How does a Proof-of-Stake based cryptocurrency fare at these?

A POS-based cryptocurrency financial system has no constraints against scale-based takeovers. It has only one kind of clout – the amount of resources controlled by an entity. This kind of clout is built into it, has all the importance in it, and cannot lose that or disappear. It has no other types of resources, and no slowing due to complexity. It is not segmented – whoever has these resources has it all. There are no built-in constraints against scale-based takeovers, or mechanisms to strengthen resource trickle-down. In short, it is the ideal ground for creating a polarized financial system.

So, it would be only logical to expect that a Proof-of-Stake based Ether financial system will suffer from the problems a polarized system presents. Despite all of its technical ingenuity, its longer-term financial usability is limited, and participation in it may be dangerous to any entity smaller than, e.g., a big bank, a big hedge fund or a big authoritarian state.

All fixes for this problem I could think of by now would be easily beaten by simple attacks. I am not sure if it is possible to have a reliable solution to it at all.

Do smart contracts and secondary tokens change this?

Unhappily, no. Smart contracts are based on having Ether, and need Ether to exist and act. Thus, they are bound to the financial situation of the Ether financial system, and are influenced by it. The bigger is the scope of the smart contract, the bigger is its dependence on the Ether situation.

Due to this, smart contracts of meaningful size will find themselves hampered and maybe even endangered by a polarization in the financial system powered by POS-based Ethereum. It is technically possible to migrate these contracts to a competing underlying system, but it won’t be easy – probably even when the competing system is technically a clone of Ethereum, like Ethereum Classic. The migration cost might exceed the migration benefits at any given stage of the contract project development, even if the total migration benefits are far larger than this cost.

Eventually this problem might become public knowledge and most projects in need of a smart contract might start avoiding Ethereum. This will lead to decreased interest in participation in the Ethereum ecosystem, to a loss of market cap, and eventually maybe even to the demise of this technically great project.

Other dangers

There is a danger that the “haves” minority in a polarized system might start actively investing resources in creating other systems that suffer from the same problem (as they benefit from it), or in modifying existing systems in this direction. This might decrease the potential for development globally. As some of the backers of Ethereum are entities with enormous clout worldwide, that negative influence on the global system might be significant.

Exponential Backoff And Jitter

Post Syndicated from Marc Brooker original https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/

Introducing OCC

Optimistic concurrency control (OCC) is a time-honored way for multiple writers to safely modify a single object without losing writes. OCC has three nice properties: it will always make progress as long as the underlying store is available, it’s easy to understand, and it’s easy to implement. DynamoDB’s conditional writes make OCC a natural fit for DynamoDB users, and it’s natively supported by the DynamoDBMapper client.
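As a sketch of the idea (using a hypothetical VersionedStore interface rather than the real DynamoDB API), each writer reads the current row, modifies it locally, and writes back only if the version hasn’t changed in the meantime:

// Hypothetical versioned row: a value plus a version counter
record Row(String value, long version) {}

interface VersionedStore {
    Row read(String key);
    // Succeeds only if expectedVersion still matches the stored version
    // (the moral equivalent of a DynamoDB conditional write)
    boolean writeIf(String key, String newValue, long expectedVersion);
}

static boolean occUpdate(VersionedStore store, String key, String newValue) {
    while (true) {
        Row current = store.read(key);
        if (store.writeIf(key, newValue, current.version())) {
            return true; // no concurrent writer beat us to it
        }
        // Lost the race: another client updated the row first. Read again and
        // retry; how long to sleep before that retry is the subject of this post.
    }
}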

While OCC is guaranteed to make progress, it can still perform quite poorly under high contention. The simplest of these contention cases is when a whole lot of clients start at the same time, and try to update the same database row. With one client guaranteed to succeed every round, the time to complete all the updates grows linearly with contention.

For the graphs in this post, I used a small simulator to model the behavior of OCC on a network with delay (and variance in delay), against a remote database. In this simulation, the network introduces delay with a mean of 10ms and variance of 4ms. The first simulation shows how completion time grows linearly with contention. This linear growth is because one client succeeds every round, so it takes N rounds for all N clients to succeed.

Unfortunately, that’s not the whole picture. With N clients contending, the total amount of work done by the system increases with N².

Adding Backoff

The problem here is that N clients compete in the first round, N-1 in the second round, and so on. Having every client compete in every round is wasteful. Slowing clients down may help, and the classic way to slow clients down is capped exponential backoff. Capped exponential backoff means that clients multiply their backoff by a constant after each attempt, up to some maximum value. In our case, after each unsuccessful attempt, clients sleep for:
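A sketch of that computation in Java (base is the backoff unit and cap the maximum allowed sleep, both in milliseconds):

// Capped exponential backoff: the sleep doubles with each attempt, up to cap
static long expoBackoff(long base, long cap, int attempt) {
    // attempt is assumed small enough that the shift does not overflow
    return Math.min(cap, base * (1L << attempt));
}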

Running the simulation again shows that backoff helps a small amount, but doesn’t solve the problem. Client work has only been reduced slightly.

The best way to see the problem is to look at the times these exponentially backed-off calls happen.

It’s obvious that the exponential backoff is working, in that the calls are happening less and less frequently. The problem also stands out: there are still clusters of calls. Instead of reducing the number of clients competing in every round, we’ve just introduced times when no client is competing. Contention hasn’t been reduced much, although the natural variance in network delay has introduced some spreading.

Adding Jitter

The solution isn’t to remove backoff. It’s to add jitter. Initially, jitter may appear to be a counter-intuitive idea: trying to improve the performance of a system by adding randomness. The time series above makes a great case for jitter – we want to spread out the spikes to an approximately constant rate. Adding jitter is a small change to the sleep function:
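Continuing the Java sketch (random is a java.util.Random), the sleep becomes a random duration between zero and the capped backoff, the variant the rest of this post calls “Full Jitter”:

// "Full Jitter": sleep for a random time between 0 and the capped backoff
static long fullJitter(long base, long cap, int attempt, Random random) {
    long expo = Math.min(cap, base * (1L << attempt));
    return (long) (random.nextDouble() * expo);
}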

That time series looks a whole lot better. The gaps are gone, and beyond the initial spike, there’s an approximately constant rate of calls. It’s also had a great effect on the total number of calls.

In the case with 100 contending clients, we’ve reduced our call count by more than half. We’ve also significantly improved the time to completion, when compared to un-jittered exponential backoff.

There are a few ways to implement these timed backoff loops. Let’s call the algorithm above “Full Jitter”, and consider two alternatives. The first alternative is “Equal Jitter”, where we always keep some of the backoff and jitter by a smaller amount:
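A sketch of “Equal Jitter” in the same style, keeping half of the capped backoff and jittering the other half:

// "Equal Jitter": keep half the backoff, jitter the remaining half
static long equalJitter(long base, long cap, int attempt, Random random) {
    long expo = Math.min(cap, base * (1L << attempt));
    return expo / 2 + (long) (random.nextDouble() * (expo / 2));
}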

The intuition behind this one is that it prevents very short sleeps, always keeping some of the slow down from the backoff. A second alternative is “Decorrelated Jitter”, which is similar to “Full Jitter”, but we also increase the maximum jitter based on the last random value.
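A sketch of “Decorrelated Jitter”, where the new sleep is derived from the previous sleep (seeded with base on the first attempt) instead of the attempt count:

// "Decorrelated Jitter": widen the random range based on the last sleep value
static long decorrelatedJitter(long base, long cap, long previousSleep, Random random) {
    long next = base + (long) (random.nextDouble() * (previousSleep * 3 - base));
    return Math.min(cap, next);
}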

Which approach do you think is best?

Looking at the amount of client work, the number of calls is approximately the same for “Full” and “Equal” jitter, and higher for “Decorrelated”. All of them cut down work substantially compared to the no-jitter approaches.

The no-jitter exponential backoff approach is the clear loser. It not only takes more work, but also takes more time than the jittered approaches. In fact, it takes so much more time we have to leave it off the graph to get a good comparison of the other methods.

Of the jittered approaches, “Equal Jitter” is the loser. It does slightly more work than “Full Jitter”, and takes much longer. The decision between “Decorrelated Jitter” and “Full Jitter” is less clear. The “Full Jitter” approach uses less work, but slightly more time. Both approaches, though, present a substantial decrease in client work and server load.

It’s worth noting that none of these approaches fundamentally change the N² nature of the work to be done, but they do substantially reduce work at reasonable levels of contention. The return on implementation complexity of using jittered backoff is huge, and it should be considered a standard approach for remote clients.

All of the graphs and numbers from this post were generated using a simple simulation of OCC behavior. You can get our simulator code on GitHub, in the aws-arch-backoff-simulator project.

– Marc Brooker

 

Internet Routing and Traffic Engineering

Post Syndicated from Tom Scholl original https://aws.amazon.com/blogs/architecture/internet-routing-and-traffic-engineering/

Internet Routing

Internet routing today is handled through the use of a routing protocol known as BGP (Border Gateway Protocol). Individual networks on the Internet are represented as an autonomous system (AS). An autonomous system has a globally unique autonomous system number (ASN), which is allocated by a Regional Internet Registry (RIR); the RIRs also handle allocation of IP addresses to networks. Each individual autonomous system establishes BGP peering sessions to other autonomous systems to exchange routing information. A BGP peering session is a TCP session established between two routers, each one in a particular autonomous system. This BGP peering session rides across a link, such as a 10Gigabit Ethernet interface between those routers. The routing information contains an IP address prefix and subnet mask. This indicates which IP addresses are associated with an autonomous system number (AS origin). Routing information propagates across these autonomous systems based upon policies that individual networks define.

This is where things get a bit interesting because various factors influence how routing is handled on the Internet. There are two main types of relationships between autonomous systems today: Transit and Peering.

Transit is where an autonomous system pays an upstream network (known as a transit provider) for the ability to forward traffic towards it, which will forward that traffic further. It also provides for the purchasing autonomous system (the customer in this relationship) to have its routing information propagated to the transit provider’s adjacencies. Transit involves obtaining direct connectivity from a customer network to an upstream transit provider network. These sorts of connections can be multiple 10Gigabit Ethernet links between each other’s routers. Transit pricing is based upon network utilization in the dominant direction, with 95th percentile billing: a transit provider will look at a month’s worth of utilization and, in the traffic-dominant direction, bill on the 95th percentile of utilization. The unit used in billing is measured in bits-per-second (bps) and is communicated as a price per Mbps (for example – $2 per Mbps).
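A sketch of that billing math (assuming the customary five-minute utilization samples, in Mbps, collected over the month):

// 95th percentile billing: sort the month's samples, discard the top 5%,
// and bill on the highest remaining sample
static double ninetyFifthPercentileMbps(double[] samplesMbps) {
    double[] sorted = samplesMbps.clone();
    java.util.Arrays.sort(sorted);
    int index = (int) Math.ceil(0.95 * sorted.length) - 1;
    return sorted[index];
}

// At $2 per Mbps: double monthlyBill = 2.0 * ninetyFifthPercentileMbps(samples);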

Peering is where an autonomous system connects to another autonomous system and the two agree to exchange traffic (and routing information) of their own networks and any customers (transit customers) they have. With peering, there are two methods by which connectivity is formed. The first is where direct connectivity is established between individual networks’ routers with multiple 10Gigabit Ethernet or 100Gigabit Ethernet links. This sort of connectivity is known as “private peering” or PNI (Private Network Interconnect). This sort of connection provides both parties with clear visibility into the interface utilization of traffic in both directions (inbound and outbound). Another form of peering is established via Internet Exchange switches, or IXs. With an Internet Exchange, multiple networks will obtain direct connectivity into a set of Ethernet switches. Individual networks can establish BGP sessions across this exchange with other participants. The benefit of the Internet Exchange is that it allows multiple networks to connect to a common location and use it for one-to-many connectivity. A downside is that any given network does not have visibility into the network utilization of other participants.

Most networks will deploy their network equipment (routers, Dense Wave Division Multiplexing (DWDM) transport equipment) into colocation facilities where networks establish direct connectivity to each other. This can be via Internet Exchange switches (which are also found in these colocation facilities) or direct connections, which are fiber optic cables run between the individual suites/racks where the network gear is located.

Routing Policy

Networks will define their routing policy to prefer routing to other networks based upon a variety of items. The BGP best path decision process in a router’s operating system dictates how a router will prefer one BGP path over another. Network operators will write their policy to influence that BGP best-path decision process based upon factors such as the cost to deliver traffic to a destination network, in addition to performance.

A typical routing policy within most networks will dictate that internal routes (their own) and routes learned from their own customers are to be preferred over all other paths. After that, most networks will prefer peering routes, since peering is typically free and can often provide a shorter, more optimal path to reach a destination. Finally, the least preferred route to a destination is over paid transit links. When it comes to transit paths, both cost and performance are typically factors in determining how to reach a destination network.

Routing policies themselves are defined on routers in a simple text-based policy language that is specific to the router operating system. They contain two types of functions: matching on one or multiple routes and an action for that match. The matching can include a list of actual IP prefixes and subnet lengths, ASN origins, AS-Paths or other types of BGP attributes (communities, next-hop, etc). The actions can include resetting BGP attributes such as local-preference, Multi-Exit-Discriminators (MED) and various other values (communities, Origin, etc). Below is a simplified example of a routing policy on routes learned from a transit provider. It has multiple terms to permit an operator to match on specific Internet routes to set a different local-preference value to control what traffic should be forwarded through that provider. There are additional actions to set other BGP attributes related to classifying the routes so they can be easily identified and acted upon by other routers in the network.
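A hypothetical, simplified sketch of such a policy in a Junos-style syntax (the names, prefix, and preference values are illustrative only):

policy-statement TRANSIT-A-IN {
    /* Prefer a specific set of Internet routes through this provider */
    term PREFER-THESE {
        from {
            route-filter 198.51.100.0/24 orlonger;
        }
        then {
            local-preference 120;
            community add FROM-TRANSIT-A;
            accept;
        }
    }
    /* Everything else learned from this provider is less preferred */
    term DEFAULT {
        then {
            local-preference 80;
            community add FROM-TRANSIT-A;
            accept;
        }
    }
}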

Network operators will tune their routing policy to determine how to send traffic and how to receive traffic through adjacent autonomous systems. This practice is generally known as BGP traffic-engineering. Making outbound traffic changes is by far the easiest to implement because it involves identifying the particular routes you are interested in directing and increasing the routing preference to egress through a particular adjacency. Operators must take care to examine certain things before and after any policy change to understand the impact of their actions.

Inbound traffic-engineering is a bit more difficult as it requires a network operator to alter routing information announcements leaving your network to influence how other autonomous systems on the Internet prefer to route to you. While influencing the directly adjacent networks to you is somewhat trivial, influencing networks further beyond those directly connected can be tricky. This technique requires the use of features that a transit provider can grant via BGP. In the BGP protocol, there is a certain type of attribute known as Communities. Communities are strings you can pass in a routing update across BGP sessions. Most networks use communities to classify routes as transit vs. peer vs. customer. The transit-customer relationship usually gives certain capabilities to a customer to control the further propagation of routes to their adjacencies. This grants a network with the ability to traffic-engineer further upstream to networks it is not directly connected to.

Traffic-engineering is used for several reasons on the Internet today. The first might be to reduce bandwidth costs by preferring particular paths (different transit providers). The other is performance, where a particular transit provider may have a less-congested or lower-latency path to a destination network. Network operators will view a variety of metrics to determine if there is a problem, then make policy changes and examine the outcome. Of course, on the Internet, the scale of the traffic being moved counts. Moving a few Gbps of traffic from one path to another may improve performance, but if you move tens of Gbps over, you may encounter congestion on the newly selected path. The links between various networks on the Internet today are scaled based upon observed utilization. Even though you may be paying a transit provider for connectivity, this doesn’t mean every link to external networks is scaled for the amount of traffic you wish to push. As traffic grows, links are added between individual networks. So causing a massive change in utilization on the Internet can result in congestion, as the newly selected paths are handling more traffic than they ever had before. The result is that network operators must move traffic over in increments and communicate with other networks to gauge the impact of any traffic moves.

Complicating the above traffic-engineering operations is the fact that you are not the only one on the Internet trying to push traffic to certain destinations. Other networks are in a similar position, trying to deliver traffic and performing their own traffic-engineering. There are also many networks that will refuse to peer with other networks, for several reasons. For example, some networks may cite an imbalance in inbound vs. outbound traffic (traffic ratios) or feel that traffic is being dumped on their network. In these cases, the only way to reach these destinations is via a transit provider. In some cases, these networks may offer a “paid peering” product to provide direct connectivity. That paid peering product may be priced lower than what you would pay for transit, or could offer an uncongested path compared to what you’d normally observe over transit. Just because you have a path via transit doesn’t mean the path is uncongested at all hours of the day (such as during peak hours).

One way to eliminate the hops between networks is to do just that: eliminate them via direct connections. AWS provides a service for this known as AWS Direct Connect. With Direct Connect, customers connect their network directly into the AWS network infrastructure, bypassing the Internet via direct physical connectivity and removing potential Internet routing or capacity issues from the path.

Traceroute

In order to determine the path your traffic is taking, tools such as traceroute are very useful. Traceroute operates by sending packets toward a given destination with the initial IP TTL value set to one. The first upstream device drops the packet and generates an ICMP TTL Exceeded message back to the source, revealing the first hop on the path. Subsequent packets are sent with incrementally larger TTL values, revealing each hop along the way toward the destination.

It is important to remember that Internet routing is typically asymmetric: traffic going toward a destination will often take a different set of hops than the return traffic. When using traceroute to diagnose routing issues, it is very useful to obtain the reverse path as well, to isolate which direction of traffic has the issue. With an understanding of both directions, it is easier to see what traffic-engineering changes can be made. When dealing with Network Operation Centers (NOCs) or support groups, provide the public IPs of both the source and destination addresses involved in the communication, so that others can reproduce the issue. It is also useful to include the specifics of the communication, such as whether it was HTTP (TCP/80) or HTTPS (TCP/443).

Some traceroute applications can generate their probes using a variety of protocols: ICMP Echo Request (ping), UDP, or TCP packets to a particular port. Several traceroute programs default to ICMP Echo Request or UDP packets destined to a well-known port range. While these work most of the time, some networks on the Internet filter these sorts of packets, so it is recommended to use a probe that replicates the type of traffic you actually intend to send to the destination network. For example, tracerouting with TCP/80 or TCP/443 can yield better results when firewalls or other packet filters are in the path.
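The TTL-incrementing loop at the heart of traceroute is simple enough to sketch. Below is a minimal UDP-probe version in Python. It is a simplified illustration (one probe per hop, no RTT measurement, no filtering of unrelated ICMP traffic), it requires root privileges for the raw ICMP socket, and the destination name is a placeholder.

import socket

def traceroute(dest: str, max_hops: int = 30, base_port: int = 33434,
               timeout: float = 2.0) -> None:
    """Send UDP probes with increasing TTL and read the ICMP replies
    (TTL Exceeded from routers, Port Unreachable from the destination)."""
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))  # needs root
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.settimeout(timeout)
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        # Vary the destination port per hop, as classic traceroute does.
        send.sendto(b"", (dest_addr, base_port + ttl))
        try:
            _, (hop, _) = recv.recvfrom(512)
        except socket.timeout:
            hop = "*"  # the hop filtered the probe or rate-limited its reply
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop}")
        if hop == dest_addr:
            break

traceroute("www.example.com")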

An example of a UDP-based traceroute (using the well-known traceroute destination port range), where multiple routers along the path generate TTL Exceeded messages for packets bound for those ports:
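(Illustrative output; the names and addresses below are invented, using documentation ranges.)

$ traceroute www.example.com
 1  gw1.example.net (192.0.2.1)  1.102 ms  1.054 ms  0.991 ms
 2  core1.example.net (198.51.100.17)  2.310 ms  2.281 ms  2.297 ms
 3  edge2.example.net (203.0.113.9)  8.452 ms  8.430 ms  8.467 ms
 4  * * *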

Note that the last hop does not respond, since it most likely denies UDP packets destined to high ports.

With the same traceroute using TCP/443 (HTTPS), we find that several routers do not respond, but the destination does, since it is listening on TCP/443:

TCP Traceroute to port 443 (HTTPS):
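(Again, the output is illustrative.)

$ tcptraceroute www.example.com 443
 1  gw1.example.net (192.0.2.1)  1.110 ms  1.063 ms  0.998 ms
 2  * * *
 3  edge2.example.net (203.0.113.9)  8.461 ms  8.447 ms  8.470 ms
 4  www.example.com (198.51.100.80) [open]  9.212 ms  9.189 ms  9.240 ms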

The hops revealed by traceroute provide some insight into the kinds of network devices your packets are traversing. Many network operators add descriptive information to the DNS reverse PTR records of their router interfaces, though every network does this differently. Typically the entry indicates the router name, some sort of geographic code, and the physical or logical interface the traffic has traversed. Naming conventions vary, but the name will usually indicate whether a device is a “core” router (no external or customer interfaces) or an “edge” router (with external network connectivity); this is not a hard rule, and multi-function devices are common. The geographic identifier may be an IATA airport code, a telecom CLLI code (or a variation on one), or an internally generated identifier unique to that network; shortened city names or physical addresses occasionally appear as well. The interface portion can indicate interface type and speed, though it is only as accurate as the operator's willingness to reveal such detail publicly and to keep its DNS entries up to date.
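As a purely hypothetical illustration of the convention (the name and its parts are invented):

xe-0-1-2.cr1.lhr01.example.net
  xe-0-1-2  ->  10 Gigabit Ethernet interface, slot 0 / card 1 / port 2
  cr1       ->  “core router 1”
  lhr01     ->  IATA-style geographic code (London Heathrow, site 01)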

One important caveat is that traceroute output should be viewed with some skepticism. Traceroute displays the round-trip time (RTT) of each individual hop as packets traverse the network toward their destination. While this value provides some insight into the latency to each hop, it can be influenced by a variety of factors. Many modern routers treat packets that TTL-expire on them as low priority compared to their other work (forwarding packets, running routing protocols), so generating the ICMP TTL Exceeded response can take some time. This is why it is common to see occasional high RTTs (up to hundreds of milliseconds) on intermediate hops in a traceroute. This does not necessarily indicate a network issue, and you should always measure the end-to-end latency (via ping or application-level tests). If, on the other hand, the RTT rises at a particular hop and continues to rise at every hop beyond it, that can indicate a genuine increase in latency at that point in the network.

Another item frequently observed in traceroutes is hops that do not respond, displayed as *'s. This means the router at that hop either dropped the TTL-expired packet or did not generate the ICMP TTL Exceeded message, usually for one of two reasons. The first is Control-Plane Policing (CoPP): packet filters on the router that control how certain types of packets are handled. Modern routers use ASICs (Application-Specific Integrated Circuits) for fast packet lookup and forwarding, and when an ASIC receives a packet with a TTL value of one, it punts the packet to another component to generate the ICMP TTL Exceeded message, typically a CPU on a linecard or on the main brain of the router (known as a route processor, routing engine, or supervisor). Because those CPUs are busy with work such as forwarding-table programming and routing protocols, operators use CoPP to restrict the rate of TTL-expired packets reaching them, for example limiting TTL Exceeded handling to 100 packets per second; the router may also apply an additional rate-limiter to the ICMP TTL Exceeded messages it generates. With CoPP in place, hops in your traceroute may intermittently fail to reply, and pings sent directly to an individual hop may show packet loss simply because CoPP is dropping the probes. The second reason is that CoPP may be configured to deny all TTL-expired packets, in which case that hop will show *'s no matter how many times you run the traceroute.

A good presentation that explains using traceroute on the Internet and interpreting its results is found here: https://www.nanog.org/meetings/nanog45/presentations/Sunday/RAS_traceroute_N45.pdf

Troubleshooting issues on the Internet is no easy task; it requires examining multiple sets of information (traceroutes, BGP routing tables) to reach a conclusion about what may be occurring. Internet looking glasses and route servers are useful for obtaining a different vantage point when troubleshooting. The Looking Glass Wikipedia page links to several sites from which you can run pings and traceroutes and examine a BGP routing table from different spots around the world, in various networks.

When reaching out to networks or posting in forums for help with Internet routing issues, it is important to provide useful troubleshooting information: the source IP address (the public IP, not a private/NAT-translated one), the destination IP (again, the public IP), the protocol and ports in use (TCP/80, for example), and the specific time and date when you observed the issue. Traceroutes in both directions are incredibly useful, since paths on the Internet can be asymmetric.

AWS and Compartmentalization

Post Syndicated from Colm MacCarthaigh original https://aws.amazon.com/blogs/architecture/aws-and-compartmentalization/

Practically every experienced driver has suffered a flat tire. It's a real nuisance: you pull over, empty the trunk to get out your spare wheel, jack up the car, and swap the punctured tire before driving to a nearby repair shop. For a car that's OK. We can tolerate the occasional nuisance, and as drivers we're never that far from a safe place to pull over or a friendly repair shop.

Using availability terminology, a spare tire is a kind of standby: a component or system that sits idle until it is needed. Standbys are common in computer systems too. Many databases rely on standby failover, for example, and some even rely on personal intervention, with a human running a script much as they might wind a car-jack (though we'd recommend using Amazon Relational Database Service instead, which includes automated failover).

But when the stakes are higher, things are done a little differently. Take the systems in a modern passenger jet, which, despite recent tragic events, have a stellar safety record. A flight can't pull over, and in the event of a problem an airliner may have to fly several hours before coming within range of a runway. For passenger jets it's common for critical systems to use active redundancy. A twin-engine jet can fly with just one working engine, for example; if one fails, the other can still easily keep the jet in the air.

This kind of model is also common in large web systems. Many EC2 instances handle amazon.com, for example, and when one occasionally fails there's a buffer of capacity spread across the other servers, ensuring that customers don't even notice.

Jet engines don't simply fail on their own, though. Any one of dozens of components (digital engine controllers, fuel lines and pumps, gears and shafts, and so on) can cause the engine to stop working. For every one of these components, the aircraft designers could try to include some redundancy at the component level (and some do, such as avionics), but there are so many that it's easier to re-frame the design in terms of fault isolation or compartmentalization: as long as each engine depends on separate instances of each component, no single component can take out both engines. A fuel line may break, but it can only stop one engine from functioning, and the plane has already been designed to work with one engine out.

This kind of compartmentalization is particularly useful for complex computer systems. A large website or web service may depend on tens or even hundreds of sub-services, and only so many of those can themselves include robust active redundancy. By aligning instances of sub-services so that inter-dependencies never cross compartments, we can make sure a problem is contained to the compartment it started in. It also means that we can try to resolve problems by quarantining whole compartments, without needing to find the root of the problem within the compartment.

AWS and Compartmentalization

Amazon Web Services includes some features and offerings that enable effective compartmentalization. Firstly, many Amazon Web Services—for example, Amazon S3 and Amazon RDS—are themselves internally compartmentalized and make use of active redundancy designs so that when failures occur they are hidden.

Secondly, we offer web services and resources in a range of sizes, along with automation in the form of Auto Scaling, AWS CloudFormation templates, and AWS OpsWorks recipes that make it easy to manage a larger number of smaller instances.

There is a subtle but important distinction between running a small number of large instances, and a large number of small instances. Four m3.xlarge instances cost as much as two m3.2xlarge instances and provide the same amount of CPU and storage; but for high availability configurations, using four instances requires only a 33% failover capacity buffer and any host-level problem may impact one quarter of your load, whereas using two instances means a 100% buffer and any problem may impact half of your load.
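The arithmetic behind those buffer numbers is easy to check; here is a worked sketch (generic, not AWS-specific):

def failover_buffer(units: int) -> float:
    """Extra capacity each surviving unit must carry, as a fraction of
    its normal load, for the fleet to absorb the failure of one unit."""
    return 1 / (units - 1)

print(failover_buffer(2))  # 1.0    -> a 100% buffer with two large instances
print(failover_buffer(4))  # 0.333… -> a 33% buffer with four smaller ones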

Thirdly, Amazon Web Services has pre-made compartments: up to four availability zones per region. These availability zones are deeply compartmentalized down to the datacenter, network and power level.

Suppose that we create a web site or web service that utilizes four availability zones. Surviving the loss of one zone then requires a 33% failover capacity buffer in each remaining zone (which compares well to the 100% failover capacity buffer in a standard two data center model). Our service consists of a front end, two dependent backend services (“Foo” and “Bar”), and a data store (for this example, we'll use S3).

By constraining any sub-service calls to stay “within” the availability zone, we make it easier to isolate faults. If the backend service “Bar” fails (for example, a software crash) in us-east-1b, that failure impacts one quarter of our overall capacity.

Initially this may not seem much better than if we had spread calls to the Bar service from all zones across all instances of the Bar service; after all, the failure rate would also be one quarter. But the difference is profound.

Firstly, experience has shown that small problems can often become amplified in complex systems. For example, if it takes the “Foo” service longer to handle a failed call to the “Bar” service, then the initial problem with “Bar” begins to impact the behavior of “Foo” and, in turn, the front ends.

Secondly, by having a simple all-purpose mechanism to fail away from the infected availability zone, the problem can be reliably, simply, and quickly neutralized, just as a plane can be designed to fly on one engine and many types of failure handled with one procedure—if the engine is malfunctioning and a short checklist’s worth of actions don’t restore it to health, just shut it down and land at the next airport.

Route 53 Infima

Our suggested mechanism for handling this kind of failure is Amazon Route 53 DNS Failover. As DNS is the service that turns a service or website name into the list of front-end IP addresses to connect to, it sits at the start of every request and is an ideal layer in which to neutralize problems.

With Route 53 health checks and DNS failover, each front-end is constantly health checked and automatically removed from DNS if there is a problem. Route 53 Health Check URLs are fully customizable and can point to a script that checks every dependency in the availability zone (“Is Foo working, Is Bar working, is S3 reachable, etc …”).
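As a sketch of what such a health-check target could look like, here is a minimal HTTP responder in Python. The dependency URLs and port are illustrative assumptions; Route 53 treats a 2xx or 3xx response as healthy, so this handler answers 200 only when every in-zone dependency is healthy and 503 otherwise.

from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

# Hypothetical in-zone dependency health endpoints.
DEPENDENCIES = {
    "Foo": "http://foo.internal.example.com/health",
    "Bar": "http://bar.internal.example.com/health",
}

def healthy(url: str, timeout: float = 2.0) -> bool:
    """True if the dependency answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers HTTPError, URLError, timeouts, DNS failures
        return False

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ok = all(healthy(url) for url in DEPENDENCIES.values())
        self.send_response(200 if ok else 503)
        self.end_headers()
        self.wfile.write(b"OK" if ok else b"FAIL")

if __name__ == "__main__":
    # A Route 53 health check would be pointed at this endpoint.
    HTTPServer(("", 8080), HealthHandler).serve_forever()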

This brings us to Route 53 Infima. Infima is a library designed to model compartmentalization systematically and to help represent those kinds of configurations in DNS. With Infima, you assign endpoints to specific compartments, such as an availability zone. For advanced configurations you may also layer in additional compartmentalization dimensions; for example, you may want to run two different software implementations of the same service in each availability zone (perhaps for blue/green deployments, or for application-level redundancy).

Once the Infima library has been taught the layout of endpoints within the compartments, failures can be simulated in software and any gaps in capacity identified. But the real power of Infima comes in expressing these configurations in DNS. Our example service has four endpoints, in four availability zones. One option for expressing this in DNS is to return each endpoint one time in every four answers. Each answer could also depend on a health check, and when the health check fails, the endpoint could be removed from DNS. Infima supports this configuration.

However, there is a better option. DNS (and naturally Route 53) allows several endpoints to be represented in a single answer, for example:

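(The records below are illustrative: a hypothetical name with documentation-range addresses.)

api.example.com.    60    IN    A    192.0.2.1
api.example.com.    60    IN    A    192.0.2.2
api.example.com.    60    IN    A    192.0.2.3
api.example.com.    60    IN    A    192.0.2.4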

When clients (such as browsers or web service clients) receive these answers, they generally try several endpoints until one successfully connects, so by including all of the endpoints we gain some fault tolerance. When an endpoint is failing, though, as we've seen before, the problem can spread, and clients can incur retry timers and some delay; it is still desirable to remove IPs from DNS answers in a timely manner.

Infima can use the list of compartments, endpoints, and their health checks to build what we call a RubberTree, a pre-computed decision tree of DNS answers with responses pre-baked, ready and waiting for potential failures: a single node failing, a whole compartment failing, combinations of each, and so on. This decision tree is then stored as a Route 53 configuration that can automatically handle any failure. So if the 192.0.2.3 endpoint in the illustrative answer above were to fail, then:

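; the same illustrative records, minus the failed endpoint
api.example.com.    60    IN    A    192.0.2.1
api.example.com.    60    IN    A    192.0.2.2
api.example.com.    60    IN    A    192.0.2.4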

will be returned. By having these decision trees pre-baked and always ready and waiting, Route 53 is able to react quickly to endpoint failures, which with compartmentalization means we are also ready to handle failures of any sub-service serving that endpoint.

The compartmentalization we've seen so far is most useful for certain kinds of errors: host-level problems, occasional crashes, application lockups. But if the problem originates with the front-end requests themselves (for example, a denial-of-service attack, or a “poison pill” request that triggers a calamitous bug), then it can quickly infect all of your compartments. Infima also includes some neat functionality to assist in isolating even these kinds of faults; that will be the topic of our next post.

Bonus Content: Busting Caches

I wrote above that removing failing endpoints from DNS in a timely manner is important, even when there are multiple endpoints in an answer. One problem we respond to in this area is broken application-level DNS caching: certain platforms, including many versions of Java, do not respect DNS cache lifetimes (the DNS time-to-live, or TTL, value), and once a DNS response has been resolved they will use it indefinitely.

One way to mitigate this problem is to use cache “busting”. Route 53 supports wildcard records (and wildcard ALIASes, CNAMEs, and more). Instead of using a service name such as “api.example.com”, it is possible to use a wildcard name such as “*.api.example.com”, which will match requests for any name ending in “.api.example.com”.

An application may then be written to resolve a partially random name, e.g. “sdsHdsk3.api.example.com”. Since the name ends in api.example.com, it will still receive the right answer, but since it is a unique random name every time, it will defeat (or “bust”) any broken platform or OS DNS caching.
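A minimal sketch of the client side of this technique, assuming the wildcard record above is in place (the function name and label length are arbitrary):

import random
import socket
import string

def busted_resolve(base: str = "api.example.com") -> list[str]:
    """Resolve a random, single-use name under a wildcard record so that
    a broken resolver cache can never serve a stale answer."""
    label = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    _, _, addresses = socket.gethostbyname_ex(f"{label}.{base}")
    return addresses

# Each call resolves a fresh name, e.g. "x3k9q0ab.api.example.com".
print(busted_resolve())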

– Colm MacCárthaigh