[$] Linux in mixed-criticality systems

Post Syndicated from corbet original https://lwn.net/Articles/774217/rss

The Linux kernel is generally seen as a poor fit for safety-critical
systems; it was never designed to provide realtime response guarantees or
to be certifiable for such uses. But the systems that can be used
in such settings lack the features needed to support complex applications.
This problem is often solved by deploying a mix of computers running
different operating systems. But what if you want to support a mixture of
tasks, some safety-critical and some not, on the same system? At a talk
given at LinuxLab 2018, Claudio
Scordino described an effort to support this type of mixed-criticality
system.

New SOC 2 Report Available: Privacy

Post Syndicated from Chris Gile original https://aws.amazon.com/blogs/security/new-soc-2-report-available-privacy/

Maintaining your trust is an ongoing commitment of ours, and your voice drives our growing portfolio of compliance reports, attestations, and certifications. As a result of your feedback and deep interest in privacy and data security, we are happy to announce the publication of our new SOC 2 Type I Privacy report.

Keeping you informed of the privacy and data security policies, practices, and technologies we’ve put in place is important to us. The SOC 2 Privacy Type I report is complementary to that effort. The SOC 2 Privacy Trust Principle, developed by the American Institute of CPAs (AICPA), establishes the criteria for evaluating controls related to how personal information is collected, used, retained, disclosed, and disposed of to meet the entity’s objectives. The AWS SOC 2 Privacy Type I report provides you with a third-party attestation of our systems and the suitability of the design of our privacy controls, as stated in our Privacy Notice.

The scope of the privacy report includes systems AWS uses to collect personal information and all 72 services and locations in scope for the latest AWS SOC reports. You can download the new SOC 2 Type I Privacy report now through AWS Artifact in the AWS Management Console.

As always, we value your feedback and questions. Please feel free to reach out to the team through the Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

AWS Security Profile (and re:Invent 2018 wrap-up): Eric Docktor, VP of AWS Cryptography

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profile-and-reinvent-2018-wrap-up-eric-docktor-vp-of-aws-cryptography/

Eric Docktor

We sat down with Eric Docktor to learn more about his 19-year career at Amazon, what’s new with cryptography, and to get his take on this year’s re:Invent conference. (Need a re:Invent recap? Check out this post by AWS CISO Steve Schmidt.)


How long have you been at AWS, and what do you do in your current role?

I’ve been at Amazon for over nineteen years, but I joined AWS in April 2015. I’m the VP of AWS Cryptography, and I lead a set of teams that develops services related to encryption and cryptography. We own three services and a tool kit: AWS Key Management Service (AWS KMS), AWS CloudHSM, AWS Certificate Manager, plus the AWS Encryption SDK that we produce for our customers.

Our mission is to help people get encryption right. Encryption algorithms themselves are open source, and generally pretty well understood. But just implementing encryption isn’t enough to meet security standards. For instance, it’s great to encrypt data before you write it to disk, but where are you going to store the encryption key? In the real world, developers join and leave teams all the time, and new applications will need access to your data—so how do you make a key available to those who really need it, without worrying about someone walking away with it?

We build tools that help our customers navigate this process, whether we’re helping them secure the encryption keys that they use in the algorithms or the certificates that they use in asymmetric cryptography.
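
As a concrete illustration of the key-storage problem Eric describes, the common pattern is envelope encryption: ask AWS KMS for a fresh data key, encrypt locally with it, and store only the KMS-encrypted copy of the key next to the data. Here is a minimal sketch using the AWS SDK for Python (boto3) and the cryptography package; the key alias is hypothetical:

import base64

import boto3
from cryptography.fernet import Fernet  # requires the 'cryptography' package

kms = boto3.client("kms")

# Ask KMS for a fresh 256-bit data key: we get the plaintext key (use and
# discard) plus an encrypted copy that is safe to store next to the data.
resp = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")  # hypothetical alias
data_key, wrapped_key = resp["Plaintext"], resp["CiphertextBlob"]

# Encrypt locally with the plaintext key, then forget it.
token = Fernet(base64.urlsafe_b64encode(data_key)).encrypt(b"secret data")
del data_key

# Later: ask KMS to unwrap the stored key, then decrypt locally.
data_key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
print(Fernet(base64.urlsafe_b64encode(data_key)).decrypt(token))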

What did AWS Cryptography launch at re:Invent?

We’re really excited about the launch of KMS custom key store. We’ve received very positive feedback about how KMS makes it easy for people to control access to encryption keys. KMS lets you set up IAM policies that give developers or applications the ability to use a key to encrypt or decrypt, and you can also write policies which specify that a particular application—like an Amazon EMR job running in a given account—is allowed to use the encryption key to decrypt data. This makes it really easy to encrypt data without worrying about writing massive decrypt jobs if you want to perform analytics later.

But, some customers have told us that for regulatory or compliance reasons, they need encryption keys stored in single-tenant hardware security modules (HSMs) that they manage. This is where the new KMS custom key store feature comes in. Custom key store combines the ease of using KMS with the ability to run your own CloudHSM cluster to store your keys. You can create a CloudHSM cluster and link it to KMS. After setting that up, any time you want to generate a new master key, you can choose to have it generated and stored in your CloudHSM cluster instead of using a KMS multi-tenant HSM. The keys are stored in an HSM under your control, and they never leave that HSM. You can reference the key by its Amazon Resource Name (ARN), which allows it to be shared with users and applications, but KMS will handle the integration with your CloudHSM cluster so that all crypto operations stay in your single-tenant HSM.
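
For readers curious what that setup looks like through the API, here is a rough boto3 sketch; the identifiers are hypothetical, and in practice the CloudHSM cluster must already be active and its trust anchor certificate available, as described in the custom key store documentation:

import boto3

kms = boto3.client("kms")

# Link an existing CloudHSM cluster to KMS as a custom key store.
store_id = kms.create_custom_key_store(
    CustomKeyStoreName="my-key-store",                     # hypothetical name
    CloudHsmClusterId="cluster-1a23b4cdefg",               # hypothetical cluster ID
    TrustAnchorCertificate=open("customerCA.crt").read(),  # cluster root certificate
    KeyStorePassword="kmsuser-password",                   # CloudHSM kmsuser credential
)["CustomKeyStoreId"]

kms.connect_custom_key_store(CustomKeyStoreId=store_id)

# Once the store's ConnectionState is CONNECTED, new master keys can be
# generated in, and never leave, your own HSMs.
key = kms.create_key(
    Origin="AWS_CLOUDHSM",
    CustomKeyStoreId=store_id,
    Description="CMK backed by my CloudHSM cluster",
)
print(key["KeyMetadata"]["Arn"])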

You can read our blog post about custom key store for more details.

If both AWS KMS and AWS CloudHSM allow customers to store encryption keys, what’s the difference between the services?

Well, at a high level, sure: both services offer customers strong security when it comes to storing encryption keys in FIPS 140-2 validated hardware security modules. But there are some important differences, so we offer both services to allow customers to select the right tool for their workloads.

AWS KMS is a multi-tenant, managed service that allows you to use and manage encryption keys. It is integrated with over 50 AWS services, so you can use familiar APIs and IAM policies to manage your encryption keys, and you can allow them to be used in applications and by members of your organization. AWS CloudHSM provides a dedicated, FIPS 140-2 Level 3 HSM under your exclusive control, directly in your Amazon Virtual Private Cloud (VPC). You control the HSM, but it’s up to you to build the availability and durability you get out of the box with KMS. You also have to manage permissions for users and applications.

Other than helping customers store encryption keys, what else does the AWS Cryptography team do?

You can use CloudHSM for all sorts of cryptographic operations, not just key management. But we definitely do more than KMS and CloudHSM!

AWS Certificate Manager (ACM) is another offering from the cryptography team that’s popular with customers, who use it to generate and renew TLS certificates. Once you’ve got your certificate and you’ve told us where you want it deployed, we take care of renewing it and binding the new certificate for you. Earlier this year, we extended ACM to support private certificates as well, with the launch of ACM Private Certificate Authority.

We also helped the AWS IoT team launch support for cryptographically signing software updates sent to IoT devices. For IoT devices, and for software installation in general, it’s a best practice to only accept software updates from known publishers, and to validate that the new software has been correctly signed by the publisher before installing. We think all IoT devices should require software updates to be signed, so we’ve made this really easy for AWS IoT customers to implement.

What’s the most challenging part of your job?

We’ve built a suite of tools to help customers manage encryption, and we’re thrilled to see so many customers using services like AWS KMS to secure their data. But when I sit down with customers, especially large customers looking seriously at moving from on-premises systems to AWS, I often learn that they have years and years of investment into their on-prem security systems. Migrating to the cloud isn’t easy. It forces them to think differently about their security models. Helping customers think this through and map a strategy can be challenging, but it leads to innovation—for our customers, and for us. For instance, the idea for KMS custom key store actually came out of a conversation with a customer!

What’s your favorite part of your job?

Ironically, I think it’s the same thing! Working with customers on how they can securely migrate and manage their data in AWS can be challenging, but it’s really rewarding once the customer starts building momentum. One of my favorite moments of my AWS career was when Goldman Sachs went on stage at re:Invent last year and talked about how they use KMS to secure their data.

Five years from now, what changes do you think we’ll see within the field of encryption?

The cryptography community is in the early stages of developing a new cryptographic algorithm that will underpin encryption for data moving across the internet. The current standard is RSA, and it’s widely used. That little padlock you see in your web browser telling you that your connection is secure uses the RSA algorithm to set up an encrypted connection between the website and your browser. But, like all good things, RSA’s time may be coming to an end—the quantum computer could be its undoing. It’s not yet certain that quantum computers will ever achieve the scale and performance necessary for practical applications, but if one did, it could be used to attack the RSA algorithm. So cryptographers are preparing for this. Last year, the National Institute of Standards and Technology (NIST) put out a call for algorithms that might be able to replace RSA, and got 68 responses. NIST is working through those ideas now and will likely select a smaller number of algorithms for further study. AWS participated in two of those submissions and we’re keeping a close eye on NIST’s process. New cryptographic algorithms take years of testing and vetting before they make it into any standards, but we want to be ready, and we want to be on the forefront. Internally, we’re already considering what it would look like to make this change. We believe it’s our job to look around corners and prepare for changes like this, so our customers don’t have to.

What’s the most common misconception you encounter about encryption?

Encryption technology itself is decades-old and fairly well understood. That’s both the beauty and the curse of encryption standards: By the time anything becomes a standard, there are years and years of research and proof points into the stability and the security of the algorithm. But just because you have a really good encryption algorithm that takes an encryption key and a piece of data you want to secure and spits out an impenetrable cipher text, it doesn’t mean that you’re done. What did you do with the encryption key? Did you check it into source code? Did you write it on a piece of paper and leave it in the conference room? It’s these practices around the encryption that can be difficult to navigate.

Security-conscious customers know they need to encrypt sensitive data before writing it to disk. But, if you want your application to run smoothly, sometimes you need that data in clear text. Maybe you need the data in a cache. But who has access to the cache? And what logging might have accidentally leaked that information while the application was running and interacting with the cache?

Or take TLS certificates. Each TLS certificate has a public piece—the certificate—and a private piece—a private key. If an adversary got ahold of the private key, they could use it to impersonate your website or your API. So, how do you secure that key after you’ve procured the certificate?

It’s practices like these that some customers still struggle with. You have to think about all the places your sensitive data moves, and about real-world realities, like the fact that the data has to be unencrypted somewhere. That’s where AWS can help with the tooling.

Which re:Invent session videos would you recommend for someone interested in learning more about encryption?

Ken Beer’s encryption talk is a very popular session that I recommend to people year after year. If you want to learn more about KMS custom key store, you should also check out the video from the LaunchPad event, where we talked with Box about how they’re using custom key store.

People do a lot of networking during re:Invent. Any tips for maintaining those connections after everyone’s gone home?

Some of the people that I meet at re:Invent I get to see again every year. With these customers, I tend to stay in touch through email, and through Executive Briefing Center sessions. That contact is important since it lets us bounce ideas off each other and we use that feedback to refine AWS offerings. One conference I went to also created a Slack channel for attendees—and all the attendees are still on it. It’s quiet most of the time, but people have a way to re-engage with each other and ask a question, and it’ll be just like we’re all together again.

If you had to pick any other job, what would you want to do with your life?

If I could do anything, I’d be a backcountry ski guide. Now, I’m not a good enough skier to actually have this job! But I like being outside, in the mountains. If there was a way to make a living out of that, I would!

Eric Docktor

Eric joined Amazon in 1999 and has worked in a variety of Amazon’s businesses, including being part of the teams that launched Amazon Marketplace, Amazon Prime, the first Kindle, and Fire Phone. Eric has also worked in Supply Chain planning systems and in Ordering. Since 2015, Eric has led the AWS Cryptography team that builds tools to make it easy for AWS customers to encrypt their data in AWS. Prior to Amazon, Eric was a journalist and worked for newspapers including the Oakland Tribune and the Doylestown (PA) Intelligencer.

Bootstrapping to $30 Million ARR

Post Syndicated from Yev original https://www.backblaze.com/blog/startup-tips-bootstrapping-to-30-million-arr/

Backblaze Billboard on Highway 101 in Silicon Valley, 2011
Backblaze will be celebrating its 12th year in business this coming April 20th. We’ve steadily grown over the years, and this year have reached $30 million ARR (annual recurring revenue). We’ve accomplished this with only $3.1 million in funding over the years, having successfully bootstrapped the company with founder contributions and cash flow since the very beginning.

Last year our CEO and co-founder Gleb Budman wrote a series of posts on entrepreneurship that detailed our early years and some lessons learned that entrepreneurs can apply for themselves.

Recently, Gleb did a follow-on webinar on BrightTALK covering many of the series’ main points.
Given the time constraints most entrepreneurs face, I’ll highlight what I consider some of the key lessons for startups that Gleb outlined in both the entrepreneurial series and the webinar.

Gleb Budman on BrightTALK Founders Series

Gleb’s webinar on BrightTALK

Creating Your Product

Gleb’s first article, How Backblaze Got Started: The Problem, The Solution, and the Stuff In-Between, starts with one of the most critical aspects for any successful company: defining the real problem you’re trying to solve. In Gleb’s words, “The entrepreneur builds things to solve problems — your own or someone else’s.”

So the question is: how do you go about defining the problem? The most obvious place to start is to look at the pain points you’re trying to address and then define the specific elements that contribute to them. Can you solve the problem by taking away or changing one or more of those elements? Or is it a matter of adding new elements to shift away the pain points?

In our case, there was an obvious need in the market for backing up computers. There were already solutions on the market that, at least in theory, provided a backup solution, yet the majority of people still didn’t use one. The question was why?

Just because solutions exist doesn’t mean the problem is solved. After a series of deep dives into why people weren’t backing up, we discovered that the major problem was that backup solutions were too complicated for most people. They recognized they should be backing up, but weren’t willing to invest the time to learn how to use one of the existing services. So the problem Backblaze was originally solving wasn’t backup in general, it was taking away the learning curve to use a backup solution.

Once you have the problem clearly defined, you can proceed to design a solution that will solve it. Of course the solution itself will likely be defined by market forces, most notably, price. As Gleb touches on in the following video clip, pricing needs to be built into the solution from the outset.

Surviving Your First Year

Once you’ve determined the problem you want to solve, the next step is to create the infrastructure, i.e. the company, in order to build the solution. With that in mind, your primary goals for that first year should be: set up the company correctly, build and launch your minimum viable product, and most importantly, survive.

Setting up the company correctly is critical. A company is only as successful as the people in it. At all stages of growth, it’s critical that people have clear definitions of what is expected of them, but in the beginning it’s especially important to make sure people know what they need to do and the vision that’s driving the business.

From the start you need to determine the company, product, and development resources you need, define roles to be filled, and assign responsibilities once key players start joining your team. It’s very common in the early stages of a startup for everyone to be working on the same tasks in a democratic process. That might be good for morale in the beginning, but can result in a lack of focused direction. Leadership must emerge and help steer the company towards the shared vision. With clearly defined roles and responsibilities, team members can collaborate on achieving specific milestones, ensuring forward momentum.

A far less exciting but equally important foundation for a startup is the legal entity. It’s easy to get caught up in the excitement of building a product and put off the less exciting legal aspects until you are ready to launch. However, trying to retroactively get all the legal requirements in place is far more difficult.

Ownership (equity) ratios need to be locked in near the start of the company. Having this nailed down early can avoid a lot of potential infighting down the line. If you plan on raising money, you will need to incorporate and issue stock. You may also want to create a Proprietary Information and Inventions Assignment (PIIA) document, which states that what you are all working on is owned by the company.

Once the (admittedly not terribly exciting) legal aspects are taken care of, the focus truly shifts to building your minimum viable product (MVP) and launching it. It’s natural to want to build the perfect product, but in today’s market it’s better to focus on what you think are the most important features and launch. As Gleb writes in Surviving Your First Year, “Launching forces a scoping of the feature set to what’s critical, rallies the company around a goal, starts building awareness of your company and solution, and pushes forward the learning process.” Once you launch your MVP, you’ll start receiving feedback and the iteration process can begin; more on that later.

Lastly, when it comes to surviving your first year, always make an effort to conserve your cash. It might be tempting to scale as quickly as you can by hiring a lot more employees and building out your infrastructure, but minimizing your burn rate is usually more important for long term success. For example, Backblaze spent only $94k to build and launch its beta online backup service. If you scale your startup’s people and infrastructure too fast, you might have to rush to find more funding, which typically means more dilution and more outsiders telling you what you should be doing — not great when you’re first starting out and trying to achieve your vision.

Gleb goes into more detail in this video clip:

Getting Your First Customers

When you’re finally ready to go, you should target people who will give you lots of feedback as your first customers. Often, this means friends and even family members who are willing to give you their opinions on what you’re doing. It’s important to press the people close to you for honest feedback, as sugar-coated comments might lead you to draw incorrect conclusions about your product.

Once you have a chance to evaluate the initial feedback and iterate on it, consider a private beta launch. Backblaze’s initial launch target was to get 1,000 people to use the service. In his article, How to Get Your First 1,000 Customers, Gleb goes into detail on how Backblaze successfully used PR outreach to achieve the beta launch goal.

One of the PR tactics used was to give publications, such as TechCrunch, Ars Technica, and SimpleHelp, a limited number of beta invites. This not only raised awareness, but also gave early beta users a feeling of exclusivity, which helped in getting them to provide honest feedback.

Equally important is to have a system in place to collect contact information from everyone who expresses interest, even if you can’t serve them at the time. You always want to be building your customer pipeline, and having mechanisms in place to collect leads is important for sustained growth.

Startup Highs and Lows

It’s unavoidable that every startup entrepreneur will face a number of unexpected lows that can overshadow what seem like increasingly infrequent highs. Dealing with both is vital to sustaining your business (and your mental health). Oftentimes, what at first appears to be a low point can inspire actions that ultimately help drive your business to new highs.

In the following clip Gleb gives several examples of seemingly low points that Backblaze was ultimately able to turn into wins, or as Gleb says “turning lemons into lemonade.” Note: I recently wrote a post about similar turnarounds on the social media front, Making Lemonade: The Importance of Social Media and Community.

Building Culture

It might not be foremost in your mind at the start, but from day one of your startup you are building your company culture. Culture is a little more nebulous than product design (maybe a lot more nebulous), but it is equally important in the long run. Culture affects every aspect of how your company operates because it has a day-to-day effect on every employee and the decisions they make, as Gleb points out in this short clip.

A prime example of how company culture affects your business is Backblaze’s emphasis on transparency. One of the first major wins for Backblaze was the release of our first Storage Pod design back in 2009. Most companies would keep proprietary design IP (intellectual property), like the Storage Pod, under lock and key, because it provides a major competitive pricing advantage. Yet the cultural importance of transparency led to a decision to open source the Storage Pod design despite the risk of competitors taking the designs and copying them. It also enabled us to answer a common question, “How can you provide this service at this low a price?” by writing one blog post with specifications, photos, and a parts inventory showing exactly how we do it.

The result of that very risky decision was a massive increase in brand awareness. Hundreds of articles were written about Backblaze, comprising not just general-interest and news articles, but also business case studies examining the rare business decision to be so transparent about our IP.

All of this attention ultimately positioned Backblaze as a thought leader in the cloud backup space (later, also in cloud storage), allowing us to be mentioned in the same articles and to compete against far bigger companies, including Amazon, Google, and Microsoft.

I hope you enjoyed this TL;DR version of Gleb’s entrepreneurial series and would love to hear your thoughts in the comments section below. I highly encourage anyone involved in a startup to read the original series as time permits and watch the entire webinar on BrightTALK, Founders Spotlight: Gleb Budman, CEO, Backblaze.

Gleb Budman’s Series on Entrepreneurship on the Backblaze Blog:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency
  8. Early Challenges: Managing Cash Flow
  9. Early Challenges: Making Critical Hires


Staff Picademy and the sacrificial Babbage

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/staff-picademy-and-the-sacrificial-babbage/

Refill the coffee machine, unpack the sacrificial Babbages, and refresh the micro SD cards — it’s staff Picademy time!

Raspberry Pi Staff Picademy

Staff Picademy

Once a year, when one of our all-staff meetings brings together members of the Raspberry Pi team from across the globe, we host staff Picademy at our office. It’s two days of making and breaking where the coding-uninitiated — as well as the more experienced people! — are put through their paces and rewarded with Raspberry Pi Certified Educator status at the end.

Lest we forget the sacrificial Babbages and all they have done in the name of professional development

What is Picademy?

Picademy is our free two-day professional development programme where educators come together to gain knowledge and confidence in digital making and computing. On Day 1, you learn new skills; on Day 2, you put your learning to the test by finding some other participants and creating a project together, from scratch!

Our Picademy events in the United Kingdom and in North America have hosted more than 2000 Raspberry Pi Certified Educators, who have gone on to create after-school coding clubs, makerspaces, school computing labs, and other amazing things to increase the accessibility of computing and digital making for tens of thousands of young people.

Why do we run staff Picademy?

Because we stand by what we preach: we believe in learning through making, and we want our staff to be able to attend events, volunteer at Picademy, Code Clubs, CoderDojos, and Raspberry Jams, and feel confident in what they say and do.

And also, because Picademy is really fun!

Stuff and things, bits and bobs: staples of any good Picademy

You don’t need to be techy to work at Raspberry Pi: we’re not all engineers. Our staff ranges from educators and web developers to researchers, programme managers, administrators, and accountants. And we think everyone should give coding a shot, so we love getting our staff together to allow them to explore a new skill — and have some fun in the process.

I *think* this has something to do with The MagPi and a Christmas tree?

At our staff Picademy events, we’ve made everything from automated rock bands out of tin foil to timelapse buggies, and it really is a wonderful experience to see people come together and, within two days, take a skillset that may be completely new to them and use it to create a fully working, imaginative project.

Timelapse buggy is a thing of beauty…as is Brian

Your turn

If you’re an educator looking to try something new in your classroom, keep an eye on our channels, because we’ll be announcing dates for Picademy 2019 soon. You will find them on the Picademy page and see them pop up if you follow the #Picademy tag on Twitter. We’ll also announce the dates and locations in our Raspberry Pi LEARN newsletter, so be sure to sign up.

And if you’d like to join the Raspberry Pi team and build something silly and/or amazing at next year’s staff Picademy, we have roles available in the UK, Ireland, and North America.


Without registers, chaos follows

Post Syndicated from Bozho original https://blog.bozho.net/blog/3245

We often realize how important something is only once it is gone. That was the case with the Commercial Register: we had grown used to everything being available online, to checking the current status of a company without carrying paper certificates around, and to filing documents online to register companies or amend their particulars.

Then, in August, the register went down for more than two weeks. It turned out that deals could not be completed and that some companies could not pay salaries. Commerce did not stop, but it was hampered by the register's absence.

The register "survived" and left us an important lesson: public registers are extremely important, and their absence creates chaos. The Commercial Register is one of the most important, but it is far from the only one. Other registers of key importance to the state are the national "Population" database maintained by GD "GRAO", the property register, the cadastre, the vehicle register, the register of special pledges, the credit register, the public procurement register, and the register of shareholders at the Central Depository. And there are hundreds more sectoral registers, large and small, in healthcare, justice, tourism, and so on.

These registers are not merely a consequence of the state's desire to control every aspect of public life. To a large extent, they contribute to greater transparency and peace of mind for those involved. The Commercial Register, for example, guarantees that we are doing business with the genuine representatives of a given company. The property register lets us know the full history of a property. The vehicle register enables (even if inefficiently implemented at the moment) enforcement of traffic rules and thus the safety of road users. The credit register lets banks make better assessments of their borrowers. And the population register is a prerequisite for any e-government at all.

A sizable share of these registers are also kept on paper, but in the long run paper will be phased out. That means maintaining the digital infrastructure is becoming an ever more important task. Unfortunately, many of these registers have serious problems: with maintenance, with architecture, with security, and with transparency.

By all indications, the Commercial Register "fell over" because its maintenance had been managed extremely badly. Other registers are built in a way that does not anticipate heavy load, the kind that a working e-government would generate. The security of the data in the registers is also questionable: is the data encrypted, who has access, who can change the data, and does doing so leave a trace? Last but not least is transparency: the registers are nominally electronic, yet they either do not make enough information public or provide it in an overly inconvenient and bureaucratic way.

There are regulations (laws and ordinances), strategies, and projects for solving all these problems. But we could not say that things are improving. In cases like that of the Commercial Register, they may even be getting worse: a crash this serious happened for the first time.

Ultimately, the problem is rooted not simply in the expertise the state lacks, or in its inability to oversee and use the external expertise it buys. The problem is the failure to understand, at the political and managerial level, how important and critical these registers are.

A register is not just a notebook with a few columns, as many people probably imagine, nor even a simple database with a few fields. Registers are a complex system of data and processes, of software and hardware, a system in which current best practices must be applied and which needs constant modernization.

Registers, especially certain ones, are necessary both for e-government and for civil and commercial life. Neglecting and misunderstanding them is a problem with no trivial solution.

(This article was originally published in Manager magazine.)

Marriott Hack Reported as Chinese State-Sponsored

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/marriott_hack_r.html

The New York Times and Reuters are reporting that China was behind the recent hack of Marriott Hotels. Note that this is still unconfirmed, but it is interesting if true.

Reuters:

Private investigators looking into the breach have found hacking tools, techniques and procedures previously used in attacks attributed to Chinese hackers, said three sources who were not authorized to discuss the company’s private probe into the attack.

That suggests that Chinese hackers may have been behind a campaign designed to collect information for use in Beijing’s espionage efforts and not for financial gain, two of the sources said.

While China has emerged as the lead suspect in the case, the sources cautioned it was possible somebody else was behind the hack because other parties had access to the same hacking tools, some of which have previously been posted online.

Identifying the culprit is further complicated by the fact that investigators suspect multiple hacking groups may have simultaneously been inside Starwood’s computer networks since 2014, said one of the sources.

I used to have opinions about whether these attributions are true or not. These days I tend to wait and see.

Making Cluster Updates Easy with Amazon EKS

Post Syndicated from Brandon Chavis original https://aws.amazon.com/blogs/compute/making-cluster-updates-easy-with-amazon-eks/

Kubernetes is rapidly evolving, with frequent feature releases, functionality updates, and bug fixes. Additionally, AWS periodically changes the way it configures Amazon Elastic Container Service for Kubernetes (Amazon EKS) to improve performance, support bug fixes, and enable new functionality. Previously, moving to a new Kubernetes version required you to re-create your cluster and migrate your applications. This is a time-consuming process that can result in application downtime.

Today, I’m excited to announce that EKS now performs managed, in-place cluster upgrades for both Kubernetes and EKS platform versions. This simplifies cluster operations and lets you quickly take advantage of the latest Kubernetes features, as well as the updates to EKS configuration and security patches, without any downtime. EKS also now supports Kubernetes version 1.11.5 for all new EKS clusters.

Updates for Kubernetes and EKS

There are two types of updates that you can apply to your EKS cluster: Kubernetes version updates and EKS platform version updates. Today, EKS supports upgrades between Kubernetes minor versions 1.10 and 1.11.

As new Kubernetes versions are released and validated for use with EKS, we will support three stable Kubernetes versions as part of the update process at any given time.

EKS platform versions

The EKS platform version contains Kubernetes patches and changes to the API server configuration. Platform versions are separate from but associated with Kubernetes minor versions.

When a new Kubernetes version is made available for EKS, its initial control plane configuration is released as the “eks.1” platform version. AWS releases new platform versions as needed to enable Kubernetes patches. AWS also releases new versions when there are EKS API server configuration changes that could affect cluster behavior.

Using this versioning scheme makes it possible to independently update the configuration of different Kubernetes versions. For example, AWS might need to release a patch for Kubernetes version 1.10 that is incompatible with Kubernetes version 1.11.

Currently, platform version updates are automatic. AWS plans to provide manual control over platform version updates through the UpdateClusterVersion API operation in the future.

Using the update API operations

There are three new EKS API operations to enable cluster updates:

  • UpdateClusterVersion
  • ListUpdates
  • DescribeUpdate

The UpdateClusterVersion operation can be used through the EKS CLI to start a cluster update between Kubernetes minor versions:

aws eks update-cluster-version --name Your-EKS-Cluster --kubernetes-version 1.11

You only need to pass in a cluster name and the desired Kubernetes version. You do not need to pick a specific patch version for Kubernetes. We pick patch versions that are stable and well-tested. This CLI command returns an “update” API object with several important pieces of information:

{
    "update" : {
        "updateId" : "UUID",
        "updateStatus" : "PENDING",
        "updateType" : "VERSION-UPDATE",
        "createdAt" : "Timestamp"
    }
}

This update object lets you track the status of your requested modification to your cluster. It can show you whether there was an error due to a misconfiguration on your cluster, and whether the update is pending, in progress, completed, or failed.

You can also list and describe the status of the update independently, using the following operations:

aws eks list-updates --name Your-EKS-Cluster

This returns the in-flight updates for your cluster:

{
    "updates" : [
        "UUID-1",
        "UUID-2"
    ],
    "nextToken" : null
}

Finally, you can also describe a particular update to see details about the update’s status:

aws eks describe-update --name Your-EKS-Cluster --update-id UUID

{
    "update" : {
        "updateId" : "UUID",
        "updateStatus" : "FAILED",
        "updateType" : "VERSION-UPDATE",
        "createdAt" : "Timestamp",
        "error" : {
            "errorCode" : "DependentResourceNotFound",
            "errorMessage" : "The Role used for creating the cluster is deleted.",
            "resources" : ["aws:iam:arn:role"]
        }
    }
}
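
Because these operations are asynchronous, automation typically starts an update and then polls its status. Here is a minimal sketch using the AWS SDK for Python (boto3), assuming configured credentials; note that the SDK surfaces the update's fields as id, status, type, and errors, slightly different from the raw output shown above:

import time

import boto3

eks = boto3.client("eks")
cluster = "Your-EKS-Cluster"

# Start the minor-version update (equivalent to the CLI call above).
update_id = eks.update_cluster_version(name=cluster, version="1.11")["update"]["id"]

# Poll until the update leaves the in-progress state.
while True:
    update = eks.describe_update(name=cluster, updateId=update_id)["update"]
    if update["status"] != "InProgress":
        break
    time.sleep(30)

print("Update", update_id, "finished with status", update["status"])
print("Errors:", update.get("errors", []))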

Considerations when updating

New Kubernetes versions introduce significant changes. I highly recommend that you test the behavior of your application against a new Kubernetes version before performing the update on a production cluster.

Generally, I recommend integrating EKS into your existing CI workflow to test how your application behaves on a new version before updating your production clusters.

Worker node updates

Today, EKS does not update your Kubernetes worker nodes when you update the EKS control plane. You are responsible for updating EKS worker nodes. You can find an overview of this process in Worker Node Updates.

The EKS team releases a set of EKS-optimized AMIs for worker nodes that correspond with each version of Kubernetes supported by EKS. You can find these AMIs listed in the documentation, and you can find the build configuration in a version-specific branch of the Amazon-EKS-AMI GitHub repository.

Getting started

You can start using Kubernetes version 1.11 today for all new EKS clusters. Use cluster updates to move to version 1.11 for all existing EKS clusters. You can learn more about the update process and APIs in our documentation.

New podcast: VP of Security answers your compliance and data privacy questions

Post Syndicated from Katie Doptis original https://aws.amazon.com/blogs/security/new-podcast-vp-of-security-answers-your-compliance-and-data-privacy-questions/

Does AWS comply with X program? How about GDPR? What about after Brexit? And what happens with machine learning data?

In the latest AWS Security & Compliance Podcast, we sit down with VP of Security Chad Woolf, who answers your compliance and data privacy questions, including one of the most frequently asked questions from customers around the world: how many compliance programs does AWS have, attest to, and audit against?

Chad also shares what it was like to work at AWS in the early days. When he joined, AWS was housed on just a handful of floors, in a single building. Over the course of nearly nine years with the company, he has witnessed tremendous growth of the business and industry.

Listen to the podcast and hear about company history and get answers to your tough questions. If you have a compliance or data privacy question, you can submit it through our contact us form.

Want more AWS news? Follow us on Twitter.

[$] DMA and get_user_pages()

Post Syndicated from jake original https://lwn.net/Articles/774411/rss

In the RDMA microconference of the 2018 Linux Plumbers Conference (LPC),
John Hubbard, Dan Williams, and Matthew Wilcox led a discussion on the
problems surrounding get_user_pages() (and friends) and the
interaction with DMA. It is not the first time the topic has come up,
there was also a discussion about it at the
Linux Storage, Filesystem, and Memory-Management Summit back in April. In
a nutshell, the problem is that multiple parts of the kernel think they
have responsibility for the same chunk of memory, but they do not
coordinate their activities; as might be guessed, mayhem can sometimes ensue.

The x32 subarchitecture may be removed

Post Syndicated from corbet original https://lwn.net/Articles/774734/rss

The x32 subarchitecture
is a software variant of x86-64; it runs the processor in the 64-bit mode,
but uses 32-bit pointers and arithmetic. The idea is to get the advantages
of x86-64 without the extra memory usage that goes along with it. It
seems, though, that x32 is not much appreciated; few distributions support
it and the number of users appears to be small. So now Andy Lutomirski is
proposing its eventual removal:

I propose that we make CONFIG_X86_X32 depend on BROKEN for a release
or two and then remove all the code if no one complains. If anyone
wants to re-add it, IMO they’re welcome to do so, but they need to do
it in a way that is maintainable.

If there are x32 users out there, now would be a good time for them to
speak up.

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/774731/rss

Security updates have been issued by Arch Linux (chromium, firefox, lib32-openssl, lib32-openssl-1.0, openssl, openssl-1.0, texlive-bin, and wireshark-cli), Fedora (perl), openSUSE (pdns), Oracle (kernel), Red Hat (kernel), Slackware (mozilla), SUSE (kernel, postgresql10, qemu, and xen), and Ubuntu (firefox, freerdp, freerdp2, pixman, and poppler).

New Australian Backdoor Law

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/12/new_australian_.html

Last week, Australia passed a law giving the government the ability to demand backdoors in computers and communications systems. Details are still to be defined, but it’s really bad.

Note: Many people e-mailed me to ask why I haven’t blogged this yet. One, I was busy with other things. And two, there’s nothing I can say that I haven’t said many times before.

If there are more good links or commentary, please post them in the comments.

EDITED TO ADD (12/13): The Australian government response is kind of embarrassing.

Creating an opportunistic IPSec mesh between EC2 instances

Post Syndicated from Vesselin Tzvetkov original https://aws.amazon.com/blogs/security/creating-an-opportunistic-ipsec-mesh-between-ec2-instances/

IPSec diagram

IPSec (IP Security) is a protocol for in-transit data protection between hosts. Configuring site-to-site IPSec between multiple hosts can be an error-prone and labor-intensive task. If you need to protect N EC2 instances, you need a full mesh of N*(N-1) IPSec tunnels; with 20 instances, for example, that is already 380 tunnels. You must manually propagate every IP change to all instances, configure credentials and configuration changes, and integrate monitoring and metrics into the operation. The effort required to keep the full-mesh parameters in sync is enormous.

Full mesh IPSec, known as any-to-any, builds an underlying network layer that protects application communication. Common use cases are:

  • You’re migrating legacy applications to AWS, and they don’t support encryption. Examples of protocols without encryption are File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) or Lightweight Directory Access Protocol (LDAP).
  • You’re offloading protection to IPSec to take advantage of fast Linux kernel encryption and automated certificate management, the use case we focus on in this solution.
  • You want to segregate duties between your application development and infrastructure security teams.
  • You want to protect container or application communication that leaves an EC2 instance.

In this post, I’ll show you how to build an opportunistic IPSec mesh that sets up dynamic IPSec tunnels between your Amazon Elastic Compute Cloud (EC2) instances. The IPSec layer is based on Libreswan, an open-source project implementing opportunistic IPSec encryption (IKEv2 and IPSec) on a large scale.

Solution benefits and deliverable

The solution delivers the following benefits (versus manual site-to-site IPSec setup):

  • Automatic configuration of opportunistic IPSec upon EC2 launch.
  • Generation of instance certificates and weekly re-enrollment.
  • IPSec Monitoring metrics in Amazon CloudWatch for each EC2 instance.
  • Alarms for failures via CloudWatch and Amazon Simple Notification Service (Amazon SNS).
  • An initial generation of a CA root key if needed, including IAM Policies and two customer master keys (CMKs) that will protect the CA key and instance key.

Out of scope

This solution does not deliver IPSec protection between EC2 instances and on-premises hosts, or between EC2 instances and managed AWS components like Elastic Load Balancing, Amazon Relational Database Service, or Amazon Kinesis. Your EC2 instances must have general IP connectivity to each other, as permitted by your NACLs and security groups. This solution does not deliver extra connectivity the way VPC peering or Transit VPC can.

Prerequisites

You’ll need the following resources to deploy the solution:

  • A trusted Unix/Linux/MacOS machine with AWS SDK for Python and OpenSSL
  • AWS admin rights in your AWS account (including API access)
  • AWS Systems Manager on EC2
  • Linux RedHat, Amazon Linux 2, or CentOS installed on the EC2 instances you want to configure
  • Internet access on the EC2 instances for downloading Linux packages and reaching AWS Systems Manager endpoint
  • The AWS services used by the solution, which are AWS Lambda, AWS Key Management Service (AWS KMS), AWS Identity and Access Management (IAM), AWS Systems Manager, Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), and Amazon SNS

Solution and performance costs

My solution incurs no charges beyond those of the standard AWS services involved, since it uses well-established open source software. The AWS services involved are as follows:

  • AWS Lambda is used to issue the certificates. Per EC2 instance per week, I estimate two 30-second Lambda invocations with 256 MB of allocated memory. For 100 EC2 instances, the cost will be several cents. See AWS Lambda Pricing for details.
  • Certificates have no charge, since they’re issued by the Lambda function.
  • CloudWatch Events and Amazon S3 Storage usage are within the free tier policy.
  • AWS Systems Manager has no additional charge.
  • AWS EC2 is a standard AWS service on which you deploy your workload. There are no charges for IPSec encryption.
  • EC2 CPU performance decrease due to encryption is negligible since we use hardware encryption support of the Linux kernel. The IKE negotiation that is done by the OS in your CPU may add minimal CPU overhead depending on the number of EC2 instances involved.

Installation (one-time setup)

To get started, on a trusted Unix/Linux/MacOS machine that has admin access to your AWS account and AWS SDK for Python already installed, complete the following steps:

  1. Download the installation package from https://github.com/aws-quickstart/quickstart-ec2-ipsec-mesh.
  2. Edit the following files in the package to match your network setup:
    • config/private should contain all networks with mandatory IPSec protection, such as EC2s that should only be communicated with via IPSec. All of these hosts must have IPSec installed.
    • config/clear should contain any networks that do not need IPSec protection. For example, these might include Route 53 (DNS), Elastic Load Balancing, or Amazon Relational Database Service (Amazon RDS).
    • config/clear-or-private should contain networks with optional IPSec protection. These networks will start clear and attempt to add IPSec.
    • config/private-or-clear should also contain networks with optional IPSec protection. However, these networks will start with IPSec and fail back to clear.
  3. Execute ./aws_setup.py and carefully set and verify the parameters. Use -h to view help. If you don’t provide customized options, default values will be generated. The parameters are:
    • Region to install the solution (default: your AWS Command Line Interface region)
    • Buckets for configuration, sources, published host certificates and CA storage. (Default: random values that follow the pattern ipsec-{hostcerts|cacrypto|sources}-{stackname} will be generated.) If the buckets do not exist, they will be automatically created.
    • Reuse of an existing CA? (default: no)
    • Leave encrypted backup copy of the CA key? The password will be printed to stdout (default: no)
    • CloudFormation stack name (default: ipsec-{random string}).
    • Restrict provisioning to certain VPC (default: any)

     
    Here is an example output:

    
                ./aws_setup.py  -r ca-central-1 -p ipsec-host-v -c ipsec-crypto-v -s ipsec-source-v
                Provisioning IPsec-Mesh version 0.1
                
                Use --help for more options
                
                Arguments:
                ----------------------------
                Region:                       ca-central-1
                Vpc ID:                       any
                Hostcerts bucket:             ipsec-host-v
                CA crypto bucket:             ipsec-crypto-v
                Conf and sources bucket:      ipsec-source-v
                CA use existing:              no
                Leave CA key in local folder: no
                AWS stackname:                ipsec-efxqqfwy
                ---------------------------- 
                Do you want to proceed ? [yes|no]: yes
                The bucket ipsec-source-v already exists
                File config/clear uploaded in bucket ipsec-source-v
                File config/private uploaded in bucket ipsec-source-v
                File config/clear-or-private uploaded in bucket ipsec-source-v
                File config/private-or-clear uploaded in bucket ipsec-source-v
                File config/oe-cert.conf uploaded in bucket ipsec-source-v
                File sources/enroll_cert_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/generate_certifcate_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/ipsec_setup_lambda_function.zip uploaded in bucket ipsec-source-v
                File sources/cron.txt uploaded in bucket ipsec-source-v
                File sources/cronIPSecStats.sh uploaded in bucket ipsec-source-v
                File sources/ipsecSetup.yaml uploaded in bucket ipsec-source-v
                File sources/setup_ipsec.sh uploaded in bucket ipsec-source-v
                File README.md uploaded in bucket ipsec-source-v
                File aws_setup.py uploaded in bucket ipsec-source-v
                The bucket ipsec-host-v already exists
                Stack ipsec-efxqqfwy creation started. Waiting to finish (ca 3-5 min)
                Created CA CMK key arn:aws:kms:ca-central-1:123456789012:key/abcdefgh-1234-1234-1234-abcdefgh123456
                Certificate generation lambda arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy
                Generating RSA private key, 4096 bit long modulus
                .............................++
                .................................................................................................................................................................................................................................................................................................................................................................................++
                e is 65537 (0x10001)
                Certificate and key generated. Subject CN=ipsec.ca-central-1 Valid 10 years
                The bucket ipsec-crypto-v already exists
                Encrypted CA key uploaded in bucket ipsec-crypto-v
                CA cert uploaded in bucket ipsec-crypto-v
                CA cert and key remove from local folder
                Lambda function arn:aws:lambda:ca-central-1:123456789012:function:GenerateCertificate-ipsec-efxqqfwy updated
                Resource policy for CA CMK hardened - removed action kms:encrypt
                
                done :-)
            

Launching the EC2 Instance

Now that you’ve installed the solution, you can start launching EC2 instances. From the EC2 Launch Wizard, execute the following steps. The instructions assume that you’re using RedHat, Amazon Linux 2, or CentOS.

Note: Steps or details that I don’t explicitly mention can be set to default (or according to your needs).

  1. Select the IAM Role already configured by the solution with the pattern Ec2IPsec-{stackname}
     
    Figure 1: Select the IAM Role

  2. (You can skip this step if you are using Amazon Linux 2.) Under Advanced Details, select User data as text and activate the AWS Systems Manager Agent (SSM Agent) by providing the following string (for RedHat and CentOS 64 Bits only):
    
        #!/bin/bash
        sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        sudo systemctl start amazon-ssm-agent
        

     

    Figure 2: Select User data as text and activate the AWS Systems Manager Agent

  3. Set the tag name to IPSec with the value todo. This is the identifier that triggers the installation and management of IPsec on the instance.
     
    Figure 3: Set the tag name to “IPSec” with the value “todo”

  4. On the Configuration page for the security group, allow ESP (protocol 50) and IKE (UDP 500) for your network, like 172.31.0.0/16. You need to enter these values as shown in the following screen; a scripted alternative is sketched after this list:
     
    Figure 4: Enter values on the “Configuration” page
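
If you provision instances with scripts rather than the Launch Wizard, the tag from step 3 and the security group rules from step 4 can also be applied through the AWS SDK. A minimal boto3 sketch, with hypothetical instance and security group IDs:

import boto3

ec2 = boto3.client("ec2")

# Step 3 equivalent: tag the instance so the solution picks it up.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # hypothetical instance ID
    Tags=[{"Key": "IPSec", "Value": "todo"}],
)

# Step 4 equivalent: allow ESP (protocol 50) and IKE (UDP 500) from your network.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[
        {"IpProtocol": "50", "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
        {"IpProtocol": "udp", "FromPort": 500, "ToPort": 500,
         "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
    ],
)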

After 1-2 minutes, the value of the IPSec instance tag will change to enabled, meaning the instance is successfully set up.
 

Figure 5: Look for the “enabled” value for the IPSec key
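
If you automate launches, you can wait for that tag transition instead of watching the console. A small polling sketch, with a hypothetical instance ID:

import time

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical

# Poll the IPSec tag until the setup Lambda flips it to "enabled".
while True:
    tags = ec2.describe_tags(Filters=[
        {"Name": "resource-id", "Values": [instance_id]},
        {"Name": "key", "Values": ["IPSec"]},
    ])["Tags"]
    if tags and tags[0]["Value"] == "enabled":
        break
    time.sleep(15)

print("IPSec is set up on", instance_id)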

So what’s happening in the background?

 

Figure 6: Architectural diagram

As illustrated in the solution architecture diagram, the following steps are executed automatically in the background by the solution:

  1. An EC2 launch triggers a CloudWatch event, which launches an IPSecSetup Lambda function.
  2. The IPSecSetup Lambda function checks whether the EC2 instance has the tag IPSec:todo. If the tag is present, the Lambda function issues a certificate by calling the GenerateCertificate Lambda function.
  3. The GenerateCertificate Lambda function downloads the encrypted CA certificate and key.
  4. The GenerateCertificate Lambda function decrypts the CA key with a customer master key (CMK).
  5. The GenerateCertificate Lambda function issues a host certificate to the EC2 instance. It encrypts the host certificate and key with a KMS generated random secret in PKCS12 structure. The secret is envelope-encrypted with a dedicated CMK.
  6. The GenerateCertificate Lambda function publishes the issued certificates to your dedicated bucket for documentation.
  7. The IPSecSetup Lambda function then runs the installation on the instance via SSM.
  8. The installation downloads the configuration and installs python, aws-sdk, libreswan, and curl if needed.
  9. The EC2 instance decrypts the host key with the dedicated CMK and installs it in the IPSec database.
  10. A weekly scheduled event triggers reenrollment of the certificates via the Reenrollcertificates Lambda function.
  11. The Reenrollcertificates Lambda function triggers the IPSecSetup Lambda (call event type: execution). The IPSecSetup Lambda will renew the certificate only, leaving the rest of the configuration untouched.
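To make steps 4 and 5 more concrete, here is a minimal sketch of the envelope-encryption pattern with KMS. The CMK alias is a placeholder; the solution's actual key IDs and storage layout may differ:

        import boto3

        kms = boto3.client("kms")

        # Ask KMS for a 128-byte data key under a CMK (alias is hypothetical).
        # Plaintext is used locally as the PKCS12 secret; CiphertextBlob is the
        # envelope-encrypted copy that is safe to persist.
        resp = kms.generate_data_key(KeyId="alias/ipsec-cert-cmk", NumberOfBytes=128)
        pkcs12_secret = resp["Plaintext"]          # protect the host key/cert with this
        encrypted_secret = resp["CiphertextBlob"]  # store this next to the PKCS12 file

        # Later, only a principal allowed by the CMK's resource policy can recover it:
        recovered = kms.decrypt(CiphertextBlob=encrypted_secret)["Plaintext"]
        assert recovered == pkcs12_secret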

Testing the connection on the EC2 instance

You can log in to the instance and ping one of the hosts in your network. This will trigger the IPSec connection, and you should see successful replies. Note that the first packet may be lost while the tunnel is established, which is why the sample output below starts at icmp_seq=2.


        $ ping 172.31.1.26
        
        PING 172.31.1.26 (172.31.1.26) 56(84) bytes of data.
        64 bytes from 172.31.1.26: icmp_seq=2 ttl=255 time=0.722 ms
        64 bytes from 172.31.1.26: icmp_seq=3 ttl=255 time=0.483 ms
        

To see a list of IPSec tunnels, you can execute the following:


        sudo ipsec whack --trafficstatus
        

Here is an example of the execution:
 

Figure 7: Example execution

Changing your configuration or installing it on already running instances

All configuration is stored in the source bucket (default: ipsec-source prefix) as standard libreswan configuration files. If you need to change the configuration, follow these steps:

  1. Review and update the following files:
    1. oe-conf, which is the configuration for libreswan
    2. clear, private, private-to-clear, and clear-to-ipsec, which should contain your network ranges.
  2. Change the tag for the IPSec instance to IPSec:todo.
  3. Stop and start the instance (don’t reboot it). This will retrigger the setup of the instance.
     
    Figure 8: Stop and start the instance

    1. As an alternative to step 3, if you prefer not to stop and start the instance, you can invoke the IPSecSetup Lambda function via a test event in the following JSON format (a scripted invocation sketch follows Figure 9):
      
                      { "detail" :  
                          { "instance-id": "YOUR_INSTANCE_ID" }
                      }
              

      A sample of test event creation in the Lambda Design window is shown below:
       

      Figure 9: Sample test event creation
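If you would rather trigger the function from a script than from the console's test event window, a boto3 sketch could look like this; the function name and instance ID are placeholders, so look up the real IPSecSetup function name in your stack:

        import json
        import boto3

        lam = boto3.client("lambda")

        # Hypothetical function name; the stack-created name will differ.
        resp = lam.invoke(
            FunctionName="IPSecSetup-mystack",
            Payload=json.dumps({"detail": {"instance-id": "i-0123456789abcdef0"}}).encode(),
        )
        print(resp["StatusCode"])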

Monitoring and alarms

The solution delivers IPSec/IKE metrics and raises SNS alarms when errors occur. To monitor your IPSec environment, you can use Amazon CloudWatch, where you can see metrics for active IPSec sessions, IKE/ESP errors, and connection shunts.
 

Figure 10: View metrics for active IPSec sessions, IKE/ESP errors, and connection shunts

There are two SNS topics with alarms configured for IPSec setup failure and certificate reenrollment failure. When a failure occurs, you will see an alarm and an SNS message. It’s very important that your administrator subscribes to these notifications so that you can react quickly (a subscription sketch follows Figure 11). If you receive an alarm, use the information in the “Troubleshooting” section of this post, below.
 

Figure 11: Alarms
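Subscribing an administrator to the alarm topics can be scripted. A minimal sketch, assuming a placeholder topic ARN and address (find the real topic ARNs created by the stack in the SNS console):

        import boto3

        sns = boto3.client("sns")

        # Topic ARN and e-mail address are placeholders.
        sns.subscribe(
            TopicArn="arn:aws:sns:eu-west-1:123456789012:IPSecSetupFailure",
            Protocol="email",
            Endpoint="ops-team@example.com",
        )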

Troubleshooting

Below, I’ve listed some common errors and how to troubleshoot them:
 

The IPSec tag doesn’t change to IPSec:enabled upon EC2 launch.

  1. Wait 2 minutes after the EC2 instance launches, so that it becomes reachable for AWS SSM.
  2. Check that the EC2 instance has the right role assigned for the SSM Agent. The role, named Ec2IPsec-{stackname}, is provisioned by the solution.
  3. Check that the SSM Agent is reachable via a NAT gateway, an Internet gateway, or a private SSM endpoint (see the reachability check sketch after this list).
  4. For CentOS and Red Hat, check that you’ve installed the SSM Agent. See “Launching the EC2 instance.”
  5. Check the output of the SSM Agent command execution in the EC2 service.
  6. Check the IPSecSetup Lambda logs in CloudWatch for details.
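One way to perform the reachability check in step 3 from a script is to query SSM for the instance; a sketch, with a placeholder instance ID:

        import boto3

        ssm = boto3.client("ssm")

        # An instance that is reachable by SSM shows up here with PingStatus "Online".
        resp = ssm.describe_instance_information(
            Filters=[{"Key": "InstanceIds", "Values": ["i-0123456789abcdef0"]}]
        )
        for info in resp["InstanceInformationList"]:
            print(info["InstanceId"], info["PingStatus"])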

The IPSec connection is lost after a few hours and can only be established from one host (in one direction).

  1. Check that your security groups allow the ESP protocol and UDP 500 in both directions. Security groups are stateful, so they may currently allow IPSec establishment in only a single direction (a rule sketch follows this list).
  2. Check that your network ACL allows UDP 500 and the ESP protocol.
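A sketch of adding the two required ingress rules with boto3; the security group ID and CIDR are placeholders for your own values:

        import boto3

        ec2 = boto3.client("ec2")

        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",
            IpPermissions=[
                # ESP is IP protocol 50; it has no ports.
                {"IpProtocol": "50", "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
                # IKE negotiates over UDP 500.
                {"IpProtocol": "udp", "FromPort": 500, "ToPort": 500,
                 "IpRanges": [{"CidrIp": "172.31.0.0/16"}]},
            ],
        )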

The SNS alarm on IPSec reenrollment is triggered, but everything seems to work fine.

  1. Certificates are valid for 30 days and rotated every week. If the rotation fails, you have three weeks to fix the problem.
  2. Check that the EC2 instances are reachable over AWS SSM. If reachable, trigger the certificate rotation Lambda again.
  3. See the IPSecSetup Lambda logs in CloudWatch for details.

Route 53 DNS, RDS, and other managed services are not reachable.

  1. DNS, RDS, and other managed services do not support IPSec. You need to exclude them from encryption by listing them in the config/clear list. For more details, see step 2 of the Installation (one-time setup) section of this blog.

Here are some additional general IPSec commands for troubleshooting:

You can stop IPSec by executing the following Unix command:


        sudo ipsec stop 
        

If you want to stop IPSec on all instances, you can execute this command via AWS Systems Manager on all instances with the tag IPSec:enabled (a Run Command sketch follows). Keep in mind that stopping encryption means all traffic will be sent unencrypted.
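A sketch of that fleet-wide stop via Systems Manager Run Command, targeting the IPSec:enabled tag:

        import boto3

        ssm = boto3.client("ssm")

        # Runs "sudo ipsec stop" on every instance tagged IPSec=enabled.
        # Remember: after this, traffic from these hosts is sent unencrypted.
        ssm.send_command(
            Targets=[{"Key": "tag:IPSec", "Values": ["enabled"]}],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["sudo ipsec stop"]},
        )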

If you want fail-open behavior, meaning that data is sent unencrypted when IKE (IPSec) negotiation fails, configure your network in config/private-or-clear as described in step 2 of the Installation (one-time setup) section.

You can debug IPSec issues using libreswan commands. For example:


        sudo ipsec status

        sudo ipsec whack --debug-all

        sudo ipsec barf
        

Security

The CA key is encrypted using Advanced Encryption Standard (AES) 256 CBC with a 128-byte secret and stored in a bucket with server-side encryption (SSE). The secret is envelope-encrypted with a CMK following the AWS KMS pattern. The KMS resource policy ensures that only the certificate-issuing Lambda function can decrypt the secret. The encrypted secret for the CA key is set in an encrypted environment variable of the certificate-issuing Lambda function.

The IPSec host private key is generated by the certificate-issuing Lambda function. The private key and certificate are encrypted with AES 256 CBC (PKCS12) and protected with a 128-byte secret generated by KMS. The secret is envelope-encrypted with a user CMK. Only EC2 instances with the attached IPSec IAM policy can decrypt the secret and private key (a decryption sketch follows).
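As an illustration of that instance-side flow, a sketch like the following could decrypt the envelope-encrypted secret and open the PKCS12 container. The file names are hypothetical, and the decrypt call succeeds only for principals permitted by the CMK's resource policy:

        import boto3
        from cryptography.hazmat.primitives.serialization import pkcs12

        kms = boto3.client("kms")

        # Hypothetical file names for the envelope-encrypted secret and PKCS12 bundle.
        with open("host-secret.enc", "rb") as f:
            encrypted_secret = f.read()

        passphrase = kms.decrypt(CiphertextBlob=encrypted_secret)["Plaintext"]

        with open("host.p12", "rb") as f:
            key, cert, chain = pkcs12.load_key_and_certificates(f.read(), passphrase)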

Certificate issuance is a fully synchronous call: one request and one corresponding response, without any polling or callbacks. The host private key is never stored in a database or an S3 bucket.

The issued certificates are valid for 30 days and are stored for auditing purposes in a certificates bucket without a private key.

Alternate subject names and multiple interfaces or secondary IPs

The certificate subject name and subjectAltName attribute contain the private Domain Name System (DNS) name of the EC2 instance and all private IPs assigned to it (across interfaces, primary, and secondary IPs).
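To verify those subject alternative names on an issued certificate, you could inspect it with the Python cryptography package; the file name is hypothetical, for example a certificate downloaded from the certificates bucket:

        from cryptography import x509

        with open("host-cert.pem", "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())

        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
        print(san.get_values_for_type(x509.DNSName))    # private DNS name of the instance
        print(san.get_values_for_type(x509.IPAddress))  # primary and secondary private IPs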

The provided default libreswan configuration covers a single interface. You can adjust the configuration according to the libreswan documentation to cover multiple interfaces, for example for Amazon Elastic Container Service for Kubernetes (Amazon EKS).

Conclusion

With the solution in this blog post, you can automate the process of building an encrypted IPSec layer for your EC2 instances to protect your workloads. You don’t need to worry about configuring certificates, monitoring, or alerting. The solution uses a combination of AWS KMS, IAM, AWS Lambda, CloudWatch, and the libreswan implementation. If you need libreswan support, use the mailing list or GitHub. The AWS forums can give you more information on KMS and IAM. If you require a special enterprise enhancement, contact AWS Professional Services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Vesselin Tzvetkov

Vesselin is a senior security consultant at AWS Professional Services and is passionate about security architecture and engineering innovative solutions. Outside of technology, he likes classical music, philosophy, and sports. He holds a Ph.D. in security from TU-Darmstadt and an M.S. in electrical engineering from Bochum University in Germany.

Minecraft-controlled real world Christmas tree

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/minecraft-controlled-christmas-tree/

Interact with the real world via the block world, with the Minecraft-controlled Christmas tree from the team at BroCraft Gaming.

Illuminating

David Stevens of BroCraft Gaming reached out to us last month to let us know about the real-life Christmas tree he and his team were planning to hack using Minecraft. Intriguing? Obviously. And after a few more emails, David has been back in touch to let us know the tree hack is now live and ready for the world to interact with.

Here’s a blurb from the BroCraft team:

Join our Minecraft server at brocraftlive.net, complete the tutorial if you haven’t already, and type /mcct to join our snowy wonderland. Collect power from power blocks dotted everywhere, then select a pattern with the Technician, and watch as the tree lights up on the camera stream LIVE before your very eyes! Visit the attractions, play our minigames, and find out what else our server has to offer.

The tree uses individually addressable LEDs and the Adafruit Neopixel Python library. And with the help of a bespoke Java plugin, all instructions from within the Minecraft server are fed to the lights via a Raspberry Pi.
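The BroCraft team hasn't published their code, but driving such a tree from Python might look roughly like this sketch using the CircuitPython NeoPixel library; the pin, LED count, and colour are guesses, not details from the project:

        import board
        import neopixel

        # Hypothetical wiring: 100 individually addressable LEDs on GPIO 18.
        pixels = neopixel.NeoPixel(board.D18, 100, auto_write=False)

        def show_pattern(colour):
            """Fill the whole tree with one colour, as a command from the
            Minecraft server's Java plugin might request."""
            pixels.fill(colour)
            pixels.show()

        show_pattern((255, 0, 0))  # festive red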

You can view the live Christmas tree camera stream here, along with a brief FAQ on interacting with the tree within the BroCraft Minecraft server.

Minecraft Pi

You’ll need access to Minecraft to be able to interact with the tree. And, lucky for you, Minecraft Pi comes free with Raspbian on the Raspberry Pi!

To flash the Raspbian image onto an SD card, follow this video tutorial from the team at The MagPi. And to get more acquainted with Minecraft on the Raspberry Pi, check out our free resources, including the getting started guide, Minecraft selfies, and the big Minecraft piano.



Find more free Raspberry Pi resources on our projects site, and immerse yourself even further into the world of Minecraft Pi with The MagPi’s Hacking and Making in Minecraft Essentials Guide, available in print and as a free PDF download!

The post Minecraft-controlled real world Christmas tree appeared first on Raspberry Pi.

A few numbers on stillbirths in Sliven

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2018/martvorodeni-sliven/

Following the latest fatal case in Sliven and the criticism of obstetric care there, I dug into the data. In 2017, there were 2391 births in Sliven. 19 of them were stillbirths. That means almost 8 per 1000 births. For comparison, the national average is 6.28. Last year there were 404 stillborn children in the country. This year, up to the beginning of October, there were 243.

At first glance, the region shows some improvement over the last 18 years. Even so, the problem is far more serious than the national average. The peak was in 2008, when there were 38 stillbirths. Here I've shown the trend over the years. The numbers for 2018 are based only on the first 9 months of the year.

Since the number of births is not large, and stillbirths themselves are few in absolute terms, the apparent decline may be deceptive. Every single case has a serious effect on this index, so with a small sample it is hard to judge. This also explains the large variations between years in Sliven, something not observed at the national level.

Another factor is how the definitions of what counts as a miscarriage, a stillbirth, and a child that died after birth have changed over the years. From a personal standpoint, this mattered to the mothers. For the statistics, the effect was to shift some cases between these indicators. These changes create a so-called break in sequence in the data and make comparisons between years somewhat harder. Here, for example, is the absolute number of stillbirths.

An interesting argument I often see is that many of the fatal cases are due to the "predominant" number of underage mothers, mostly of Roma origin, in Sliven. While it is true that a good share of the births there are to people of that ethnicity, it is also true that a large part of the region's population is Roma. That, along with slightly above-average Roma fertility, explains the higher number of births.

What is not true, however, is that the majority are underage. I have already written about this topic in detail. For Sliven specifically, the average age at the birth of a first child is 23.1 years, and at the birth of any child, 25.2. Probably because of this argument, by the way, almost all news broadcasts pointedly noted at the very start of the story that the mother was 27 and that this was her first child. Perhaps to counter the presumption that she was from a "certain demographic" and to get their readers to care enough to read on.

And the problem is serious. Although infant and maternal mortality are statistically declining, they remain well above the European average. In some regions the causes are indeed specific, but overall there are serious problems with prenatal monitoring, health education, people heeding nonsense on the internet, the approach of the obstetricians and doctors themselves, and, not least, hospital-acquired infections. The health authorities bear responsibility for all of these to one degree or another, but NGOs and the mothers themselves also play a role. The last two problems, however, are the most serious and are entirely in the hands of the hospitals. Real investigations, accountability, corrective measures, and strict adherence to working protocols are not things our health facilities can boast of. While this is not a problem unique to Bulgaria, there is certainly no visible progress.

More on the topic:
Some data on obstetric care in Bulgaria
This news may harm your health
Preventable death in Bulgaria: only half of what you think
Sources: НСИ, НЦОЗА
