Results: Linux Foundation Technical Board Election 2018

Post Syndicated from corbet original https://lwn.net/Articles/771902/rss

The results of the 2018 election for members of the Linux Foundation’s
Technical Advisory Board have been posted; the members elected this time
around are Chris Mason, Laura Abbott, Olof Johansson, Dan Williams, and
Kees Cook. Abbott and Cook are new to the board.
(The other TAB members are Ted Ts’o, Greg Kroah-Hartman, Jonathan Corbet,
Tim Bird, and Steve Rostedt).

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/771881/rss

Security updates have been issued by Arch Linux (powerdns and powerdns-recursor), Debian (ceph and spamassassin), Fedora (feh, flatpak, and xen), Red Hat (kernel, kernel-rt, openstack-cinder, python-cryptography, and Red Hat Single Sign-On 7.2.5), and Ubuntu (python2.7, python3.4, python3.5).

The National Centre for Computing Education: your questions answered

Post Syndicated from Sue Sentance original https://www.raspberrypi.org/blog/ncce-questions-answers/

Last week was a very exciting week for us, with the announcement of the National Centre for Computing Education: funded programmes for computing teachers and students for the next four years, to really support the growth and profile of our subject. For me and many others involved in this field over the last decade, it’s an amazing opportunity to have this level of financial support for Computing — something we could previously only dream of. Everybody at Raspberry Pi is very excited about being involved in this important work!

Some background

A new Computing curriculum was introduced in England in September 2014, and it comprises three strands: computer science, information technology, and digital literacy. The latter two have been taught in schools for many years, but the computer science strand had not been taught in schools to the pre-16 age group since the 1980s.

Two Royal Society reports have been widely influential. Firstly, the Shut Down or Restart report (2012) instigated the curriculum change. To support teachers implementing the new curriculum, the CAS Network of Excellence received a modest amount of funding from 2013–2018; the network has had a great impact on the field already, but clearly more government input was needed. The second report, After the Reboot (2017), evaluated current computing education in schools in the UK. It highlighted the challenges faced by teachers who felt unprepared to deliver the Computing curriculum, and recommended that significant government funding be provided to support teachers — and this has now happened! The new programme gives us the opportunity to reach all computing teachers, and to make massive improvements to computing education around the country.

What is the National Centre?

The National Centre, together with specific support for GCSE and A-Level Computer Science, is a government-funded programme of training and support for computing education. It will lead to a great education in the subject for every child from the beginning of primary school to the end of secondary school, enabling them to develop the valuable skills they need, whether or not they choose computing-related careers.

Since last week’s announcement, I’ve received lots of questions from teachers and others about exactly what will be happening and who will be doing the work, and I’ve gathered together answers to many of these questions here. Read on to learn more about our plans.

Key Stages 1–3 and non-GCSE Key Stage 4

If you are a primary teacher or a secondary teacher at Key Stage 3 or non-GCSE KS4, delivering Computing, either as a classroom teacher or as a specialist, you will be able to access professional learning opportunities (CPD) and resources in your region. Initially these will be available via partners working with us, and from September 2019, you will be able to access them via 40 Computing Hubs.

You will be able to register for a certificate and work towards it through a range of activities, working with colleagues and in your region. There will also be a range of online courses to support you at your own pace. Some of these are available now, and many more are to be launched over the next two years.

GCSE Computer Science

If you teach GCSE Computer Science, or you’d like to, there is a unique programme just for you. Bursaries will be available to enable you to take a series of face-to-face and online courses that best suit your needs: these will range from courses aimed at the completely new-to-GCSE teacher to advanced courses for more experienced teachers who are aiming to stretch and challenge students and to hone their subject knowledge.

The online courses will be free for everyone, forever. There will be a diagnostic test to help you plan your journey, and a final assessment to measure your success. You’ll be able to sign up for this programme from January.

A Level Computer Science

If you teach A Level Computer Science, or would like to, you will have access to comprehensive resources for students and teachers. There will also be a range of face-to-face events for both students and teachers. These will be starting shortly, so watch out for more news!

It will take a few months for the Computing Hubs and CPD provision to be available at scale, but in the meantime, there is much within our existing networks that computing teachers can engage with right now: CAS hubs and other events, Code Clubs in schools, STEM Learning training, and our online courses are some examples.

Building our team

We also announced last week that we are looking for new team members to implement our part of the work.

Developing resources, courses, and publications

Our role involves developing a comprehensive set of resources, lesson plans, and schemes of work from Key Stages 1–4, drawing on the best of existing materials plus some new ones. We will also develop all the online courses. We need content writers to help us with both of these areas. We are working on producing newsletters, case studies, and other publications about evidence-based practice, and this will also be part of the new team’s work. At the Raspberry Pi Foundation, we will be leading on the A Level Computer Science programme content, so we have opportunities for people with the skills and experience to focus on this area.

Many of these roles are available if you want to work remotely, but more senior jobs will involve regular days in Cambridge. We also have fixed-term, part-time work available. You can find all our current job openings on this page.

Finally, as a team, we want to visit lots of schools to see what you need and listen to your thoughts, so that we can get our work right for you. If you’d like to support us in that, please get in touch by emailing [email protected].

Hubs, face-to-face training, and certification

STEM Learning, one of our two consortium partners, will be commissioning the 40 Hubs, and they will also be responsible for face-to-face training. The Hubs will become centres of excellence for computing, where teachers can find regional support. Existing CAS (Computing At School) communities will be linked to the 40 Hubs, and CAS Hubs will also play a really important part in the new structure. Our other partner, BCS, will be supporting certification, building on the work they have already done with the BCS Certificate in Computer Science Teaching.

You will be able to access everything you need on the website of the National Centre for Computing Education, where you’ll soon be able to learn where to find your Computing Hub or local CAS communities and discover what is happening in your region.

Across the consortium we have teams of people who are deeply committed to computing, to Computing At School (CAS), and to teaching; most of us have recent teaching experience ourselves. Our first priority is to work with teachers collegially to meet your needs and make life easier for you. So follow the National Centre on Twitter, talk to us, and give us your feedback!

Outside England?

This post has been all about teachers in England, but our free online resources will be available to anyone, anywhere in the world. If you want to talk to us about the needs in your country, do get in touch.

AWS Security Profiles: Eric Brandwine, VP and Distinguished Engineer

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-eric-brandwine-vp-and-distinguished-engineer/

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been at AWS for 11 years, and I report to Steve Schmidt, who’s our Chief Information Security Officer. I’m his Chief Engineer. I engage across the entire AWS Security Org to help keep the bar high. What we do all day, every day, is help AWS employees and customers figure out how to securely build what they need to build.

What part of your job do you enjoy the most?

It’s a trite answer, but it’s true: delivery. I enjoy impacting the customer experience. Often, the way security impacts the customer experience is by not impacting it, but we’re starting to launch a suite of really exciting security services for customers.

What’s the most challenging part of your job?

It’s a combination of two things. One is Amazon’s culture. I love the culture and would not change it, but it poses particular challenges for the Security team. The second challenge is people: I thought I had a computer job, but it turns out that like pretty much all of the Senior Engineers I know, I have a people job as much as or more than I have a computer job. Amazon has a culture of distributed ownership and empowerment. It’s what allows the company to move fast and deliver as much as it does, and it’s magnificent. I wouldn’t change it. But as the Security team, we’re often in a position where we have to say that X is no longer a best practice. Whatever the X is—there’s been a new research paper, there’s a patch that’s been published—everyone needs to stop doing X and start doing Y. But there’s no central lever we can pull to get every team and every individual to stop doing X and start doing Y. I can’t go to senior leaders and say, “Please inform your generals that Y needs to be done,” and have that message move down through the ranks in an orderly fashion. Instead, we spend a lot of our time trying to establish conditions, whether it’s by building tools, reporting on the right metrics, or offering the right incentives that drive the organization towards our desired state. It’s hacking people and groups of people at enormous scale and trying to influence the organization. It’s a tremendous amount of fun, and it can also be maddening.

Do you have any advice for people early in their careers about how to meet the challenge of influencing people?

I’ve got two lessons. The first is to take a structured approach to professional interactions such as meetings and email strings. Before you start, think through what your objective is. On what can you compromise? Where are you unwilling to compromise? What does success look like? Think through the engagement from the perspective of the other parties involved. This isn’t to say that you shouldn’t treat people like people. Much of my success is due to the amount of coffee and beer that I’ve bought. However, once you’re in the meeting or whatever, keep it on topic, drive towards that outcome, and hopefully end early so there’s time for coffee.

The other is to shift the discussion towards customers. As a security professional, it’s my job to tell you that you’ve done your job poorly. That thing that you sweated over for months? The one that you poured yourself into? Yeah, that one. It’s not good enough, it’s not ready to launch. This is always a hard discussion to have. By shifting the focus from my opinion versus yours to trying to delight customers, it becomes a much easier discussion. What should we do for customers? What is right for customers? How do we protect customers? If you take this approach, then you can have difficult conversations with peers and make tough decisions about product features and releases because the focus is always on the customer and not on social cohesion, peer pressure, or other concerns that aren’t customer-focused.

Tell us about your 2018 re:Invent topic. How did you choose it?

My talk is called 0x32 Shades of #7f7f7f: The Tension Between Absolutes and Ambiguity in Security. I chose the topic because a lot of security issues come down to judgment calls that result in very finely graduated shades of gray. I’ve seen a lot of people struggle with that ambiguity—but that ambiguity is actually central to security. And the ability to deal with the ambiguity is one of the things that enables a security team to be effective. At AWS, we don’t have a checklist that we go down when we engage with a product team. That lack of a checklist makes the teams more likely to talk to us and to bring us their problems, because they know we’re going to dig into the problem with them and come up with a reasoned recommendation based on our experience, not based on the rule book. This approach is excellent and absolutely necessary. But on the flipside, there are times when absolutes apply. There are times when you draw a bright line that none shall pass. One of the biggest things that can enable a security team to scale is knowing where to draw those bright lines and how to keep those lines immaculately clean. So my talk is about that tension: the dichotomy between absolute black and white and the pervading shades of gray.

What are some of the common misperceptions you encounter about cloud security?

I’ve got a couple. The first is that choosing a cloud provider is a long-term relationship. Make sure that your provider has a track record of security improvements and flexibility. The Internet, your applications, and your customers are not static. They’re going to change over time, sometimes quite quickly. Making it perfect now doesn’t mean that it will be perfect forever, or even for very long. At Amazon, we talk about one-way doors versus two-way doors. When going through a one-way door you have to be very sure that you’re making a good decision. We’ve not found a way to reliably and quickly make these decisions at scale, but we have found that you can often avoid the one-way door entirely. Make sure that as you’re moving your applications to the cloud, your provider gives you the flexibility to change your mind about configurations, policies, and other security mechanisms in the future.

The second is that you cannot allow the perfect to be the enemy of the good. Both within Amazon and with our customers, I’ve seen people use the migration to the cloud as an opportunity to fix all of the issues that their application has accreted over years or perhaps even decades. These projects very rarely succeed. The bar is so high, the project is so complex, that it’s basically impossible to successfully deliver. You have to be realistic about the security and availability that you have now, and you have to make sure that you get both better security when you launch in the cloud, and that you have the runway to incrementally improve over time. In 2016, Rob Joyce of the NSA gave a great talk about how the NSA thinks about zero-day vulnerabilities and gaining access to systems. It’s a good, clear articulation of a well-known security lesson, that adversaries are going to take the shortest easiest path to their objective. The news has been full of low-level side channel attacks like Spectre and Meltdown. While you absolutely have to address these, you also have to adopt MFA, minimize IAM policies, and the like. Customers should absolutely make sure that their cloud provider is someone they can trust, someone who takes security very seriously and with whom they can have a long-term relationship. But they also have to make sure that their own fundamentals are in order.

If you had to pick a different job, what would it be?

I would do something dealing with outer space. I’ve always read a lot of science fiction, so I’ve always had an interest, and it’s getting to the point where space is no longer the domain of large government agencies. There are these private companies that are doing amazing things in space, and the idea of one of my systems in orbit or further out is appealing.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Eric Brandwine

By day, Eric helps teams figure out how to cloud. By night, Eric stalks the streets of Gotham, keeping it safe for customers. He is marginally competent at: AWS, Networking, Distributed Systems, Security, Photography, and Sarcasm. He is also an amateur parent and husband.

Upcoming Speaking Engagements

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/upcoming_speaki_2.html

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Oracle and "Responsible Disclosure"

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/oracle_and_resp.html

I’ve been writing about “responsible disclosure” for over a decade; here’s an essay from 2007. Basically, it’s a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it’s the best solution we have. Making it illegal to publish vulnerabilities without the vendor’s consent means that they won’t get fixed quickly — and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

Eraser – Windows Secure Erase Hard Drive Wiper

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/11/eraser-windows-secure-erase-hard-drive-wiper/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

Eraser is a hard drive wiper for Windows which allows you to run a secure erase and completely remove sensitive data from your hard drive by overwriting it several times with carefully selected patterns.

Eraser is a Windows-focused hard drive wiper and is currently supported under Windows XP (with Service Pack 3), Windows Server 2003 (with Service Pack 2), Windows Vista, Windows Server 2008, Windows 7, 8, and 10, and Windows Server 2012.


[$] Debian, Rust, and librsvg

Post Syndicated from jake original https://lwn.net/Articles/771355/rss

Debian supports many architectures and, even for those it does not officially support, there are Debian ports that try to fill in the gap. For most user applications, it is mostly a matter of getting GCC up and running for the architecture in question, then building all of the different packages that Debian provides. But for packages that need to be built with LLVM—applications or libraries that use Rust, for example—that simple recipe becomes more complicated. How much the lack of Rust support for an unofficial architecture should hold back the rest of the distribution was the subject of a somewhat acrimonious discussion recently.

A short story about Sofia University's television, prompted by one video clip

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/11/14/almamatertv/

Act One

Scene 1. The university television holds a registration certificate from the Council for Electronic Media (CEM).

Scene 2. It receives funding under the Наследство.бг (Heritage.bg) project.

Scene 3. Before the order for technology and equipment is placed, director Svetlana Bozhilova is dismissed. By order of rector Anastas Gerdzhikov, Bashar Rahal is appointed director. A few facts matter for this story:

  • "The charming, dream-fulfilling Bashar Rahal has experience as an actor and as host of Lottery Bulgaria" ("a play of lights and real suspense that will make you hold your breath from the first minute to the last"). There is no indication that he has experience in public-service media. There are media reports that he is a partner in a commercial company with a producer of Big Brother; if so, it is not clear whether Rahal's participation in that company has been terminated.
  • Bashar Rahal is a personal appointment by the rector, made without a competition and without any concept from the candidate of what he would do as director to realize the public-service character of the academic television.

Act Two

On 27 June 2018 the Academic Council meets. Two items on the agenda concern Alma Mater TV.

Scene 1. The rules of Alma Mater TV are amended. The changes are not visible from the minutes, and the text of the rules of SU's public television was not accessible before and is not accessible now.

Scene 2. The Academic Council's quotas on the Management Board (Prof. Ilchev) and the Programme Council (Prof. Chavdarova and Assoc. Prof. Danail Danov) are filled.

Scene 3. The rector asks the Academic Council to authorize him to conclude a contract with an investor who is not named:

"to authorize the Rector of Sofia University 'St. Kliment Ohridski' to conclude a contract whose subject is the granting to a third party of programming time in the television programme of Alma Mater Television, a service unit of Sofia University 'St. Kliment Ohridski', for the purpose of creating and providing independent and joint programmes and productions."

In a letter to the Academic Council, former vice-rector Assoc. Prof. Milena Stefanova voices concern that the public-service television may be turned into a commercial one. She calls on the rector to present a specific contract to the Academic Council rather than asking the Council to authorize him to sign a blanket contract. The Academic Council nevertheless authorizes the rector, or at least that is what can be gathered from the minutes.

Act Three

Scene 1. An announcement of a casting call titled "The Year You Became Famous" appears on the Sofia University website and on a dedicated Facebook page. It is accompanied by a video clip.

Scene 2. A group of people from the university are genuinely outraged that this so-called promotional clip could be associated with Sofia University. The Radio and Television department of the Faculty of Journalism and Mass Communication (FJMC) informs the rector of its professional opinion. The reaction: Sofia University has no intention of taking the clip down from its website, the university tells bTV on 8 November; there are already plenty of people willing to take part in the casting.

Scene 3. The teaching staff of the FJMC do not form a common position.

There are two positions:

The Radio and Television department publishes an open letter on the case, subsequently supported by faculty members from other faculties as well:

"What is disturbing is not only the clip, but also the complete lack of transparency about who is responsible for it."

The Dean's Council takes the position "We have nothing to do with it; the television is an independent entity." In parallel comes the explanation that intervening would amount to interference with freedom of expression:

"In connection with the online discussions around the Alma Mater TV promotional clip, we would like to remind everyone that Alma Mater TV is a fully independent unit within the administrative structure of Sofia University 'St. Kliment Ohridski'."

Scene 4. Director Rahal explains to those who did not get it that this is a strategy and that it is achieving its goal. "We invested about 50 leva in this clip, and right now it is being shown across all the media, with more than 20,000 views. There is no such thing as bad publicity if you can defend it," says the director of Sofia University's television, Bashar Rahal. "The clip achieved its goal; it became a scandal."

Scene 5. The media track down the scriptwriter chosen to be responsible for the style of the public academic television's messaging, Tenyo Gogov. "I did not put any such subtext into the text of the little clip. If someone wants to read it that way, there is nothing I can say to them." That is the opinion of Tenyo Gogov, known as a scriptwriter for Slavi Trifonov, then for Rosen Petrov, and later as a chalga figure.

Act Four

Scene 1. The "Become Famous" clip disappears from the University's website and from the campaign page. Of course, in the digital age it already lives on in programmes on various sites and who knows where else. Rahal's statement on social networks also disappears: "I thought it was sarcasm, but it turned out to be sexism." Nor does it become clear whether Rahal stands by "There is no bad publicity if you can defend it."

Scene 2. The Management Board and the Programme Council issue an apology.

"In recent days, a clip for a student casting call, uploaded to Facebook by Alma Mater Television, became the object of justified criticism from the public and the media. The style and message of the clip are not in keeping with the academic tone expected of every unit of the country's oldest and most respected university. Today the case was discussed at a joint meeting of the television's Management Board and Programme Council. In the serious conversation that took place, the television's director Bashar Rahal expressed regret for the poor judgment of where a parodic tone is acceptable and where it crosses into bad taste and sexism."

Scene 3. The television apologizes as well:

"Hello, we want to apologize to everyone who feels offended by our clip. We are sorry we failed to consider that the clip does not meet certain academic and human standards, and in that sense the reaction is understandable. That is also the reason we took the clip down."

Epilogue

But the matter is no longer about the clip. The clip is the opening of a larger story.

The matter is whether there are guarantees for the public-service character of Alma Mater TV and for quality journalism in it.

Therefore, to begin with: make the rules public, make the membership of the councils public, and make public how their members were chosen, because that too turns out to matter. An external producer ready to invest is said to be expected: good news, but under certain conditions, namely that there are sufficient guarantees for the public-service character of Alma Mater TV and for the future of quality journalism within it.

New – CloudFormation Drift Detection

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-cloudformation-drift-detection/

AWS CloudFormation supports you in your efforts to implement Infrastructure as Code (IaC). You can use a template to define the desired AWS resource configuration, and then use it to launch a CloudFormation stack. The stack contains the set of resources defined in the template, configured as specified. When you need to make a change to the configuration, you update the template and use a CloudFormation Change Set to apply the change. Your template completely and precisely specifies your infrastructure and you can rest assured that you can use it to create a fresh set of resources at any time.

That’s the ideal case! In reality, many organizations are still working to fully implement IaC. They are educating their staff and adjusting their processes, both of which take some time. During this transition period, they sometimes end up making direct changes to the AWS resources (and their properties) without updating the template. They might make a quick out-of-band fix to change an EC2 instance type, fix an Auto Scaling parameter, or update an IAM permission. These unmanaged configuration changes become problematic when it comes time to start fresh. The configuration of the running stack has drifted away from the template and is no longer properly described by it. In severe cases, the change can even thwart attempts to update or delete the stack.

New Drift Detection
Today we are announcing a powerful new drift detection feature that was designed to address the situation that I described above. After you create a stack from a template, you can detect drift from the Console, CLI, or from your own code. You can detect drift on an entire stack or on a particular resource, and see the results in just a few minutes. You then have the information necessary to update the template or to bring the resource back into compliance, as appropriate.

When you initiate a check for drift detection, CloudFormation compares the current stack configuration to the one specified in the template that was used to create or update the stack and reports on any differences, providing you with detailed information on each one.
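
If you want to script the check rather than use the Console, the following is a minimal sketch using the AWS SDK for JavaScript; the stack name my-stack, the region, and the simple polling loop are assumptions for the example, not prescribed by the service.

// Sketch: start a drift detection run, wait for it to finish, then list per-resource drift.
const AWS = require('aws-sdk');
const cfn = new AWS.CloudFormation({ region: 'us-east-1' });

async function checkDrift(stackName) {
    // Kick off drift detection for the whole stack.
    const { StackDriftDetectionId } = await cfn.detectStackDrift({ StackName: stackName }).promise();

    // Poll until the detection run completes.
    let status;
    do {
        await new Promise(resolve => setTimeout(resolve, 5000));
        status = await cfn.describeStackDriftDetectionStatus({ StackDriftDetectionId }).promise();
    } while (status.DetectionStatus === 'DETECTION_IN_PROGRESS');

    console.log('Stack drift status:', status.StackDriftStatus); // e.g. IN_SYNC or DRIFTED

    // Show the drift status of each checked resource.
    const { StackResourceDrifts } = await cfn.describeStackResourceDrifts({ StackName: stackName }).promise();
    StackResourceDrifts.forEach(d => console.log(d.LogicalResourceId, d.StackResourceDriftStatus));
}

checkDrift('my-stack').catch(console.error);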

We are launching with support for a core set of services, resources, and properties, with plans to add more over time. The initial list of resources spans API Gateway, Auto Scaling, CloudTrail, CloudWatch Events, CloudWatch Logs, DynamoDB, Amazon EC2, Elastic Load Balancing, IAM, AWS IoT, Lambda, Amazon RDS, Route 53, Amazon S3, Amazon SNS, Amazon SQS, and more.

You can perform drift detection on stacks that are in the CREATE_COMPLETE, UPDATE_COMPLETE, UPDATE_ROLLBACK_COMPLETE, and UPDATE_ROLLBACK_FAILED states. The drift detection does not apply to other stacks that are nested within the one you check; you can do these checks yourself instead.

Drift Detection in Action
I tested this feature on the simple stack that I used when I wrote about Provisioned Throughput for Amazon EFS. I simply select the stack and choose Detect drift from the Action menu:

I confirm my intent and click Yes, detect:

Drift detection starts right away; I can Close the window while it runs:

After it completes I can see that the Drift status of my stack is IN_SYNC:

I can also see the drift status of each checked resource by taking a look at the Resources tab:

Now, I will create a fake change by editing the IAM role, adding a new policy:

I detect drift a second time, and this time I find (no surprise) that my stack has drifted:

I click View details, and I inspect the Resource drift status to learn more:

I can expand the status line for the modified resource to learn more about the drift:

Available Now
This feature is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions. As I noted above, we are launching with support for a strong, initial set of resources, and plan to add many more in the months to come.

Jeff;

 

Support for multi-value parameters in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/support-for-multi-value-parameters-in-amazon-api-gateway/

This post is courtesy of Akash Jain, Partner Solutions Architect – AWS

The new multi-value parameter support feature for Amazon API Gateway allows you to pass multiple values for the same key in the header and query string as part of your API request. It also allows you to pass multi-value headers in the API response to implement things like sending multiple Set-Cookie headers.

As part of this feature, AWS added two new keys:

  • multiValueQueryStringParameters—Used in the API Gateway request to support a multi-valued parameter in the query string.
  • multiValueHeaders—Used in both the request and response to support multi-valued headers.

In this post, I walk you through examples for using this feature by extending the PetStore API. You add pet search functionality to the PetStore API and then add personalization by setting the language and UI theme cookies.

The following AWS CloudFormation stack creates all resources for both examples. The stack:

  • Creates the extended PetStore API represented using the OpenAPI 3.0 standard.
  • Creates three AWS Lambda functions. One each for implementing the pet search, get user profile, and set user profile functionality.
  • Creates two IAM roles. The Lambda function assumes one and API Gateway assumes the other to call the Lambda function.
  • Deploys the PetStore API to Staging.

Add pet search functionality to the PetStore API

As part of adding the search feature, I demonstrate how you can use multiValueQueryStringParameters for sending and retrieving multi-valued parameters in a query string.

Use the PetStore example API available in the API Gateway console and create a /search resource type under the /pets resource type with GET (read) access. Then, configure the GET method to use AWS Lambda proxy integration.

The CloudFormation stack launched earlier gives you the PetStore API staging endpoint as an output that you can use for testing the search functionality. Assume that the user has an interface to enter the pet types for searching and wants to search for “dog” and “fish.” The pet search API request looks like the following where the petType parameter is multi-valued:

https://xxxx.execute-api.us-east-1.amazonaws.com/staging/pets/search?petType=dog&petType=fish

When you invoke the pet search API action, you get a successful response with both the dog and fish details:

[
  {
    "id": 11212,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 31231,
    "type": "fish",
    "price": 0.99
  }
]

Processing multi-valued query string parameters

Here’s how the multi-valued parameter petType with value “petType=dog&petType=fish” gets processed by API Gateway. To demonstrate, here’s the input event sent by API Gateway to the Lambda function. The log details follow. As it was a long input, a few keys have been removed for brevity.

{ resource: '/pets/search',
path: '/pets/search',
httpMethod: 'GET',
headers: 
{ Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US,en;q=0.9',
Host: 'xyz.execute-api.us-east-1.amazonaws.com',
'Upgrade-Insecure-Requests': '1',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
Via: '1.1 382909590d138901660243559bc5e346.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'motXi0bgd4RyV--wvyJnKpJhLdgp9YEo7_9NeS4L6cbgHkWkbn0KuQ==',
'X-Amzn-Trace-Id': 'Root=1-5bab7b8b-f1333fbc610288d200cd6224',
'X-Forwarded-Proto': 'https' },
queryStringParameters: { petType: 'fish' },
multiValueQueryStringParameters: { petType: [ 'dog', 'fish' ] },
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: 'jy2rzf',
resourcePath: '/pets/search',
httpMethod: 'GET',
extendedRequestId: 'N1A9yGxUoAMFWMA=',
requestTime: '26/Sep/2018:12:28:59 +0000',
path: '/staging/pets/search',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1537964939459,
requestId: 'be70816e-c187-11e8-9d99-eb43dd4b0381',
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }

There is a new key, multiValueQueryStringParameters, available in the input event. This key is added as part of the multi-value parameter feature to retain multiple values for the same parameter in the query string.

Before this change, API Gateway used to retain only the last value and drop everything else for a multi-valued parameter. You can see the original behavior in the queryStringParameters parameter in the above input, where only the “fish” value is retained.

Accessing the new multiValueQueryStringParameters key in a Lambda function

Use the new multiValueQueryStringParameters key available in the event context of the Lambda function to retrieve the multi-valued query string parameter petType that you passed in the query string of the search API request. You can use that value for searching pets. Retrieve the parameter values by reading event.multiValueQueryStringParameters.petType from the event context.

exports.handler = (event, context, callback) => {

    //Log the input event
    console.log(event);

    //Extract the multi-valued parameter from the input
    var petTypes = event.multiValueQueryStringParameters.petType;

    //Call the pet search functionality (searchPets is implemented elsewhere in this function's code)
    var searchResults = searchPets(petTypes);

    const response = {
        statusCode: 200,
        body: searchResults
    };
    callback(null, response);
};

The multiValueQueryStringParameters key is present in the input request regardless of whether the request contains keys with multiple values. You don’t have to change your APIs to enable this feature, unless you are using a key of the same name as multiValueQueryStringParameters.
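
One related caveat: as the GET /profile event later in this post shows, multiValueQueryStringParameters can be null when a request carries no query string at all, so a defensive read is a reasonable pattern. A minimal sketch (the empty-array fallback is a convention of the example, not part of the API):

// Read a multi-valued query string parameter defensively; fall back to an
// empty list when the request has no query string (the key is then null).
const petTypes = (event.multiValueQueryStringParameters
    && event.multiValueQueryStringParameters.petType) || [];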

Add personalization

Here’s how the newly added multiValueHeaders key is useful for sending cookies with multiple Set-Cookie headers in an API response.

Personalize the Pet Store web application by setting a user-specific theme and language settings. Usually, web developers use cookies to store this kind of information. To accomplish this, send cookies that store theme and language information as part of the API response. The browser then stores them and the web application uses them to get the required information.

To set the cookies, you need the new API resources /users and /profile, with read and write access in the PetStore API. Then, configure the GET and POST methods of the /profile resource to use AWS Lambda proxy integration. The CloudFormation stack that you launched earlier has already created the necessary resources.

Invoke the POST profile API call at the following endpoint. It passes the user preferences for language and theme and returns Set-Cookie headers:

https://XXXXXX.execute-api.us-east-1.amazonaws.com/staging/users/profile

The request body looks like the following:

{
    "userid" : 123456456,
    "preferences" : {"language":"en-US", "theme":"blue moon"}
}

You get a successful response with the “200 OK” status code and two Set-Cookie headers for setting the language and theme cookies.

Passing multiple Set-Cookie headers in the response

The following code example is the setUserProfile Lambda function code that processes the input request and sends the language and theme cookies:

exports.handler = (event, context, callback) => {

    //Get the request body
    var requestBody = JSON.parse(event.body);
 
    //Retrieve the language and theme values
    var language = requestBody.preferences.language;
    var theme = requestBody.preferences.theme; 
    
    const response = {
        isBase64Encoded: false, // the body below is a plain JSON string, not base64-encoded
        statusCode: 200,
        multiValueHeaders : {"Set-Cookie": [`language=${language}`, `theme=${theme}`]},
        body: JSON.stringify('User profile set successfully')
    };
    callback(null, response);
};

You can see that the newly added multiValueHeaders key passes multiple cookies as a list in the response. The multiValueHeaders key is translated into multiple Set-Cookie headers by API Gateway and appears to the API client as follows:

Set-Cookie: language=en-US
Set-Cookie: theme=blue moon

You can also pass the header key along with the multiValueHeaders key. In that case, API Gateway merges the multiValueHeaders and headers maps while processing the integration response into a single Map<String, List<String>> value. If the same key-value pair is sent in both, it isn’t duplicated.
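
For illustration, a Lambda proxy response that mixes the two maps might look like the following minimal sketch; the Content-Type header and the cookie values here are assumptions for the example:

const response = {
    statusCode: 200,
    // Single-valued headers can stay in the ordinary headers map...
    headers: { "Content-Type": "application/json" },
    // ...while repeated headers go in multiValueHeaders; API Gateway merges both maps.
    multiValueHeaders: { "Set-Cookie": ["language=en-US", "theme=blue moon"] },
    body: JSON.stringify({ message: "profile saved" })
};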

Retrieving the headers from the request

If you use the Postman tool (or something similar) to invoke the GET profile API call, you can send cookies as part of the request. The theme and language cookies that you set above in the POST /profile API request can now be sent as part of the GET /profile request.

https://xxx.execute-api.us-east-1.amazonaws.com/staging/users/profile

You get the following response with the details of the user preferences:

{
    "userId": 12343,
    "preferences": {
        "language": "en-US",
        "theme": "beige"
    }
}

When you log the input event of the getUserProfile Lambda function, you can see both newly added keys, multiValueQueryStringParameters and multiValueHeaders. These keys are present in the event context of the Lambda function regardless of whether there is a value. You can retrieve the cookies by reading event.multiValueHeaders.Cookie from the event context.

{ resource: '/users/profile',
path: '/users/profile',
httpMethod: 'GET',
headers: 
{ Accept: '*/*',
'Accept-Encoding': 'gzip, deflate',
'cache-control': 'no-cache',
'CloudFront-Forwarded-Proto': 'https',
'CloudFront-Is-Desktop-Viewer': 'true',
'CloudFront-Is-Mobile-Viewer': 'false',
'CloudFront-Is-SmartTV-Viewer': 'false',
'CloudFront-Is-Tablet-Viewer': 'false',
'CloudFront-Viewer-Country': 'IN',
Cookie: 'language=en-US; theme=blue moon',
Host: 'xxxx.execute-api.us-east-1.amazonaws.com',
'Postman-Token': 'acd5b6b3-df97-44a3-8ea8-2d929efadd96',
'User-Agent': 'PostmanRuntime/7.3.0',
Via: '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)',
'X-Amz-Cf-Id': 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==',
'X-Amzn-Trace-Id': 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869',
'X-Forwarded-For': 'xx.xx.xx.xx, yy.yy.yy.yy',
'X-Forwarded-Port': '443',
'X-Forwarded-Proto': 'https' },
multiValueHeaders: 
{ Accept: [ '*/*' ],
'Accept-Encoding': [ 'gzip, deflate' ],
'cache-control': [ 'no-cache' ],
'CloudFront-Forwarded-Proto': [ 'https' ],
'CloudFront-Is-Desktop-Viewer': [ 'true' ],
'CloudFront-Is-Mobile-Viewer': [ 'false' ],
'CloudFront-Is-SmartTV-Viewer': [ 'false' ],
'CloudFront-Is-Tablet-Viewer': [ 'false' ],
'CloudFront-Viewer-Country': [ 'IN' ],
Cookie: [ 'language=en-US; theme=blue moon' ],
Host: [ 'xxxx.execute-api.us-east-1.amazonaws.com' ],
'Postman-Token': [ 'acd5b6b3-df97-44a3-8ea8-2d929efadd96' ],
'User-Agent': [ 'PostmanRuntime/7.3.0' ],
Via: [ '1.1 6bf9df28058e9b2a0034a51c5f555669.cloudfront.net (CloudFront)' ],
'X-Amz-Cf-Id': [ 'VRePX_ktEpyTYh4NGyk90D4lMUEL-LBYWNpwZEMoIOS-9KN6zljA7w==' ],
'X-Amzn-Trace-Id': [ 'Root=1-5bb6111e-e67c17ea657ed03d5dddf869' ],
'X-Forwarded-For': [ 'xx.xx.xx.xx, yy.yy.yy.yy' ],
'X-Forwarded-Port': [ '443' ],
'X-Forwarded-Proto': [ 'https' ] },
queryStringParameters: null,
multiValueQueryStringParameters: null,
pathParameters: null,
stageVariables: null,
requestContext: 
{ resourceId: '90cr24',
resourcePath: '/users/profile',
httpMethod: 'GET',
extendedRequestId: 'OPecvEtWoAMFpzg=',
requestTime: '04/Oct/2018:13:09:50 +0000',
path: '/staging/users/profile',
accountId: 'xxxxxx',
protocol: 'HTTP/1.1',
stage: 'staging',
requestTimeEpoch: 1538658590316,
requestId: 'c691746f-c7d6-11e8-83b8-176659b7d74d',
identity: 
{ cognitoIdentityPoolId: null,
accountId: null,
cognitoIdentityId: null,
caller: null,
sourceIp: 'xx.xx.xx.xx',
accessKey: null,
cognitoAuthenticationType: null,
cognitoAuthenticationProvider: null,
userArn: null,
userAgent: 'PostmanRuntime/7.3.0',
user: null },
apiId: 'xxxx' },
body: null,
isBase64Encoded: false }
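
As a small illustration of working with that Cookie entry, here is a minimal sketch that turns it into a plain lookup object; the cookieMap name and the parsing approach are conventions of the example, not part of the API:

// Turn "language=en-US; theme=blue moon" into { language: 'en-US', theme: 'blue moon' }.
// multiValueHeaders.Cookie is a list, so join its entries before splitting on "; ".
const rawCookies = (event.multiValueHeaders.Cookie || []).join('; ');
const cookieMap = {};
rawCookies.split('; ').forEach(pair => {
    const [name, ...rest] = pair.split('=');
    if (name) cookieMap[name] = rest.join('=');
});
const theme = cookieMap.theme;       // 'blue moon'
const language = cookieMap.language; // 'en-US'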

HTTP proxy integration

For the API Gateway HTTP proxy integration, the headers and query string are proxied to the downstream HTTP call just as they are received. The newly added keys aren't present in the request or response. For example, the petType parameter from the search API goes as a list to the HTTP endpoint.

Similarly, for setting multiple Set-Cookie headers, you can set them the way that you would usually. For Node.js, it looks like the following:

res.setHeader("Set-Cookie", ["theme=beige", "language=en-US"]);

Mapping requests and responses for new keys

For mapping requests, you can access new keys by parsing the method request like method.request.multivaluequerystring.<KeyName> and method.request.multivalueheader.<HeaderName>. For mapping responses, you would parse the integration response like integration.response.multivalueheaders.<HeaderName>.

For example, the following is the search pet API request example:

curl 'https://{hostname}/pets/search?petType=dog&petType=fish' \
-H 'cookie: language=en-US' \
-H 'cookie: theme=beige'

The new request mapping looks like the following:

"requestParameters" : {
    "integration.request.querystring.petType" : "method.request.multivaluequerystring.petType",
    "integration.request.header.cookie" : "method.request.multivalueheader.cookie"
}

The new response mapping looks like the following:

"responseParameters" : { 
 "method.response.header.Set-Cookie" : "integration.response.multivalueheaders.Set-Cookie", 
 ... 
}

For more information, see Set up Lambda Proxy Integrations in API Gateway.

Cleanup

To avoid incurring ongoing charges for the resources that you’ve used, delete the CloudFormation stack that you created earlier.

Conclusion

Multi-value parameter support enables you to pass multi-valued parameters in the query string and multi-valued headers as part of your API response. Both the multiValueQueryStringParameters and multiValueHeaders keys are present in the input request to the Lambda function regardless of whether there is a value. These keys are also accessible for mapping requests and responses.

Making Lemonade: The Importance of Social Media and Community

Post Syndicated from Yev original https://www.backblaze.com/blog/social-media-marketing-strategy/

Social media manager Yev at his desk at Backblaze

Spending all day on Twitter and Reddit shouldn't really be a job, but here we are. With every organization from a local coffee shop to a multi-billion-dollar enterprise having a social media presence, community and social media management has become not only a job, but a career path that tens of thousands of people have adopted in the last two decades.

Many still question the value, if any, of KPIs (key performance indicators) for tracking social media, and the overall return on investment of a dedicated social media presence. With that in mind, I wanted to share why Backblaze continues to invest in maintaining strong community and social relationships.

Having a strong social presence has not only helped us build a community of fans and advocates, but it has also helped drive serious growth, playing a large part in helping us achieve nearly 50% growth and an ARR of over $30M.

Spoilers — Key Takeaways

In this post I'll discuss the takeaways below and dive into two case studies where bringing all of these together helped Backblaze navigate some tricky situations (more on those below):

  • Social media can be fickle. Finding the right balance for your brand between being funny, informative, and helpful can be hard but it is paramount if you want to cultivate the right kind of audience and community.
  • Empathy is important, especially if you provide social tech support — people writing into support are usually not having a great day. Very few of our support tickets are simple praise (and when they are, we share them with the whole company). Being empathetic can help craft your responses in a way that does not put your customers on the defensive, and even if you can’t help them, being understanding of their situation goes a long way in helping them feel heard.
  • Do not use standard answers. As much as I can, I try to use hand-crafted, artisanal responses for every person that writes in. Even if you get a flurry of similar questions that require the same steps to solve (say there’s an outage, or there’s a popular blog post that yields identical questions), addressing people by their names and individualizing responses helps immensely and puts a more approachable face on your company.
  • Some days are going to be bad: not all news is good news. Sometimes there will be a particularly bad day on the horizon that you see coming and start dreading. Know that it’ll pass and get yourself mentally prepared for the onslaught. Sometimes knowing that there will be controversy will help you craft your messaging in advance. Try to think through the edge cases and if you don’t have a good answer, tell people that you’ll find one and get back to them.
  • Follow up. If you are in the middle of a busy day, try to get to everyone within a few hours. If you cannot, flag them and reply later. You want people to have warm and fuzzies when they think about you. Hearing back from you on social, even a day later, can absolutely have that effect. Making sure no one falls through the cracks helps people feel heard and a part of your community, and shows that community members are valued.

Backblaze’s Social Philosophy — Branding First, KPIs Last

Contrary to popular belief, Backblaze is not a large organization (though we keep staffing up, check out our backblaze.com/jobs page!). Our social media team consists of an army of one, Yev (me), and a few folks on staff who also have logins and can act as backups in case I go on vacation. It's good to have backups.

How did I get involved with the social side of Backblaze? I joined Backblaze in 2011 as part of our Customer Support team. Natasha, a former contractor (and now one of our product marketers) ran our social media efforts — essentially responding to questions and posting new blog entries and interesting tidbits. A few months into the job I took over the outbound social media posts for the company from Natasha, and a little while later took over both aspects of our social channels. Natasha still helps out as one of the people on the social backup rotation.

My philosophy on social and community stems from my strong belief in customer service. I’ve always encouraged having a support aspect to our social channels, which means being communicative when we have issues and often handling simple support cases before referring folks to our customer support team. Since backup and data storage are serious business, every opportunity to offer support is a good one. Twitter, Reddit, Facebook, and other platforms can also act as an early detection canary when people are having issues. That said, I try to avoid debugging issues in a public forum and have found that a lot of people prefer a relevant knowledge base link or a shortcut over going into details about their computer on Twitter. Being receptive to people on different social platforms and giving them individualized answers shows that real people are paying attention to them and that they’re not just talking to a brick wall. I consider these efforts to help customers less about customer support and more about brand-building, but ultimately, it’s just the kind of company that we want Backblaze to be.

Is There Value in Measuring Social Media KPIs?

We’ve historically focused our social efforts more on the brand than on KPIs for several reasons. A common mistake people make is to view social media as just another direct response channel. While social can certainly drive traffic and sales, it has the potential to be far more important for overall business growth and brand building. Focusing too narrowly on direct response KPIs not only undervalues social’s overall benefit but can also move focus onto the wrong activities.

That said, KPIs are not entirely worthless. Some of the most common KPIs include: clicks, views, mentions, trackable purchases, as well as a host of other more narrowly focused engagement metrics. While it’s great to have tracking and to know how many folks are purchasing as a result of your social activities, using those metrics as the sole arbiter of whether or not your social strategy is working is, I believe, a bit short-sighted. Plus, if you’re a shrewd social media manager, you can easily manipulate the KPIs. Need to goose your click numbers for the month? Here come the brand-relevant kitten gifs, but how is that good for your business? You know what is good for your business? Getting people to recommend you.

The Real Value of Social Media

Social proof is one of the single biggest influencers on a customer’s purchase decision. Regardless of whether your product is consumer-focused or enterprise-driven, your social channels are building out your community. That building of community, in turn, builds brand awareness and positive vibes. The better the community and social interactions are with your company, the more your message gets amplified. When things are good that means recommending your service, and when things are bad it means that they’ll be more willing to give you the benefit of the doubt.

Your social channels’ worth is hard to measure day to day, but don’t listen to people who say it can’t be measured at all. Measuring growth takes understanding what your goals are and whether your community can help attain them. You can start to see the benefits when people in various online places stand up for you, or field questions on your behalf in public forums (bonus points if they field them correctly). All those things mean you are doing a good job of educating your fans. Having customers and fans who help get your point across is a wonderful thing, and as I go into a bit later, can lead directly to revenue.

Reddit Recommendations: cloud backup suggestions on Reddit

Unsolicited Twitter Recommendations Are Great: "We use Backblaze. Love it!"

Insight #1

Focus on the customer impact and business results, then determine the right metrics for your business, knowing there may not be any. A branding-first approach of using social media as an extension of your support organization can make customers happier and can reduce complaints. The fewer help tickets coming in, the more kudos you get from your support team (and it lowers your operating expenses). Plus, being outwardly communicative and having quick response times can ease the customer’s anxiety in times of stress. If you can successfully do all that, it can lead to customers who are truly brand advocates and are more likely to amplify your message, recommending you, or giving you the benefit of the doubt.

My Daily Social Media Tools

I’ve covered the importance of engaging community and the strategy we take to make sure those community needs are met. That still leaves the question of how to actually do it. To that end, I’d like to take you through a typical day of mine.

My day starts by checking Twitter. It is the most real-time firehose of information and can act as a harbinger for how the day is going to go. I use the Fenix app on my phone to separate my personal account from my work account. All the Backblaze tweets go to Fenix, while my personal Twitter is tied to the default app. This separation helps prevent me from accidentally posting to the corporate stream. The other benefit is that the notifications become separate so I know whether to investigate quickly if it’s a Fenix notification, or let it slide for a personal one. Once Twitter has been scanned, I respond to the people who need help, either by sending them a helpful link or routing them to our support team.

When I arrive in the office, TweetDeck takes over. If you've never seen it, it can look a bit overwhelming at first. There is a series of columns, each one keeping track of separate lookups and keywords. I have streams set to follow @Backblaze, the term "Backblaze," and some of the hashtags that we use frequently. It's also a great way to see what competitors are up to, and see if you can add any helpful information to conversations that might be about your organization (e.g. someone is considering you and your competitors and is asking for opinions). An additional benefit is that you can gauge the general sentiment that customers have towards you. If most of the tweets mentioning your competitors are positive, that's great and you have some work to do in making sure you're thought of in the same way. If most of the mentions are negative, that can be a great time to jump in and try to win some people over, especially if their issue is with something that you excel in — just be nice.

While TweetDeck acts as a general firehose for Twitter, it can’t monitor what’s happening elsewhere in cyberspace. For that, I use Mention. It’s essentially an aggregator for the mentions that your keywords receive all over the internet. It is pretty close to real-time, only lagging by a few minutes. There are more robust tools available (Hootsuite, Sprout Social, and Buffer are all good robust tools), but for the money, Mention does a great job finding keywords from social media sites, blogs, news articles, and forums. It’s a great way to get an overview of where your organization was mentioned, and even has some analytics tools to help you parse through it all and see where and how folks are talking about you.

Mentions Over Time

Hard Drive Stats Word Cloud

Thoughts on Twitter

More than any other social media platform, Twitter has become the place where people go when they have an issue, need information, or just want to ask a question. Being responsive and making sure each person is responded to builds a rapport with your followers and encourages a sense of community. Having a temporary unplanned outage? Make sure you tweet it so that folks know you’re on top of it and they aren’t feeling left out of the loop. Have a new feature to announce? Let the masses know so they can update to the latest and greatest version. Getting questions about your service? Respond quickly and with relevant information. Someone taking out their frustration on you? Respond with compassion and empathy; make sure they understand that you have heard their feedback and that it’s understood, even if there is nothing you can do.

This leads to a more cohesive community where no one feels left out and everyone feels like they’re a part of the group. If someone is having a bad day and takes it out on you, that’s the perfect time to try and bring them in for a soft landing. Very few people ever reach out to Customer Support because they’re having the best day ever. For the customer where something has gone wrong, the way that you treat them dictates how the community at large will view your brand and company. Being empathetic does not have any downsides in these cases and builds trust over time.

Insight #2

One of the reasons I use multiple services to track mentions is to make sure that people don’t fall through the cracks. It’s important that folks are responded to when they write us with a question or frustration. Even if the interaction with us is not positive, the goal is to make sure everyone feels heard. That helps establish the positive brand and good vibes I am trying to cultivate and results in brand amplification and customer lead recommendations.

Where Social Fits In At Backblaze

Venture capital backed companies tend to be built around hypergrowth, spending money to acquire customers while honing the product and finding their niche. In the almost thirteen years Backblaze has been around, we’ve raised less than $3M. We’re funded by our own operations; this is known as being a bootstrapped company. Being bootstrapped brings with it a lot of benefits, like the freedom to make our own decisions, but it does also mean we can’t spend a lot of money buying Facebook ads, much less buying radio and TV time. So we have to be creative with how we attract people and keep them happy once we earn their business. We want the Backblaze brand to reflect our culture: transparent, empathetic, and efficient. Compared to our competition, we believe that offers a unique and different proposition to people deciding where they want to store their data. Not only that, but those beliefs also reflect who we are as people.

To attract customers and educate the masses without breaking the bank, we focused on writing interesting blog posts, open-sourcing technology, and being generally available to our customers. Being available is what we think truly sets us apart from the competition. For us, availability and being social means being good internet citizens, responding to hails from around the web, and joining in on conversations about the industry. That means going to where folks are talking about us and sharing some of our insights, like our hard drive stats posts. All of that, plus listening to our customers when making product decisions (like adding much-requested file sharing in version 5.0), helps move the product forward while bringing our customers along for the ride.

One of the tangible side benefits of being present on other platforms is that sometimes being involved in the conversation elsewhere can help stave off support tickets before they enter our system. We have a great Customer Support department and they handle all of our tickets in-house, so being able to head off potential issues on other platforms where current and potential future customers are chatting not only helps us stay engaged in the conversation, but can also reduce the number of tickets coming in.

Insight #3

Community building is paramount. What is community? For me, it means any place where Backblaze is mentioned and anyone who engages with our company. Perception being reality, your company will be judged by its public actions. For better or worse, mindshare on the internet is driven by social interactions. Those interactions have to be genuine and not just lip-service with canned missives written by lawyers. While sometimes review is necessary, honest conversation in real-time is the standard I strive for. A company that does not invest in some form of social presence is actively not investing in its brand.

The Social Media Strategy

The strategy for our social efforts is simple: stay engaged. There’s a literal time component to this — we try to respond quickly, ideally within a few hours. Responding quickly is great, but if you’re responding fast with an automated message, you risk infuriating your customers. Ideally the response is quick and has relevant information. How do you maintain relevance? You do it by sending out useful, topical, or interesting tidbits that are industry related, and by participating in the comments wherever they come up. If you aren’t sure of the right answer or don’t have the necessary information at your fingertips, reply and let them know that you’ll work on getting them the right answer — then follow up.

Participation is one of our most important tactics. If you aren’t shy of wading into the comments, whether they be positive or negative, the community learns to ask questions and expects to receive answers. That, in turn, leads to trust, which is immeasurably important.

That’s especially true when something goes awry, for example when an Adobe issue ate some Backblaze files, or if you want to capitalize on an opportunity, as when we were able to move quickly and gain customers when CrashPlan announced its exit from the consumer business. Backblaze is in the data storage business, which makes it important to strike a balance between being funny, informative, and helpful. It’s difficult, but relentless participation is paramount for cultivating the kind of community you want your brand to have.

I am lucky enough to have the latitude to make decisions about what to explore and expand upon in public forums. That means that if something is happening in real-time, I don’t have to wait for an hour and a half to get approval about what I can and cannot say. This isn’t all improv (though I did do that in high school). We create this environment by having honest internal conversations that assume we are going to be discussing things with our customers. We are constantly calibrating and communicating so that when things happen in real-time we can react quickly. Do mistakes happen? Sometimes they do, but the benefits of being able to move quickly in an informed way have thus far outweighed any downsides we occasionally see.

Another thing to consider is that there are a lot of SaaS companies offering ready-made community platforms that allow you to manage your own online community. Platforms such as Vanilla, Chaordix, and CMNTY all help brands build and design their own online spaces. I’ve found that these are generally great for large companies with huge brand awareness, and while Backblaze is large, I do not need such robust tools (not yet anyway). Instead I look for where people are discussing Backblaze already and join the conversations there. Reddit, HackerNews, Twitter, Facebook, MangoLassi, SpiceWorks, and the comments sections of blogs and articles are all places where Backblaze gets mentioned. Jumping into those conversations provides for a more natural flow and proves to folks that we really are paying attention, instead of letting the news come to us.

Insight #4

Think there isn’t much community in your B2B space or that joining one isn’t a big deal? If there is no community, or no one talking about your product, then there’s no market. If there’s no market, you should start polishing your resume. As long as your business has customers, there’s likely a community element as well. The trick is to find where they congregate, or if no such place exists, create one. Once established, work with your team to come up with guidelines for communication that will free you up should the need to move quickly arise.

Bringing It All Together: Tales of Disaster and Making Lemonade

We’ve had a lot of interesting experiences that have played out on social media over the years, and I’ll give two real-world examples of how being transparent, empathetic, and efficient has helped us navigate those events.

Adobe Deletes Data — Transparency and Legwork

In February of 2016, Adobe introduced a bug into their Creative Cloud program that deleted data. Specifically, files were deleted from the user’s root directory. Backblaze has a file, .bzvol, that we place on every one of our customers’ hard drives to keep track of the drive and whether it is plugged in or not. If we detect that the .bzvol file is no longer on the computer, we display a pop-up, asking folks to contact support. On the evening of February 10th, 2016, we started receiving a lot of tickets related to disappearing .bzvol files. Twitter also started to light up with people posting screenshots of the pop-up and asking what was going on. While all this happened, I was at a conference and was able to stay plugged into our internal conversations via Slack while we tried to figure out what was happening.

We caught a break the next morning. Our designer and co-founder, Casey, got hit with the error. Our lead Mac developer ran over to his desk, grabbed his logs, and started digging to see what the cause was. The only thing out of the ordinary was that Adobe Creative Cloud software had updated about an hour before he got hit with this error, so we were off to the races chasing things down.

Throughout this whole ordeal we had been tweeting updates and letting people know how they could fix the .bzvol issue. Once we realized that it was tied to Creative Cloud, we communicated that it was not a Backblaze bug, but that Backblaze was simply a piece of software affected by the Adobe issue. Why was that? It turned out that the files Creative Cloud was deleting were the first files, alphabetically, found in the root directory. Because .bzvol is a hidden file that starts with a “b,” odds were pretty good that if you had Backblaze installed, our .bzvol would be the file that was deleted, and the resulting error would alert you to the problem. Backblaze was serving as a canary for a larger problem happening on machines everywhere.
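To illustrate the mechanism (a simplified sketch, not Backblaze’s actual client code — the marker path and alert function are placeholders), a marker-file check boils down to something like this:

```python
# Simplified marker-file watchdog sketch; not Backblaze's actual implementation.
import os
import time

MARKER_PATH = "/.bzvol"  # placeholder: a hidden marker file kept in the volume root

def notify_user(message):
    """Placeholder for the client's pop-up / 'contact support' alert."""
    print(message)

def watch_marker(interval_seconds=60):
    while True:
        if not os.path.exists(MARKER_PATH):
            # The marker vanished: either the drive is unplugged or something deleted the file,
            # which is exactly how the Creative Cloud bug surfaced for Backblaze customers.
            notify_user(".bzvol is missing — please contact support.")
        time.sleep(interval_seconds)

if __name__ == "__main__":
    watch_marker()
```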

Adobe Twitter Warnings

We also realized that it wasn’t just Backblaze customers experiencing this. Anyone using Creative Cloud would be affected. They just might not have known their files were deleted silently, since they wouldn’t get the .bzvol pop-up that our customers got.

As the week wore on, our social and support channels were blowing up. We started creating and tweeting videos of how the issue was manifesting and how people could avoid it, and we contacted Adobe with our videos, trying to explain to them what was going on.

Spike In Mentions

Mentions Around Adobe

What did we gain by being proactive and communicative? By February 14th, Adobe had both acknowledged and fixed the issue, releasing an update that wouldn’t silently delete data. We even got a mention in their FAQ on the subject. Plus, because we are lucky enough to have active followers who pay attention to what we post, they were gracious enough to help us spread the message more quickly. In the end we gained the appreciation of our customers and blog readers. While that’s not necessarily a monetary victory, it did reinforce the core strengths of our brand: transparency, empathy, and efficiency. The comments on that post confirmed that we did the right thing by being communicative.

Community Response On The Blog

CrashPlan Exits the Consumer Business — A Strong Community Drives Your Business

On August 22nd, 2017, CrashPlan announced the end of their consumer backup service, shifting focus to their enterprise and SMB offerings. This was rather shocking news to us since CrashPlan was our largest competitor in the online backup space and one that we would send folks to when we weren’t a good fit for their particular need. The news broke early in the morning and our team started to scramble, brainstorming on how we could put ourselves in the best position possible for all of the CrashPlan refugees who were waking up to news of their online backup service going away.

To their credit, CrashPlan tried to communicate with their customers, giving them a couple different options, including a discounted first-year rate with a competitor. We had to move quickly if we wanted to win over some of the people who were going to start looking for alternatives. We did the only thing that made sense: we wrote a blog post.

Within hours of the announcement, we were able to write and publish our post, an Invitation for CrashPlan Customers. It reaffirmed our commitment to unlimited online backup for consumers (something that we see less and less of as tiered and complicated services sprout up) and listed the reasons we thought we’d be a good match for individuals who had used CrashPlan in the past. The initial post highlighted some of our favorite features, the reasons why people love us, and touched on the difference between syncing and backup. We felt that last part was important because we had seen a few tweets that morning of people stating “well at least I have Dropbox” and we wanted to make sure they were aware of the differences before making a possibly costly mistake.

CrashPlan Blog Post Readership

The blog post was widely circulated almost immediately, with over 60,000 people reading it in the first month (it still receives hundreds of visits per day). Once the blog post was rolling, we got to work on the next phase of our plan. Phase two was putting the blog post front and center on our computer backup website, creating FAQs based on common questions we were seeing, and writing a guide on how to migrate data from CrashPlan. We also added the ability for folks to create a reminder for themselves for when their existing CrashPlan license started expiring, with instructions on migrating.

While phase two was proceeding, I was hard at work on the social. It was my job to stoke the fire while being respectful. That meant not piling on and being a good community steward by sharing the CrashPlan post and our Version 5.0 release notes (fortuitously released a few weeks prior), which touted faster backup speeds and file sharing. While Twitter was heating up, I was also actively involved in threads on HackerNews and Reddit. Backblaze’s CTO Brian Wilson and I were hard at work making sure that anyone who had questions on the web was responded to.

It’s important to be gracious to your competitors. There’s always someone on the other end of the screen and it was paramount to remember that while we were having a good day, others were having a bad day. To that end, I sent an edible arrangement that day to CrashPlan’s support team, because I knew what it felt like to be having a horrible day in the court of public opinion. Again, we’re all on the same team: get people’s data backed up. I was later told that the gesture was greatly appreciated.

The following are some examples of my Twitter efforts trying to attract customers, having fun, but also being gracious:

What was the benefit? We’ve seen almost a 2x increase in the number of sign-ups that we get for our computer backup service. Not only that, but it has also helped drive almost 50% annual revenue growth this year.

In time sensitive situations, being agile, efficient, and having the ability to execute surgically on a series of tasks can have tremendous impact on the overall business. It’s all about catching the wave at the right time. I’m lucky enough to have the latitude to make decisions about how best to approach things when in the thick of it. That lets me move quickly without having to go up the chain and wait a long time for approvals, which helps the conversation flow more naturally and allows me to stay engaged.

Insight #5

Here’s some advice on competition — it’s important not to pile on. When competitors are having a bad day, attempting to pour gasoline on their fire is considered distasteful and will likely not be taken well by your community. It’s one thing to see and interact with people mentioning or asking questions about you in the comments. It’s entirely different to rub a competitor’s nose in the dirt. Being a jerk online is not only mean but reflects poorly on a company’s brand. Remember that on the other end of every computer is a person who has to deal with their own corporate and social fallout. Treat them the way you’d want to be treated in a crisis situation. We’re all in this together.

Fin

Congratulations on making it to the end! I hope this post was not just verbose, but also helpful. Do you have any social media tips or tricks that have helped you grow your brand? Have questions about our approach? Let’s chat below in the comments!

The post Making Lemonade: The Importance of Social Media and Community appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/771697/rss

Security updates have been issued by Debian (firmware-nonfree and imagemagick), Fedora (cabextract, icecast, and libmspack), openSUSE (icecast), Red Hat (httpd24), Slackware (libtiff), SUSE (apache-pdfbox, firefox, ImageMagick, and kernel), and Ubuntu (clamav, spamassassin, and systemd).

AWS Security Profiles: Ben Potter, Security Lead, Well-Architected

Post Syndicated from Becca Crockett original https://aws.amazon.com/blogs/security/aws-security-profiles-ben-potter-security-lead-well-architected/

Amazon Spheres with author info

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for four and a half years. I started as one of the first mid-market territory Solution Architects in Sydney, then I moved to professional services doing security, risk, and compliance. For the last year, I’ve been the security lead for Well-Architected, which is a global role.

What is Well-Architected?

It’s a framework that contains best practices, allowing you to measure your architecture and implement continuous improvements against those measurements. It’s designed to help your architecture evolve in alignment with five pillars: Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. The framework is based on customer data that we’ve gathered, and learnings that our customers have shared. We want to share these learnings with everyone else.

How do you explain your job to non-tech friends?

Basically, I listen to customers a lot. I work with specialists and service teams around the world to help create security best practices for AWS that drive the Well-Architected framework. My work helps customers make better cloud security decisions.

What are you currently working on that you’re excited about?

I’ve been developing some in-depth, hands-on training material for Well-Architected, which you can find on GitHub. It’s all open-source, and the community is welcome to contribute. We’re just getting started with this sort of hands-on content, but we’ve run AWS-led sessions around the globe using this particular content, including at our AWS Security Lofts throughout the USA — plus Sydney, London, and Singapore — and we’ve gotten very positive feedback.

What’s the most challenging part of your job?

Everyone has different priorities and opinions on security. What a Singapore financial startup thinks is a priority is completely different from what an established bank in London thinks — which is completely different from the entertainment industry. The priorities of startups often center around short time-to-market and low cost, with less focus on security.

I’m trying to make it easy for everyone to be what we call Well-Architected in security from the start, so that the only way to do something is via automated, repeatable, secure mechanisms. AWS is great at providing building blocks, but if we can combine those building blocks into different solution sets and guidance, then we can help every customer be Well-Architected from the beginning. Most of the time, it doesn’t cost anything additional. People like me just need to spend the time developing examples, solutions, and labs, and getting them out there.

What does cloud security mean to you, personally?

Cloud security is an opportunity to rethink cybersecurity — to rethink the boundaries of what’s possible. It’s not just a security guard in front of a data center, with a big, old-fashioned firewall protecting the network. It’s a lot deeper than that. The cloud lets you influence security at every layer, from developers all the way to end users. Everyone needs to be thinking about it. I had a big presentation earlier this year, and I asked the audience, “Put your hand up if you’re responsible for your organization’s security.” Only about a quarter of the audience put their hands up. But that’s not true — it’s everyone’s responsibility. The cloud provides opportunities for businesses to innovate, improve their agility and ability to drive business value, but security needs to go hand-in-hand with all of that.

What’s the biggest issue that you see customers struggling with when it comes to cloud security?

A lot of customers don’t think about the need for incident response. They think: I don’t want to think about it. It’s never gonna happen to me. No, my access keys will never be lost. It’s fine. We’ve got processes in place, and our developers know what they’re doing. We’re never gonna lose any access keys or credentials. But it happens, people make mistakes. And it’s very important for anyone, regardless of whether or not they’re in the cloud, to be prepared for an incident, by investing in the tools that they need, by actually practicing responding to an incident, and by having run books. If X does happen, then where do I start? What do I need to do? Who do I need to communicate with? AWS can help with that, but it’s all very reactive. Incident response needs to be proactive because your organization’s reputation and business could be on the line.
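To make that concrete, here is a deliberately generic sketch of a run book kept as data — the scenario and steps are illustrative examples, not official AWS guidance:

```python
# Sketch of a minimal run book entry stored as data, so it can be versioned and reviewed like code.
RUNBOOKS = {
    "leaked-access-key": {
        "first_steps": [
            "Deactivate or rotate the exposed key immediately",
            "Review recent API activity associated with the key",
            "Look for unexpected resources or permission changes and remove them",
        ],
        "communicate_with": ["security on-call", "account owner", "support, if customers are affected"],
        "practice": "Rehearse this scenario regularly with a test key in a sandbox account",
    },
}

def print_runbook(scenario):
    book = RUNBOOKS[scenario]
    print(f"== {scenario} ==")
    for step in book["first_steps"]:
        print(f"- {step}")
    print("Notify: " + ", ".join(book["communicate_with"]))
    print("Practice: " + book["practice"])

if __name__ == "__main__":
    print_runbook("leaked-access-key")
```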

In your opinion, what’s the biggest challenge facing the cloud security industry right now?

I think the biggest challenge is just staying up to date with what’s happening in the industry. Any company that develops software or tools or services is going to have a predefined plan of work. But often, security is forgotten about in that development process. Say you’re developing a mobile game: you’d probably have daily agile-style stand-ups, and you’d develop the game until you’ve got a minimum viable product. Then you’d put it out there for testing. But what if the underlying software libraries that you used to develop the game had vulnerabilities in them, and you didn’t realize this because you didn’t build in a process for hourly or daily checking of vulnerabilities in the external libraries you pulled in?

Keeping up-to-date is always a challenge, and this is where the cloud actually has a lot of power, because the cloud can drive the automated infrastructure combined with the actual code. It’s part of the whole dev ops thing — combining infrastructure code with the actual application code. You can take it all and run automated tools across it to verify your security posture and provide more granular control. In the old days, nearly everyone had keys to the data center to go in and reboot stuff. Now, you can isolate different application teams to different portions of their cloud environment. If something bad does happen, it’s much easier to contain the issue through the segmentation and micro-segmentation of services.

Five years from now, what changes do you think we’ll see across the security landscape?

I think we’re going to see a lot of change for the better. If you look at ransomware statistics that McAfee has published, new infection rates have actually gone down. More people are becoming aware of security, including end users and the general public. Cyber criminals go where the money is. This means organizations are under increasing pressure to do the right thing in terms of public safety and security.

For ransomware specifically, there’s also nomoreransom.org, a global project for which I was the “Chief Architect” — I worked with Europol, McAfee, and Kaspersky to create this website. It’s been around for a couple of years now, and I think it’s already helping drive awareness of security and best practices for the public, like, don’t click on this phishing email. I co-presented a re:Invent presentation on this project a few years ago, if you want more info about it.

Tell us about the chalk talk you’re giving at re:Invent this year.

The Well-Architected for Security chalk talk is meant to help customers get started by helping them identify which best practices they should follow. It’s an open QA. I’ll start by giving an overview of the Well-Architected framework, some best practices, and some design principles, and then I’ll do a live Q&A with whiteboarding. It’ll be really interactive. I like to question the audience about what they think their challenges are. Last year, I ran a session on advanced web application security that was really awesome because I actually got a lot of feedback, and I had some service team members in the room who were also able to use a lot of feedback from that session. So it’s not just about sharing, it’s also listening to customers’ challenges, which helps drive our content road map on what we need to do for customer enablement in the coming months.

Your second re:Invent session, the Security Framework Shakedown, says it will walk you through a complete security journey. What does that mean?

This session that Steve Laino and I are delivering is about where you should start in terms of design: How to know you’re designing a secure architecture, and how the Cloud Adoption and Well-Architected frameworks can help. As your company evolves, you’re going to have priorities, and you can’t do everything right the first time. So you’ll need to think about what your priorities are and create your own roadmap for an evolving architecture that becomes continually more secure. We’ve got National Australia Bank co-presenting with us. They’ll share their journey, including how they used the Cloud Adoption Framework to get started, and how they use Well-Architected daily to drive improvement across their platform.

Broadly, what are you hoping that your audience will take away from your sessions? What do you want them to do differently?

I want people to start prioritizing security in their day-to-day job roles. That prioritization means asking questions like, “What are some principles that I should include in my day to day work life? Are we using tools and automation to make security effective?” And if you’re not using automation and tools, then what’s out there that you can start using?

Any tips for first-time conference attendees?

Get out there and socialize. Talk to your peers, and try to find some mentors in the community. You’ll find that many people in the industry, both in AWS and among our customers and partners, are very willing to help you on a personal basis to develop your career.

Any tips for returning attendees?

Think about your goals, and go after them. You should be willing to give your honest feedback, too, and seek out service team members and individuals who have influenced you in the past.

You’re from Adelaide. If somebody is visiting your hometown, what would you advise them to do?

The “Mad March” festivities should not be missed. If you like red wine, you should visit the wine regions of Barossa Valley or McLaren Vale — or both. My favorite is definitely Barossa Valley.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security news? Follow us on Twitter.

Author

Ben Potter

Ben is the global security leader for the AWS Well-Architected Framework and is responsible for sharing best practices in security with customers and partners. Ben is also an ambassador for the No More Ransom initiative helping fight cyber crime with Europol, McAfee and law enforcement across the globe.

New IoT Security Regulations

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/11/new_iot_securit.html

Due to ever-evolving technological advances, manufacturers are connecting consumer goods — from toys to lightbulbs to major appliances — to the internet at breakneck speeds. This is the Internet of Things, and it’s a security nightmare.

The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon’s Alexa, which not only answers questions and plays music but allows you to control your home’s lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the internet.

But like nearly all innovation, there are risks involved. And for products borne out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner — cars, pacemakers, thermostats — the risks include loss of life and property.

By developing more advanced security features and building them into these products, hacks can be avoided. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating the Internet of Things, or “IoT,” devices sold in the state — and the effects will soon be felt worldwide.

California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.” The good news is that the term “connected devices” is broadly defined to include just about everything connected to the internet. The not-so-good news is that “reasonable security” remains defined such that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California’s attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There’s just one specific in the law that’s not subject to the attorney general’s interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it’s just one of dozens of awful “security” measures commonly found in IoT devices.
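One common way to satisfy a no-default-passwords requirement — shown here as a rough sketch, not language from the law — is to generate a unique credential per device at manufacturing time or on first boot:

```python
# Sketch: give every device its own random credential instead of shipping a shared default password.
import secrets
import string

def provision_device_password(length=16):
    """Return a random password to print on the device label or force the user to set at first boot."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # Each device coming off the line gets a different value.
    print(provision_device_password())
```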

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us — in the United States or elsewhere — are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won’t make sense to have two versions: one for California and another for everywhere else. It’s much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you’ve read and agreed to the website’s privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR — people physically in the European Union, and EU citizens wherever they are — and those who are not. It’s easier to extend the protection to everyone.

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can’t, you might as well make a virtue out of necessity. So, everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don’t have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices out on the market quickly and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply and effectively as possible. This means more security innovation, because now there’s a market for new ideas and new products. We’ve seen this pattern again and again in safety and security engineering, and we’ll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it’s heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

Three-factor authentication is the new two-factor authentication

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/three-factor-authentication-raspberry-pi/

Two-factor authentication continues to provide our online selves with more security for our email and online banking. Meanwhile, in the physical world, protecting our valuables is now all about three-factor authentication.

A GIF of a thumbprint being scanned for authentication - three-factor authentication

Not sure what I mean? Here’s a video from Switched On Network that demonstrates how to use a Raspberry Pi to build a three-factor door lock comprised of an RFID keyring, 6-digit passcode, and one-time access code sent to your mobile phone.

Note that this is a fairly long video, so feel free to skip it for now and read my rather snazzy tl;dr. You can come back to the video later, with a cup of tea and 20 minutes to spare. It’ll be worth it, I promise.

Build a Raspberry Pi Smart Door Lock Security System with Three Factor Authentication!

Build the ultimate door lock system, effectively turning your office or bedroom into a high-security vault!

The tl;dr of three-factor door locks by Alex Bate

To build Switched On Network’s three-factor door lock, you need to source a Raspberry Pi 3, a USB RFID reader and fob, a touchscreen, an electronic door strike, and a relay switch. You also need a few other extras, such as a power supply and a glue gun.

A screenshot from the three-factor authentication video of a glue gun

Once you’ve installed the appropriate drivers (if necessary) for your screen, and rotated the display by 90 degrees, you can skip ahead a few steps by installing the Python script from Switched On Network’s GitHub repo! Cheers!

A screenshot from the three-factor authentication video of the screen attached to the Pi in portrait mode

Then for the physical build: you need to attach the door strike, leads, and whatnot to the Pi — and all that together to the door and door frame. Again, I won’t go into the details, since that’s where the video excels.

A screenshot from the video of the components of the three-factor authentication door lock

The end result is a superior door lock that requires you to remember both your keys and your phone in order to open it. And while we’d never suggest using this tech to secure your house from the outside, it’s a perfect setup for inside doors to offices or basement lairs.
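If you’re curious how the three factors fit together in code, here is a heavily simplified sketch — it is not the script from the video, and the fob ID, passcode, GPIO pin, and SMS helper are all placeholder assumptions.

```python
# Simplified three-factor check loop; placeholder values, see the video and GitHub repo for the real script.
import secrets
import time

import RPi.GPIO as GPIO  # available on Raspberry Pi OS

RELAY_PIN = 17            # placeholder GPIO pin driving the relay and door strike
KNOWN_FOB = "0012345678"  # placeholder fob ID (USB RFID readers type the ID like a keyboard)
PIN_CODE = "123456"       # placeholder 6-digit passcode

def send_code_to_phone(code):
    """Placeholder for an SMS or push-notification service."""
    print(f"(pretend this was sent to your phone: {code})")

def unlock(seconds=5):
    GPIO.output(RELAY_PIN, GPIO.HIGH)  # energise the relay to release the door strike
    time.sleep(seconds)
    GPIO.output(RELAY_PIN, GPIO.LOW)

def main():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)
    while True:
        if input("Scan fob: ").strip() != KNOWN_FOB:             # factor 1: something you have
            continue
        if input("Enter passcode: ").strip() != PIN_CODE:         # factor 2: something you know
            continue
        one_time = f"{secrets.randbelow(1_000_000):06d}"
        send_code_to_phone(one_time)
        if input("Enter one-time code: ").strip() == one_time:    # factor 3: a second device
            unlock()

if __name__ == "__main__":
    main()
```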

A GIF of Dexter from Dexter's Laboratory

Everyone should have a lair.

Now go watch the video!

The post Three-factor authentication is the new two-factor authentication appeared first on Raspberry Pi.

In the Works – AWS Region in Milan, Italy

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/in-the-works-aws-region-in-milan-italy/

Late last month I announced that we are working on an AWS Region in South Africa. Today I would like to let you know that we are also building an AWS Region in Italy and plan to open it up in early 2020.

Milan in 2020
The upcoming Europe (Milan) Region will have three Availability Zones and will be our sixth region in Europe, joining the existing regions in France, Germany, Ireland, the UK, and the new region in Sweden that is set to launch later this year. We currently have 57 Availability Zones in 19 geographic regions worldwide, and another 15 Availability Zones across five regions in the works for launch between now and the first half of 2020 (check out the AWS Global Infrastructure page for more info). Like all of our existing regions, this one is designed and built to meet the most rigorous compliance standards and to provide the highest level of security for AWS customers.

AWS in Italy
AWS customers in Italy have been using our existing regions for more than a decade. Hot startups, enterprises, and public sector organizations in Italy are all running their mission-critical applications on the AWS Cloud. Here’s a tasting menu to give you an idea of what’s already happening:

Ferrero is one of the world’s largest chocolate manufacturers (including the Pocket Coffee that powers my blogging). They have been using AWS since 2010, and use a template-driven model that lets them share features and functions across 250 web sites for 80 countries, giving them the ability to handle traffic surges while reducing costs by 30%.

Mediaset runs multiple broadcast networks and digital channels, as well as a pay-TV service, advertising agencies, and the Italian film studio Medusa. The Mediaset Premium Online soccer service now attracts over 600,000 unique monthly visitors, doubling in size since it was launched last year. AWS allows them to meet this demand without adding more hardware, while also scaling up and down on an as-needed basis.

Eataly is the largest online marketplace for Italian food and wine products. After moving from physical stores to the web, they decided to use AWS to ensure scalability. Today, they use a wide range of AWS services, deliver 1.5 to 3 million page views daily, and handle holiday peaks ranging from 100 to 1000 orders per day.

Vodafone Italy has more than 30 million customers for their mobile services. They used AWS to power a new pay-as-you-go service to allow mobile customers to add credit to their accounts, building the service from scratch to be PCI DSS Level 1 compliant and to scale rapidly, all in just 3 months, and with a 30% reduction in capital expenses.

The European Space Agency (ESA) Centre for Earth Observation in Frascati, Italy runs the Data User Element (DUE) program. Although much of the work takes place in Earth-orbiting satellites, the program also takes advantage of EC2 and S3, storing up to 30 terabytes of images and observations at peak times and available to a 50,000 person user community.

The new region will give these customers (and many others) a new option with even lower latency for their local customers, and will also open the door to applications that must comply with strict data sovereignty requirements.

Investing in Italy’s Future
The upcoming Europe (Milan) Region is just one step along a long path! Back in 2012 we launched the first Point of Presence (PoP) in Milan and now use it to deliver Amazon CloudFront, Amazon Route 53, AWS Shield, and AWS WAF services to Italy, sharing the load with a PoP in Palermo that we launched in 2017. In 2016 we acquired Asti-based NICE Software (read Amazon Web Services to Acquire NICE).

We are also working to help prepare developers in Italy for the digital future, with programs like AWS Educate, AWS Academy, and AWS Activate. Dozens of universities and business schools across Italy are already participating in our educational programs, as are a plethora of startups and accelerators.

Stay Tuned
I’ll be sure to share additional news about this and other upcoming AWS regions as soon as I have it, so stay tuned!

Jeff;

 

AWS GovCloud (US-East) Now Open

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-govcloud-us-east-now-open/

Last year I told you that we were working on AWS GovCloud (US-East), an eastern US companion to the existing AWS GovCloud (US-West) Region that we launched in 2011. The new region is now open and ready to serve the needs of federal, state, and local government agencies, the IT contractors that serve them, and customers with regulated workloads. It offers added redundancy, data durability, and resiliency, and also provides additional options for disaster recovery. This is an isolated AWS region, subject to FedRAMP High and Moderate baselines, operated by US citizens on US soil. It is accessible only to vetted US entities and root account holders, who must confirm that they are US Persons (citizens or permanent residents) in order to gain access. You can read Achieve FedRAMP High Compliance in the AWS GovCloud (US) Region to learn more.

AWS GovCloud (US) gives vetted government customers and regulated industry customers and their partners the flexibility to architect secure cloud solutions that comply with: the FedRAMP High baseline, the DOJ’s Criminal Justice Information Systems (CJIS) Security Policy, U.S. International Traffic in Arms Regulations (ITAR), Export Administration Regulations (EAR), Department of Defense (DoD) Cloud Computing Security Requirements Guide (SRG) for Impact Levels 2, 4 and 5, FIPS 140-2, IRS-1075, and other compliance regimes.

Lots of Services
Applications running in this region can make use of Auto Scaling (EC2 and Application), AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon ElastiCache, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Elastic Load Balancing (Application, Network, and Classic), Amazon EMR, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM) (including Access Key Last Used), Amazon Inspector, AWS Key Management Service (KMS), Amazon Kinesis Data Streams, AWS Lambda, Amazon Aurora (MySQL and PostgreSQL), Amazon Redshift, Amazon Relational Database Service (RDS), AWS Server Migration Service, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, VPN, Amazon API Gateway, AWS Snowball, AWS Snowball Edge, AWS Server Migration Service, and AWS Step Functions.

Crossing the Regions
Many of the cool cross-region features of AWS can be used to span AWS GovCloud (US-East) and AWS GovCloud (US-West) in order to reduce latency or to increase workload resiliency and availability for mission-critical systems. Here’s what you can do:

We are working to add support for DynamoDB Global Tables and Inter-Region VPC Peering.

AWS GovCloud (US) in Action
Our customers are already hosting many different types of applications in AWS GovCloud (US-West); here’s a small sample:

Enterprise Apps – Oracle, SAP, and Microsoft workloads that were traditionally provisioned for peak demand are now being run on scalable, cloud-based infrastructure.

HPC / Big Data – Organizations with large data sets are spinning up HPC clusters in the cloud in order to extract intelligence and to better serve their constituents.

Storage / DR – The ability to tap into vast amounts of cost-effective, highly durable cloud storage managed by US Persons supports a variety of DR approaches, from simple backups to hot standby. The addition of a second region allows you to make use of the cross-region features that I mentioned earlier.

Learn More
To learn more, check out the AWS GovCloud (US) page. If you are looking forward to making use of AWS GovCloud (US) and need a partner to help you to make it happen, take a look at the list of AWS GovCloud (US) Partners.

Jeff;

Grafana 5.3.3 and 4.6.5 released with important security fix

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/11/13/grafana-5.3.3-and-4.6.5-released-with-important-security-fix/

Today we are releasing Grafana 5.3.3 and 4.6.5. These patch releases include an important security fix for all Grafana installations between 4.1.0 and 5.3.2.

We are also releasing 5.3.4 at the same time; it contains some fixes and improvements that we had been holding back so that 5.3.3 could remain a security-only release.

Release 5.3.3 only containing a security fix:

Latest stable release in 4.x:

Latest stable release in 5.x:

File Exfiltration vulnerability (CVE-2018-19039)

On the 5th of November we were contacted about a potential security issue that could allow any user with Editor or Admin permissions in Grafana to read any file that the Grafana process can read from the filesystem. Note that in order to exploit this, you would need to be logged in to the system as a legitimate user with Editor or Admin permissions.

Affected versions

Grafana releases 4.1.0 through 5.3.2 are affected by this vulnerability.
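If you are not sure which version a given instance is running, you can check remotely; this sketch assumes the /api/health endpoint reports the running version, as it does in recent Grafana releases.

```python
# Sketch: check whether a Grafana instance falls in the affected 4.1.0–5.3.2 range.
# Assumes /api/health reports the running version (true for recent Grafana releases).
import json
from urllib.request import urlopen

GRAFANA_URL = "http://localhost:3000"  # replace with your instance's base URL

def running_version(base_url):
    with urlopen(f"{base_url}/api/health") as resp:
        return json.load(resp).get("version", "unknown")

def version_tuple(version):
    return tuple(int(part) for part in version.split(".")[:3] if part.isdigit())

if __name__ == "__main__":
    version = running_version(GRAFANA_URL)
    affected = (4, 1, 0) <= version_tuple(version) <= (5, 3, 2)
    print(f"Grafana {version}: {'affected — upgrade to 5.3.3 or 4.6.5' if affected else 'outside the affected range'}")
```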

Solutions and mitigations

All installations between 4.1.0 and 5.3.2 that have users who should not have access to the filesystem where Grafana is running must be upgraded as soon as possible. If you cannot upgrade, you should set all users to viewers and remove all dashboards that contain text panels.

All instances of Grafana Cloud have already been updated to 5.3.3. Grafana Enterprise customers have been provided with fixed binaries ahead of this disclosure.

CVE ID: CVE-2018-19039

Timeline and postmortem

Here is a detailed timeline starting from when we originally learned of the issue.

5 Nov 2018 16:30 CET
Received details of vulnerability from Sebastian Solnica.
6 Nov 2018 13:00 CET
Confirmed issue.
Started working on a fix for latest stable in a private mirror.
Backported the fix to 4.6.5 in private mirror.
6 Nov 2018 16:00 CET
Received CVE-2018-19039
Started preparing 5.3.3 and 4.6.5 release from private mirror.
6 Nov 2018 17:33 CET
Started rolling out 5.3.3 to Grafana Cloud customers.
Decided on making release public on Tuesday Nov 13 13:00 CET. The date was chosen to give people time to prepare and not run into the weekend. The time was chosen to fall into main work time of the EU and US while still giving Asia a fair chance to react.
7 Nov 2018 22:05 CET
Proactively provided Grafana Enterprise customers with details and download links.
Completed rollout of 5.3.3 to Grafana Cloud.
13 Nov 2018 13:00 CET
Publish of release & this blog post.

Reporting security Issues

If you think you have found a security vulnerability, please send a report to [email protected]. This address can be used for all of Grafana Labs’s open source and commercial products (including but not limited to Grafana, Grafana Cloud, Grafana Enterprise, and grafana.com). We can accept only vulnerability reports at this address. We would prefer it if you encrypted your message to us; please use our PGP key. The key fingerprint is

F988 7BEA 027A 049F AE8E 5CAA D125 8932 BE24 C5CA

The key is available from pgp.mit.edu by searching for grafana.

Security Announcements

We maintain a category on the community site named Security Announcements, where we will post a summary, remediation, and mitigation details for any patch containing security fixes. You can also subscribe to email updates for this category if you have a grafana.com account and are signed in on the community site, or track updates via an RSS feed.

Conclusion

If you run Grafana between versions 4.1.0 and 5.3.2 with users who should not have access to the filesystem where Grafana is running, please upgrade to Grafana 5.3.3 or 4.6.5 as soon as possible.

We would like to thank Sebastian Solnica and NCC Group for reporting this issue.
